Plaintext import/export (#1850)

* Add crate csv

* Add start of csv importing on backend

* Add Mnemosyne serializer

* Add csv and json importing on backend

* Add plaintext importing on frontend

* Add csv metadata extraction on backend

* Add csv importing with GUI

* Fix missing dfa file in build

Added compile_data_attr, then re-ran cargo/update.py.

* Don't use doubly buffered reader in csv

* Escape HTML entities if CSV is not HTML

Also use name 'is_html' consistently.
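
An illustrative sketch only (function name hypothetical): with is_html false, characters that HTML would swallow are escaped so they show up literally in note fields.

fn escape_if_plaintext(field: &str, is_html: bool) -> String {
    if is_html {
        field.to_string()
    } else {
        field
            // replace '&' first so the entities introduced below aren't re-escaped
            .replace('&', "&amp;")
            .replace('<', "&lt;")
            .replace('>', "&gt;")
    }
}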

* Use decimal number as foreign ease (like '2.5')
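
The decimal maps onto Anki's integer ease (cf. STARTING_FACTOR = 2500 in the consts diff below); a hypothetical conversion:

fn ease_to_native_factor(ease_factor: f32) -> u32 {
    // a foreign '2.5' becomes Anki's permille factor 2500
    (ease_factor * 1000.0).round() as u32
}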

* ForeignCard.ivl → ForeignCard.interval

* Only allow fixed set of CSV delimiters
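
The allowed set matches the CsvMetadata.Delimiter enum added to import_export.proto below; the byte mapping here is an illustrative sketch, not the actual rslib code:

enum Delimiter { Tab, Pipe, Semicolon, Colon, Comma, Space }

fn delimiter_byte(delimiter: Delimiter) -> u8 {
    match delimiter {
        Delimiter::Tab => b'\t',
        Delimiter::Pipe => b'|',
        Delimiter::Semicolon => b';',
        Delimiter::Colon => b':',
        Delimiter::Comma => b',',
        Delimiter::Space => b' ',
    }
}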

* Map timestamp of ForeignCard to native due time

* Don't trim CSV records

* Document use of empty strings for defaults

* Avoid creating CardGenContexts for every note

This requires CardGenContext to be generic, so it works both with an
owned and borrowed notetype.
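
A minimal sketch of the pattern, assuming a Deref bound (the real bound and constructor signature in rslib may differ):

use std::ops::Deref;

struct Notetype; // stand-in for the real type

// One context type now serves both Arc<Notetype> (owned) and &Notetype
// (borrowed), so a single context can be reused across notes.
struct CardGenContext<N: Deref<Target = Notetype>> {
    notetype: N,
    usn: i32,
}

impl<N: Deref<Target = Notetype>> CardGenContext<N> {
    fn new(notetype: N, usn: i32) -> Self {
        CardGenContext { notetype, usn }
    }
}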

* Show all accepted file types in import file picker

* Add import_json_file()

* factor → ease_factor

* delimter_from_value → delimiter_from_value

* Map columns to fields, not the other way around

* Fall back to current config for csv metadata

* Add start of new import csv screen

* Temporary fix for compilation issue on Linux/Mac

* Disable jest bazel action for import-csv

Jest fails with an error code if no tests are available, but this would
not be noticeable on Windows, as Jest is not run there.

* Fix field mapping issue

* Revert "Temporary fix for compilation issue on Linux/Mac"

This reverts commit 21f8a261408cdae49ec031aa21a1b659c4f66d82.

* Add HtmlSwitch and move Switch to components

* Fix spacing and make selectors consistent

* Fix shortcut tooltip

* Place import button at the top with path

* Fix meta column indices

* Remove NotetypeForString

* Fix queue and type of foreign cards

* Support different dupe resolution strategies
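
The strategies are the DupeResolution variants added to import_export.proto below; schematically (GUI labels: Update, Duplicate, Preserve):

enum DupeResolution { Update, Add, Ignore }

fn plan_for_duplicate(strategy: DupeResolution) -> &'static str {
    match strategy {
        DupeResolution::Update => "overwrite the existing note with the incoming fields",
        DupeResolution::Add => "import the row as a new, duplicate note",
        DupeResolution::Ignore => "preserve the existing note and skip the row",
    }
}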

* Allow dupe resolution selection when importing CSV

* Test import of unnormalized text

Closes #1863.

* Fix logging of foreign notes

* Implement CSV exports

* Use db_scalar() in notes_table_len()

* Rework CSV metadata

- Notetypes and decks are either defined by a global id or by a column.
- If a notetype id is provided, its field map must also be specified.
- If a notetype column is provided, fields are now mapped by index
instead of name at import time. So the first non-meta column is used for
the first field of every note, regardless of notetype. This makes
importing easier and should improve compatibility with files without a
notetype column.
- Ensure first field can be mapped to a column.
- Meta columns must be defined as `#[meta name]:[column index]` instead
of in the `#columns` tag (see the sketch after this list).
- Column labels contain the raw names defined by the file and must be
prettified by the frontend.
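
For illustration, a file using the new scheme might start with lines like `#notetype column:1` and `#deck column:2`. A hypothetical parsing helper (the real rslib parsing is more involved):

fn parse_meta_column(line: &str) -> Option<(&str, usize)> {
    // "#deck column:2" -> Some(("deck column", 2))
    let rest = line.strip_prefix('#')?;
    let (name, index) = rest.rsplit_once(':')?;
    Some((name, index.trim().parse().ok()?))
}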

* Adjust frontend to new backend column mapping

* Add force flags for is_html and delimiter

* Detect if CSV is HTML by field content
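
Roughly: sniff the field text for markup instead of trusting a flag. A naive sketch using the regex crate (already a dependency); the actual heuristic may differ:

fn looks_like_html(text: &str) -> bool {
    // tag-like (<b>, </div>) or entity-like (&amp;, &#39;) substrings
    let markup = regex::Regex::new(r"<[a-zA-Z/][^>]*>|&[a-zA-Z]+;|&#\d+;").unwrap();
    markup.is_match(text)
}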

* Update dupe resolution labels

* Simplify selectors

* Fix coalescence of oneofs in TS

* Disable meta columns from selection

Plus a lot of refactoring.

* Make import button stick to the bottom

* Write delimiter and html flag into csv

* Refetch field map after notetype change

* Fix log labels for csv import

* Log notes whose deck/notetype was missing

* Fix hiding of empty log queues

* Implement adding tags to all notes of a csv

* Fix dupe resolution not being set in log

* Implement adding tags to updated notes of a csv

* Check first note field is not empty

* Temporary fix for build on Linux/Mac

* Fix inverted html check (dae)

* Remove unused ftl string

* Delimiter → Separator

* Remove commented-out line

* Don't accept .json files

* Tweak tag ftl strings

* Remove redundant blur call

* Strip sound and add spaces in csv export

* Export HTML by default

* Fix unset deck in Mnemosyne import

Also accept both numbers and strings for notetypes and decks in JSON.
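
On the Rust side, that kind of dual acceptance is typically handled with an untagged serde enum; a sketch (type name hypothetical, serde already being a dependency):

use serde::Deserialize;

#[derive(Deserialize)]
#[serde(untagged)]
enum NameOrId {
    Id(i64),      // tried first, so bare numbers become ids
    Name(String), // everything else falls through to a name
}

With #[serde(untagged)] the variants are tried in order, so 5 deserializes as Id(5) and "Basic" as Name("Basic").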

* Make DupeResolution::Update the default

* Fix missing dot in extension

* Make column indices 1-based

* Remove StickContainer from TagEditor

Fixes line breaking, border and z index on ImportCsvPage.

* Assign different key combos to tag editors

* Log all updated duplicates

Add a log field for the true number of found notes.

* Show identical notes as skipped

* Split tag-editor into separate ts module (dae)

* Add progress for CSV export

* Add progress for text import

* Tidy-ups after tag-editor split (dae)

- import-csv no longer depends on editor
- remove some commented lines
Authored by RumovZ on 2022-06-01 12:26:16 +02:00; committed by GitHub.
parent 48642f25d0
commit 42cbe42f06
110 changed files with 4373 additions and 364 deletions

Cargo.lock (generated)

@ -54,6 +54,7 @@ dependencies = [
"chrono",
"coarsetime",
"criterion",
"csv",
"env_logger",
"flate2",
"fluent",


@ -48,6 +48,9 @@ compile_data_attr = "glob([\"**/*.rsv\"])"
[package.metadata.raze.crates.unic-ucd-category.'*']
compile_data_attr = "glob([\"**/*.rsv\"])"
[package.metadata.raze.crates.bstr.'*']
compile_data_attr = "glob([\"**/*.dfa\"])"
[package.metadata.raze.crates.pyo3-build-config.'*']
buildrs_additional_environment_variables = { "PYO3_NO_PYTHON" = "1" }


@ -66,6 +66,15 @@ alias(
],
)
alias(
name = "csv",
actual = "@raze__csv__1_1_6//:csv",
tags = [
"cargo-raze",
"manual",
],
)
alias(
name = "env_logger",
actual = "@raze__env_logger__0_9_0//:env_logger",


@ -171,6 +171,16 @@ def raze_fetch_remote_crates():
build_file = Label("//cargo/remote:BUILD.block-buffer-0.10.2.bazel"),
)
maybe(
http_archive,
name = "raze__bstr__0_2_17",
url = "https://crates.io/api/v1/crates/bstr/0.2.17/download",
type = "tar.gz",
sha256 = "ba3569f383e8f1598449f1a423e72e99569137b47740b1da11ef19af3d5c3223",
strip_prefix = "bstr-0.2.17",
build_file = Label("//cargo/remote:BUILD.bstr-0.2.17.bazel"),
)
maybe(
http_archive,
name = "raze__bumpalo__3_9_1",
@ -361,6 +371,26 @@ def raze_fetch_remote_crates():
build_file = Label("//cargo/remote:BUILD.cssparser-macros-0.6.0.bazel"),
)
maybe(
http_archive,
name = "raze__csv__1_1_6",
url = "https://crates.io/api/v1/crates/csv/1.1.6/download",
type = "tar.gz",
sha256 = "22813a6dc45b335f9bade10bf7271dc477e81113e89eb251a0bc2a8a81c536e1",
strip_prefix = "csv-1.1.6",
build_file = Label("//cargo/remote:BUILD.csv-1.1.6.bazel"),
)
maybe(
http_archive,
name = "raze__csv_core__0_1_10",
url = "https://crates.io/api/v1/crates/csv-core/0.1.10/download",
type = "tar.gz",
sha256 = "2b2466559f260f48ad25fe6317b3c8dac77b5bdb5763ac7d9d6103530663bc90",
strip_prefix = "csv-core-0.1.10",
build_file = Label("//cargo/remote:BUILD.csv-core-0.1.10.bazel"),
)
maybe(
http_archive,
name = "raze__derive_more__0_99_17",
@ -1931,6 +1961,16 @@ def raze_fetch_remote_crates():
build_file = Label("//cargo/remote:BUILD.regex-1.5.5.bazel"),
)
maybe(
http_archive,
name = "raze__regex_automata__0_1_10",
url = "https://crates.io/api/v1/crates/regex-automata/0.1.10/download",
type = "tar.gz",
sha256 = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132",
strip_prefix = "regex-automata-0.1.10",
build_file = Label("//cargo/remote:BUILD.regex-automata-0.1.10.bazel"),
)
maybe(
http_archive,
name = "raze__regex_syntax__0_6_25",


@ -161,6 +161,15 @@
"license_file": null,
"description": "Buffer type for block processing of data"
},
{
"name": "bstr",
"version": "0.2.17",
"authors": "Andrew Gallant <jamslam@gmail.com>",
"repository": "https://github.com/BurntSushi/bstr",
"license": "Apache-2.0 OR MIT",
"license_file": null,
"description": "A string type that is not required to be valid UTF-8."
},
{
"name": "bumpalo",
"version": "3.9.1",
@ -287,6 +296,24 @@
"license_file": null,
"description": "Common cryptographic traits"
},
{
"name": "csv",
"version": "1.1.6",
"authors": "Andrew Gallant <jamslam@gmail.com>",
"repository": "https://github.com/BurntSushi/rust-csv",
"license": "MIT OR Unlicense",
"license_file": null,
"description": "Fast CSV parsing with support for serde."
},
{
"name": "csv-core",
"version": "0.1.10",
"authors": "Andrew Gallant <jamslam@gmail.com>",
"repository": "https://github.com/BurntSushi/rust-csv",
"license": "MIT OR Unlicense",
"license_file": null,
"description": "Bare bones CSV parsing with no_std support."
},
{
"name": "digest",
"version": "0.10.3",
@ -1556,6 +1583,15 @@
"license_file": null,
"description": "An implementation of regular expressions for Rust. This implementation uses finite automata and guarantees linear time matching on all inputs."
},
{
"name": "regex-automata",
"version": "0.1.10",
"authors": "Andrew Gallant <jamslam@gmail.com>",
"repository": "https://github.com/BurntSushi/regex-automata",
"license": "MIT OR Unlicense",
"license_file": null,
"description": "Automata construction and matching using regular expressions."
},
{
"name": "regex-syntax",
"version": "0.6.25",

cargo/remote/BUILD.bstr-0.2.17.bazel (new file, vendored)

@ -0,0 +1,83 @@
"""
@generated
cargo-raze crate build file.
DO NOT EDIT! Replaced on runs of cargo-raze
"""
# buildifier: disable=load
load("@bazel_skylib//lib:selects.bzl", "selects")
# buildifier: disable=load
load(
"@rules_rust//rust:defs.bzl",
"rust_binary",
"rust_library",
"rust_proc_macro",
"rust_test",
)
package(default_visibility = [
# Public for visibility by "@raze__crate__version//" targets.
#
# Prefer access through "//cargo", which limits external
# visibility to explicit Cargo.toml dependencies.
"//visibility:public",
])
licenses([
"notice", # MIT from expression "MIT OR Apache-2.0"
])
# Generated Targets
# Unsupported target "graphemes" with type "example" omitted
# Unsupported target "graphemes-std" with type "example" omitted
# Unsupported target "lines" with type "example" omitted
# Unsupported target "lines-std" with type "example" omitted
# Unsupported target "uppercase" with type "example" omitted
# Unsupported target "uppercase-std" with type "example" omitted
# Unsupported target "words" with type "example" omitted
# Unsupported target "words-std" with type "example" omitted
rust_library(
name = "bstr",
srcs = glob(["**/*.rs"]),
crate_features = [
"default",
"lazy_static",
"regex-automata",
"serde",
"serde1",
"serde1-nostd",
"std",
"unicode",
],
crate_root = "src/lib.rs",
data = [],
compile_data = glob(["**/*.dfa"]),
edition = "2018",
rustc_flags = [
"--cap-lints=allow",
],
tags = [
"cargo-raze",
"crate-name=bstr",
"manual",
],
version = "0.2.17",
# buildifier: leave-alone
deps = [
"@raze__lazy_static__1_4_0//:lazy_static",
"@raze__memchr__2_4_1//:memchr",
"@raze__regex_automata__0_1_10//:regex_automata",
"@raze__serde__1_0_136//:serde",
],
)

cargo/remote/BUILD.csv-1.1.6.bazel (new file, vendored)

@ -0,0 +1,135 @@
"""
@generated
cargo-raze crate build file.
DO NOT EDIT! Replaced on runs of cargo-raze
"""
# buildifier: disable=load
load("@bazel_skylib//lib:selects.bzl", "selects")
# buildifier: disable=load
load(
"@rules_rust//rust:defs.bzl",
"rust_binary",
"rust_library",
"rust_proc_macro",
"rust_test",
)
package(default_visibility = [
# Public for visibility by "@raze__crate__version//" targets.
#
# Prefer access through "//cargo", which limits external
# visibility to explicit Cargo.toml dependencies.
"//visibility:public",
])
licenses([
"unencumbered", # Unlicense from expression "Unlicense OR MIT"
])
# Generated Targets
# Unsupported target "bench" with type "bench" omitted
# Unsupported target "cookbook-read-basic" with type "example" omitted
# Unsupported target "cookbook-read-colon" with type "example" omitted
# Unsupported target "cookbook-read-no-headers" with type "example" omitted
# Unsupported target "cookbook-read-serde" with type "example" omitted
# Unsupported target "cookbook-write-basic" with type "example" omitted
# Unsupported target "cookbook-write-serde" with type "example" omitted
# Unsupported target "tutorial-error-01" with type "example" omitted
# Unsupported target "tutorial-error-02" with type "example" omitted
# Unsupported target "tutorial-error-03" with type "example" omitted
# Unsupported target "tutorial-error-04" with type "example" omitted
# Unsupported target "tutorial-perf-alloc-01" with type "example" omitted
# Unsupported target "tutorial-perf-alloc-02" with type "example" omitted
# Unsupported target "tutorial-perf-alloc-03" with type "example" omitted
# Unsupported target "tutorial-perf-core-01" with type "example" omitted
# Unsupported target "tutorial-perf-serde-01" with type "example" omitted
# Unsupported target "tutorial-perf-serde-02" with type "example" omitted
# Unsupported target "tutorial-perf-serde-03" with type "example" omitted
# Unsupported target "tutorial-pipeline-pop-01" with type "example" omitted
# Unsupported target "tutorial-pipeline-search-01" with type "example" omitted
# Unsupported target "tutorial-pipeline-search-02" with type "example" omitted
# Unsupported target "tutorial-read-01" with type "example" omitted
# Unsupported target "tutorial-read-delimiter-01" with type "example" omitted
# Unsupported target "tutorial-read-headers-01" with type "example" omitted
# Unsupported target "tutorial-read-headers-02" with type "example" omitted
# Unsupported target "tutorial-read-serde-01" with type "example" omitted
# Unsupported target "tutorial-read-serde-02" with type "example" omitted
# Unsupported target "tutorial-read-serde-03" with type "example" omitted
# Unsupported target "tutorial-read-serde-04" with type "example" omitted
# Unsupported target "tutorial-read-serde-invalid-01" with type "example" omitted
# Unsupported target "tutorial-read-serde-invalid-02" with type "example" omitted
# Unsupported target "tutorial-setup-01" with type "example" omitted
# Unsupported target "tutorial-write-01" with type "example" omitted
# Unsupported target "tutorial-write-02" with type "example" omitted
# Unsupported target "tutorial-write-delimiter-01" with type "example" omitted
# Unsupported target "tutorial-write-serde-01" with type "example" omitted
# Unsupported target "tutorial-write-serde-02" with type "example" omitted
rust_library(
name = "csv",
srcs = glob(["**/*.rs"]),
crate_features = [
],
crate_root = "src/lib.rs",
data = [],
edition = "2018",
rustc_flags = [
"--cap-lints=allow",
],
tags = [
"cargo-raze",
"crate-name=csv",
"manual",
],
version = "1.1.6",
# buildifier: leave-alone
deps = [
"@raze__bstr__0_2_17//:bstr",
"@raze__csv_core__0_1_10//:csv_core",
"@raze__itoa__0_4_8//:itoa",
"@raze__ryu__1_0_9//:ryu",
"@raze__serde__1_0_136//:serde",
],
)
# Unsupported target "tests" with type "test" omitted


@ -0,0 +1,58 @@
"""
@generated
cargo-raze crate build file.
DO NOT EDIT! Replaced on runs of cargo-raze
"""
# buildifier: disable=load
load("@bazel_skylib//lib:selects.bzl", "selects")
# buildifier: disable=load
load(
"@rules_rust//rust:defs.bzl",
"rust_binary",
"rust_library",
"rust_proc_macro",
"rust_test",
)
package(default_visibility = [
# Public for visibility by "@raze__crate__version//" targets.
#
# Prefer access through "//cargo", which limits external
# visibility to explicit Cargo.toml dependencies.
"//visibility:public",
])
licenses([
"unencumbered", # Unlicense from expression "Unlicense OR MIT"
])
# Generated Targets
# Unsupported target "bench" with type "bench" omitted
rust_library(
name = "csv_core",
srcs = glob(["**/*.rs"]),
crate_features = [
"default",
],
crate_root = "src/lib.rs",
data = [],
edition = "2018",
rustc_flags = [
"--cap-lints=allow",
],
tags = [
"cargo-raze",
"crate-name=csv-core",
"manual",
],
version = "0.1.10",
# buildifier: leave-alone
deps = [
"@raze__memchr__2_4_1//:memchr",
],
)


@ -0,0 +1,56 @@
"""
@generated
cargo-raze crate build file.
DO NOT EDIT! Replaced on runs of cargo-raze
"""
# buildifier: disable=load
load("@bazel_skylib//lib:selects.bzl", "selects")
# buildifier: disable=load
load(
"@rules_rust//rust:defs.bzl",
"rust_binary",
"rust_library",
"rust_proc_macro",
"rust_test",
)
package(default_visibility = [
# Public for visibility by "@raze__crate__version//" targets.
#
# Prefer access through "//cargo", which limits external
# visibility to explicit Cargo.toml dependencies.
"//visibility:public",
])
licenses([
"unencumbered", # Unlicense from expression "Unlicense OR MIT"
])
# Generated Targets
rust_library(
name = "regex_automata",
srcs = glob(["**/*.rs"]),
crate_features = [
],
crate_root = "src/lib.rs",
data = [],
edition = "2015",
rustc_flags = [
"--cap-lints=allow",
],
tags = [
"cargo-raze",
"crate-name=regex-automata",
"manual",
],
version = "0.1.10",
# buildifier: leave-alone
deps = [
],
)
# Unsupported target "default" with type "test" omitted


@ -1,6 +1,7 @@
importing-failed-debug-info = Import failed. Debugging info:
importing-aborted = Aborted: { $val }
importing-added-duplicate-with-first-field = Added duplicate with first field: { $val }
importing-all-supported-formats = All supported formats { $val }
importing-allow-html-in-fields = Allow HTML in fields
importing-anki-files-are-from-a-very = .anki files are from a very old version of Anki. You can import them with add-on 175027074 or with Anki 2.0, available on the Anki website.
importing-anki2-files-are-not-directly-importable = .anki2 files are not directly importable - please import the .apkg or .zip file you have received instead.
@ -8,11 +9,14 @@ importing-appeared-twice-in-file = Appeared twice in file: { $val }
importing-by-default-anki-will-detect-the = By default, Anki will detect the character between fields, such as a tab, comma, and so on. If Anki is detecting the character incorrectly, you can enter it here. Use \t to represent tab.
importing-change = Change
importing-colon = Colon
importing-column = Column { $val }
importing-comma = Comma
importing-empty-first-field = Empty first field: { $val }
importing-field-separator = Field separator
importing-field-mapping = Field mapping
importing-field-of-file-is = Field <b>{ $val }</b> of file is:
importing-fields-separated-by = Fields separated by: { $val }
importing-file-must-contain-field-column = File must contain at least one column that can be mapped to a note field.
importing-file-version-unknown-trying-import-anyway = File version unknown, trying import anyway.
importing-first-field-matched = First field matched: { $val }
importing-identical = Identical
@ -36,6 +40,7 @@ importing-notes-that-could-not-be-imported = Notes that could not be imported as
importing-notes-updated-as-file-had-newer = Notes updated, as file had newer version: { $val }
importing-packaged-anki-deckcollection-apkg-colpkg-zip = Packaged Anki Deck/Collection (*.apkg *.colpkg *.zip)
importing-pauker-18-lesson-paugz = Pauker 1.8 Lesson (*.pau.gz)
importing-pipe = Pipe
importing-rows-had-num1d-fields-expected-num2d = '{ $row }' had { $found } fields, expected { $expected }
importing-selected-file-was-not-in-utf8 = Selected file was not in UTF-8 format. Please see the importing section of the manual.
importing-semicolon = Semicolon
@ -87,4 +92,15 @@ importing-processed-notes =
[one] Processed { $count } note...
*[other] Processed { $count } notes...
}
importing-processed-cards =
{ $count ->
[one] Processed { $count } card...
*[other] Processed { $count } cards...
}
importing-unable-to-import-filename = Unable to import { $filename }: file type not supported
importing-existing-notes = Existing notes
importing-duplicate = Duplicate
importing-preserve = Preserve
importing-update = Update
importing-tag-all-notes = Tag all notes
importing-tag-updated-notes = Tag updated notes


@ -1,3 +1,5 @@
notetypes-notetype = Notetype
## Default field names in newly created note types
notetypes-front-field = Front


@ -5,6 +5,7 @@ syntax = "proto3";
package anki.import_export;
import "anki/cards.proto";
import "anki/collection.proto";
import "anki/notes.proto";
import "anki/generic.proto";
@ -14,9 +15,14 @@ service ImportExportService {
returns (generic.Empty);
rpc ExportCollectionPackage(ExportCollectionPackageRequest)
returns (generic.Empty);
rpc ImportAnkiPackage(ImportAnkiPackageRequest)
returns (ImportAnkiPackageResponse);
rpc ImportAnkiPackage(ImportAnkiPackageRequest) returns (ImportResponse);
rpc ExportAnkiPackage(ExportAnkiPackageRequest) returns (generic.UInt32);
rpc GetCsvMetadata(CsvMetadataRequest) returns (CsvMetadata);
rpc ImportCsv(ImportCsvRequest) returns (ImportResponse);
rpc ExportNoteCsv(ExportNoteCsvRequest) returns (generic.UInt32);
rpc ExportCardCsv(ExportCardCsvRequest) returns (generic.UInt32);
rpc ImportJsonFile(generic.String) returns (ImportResponse);
rpc ImportJsonString(generic.String) returns (ImportResponse);
}
message ImportCollectionPackageRequest {
@ -36,7 +42,7 @@ message ImportAnkiPackageRequest {
string package_path = 1;
}
message ImportAnkiPackageResponse {
message ImportResponse {
message Note {
notes.NoteId id = 1;
repeated string fields = 2;
@ -46,6 +52,14 @@ message ImportAnkiPackageResponse {
repeated Note updated = 2;
repeated Note duplicate = 3;
repeated Note conflicting = 4;
repeated Note first_field_match = 5;
repeated Note missing_notetype = 6;
repeated Note missing_deck = 7;
repeated Note empty_first_field = 8;
ImportCsvRequest.DupeResolution dupe_resolution = 9;
// Usually the sum of all queues, but may be lower if multiple duplicates
// have been updated with the same note.
uint32 found_notes = 10;
}
collection.OpChanges changes = 1;
Log log = 2;
@ -56,11 +70,7 @@ message ExportAnkiPackageRequest {
bool with_scheduling = 2;
bool with_media = 3;
bool legacy = 4;
oneof selector {
generic.Empty whole_collection = 5;
int64 deck_id = 6;
notes.NoteIds note_ids = 7;
}
ExportLimit limit = 5;
}
message PackageMetadata {
@ -92,3 +102,87 @@ message MediaEntries {
repeated MediaEntry entries = 1;
}
message ImportCsvRequest {
enum DupeResolution {
UPDATE = 0;
ADD = 1;
IGNORE = 2;
// UPDATE_IF_NEWER = 3;
}
string path = 1;
CsvMetadata metadata = 2;
DupeResolution dupe_resolution = 3;
}
message CsvMetadataRequest {
string path = 1;
optional CsvMetadata.Delimiter delimiter = 2;
optional int64 notetype_id = 3;
}
// Column indices are 1-based to make working with them in TS easier, where
// unset numerical fields default to 0.
message CsvMetadata {
// Order roughly in ascending expected frequency in note text, because the
// delimiter detection algorithm is stupidly picking the first one it
// encounters.
enum Delimiter {
TAB = 0;
PIPE = 1;
SEMICOLON = 2;
COLON = 3;
COMMA = 4;
SPACE = 5;
}
message MappedNotetype {
int64 id = 1;
// Source column indices for note fields. One-based. 0 means n/a.
repeated uint32 field_columns = 2;
}
Delimiter delimiter = 1;
bool is_html = 2;
repeated string global_tags = 3;
repeated string updated_tags = 4;
// Column names as defined by the file or empty strings otherwise. Also used
// to determine the number of columns.
repeated string column_labels = 5;
oneof deck {
int64 deck_id = 6;
// One-based. 0 means n/a.
uint32 deck_column = 7;
}
oneof notetype {
// One notetype for all rows with given column mapping.
MappedNotetype global_notetype = 8;
// Row-specific notetypes with automatic mapping by index.
// One-based. 0 means n/a.
uint32 notetype_column = 9;
}
// One-based. 0 means n/a.
uint32 tags_column = 10;
bool force_delimiter = 11;
bool force_is_html = 12;
}
message ExportCardCsvRequest {
string out_path = 1;
bool with_html = 2;
ExportLimit limit = 3;
}
message ExportNoteCsvRequest {
string out_path = 1;
bool with_html = 2;
bool with_tags = 3;
ExportLimit limit = 4;
}
message ExportLimit {
oneof limit {
generic.Empty whole_collection = 1;
int64 deck_id = 2;
notes.NoteIds note_ids = 3;
cards.CardIds card_ids = 4;
}
}


@ -27,6 +27,7 @@ service NotetypesService {
rpc GetChangeNotetypeInfo(GetChangeNotetypeInfoRequest)
returns (ChangeNotetypeInfo);
rpc ChangeNotetype(ChangeNotetypeRequest) returns (collection.OpChanges);
rpc GetFieldNames(NotetypeId) returns (generic.StringList);
}
message NotetypeId {


@ -22,7 +22,10 @@ ignored-classes=
CustomStudyRequest,
Cram,
ScheduleCardsAsNewRequest,
ExportAnkiPackageRequest,
ExportLimit,
CsvColumn,
CsvMetadata,
ImportCsvRequest,
[REPORTS]
output-format=colorized


@ -33,7 +33,11 @@ OpChangesAfterUndo = collection_pb2.OpChangesAfterUndo
BrowserRow = search_pb2.BrowserRow
BrowserColumns = search_pb2.BrowserColumns
StripHtmlMode = card_rendering_pb2.StripHtmlRequest
ImportLogWithChanges = import_export_pb2.ImportAnkiPackageResponse
ImportLogWithChanges = import_export_pb2.ImportResponse
ImportCsvRequest = import_export_pb2.ImportCsvRequest
DupeResolution = ImportCsvRequest.DupeResolution
CsvMetadata = import_export_pb2.CsvMetadata
Delimiter = import_export_pb2.CsvMetadata.Delimiter
import copy
import os
@ -102,7 +106,12 @@ class NoteIdsLimit:
note_ids: Sequence[NoteId]
ExportLimit = Union[DeckIdLimit, NoteIdsLimit, None]
@dataclass
class CardIdsLimit:
card_ids: Sequence[CardId]
ExportLimit = Union[DeckIdLimit, NoteIdsLimit, CardIdsLimit, None]
class Collection(DeprecatedNamesMixin):
@ -389,19 +398,55 @@ class Collection(DeprecatedNamesMixin):
with_media: bool,
legacy_support: bool,
) -> int:
request = import_export_pb2.ExportAnkiPackageRequest(
return self._backend.export_anki_package(
out_path=out_path,
with_scheduling=with_scheduling,
with_media=with_media,
legacy=legacy_support,
limit=pb_export_limit(limit),
)
if isinstance(limit, DeckIdLimit):
request.deck_id = limit.deck_id
elif isinstance(limit, NoteIdsLimit):
request.note_ids.note_ids.extend(limit.note_ids)
else:
request.whole_collection.SetInParent()
return self._backend.export_anki_package(request)
def get_csv_metadata(self, path: str, delimiter: Delimiter.V | None) -> CsvMetadata:
request = import_export_pb2.CsvMetadataRequest(path=path, delimiter=delimiter)
return self._backend.get_csv_metadata(request)
def import_csv(self, request: ImportCsvRequest) -> ImportLogWithChanges:
log = self._backend.import_csv_raw(request.SerializeToString())
return ImportLogWithChanges.FromString(log)
def export_note_csv(
self,
*,
out_path: str,
limit: ExportLimit,
with_html: bool,
with_tags: bool,
) -> int:
return self._backend.export_note_csv(
out_path=out_path,
with_html=with_html,
with_tags=with_tags,
limit=pb_export_limit(limit),
)
def export_card_csv(
self,
*,
out_path: str,
limit: ExportLimit,
with_html: bool,
) -> int:
return self._backend.export_card_csv(
out_path=out_path,
with_html=with_html,
limit=pb_export_limit(limit),
)
def import_json_file(self, path: str) -> ImportLogWithChanges:
return self._backend.import_json_file(path)
def import_json_string(self, json: str) -> ImportLogWithChanges:
return self._backend.import_json_string(json)
# Object helpers
##########################################################################
@ -1277,3 +1322,16 @@ class _ReviewsUndo:
_UndoInfo = Union[_ReviewsUndo, LegacyCheckpoint, None]
def pb_export_limit(limit: ExportLimit) -> import_export_pb2.ExportLimit:
message = import_export_pb2.ExportLimit()
if isinstance(limit, DeckIdLimit):
message.deck_id = limit.deck_id
elif isinstance(limit, NoteIdsLimit):
message.note_ids.note_ids.extend(limit.note_ids)
elif isinstance(limit, CardIdsLimit):
message.card_ids.cids.extend(limit.card_ids)
else:
message.whole_collection.SetInParent()
return message


@ -70,6 +70,7 @@ MODEL_STD = 0
MODEL_CLOZE = 1
STARTING_FACTOR = 2500
STARTING_FACTOR_FRACTION = STARTING_FACTOR / 1000
HELP_SITE = "https://docs.ankiweb.net/"


@ -0,0 +1,119 @@
# Copyright: Ankitects Pty Ltd and contributors
# License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
"""Helpers for serializing third-party collections to a common JSON form.
"""
from __future__ import annotations
import json
from dataclasses import asdict, dataclass, field
from typing import Union
from anki.consts import STARTING_FACTOR_FRACTION
from anki.decks import DeckId
from anki.models import NotetypeId
@dataclass
class ForeignCardType:
name: str
qfmt: str
afmt: str
@staticmethod
def front_back() -> ForeignCardType:
return ForeignCardType(
"Card 1",
qfmt="{{Front}}",
afmt="{{FrontSide}}\n\n<hr id=answer>\n\n{{Back}}",
)
@staticmethod
def back_front() -> ForeignCardType:
return ForeignCardType(
"Card 2",
qfmt="{{Back}}",
afmt="{{FrontSide}}\n\n<hr id=answer>\n\n{{Front}}",
)
@staticmethod
def cloze() -> ForeignCardType:
return ForeignCardType(
"Cloze", qfmt="{{cloze:Text}}", afmt="{{cloze:Text}}<br>\n{{Back Extra}}"
)
@dataclass
class ForeignNotetype:
name: str
fields: list[str]
templates: list[ForeignCardType]
is_cloze: bool = False
@staticmethod
def basic(name: str) -> ForeignNotetype:
return ForeignNotetype(name, ["Front", "Back"], [ForeignCardType.front_back()])
@staticmethod
def basic_reverse(name: str) -> ForeignNotetype:
return ForeignNotetype(
name,
["Front", "Back"],
[ForeignCardType.front_back(), ForeignCardType.back_front()],
)
@staticmethod
def cloze(name: str) -> ForeignNotetype:
return ForeignNotetype(
name, ["Text", "Back Extra"], [ForeignCardType.cloze()], is_cloze=True
)
@dataclass
class ForeignCard:
"""Data for creating an Anki card.
Usually a review card, as the default card generation routine will take care
of missing new cards.
due -- UNIX timestamp
interval -- days
ease_factor -- decimal fraction (2.5 corresponds to default ease)
"""
# TODO: support new and learning cards?
due: int = 0
interval: int = 1
ease_factor: float = STARTING_FACTOR_FRACTION
reps: int = 0
lapses: int = 0
@dataclass
class ForeignNote:
fields: list[str] = field(default_factory=list)
tags: list[str] = field(default_factory=list)
notetype: Union[str, NotetypeId] = ""
deck: Union[str, DeckId] = ""
cards: list[ForeignCard] = field(default_factory=list)
@dataclass
class ForeignData:
notes: list[ForeignNote] = field(default_factory=list)
notetypes: list[ForeignNotetype] = field(default_factory=list)
default_deck: Union[str, DeckId] = ""
def serialize(self) -> str:
return json.dumps(self, cls=ForeignDataEncoder, separators=(",", ":"))
class ForeignDataEncoder(json.JSONEncoder):
def default(self, obj: object) -> dict:
if isinstance(
obj,
(ForeignData, ForeignNote, ForeignCard, ForeignNotetype, ForeignCardType),
):
return asdict(obj)
return json.JSONEncoder.default(self, obj)


@ -0,0 +1,252 @@
# Copyright: Ankitects Pty Ltd and contributors
# License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
"""Serializer for Mnemosyne collections.
Some notes about their structure:
https://github.com/mnemosyne-proj/mnemosyne/blob/master/mnemosyne/libmnemosyne/docs/source/index.rst
Anki | Mnemosyne
----------+-----------
Note | Fact
Card Type | Fact View
Card | Card
Notetype | Card Type
"""
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Tuple, Type
from anki.db import DB
from anki.decks import DeckId
from anki.foreign_data import (
ForeignCard,
ForeignCardType,
ForeignData,
ForeignNote,
ForeignNotetype,
)
def serialize(db_path: str, deck_id: DeckId) -> str:
db = open_mnemosyne_db(db_path)
return gather_data(db, deck_id).serialize()
def gather_data(db: DB, deck_id: DeckId) -> ForeignData:
facts = gather_facts(db)
gather_cards_into_facts(db, facts)
used_fact_views: dict[Type[MnemoFactView], bool] = {}
notes = [fact.foreign_note(used_fact_views) for fact in facts.values()]
notetypes = [fact_view.foreign_notetype() for fact_view in used_fact_views]
return ForeignData(notes, notetypes, deck_id)
def open_mnemosyne_db(db_path: str) -> DB:
db = DB(db_path)
ver = db.scalar("SELECT value FROM global_variables WHERE key='version'")
if not ver.startswith("Mnemosyne SQL 1") and ver not in ("2", "3"):
print("Mnemosyne version unknown, trying to import anyway")
return db
class MnemoFactView(ABC):
notetype: str
field_keys: Tuple[str, ...]
@classmethod
@abstractmethod
def foreign_notetype(cls) -> ForeignNotetype:
pass
class FrontOnly(MnemoFactView):
notetype = "Mnemosyne-FrontOnly"
field_keys = ("f", "b")
@classmethod
def foreign_notetype(cls) -> ForeignNotetype:
return ForeignNotetype.basic(cls.notetype)
class FrontBack(MnemoFactView):
notetype = "Mnemosyne-FrontBack"
field_keys = ("f", "b")
@classmethod
def foreign_notetype(cls) -> ForeignNotetype:
return ForeignNotetype.basic_reverse(cls.notetype)
class Vocabulary(MnemoFactView):
notetype = "Mnemosyne-Vocabulary"
field_keys = ("f", "p_1", "m_1", "n")
@classmethod
def foreign_notetype(cls) -> ForeignNotetype:
return ForeignNotetype(
cls.notetype,
["Expression", "Pronunciation", "Meaning", "Notes"],
[cls._recognition_card_type(), cls._production_card_type()],
)
@staticmethod
def _recognition_card_type() -> ForeignCardType:
return ForeignCardType(
name="Recognition",
qfmt="{{Expression}}",
afmt="{{Expression}}\n\n<hr id=answer>\n\n{{{{Pronunciation}}}}"
"<br>\n{{{{Meaning}}}}<br>\n{{{{Notes}}}}",
)
@staticmethod
def _production_card_type() -> ForeignCardType:
return ForeignCardType(
name="Production",
qfmt="{{Meaning}}",
afmt="{{Meaning}}\n\n<hr id=answer>\n\n{{{{Expression}}}}"
"<br>\n{{{{Pronunciation}}}}<br>\n{{{{Notes}}}}",
)
class Cloze(MnemoFactView):
notetype = "Mnemosyne-Cloze"
field_keys = ("text",)
@classmethod
def foreign_notetype(cls) -> ForeignNotetype:
return ForeignNotetype.cloze(cls.notetype)
@dataclass
class MnemoCard:
fact_view_id: str
tags: str
next_rep: int
last_rep: int
easiness: float
reps: int
lapses: int
def card_ord(self) -> int:
ord = self.fact_view_id.rsplit(".", maxsplit=1)[-1]
try:
return int(ord) - 1
except ValueError as err:
raise Exception(
f"Fact view id '{self.fact_view_id}' has unknown format"
) from err
def is_new(self) -> bool:
return self.last_rep == -1
def foreign_card(self) -> ForeignCard:
return ForeignCard(
ease_factor=self.easiness,
reps=self.reps,
lapses=self.lapses,
interval=self.anki_interval(),
due=self.next_rep,
)
def anki_interval(self) -> int:
return max(1, (self.next_rep - self.last_rep) // 86400)
@dataclass
class MnemoFact:
id: int
fields: dict[str, str] = field(default_factory=dict)
cards: list[MnemoCard] = field(default_factory=list)
def foreign_note(
self, used_fact_views: dict[Type[MnemoFactView], bool]
) -> ForeignNote:
fact_view = self.fact_view()
used_fact_views[fact_view] = True
return ForeignNote(
fields=self.anki_fields(fact_view),
tags=self.anki_tags(),
notetype=fact_view.notetype,
cards=self.foreign_cards(),
)
def fact_view(self) -> Type[MnemoFactView]:
try:
fact_view = self.cards[0].fact_view_id
except IndexError as err:
raise Exception(f"Fact {id} has no cards") from err
if fact_view.startswith("1.") or fact_view.startswith("1::"):
return FrontOnly
elif fact_view.startswith("2.") or fact_view.startswith("2::"):
return FrontBack
elif fact_view.startswith("3.") or fact_view.startswith("3::"):
return Vocabulary
elif fact_view.startswith("5.1"):
return Cloze
raise Exception(f"Fact {id} has unknown fact view: {fact_view}")
def anki_fields(self, fact_view: Type[MnemoFactView]) -> list[str]:
return [munge_field(self.fields.get(k, "")) for k in fact_view.field_keys]
def anki_tags(self) -> list[str]:
tags: list[str] = []
for card in self.cards:
if not card.tags:
continue
tags.extend(
t.replace(" ", "_").replace("\u3000", "_")
for t in card.tags.split(", ")
)
return tags
def foreign_cards(self) -> list[ForeignCard]:
# generate defaults for new cards
return [card.foreign_card() for card in self.cards if not card.is_new()]
def munge_field(field: str) -> str:
# \n -> br
field = re.sub("\r?\n", "<br>", field)
# latex differences
field = re.sub(r"(?i)<(/?(\$|\$\$|latex))>", "[\\1]", field)
# audio differences
field = re.sub('<audio src="(.+?)">(</audio>)?', "[sound:\\1]", field)
return field
def gather_facts(db: DB) -> dict[int, MnemoFact]:
facts: dict[int, MnemoFact] = {}
for id, key, value in db.execute(
"""
SELECT _id, key, value
FROM facts, data_for_fact
WHERE facts._id=data_for_fact._fact_id"""
):
if not (fact := facts.get(id)):
facts[id] = fact = MnemoFact(id)
fact.fields[key] = value
return facts
def gather_cards_into_facts(db: DB, facts: dict[int, MnemoFact]) -> None:
for fact_id, *row in db.execute(
"""
SELECT
_fact_id,
fact_view_id,
tags,
next_rep,
last_rep,
easiness,
acq_reps + ret_reps,
lapses
FROM cards"""
):
facts[fact_id].cards.append(MnemoCard(*row))
for fact in facts.values():
fact.cards.sort(key=lambda c: c.card_ord())


@ -66,6 +66,15 @@ alias(
],
)
alias(
name = "csv",
actual = "@raze__csv__1_1_6//:csv",
tags = [
"cargo-raze",
"manual",
],
)
alias(
name = "env_logger",
actual = "@raze__env_logger__0_9_0//:env_logger",


@ -18,6 +18,8 @@ ignored-classes=
CustomStudyRequest,
Cram,
ScheduleCardsAsNewRequest,
CsvColumn,
CsvMetadata,
[REPORTS]
output-format=colorized


@ -1 +1 @@
../../../../.prettierrc
../../../../.prettierrc


@ -7,6 +7,7 @@ _pages = [
"change-notetype",
"card-info",
"fields",
"import-csv",
]
[copy_files_into_group(


@ -49,7 +49,12 @@ class ExportDialog(QDialog):
self.open()
def setup(self, did: DeckId | None) -> None:
self.exporters: list[Type[Exporter]] = [ApkgExporter, ColpkgExporter]
self.exporters: list[Type[Exporter]] = [
ApkgExporter,
ColpkgExporter,
NoteCsvExporter,
CardCsvExporter,
]
self.frm.format.insertItems(
0, [f"{e.name()} (.{e.extension})" for e in self.exporters]
)
@ -72,6 +77,7 @@ class ExportDialog(QDialog):
# save button
b = QPushButton(tr.exporting_export())
self.frm.buttonBox.addButton(b, QDialogButtonBox.ButtonRole.AcceptRole)
self.frm.includeHTML.setChecked(True)
# set default option if accessed through deck button
if did:
name = self.mw.col.decks.get(did)["name"]
@ -102,7 +108,7 @@ class ExportDialog(QDialog):
title=tr.actions_export(),
dir_description="export",
key=self.exporter.name(),
ext=self.exporter.extension,
ext="." + self.exporter.extension,
fname=filename,
)
if not path:
@ -244,6 +250,56 @@ class ApkgExporter(Exporter):
).with_backend_progress(export_progress_update).run_in_background()
class NoteCsvExporter(Exporter):
extension = "txt"
show_deck_list = True
show_include_html = True
show_include_tags = True
@staticmethod
def name() -> str:
return tr.exporting_notes_in_plain_text()
@staticmethod
def export(mw: aqt.main.AnkiQt, options: Options) -> None:
QueryOp(
parent=mw,
op=lambda col: col.export_note_csv(
out_path=options.out_path,
limit=options.limit,
with_html=options.include_html,
with_tags=options.include_tags,
),
success=lambda count: tooltip(
tr.exporting_note_exported(count=count), parent=mw
),
).with_backend_progress(export_progress_update).run_in_background()
class CardCsvExporter(Exporter):
extension = "txt"
show_deck_list = True
show_include_html = True
@staticmethod
def name() -> str:
return tr.exporting_cards_in_plain_text()
@staticmethod
def export(mw: aqt.main.AnkiQt, options: Options) -> None:
QueryOp(
parent=mw,
op=lambda col: col.export_card_csv(
out_path=options.out_path,
limit=options.limit,
with_html=options.include_html,
),
success=lambda count: tooltip(
tr.exporting_card_exported(count=count), parent=mw
),
).with_backend_progress(export_progress_update).run_in_background()
def export_progress_update(progress: Progress, update: ProgressUpdate) -> None:
if not progress.HasField("exporting"):
return


@ -0,0 +1,62 @@
# Copyright: Ankitects Pty Ltd and contributors
# License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
from __future__ import annotations
import aqt
import aqt.deckconf
import aqt.main
import aqt.operations
from anki.collection import ImportCsvRequest
from aqt.qt import *
from aqt.utils import addCloseShortcut, disable_help_button, restoreGeom, saveGeom, tr
from aqt.webview import AnkiWebView
class ImportCsvDialog(QDialog):
TITLE = "csv import"
silentlyClose = True
def __init__(
self,
mw: aqt.main.AnkiQt,
path: str,
on_accepted: Callable[[ImportCsvRequest], None],
) -> None:
QDialog.__init__(self, mw)
self.mw = mw
self._on_accepted = on_accepted
self._setup_ui(path)
self.show()
def _setup_ui(self, path: str) -> None:
self.setWindowModality(Qt.WindowModality.ApplicationModal)
self.mw.garbage_collect_on_dialog_finish(self)
self.setMinimumSize(400, 300)
disable_help_button(self)
restoreGeom(self, self.TITLE)
addCloseShortcut(self)
self.web = AnkiWebView(title=self.TITLE)
self.web.setVisible(False)
self.web.load_ts_page("import-csv")
layout = QVBoxLayout()
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(self.web)
self.setLayout(layout)
self.web.eval(f"anki.setupImportCsvPage('{path}');")
self.setWindowTitle(tr.decks_import_file())
def reject(self) -> None:
self.web.cleanup()
self.web = None
saveGeom(self, self.TITLE)
QDialog.reject(self)
def do_import(self, data: bytes) -> None:
request = ImportCsvRequest()
request.ParseFromString(data)
self._on_accepted(request)
super().reject()


@ -3,33 +3,155 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass
from itertools import chain
from typing import Any, Tuple, Type
import aqt.main
from anki.collection import Collection, ImportLogWithChanges, Progress
from anki.collection import (
Collection,
DupeResolution,
ImportCsvRequest,
ImportLogWithChanges,
Progress,
)
from anki.errors import Interrupted
from anki.foreign_data import mnemosyne
from anki.lang import without_unicode_isolation
from aqt.import_export.import_csv_dialog import ImportCsvDialog
from aqt.operations import CollectionOp, QueryOp
from aqt.progress import ProgressUpdate
from aqt.qt import *
from aqt.utils import askUser, getFile, showInfo, showText, showWarning, tooltip, tr
from aqt.utils import askUser, getFile, showText, showWarning, tooltip, tr
class Importer(ABC):
accepted_file_endings: list[str]
@classmethod
def can_import(cls, lowercase_filename: str) -> bool:
return any(
lowercase_filename.endswith(ending) for ending in cls.accepted_file_endings
)
@classmethod
@abstractmethod
def do_import(cls, mw: aqt.main.AnkiQt, path: str) -> None:
...
class ColpkgImporter(Importer):
accepted_file_endings = [".apkg", ".colpkg"]
@staticmethod
def can_import(filename: str) -> bool:
return (
filename == "collection.apkg"
or (filename.startswith("backup-") and filename.endswith(".apkg"))
or filename.endswith(".colpkg")
)
@staticmethod
def do_import(mw: aqt.main.AnkiQt, path: str) -> None:
if askUser(
tr.importing_this_will_delete_your_existing_collection(),
msgfunc=QMessageBox.warning,
defaultno=True,
):
ColpkgImporter._import(mw, path)
@staticmethod
def _import(mw: aqt.main.AnkiQt, file: str) -> None:
def on_success() -> None:
mw.loadCollection()
tooltip(tr.importing_importing_complete())
def on_failure(err: Exception) -> None:
mw.loadCollection()
if not isinstance(err, Interrupted):
showWarning(str(err))
QueryOp(
parent=mw,
op=lambda _: mw.create_backup_now(),
success=lambda _: mw.unloadCollection(
lambda: import_collection_package_op(mw, file, on_success)
.failure(on_failure)
.run_in_background()
),
).with_progress().run_in_background()
class ApkgImporter(Importer):
accepted_file_endings = [".apkg", ".zip"]
@staticmethod
def do_import(mw: aqt.main.AnkiQt, path: str) -> None:
CollectionOp(
parent=mw,
op=lambda col: col.import_anki_package(path),
).with_backend_progress(import_progress_update).success(
show_import_log
).run_in_background()
class MnemosyneImporter(Importer):
accepted_file_endings = [".db"]
@staticmethod
def do_import(mw: aqt.main.AnkiQt, path: str) -> None:
QueryOp(
parent=mw,
op=lambda col: mnemosyne.serialize(path, col.decks.current()["id"]),
success=lambda json: import_json_string(mw, json),
).with_progress().run_in_background()
class CsvImporter(Importer):
accepted_file_endings = [".csv", ".tsv", ".txt"]
@staticmethod
def do_import(mw: aqt.main.AnkiQt, path: str) -> None:
def on_accepted(request: ImportCsvRequest) -> None:
CollectionOp(
parent=mw,
op=lambda col: col.import_csv(request),
).with_backend_progress(import_progress_update).success(
show_import_log
).run_in_background()
ImportCsvDialog(mw, path, on_accepted)
class JsonImporter(Importer):
accepted_file_endings = [".anki-json"]
@staticmethod
def do_import(mw: aqt.main.AnkiQt, path: str) -> None:
CollectionOp(
parent=mw,
op=lambda col: col.import_json_file(path),
).with_backend_progress(import_progress_update).success(
show_import_log
).run_in_background()
IMPORTERS: list[Type[Importer]] = [
ColpkgImporter,
ApkgImporter,
MnemosyneImporter,
CsvImporter,
]
def import_file(mw: aqt.main.AnkiQt, path: str) -> None:
filename = os.path.basename(path).lower()
if filename.endswith(".anki"):
showInfo(tr.importing_anki_files_are_from_a_very())
elif filename.endswith(".anki2"):
showInfo(tr.importing_anki2_files_are_not_directly_importable())
elif is_collection_package(filename):
maybe_import_collection_package(mw, path)
elif filename.endswith(".apkg") or filename.endswith(".zip"):
import_anki_package(mw, path)
else:
showWarning(
tr.importing_unable_to_import_filename(filename=filename),
parent=mw,
textFormat="plain",
)
for importer in IMPORTERS:
if importer.can_import(filename):
importer.do_import(mw, path)
return
showWarning("Unsupported file type.")
def prompt_for_file_then_import(mw: aqt.main.AnkiQt) -> None:
@ -38,53 +160,20 @@ def prompt_for_file_then_import(mw: aqt.main.AnkiQt) -> None:
def get_file_path(mw: aqt.main.AnkiQt) -> str | None:
if file := getFile(
mw,
tr.actions_import(),
None,
key="import",
filter=tr.importing_packaged_anki_deckcollection_apkg_colpkg_zip(),
):
filter = without_unicode_isolation(
tr.importing_all_supported_formats(
val="({})".format(
" ".join(f"*{ending}" for ending in all_accepted_file_endings())
)
)
)
if file := getFile(mw, tr.actions_import(), None, key="import", filter=filter):
return str(file)
return None
def is_collection_package(filename: str) -> bool:
return (
filename == "collection.apkg"
or (filename.startswith("backup-") and filename.endswith(".apkg"))
or filename.endswith(".colpkg")
)
def maybe_import_collection_package(mw: aqt.main.AnkiQt, path: str) -> None:
if askUser(
tr.importing_this_will_delete_your_existing_collection(),
msgfunc=QMessageBox.warning,
defaultno=True,
):
import_collection_package(mw, path)
def import_collection_package(mw: aqt.main.AnkiQt, file: str) -> None:
def on_success() -> None:
mw.loadCollection()
tooltip(tr.importing_importing_complete())
def on_failure(err: Exception) -> None:
mw.loadCollection()
if not isinstance(err, Interrupted):
showWarning(str(err))
QueryOp(
parent=mw,
op=lambda _: mw.create_backup_now(),
success=lambda _: mw.unloadCollection(
lambda: import_collection_package_op(mw, file, on_success)
.failure(on_failure)
.run_in_background()
),
).with_progress().run_in_background()
def all_accepted_file_endings() -> set[str]:
return set(chain(*(importer.accepted_file_endings for importer in IMPORTERS)))
def import_collection_package_op(
@ -106,10 +195,9 @@ def import_collection_package_op(
)
def import_anki_package(mw: aqt.main.AnkiQt, path: str) -> None:
def import_json_string(mw: aqt.main.AnkiQt, json: str) -> None:
CollectionOp(
parent=mw,
op=lambda col: col.import_anki_package(path),
parent=mw, op=lambda col: col.import_json_string(json)
).with_backend_progress(import_progress_update).success(
show_import_log
).run_in_background()
@ -120,29 +208,22 @@ def show_import_log(log_with_changes: ImportLogWithChanges) -> None:
def stringify_log(log: ImportLogWithChanges.Log) -> str:
total = len(log.conflicting) + len(log.updated) + len(log.new) + len(log.duplicate)
queues = log_queues(log)
return "\n".join(
chain(
(tr.importing_notes_found_in_file(val=total),),
(tr.importing_notes_found_in_file(val=log.found_notes),),
(
template_string(val=len(row))
for (row, template_string) in (
(log.conflicting, tr.importing_notes_that_could_not_be_imported),
(log.updated, tr.importing_notes_updated_as_file_had_newer),
(log.new, tr.importing_notes_added_from_file),
(log.duplicate, tr.importing_notes_skipped_as_theyre_already_in),
)
if row
queue.summary_template(val=len(queue.notes))
for queue in queues
if queue.notes
),
("",),
*(
[f"[{action}] {', '.join(note.fields)}" for note in rows]
for (rows, action) in (
(log.conflicting, tr.importing_skipped()),
(log.updated, tr.importing_updated()),
(log.new, tr.adding_added()),
(log.duplicate, tr.importing_identical()),
)
[
f"[{queue.action_string}] {', '.join(note.fields)}"
for note in queue.notes
]
for queue in queues
),
)
)
@ -154,3 +235,61 @@ def import_progress_update(progress: Progress, update: ProgressUpdate) -> None:
update.label = progress.importing
if update.user_wants_abort:
update.abort = True
@dataclass
class LogQueue:
notes: Any
# Callable[[Union[str, int, float]], str] (if mypy understood kwargs)
summary_template: Any
action_string: str
def first_field_queue(log: ImportLogWithChanges.Log) -> LogQueue:
if log.dupe_resolution == DupeResolution.ADD:
summary_template = tr.importing_added_duplicate_with_first_field
action_string = tr.adding_added()
elif log.dupe_resolution == DupeResolution.IGNORE:
summary_template = tr.importing_first_field_matched
action_string = tr.importing_skipped()
else:
summary_template = tr.importing_first_field_matched
action_string = tr.importing_updated()
return LogQueue(log.first_field_match, summary_template, action_string)
def log_queues(log: ImportLogWithChanges.Log) -> Tuple[LogQueue, ...]:
return (
LogQueue(
log.conflicting,
tr.importing_notes_that_could_not_be_imported,
tr.importing_skipped(),
),
LogQueue(
log.updated,
tr.importing_notes_updated_as_file_had_newer,
tr.importing_updated(),
),
LogQueue(log.new, tr.importing_notes_added_from_file, tr.adding_added()),
LogQueue(
log.duplicate,
tr.importing_notes_skipped_as_theyre_already_in,
tr.importing_identical(),
),
first_field_queue(log),
LogQueue(
log.missing_notetype,
lambda val: f"Notes skipped, as their notetype was missing: {val}",
tr.importing_skipped(),
),
LogQueue(
log.missing_deck,
lambda val: f"Notes skipped, as their deck was missing: {val}",
tr.importing_skipped(),
),
LogQueue(
log.empty_first_field,
tr.importing_empty_first_field,
tr.importing_skipped(),
),
)


@ -13,12 +13,11 @@ import aqt.forms
import aqt.modelchooser
from anki.importing.anki2 import MediaMapInvalid, V2ImportIntoV1
from anki.importing.apkg import AnkiPackageImporter
from aqt.import_export.importing import import_collection_package
from aqt.import_export.importing import ColpkgImporter
from aqt.main import AnkiQt, gui_hooks
from aqt.qt import *
from aqt.utils import (
HelpPage,
askUser,
disable_help_button,
getFile,
getText,
@ -437,11 +436,5 @@ def setupApkgImport(mw: AnkiQt, importer: AnkiPackageImporter) -> bool:
if not full:
# adding
return True
if askUser(
tr.importing_this_will_delete_your_existing_collection(),
msgfunc=QMessageBox.warning,
defaultno=True,
):
import_collection_package(mw, importer.file)
ColpkgImporter.do_import(mw, importer.file)
return False


@ -31,6 +31,7 @@ from anki.scheduler.v3 import NextStates
from anki.utils import dev_mode
from aqt.changenotetype import ChangeNotetypeDialog
from aqt.deckoptions import DeckOptionsDialog
from aqt.import_export.import_csv_dialog import ImportCsvDialog
from aqt.operations.deck import update_deck_configs as update_deck_configs_op
from aqt.qt import *
@ -438,6 +439,18 @@ def change_notetype() -> bytes:
return b""
def import_csv() -> bytes:
data = request.data
def handle_on_main() -> None:
window = aqt.mw.app.activeWindow()
if isinstance(window, ImportCsvDialog):
window.do_import(data)
aqt.mw.taskman.run_on_main(handle_on_main)
return b""
post_handler_list = [
congrats_info,
get_deck_configs_for_update,
@ -445,13 +458,19 @@ post_handler_list = [
next_card_states,
set_next_card_states,
change_notetype,
import_csv,
]
exposed_backend_list = [
# DeckService
"get_deck_names",
# I18nService
"i18n_resources",
# ImportExportService
"get_csv_metadata",
# NotesService
"get_field_names",
"get_note",
# NotetypesService
"get_notetype_names",


@ -76,6 +76,7 @@ rust_library(
"//rslib/cargo:bytes",
"//rslib/cargo:chrono",
"//rslib/cargo:coarsetime",
"//rslib/cargo:csv",
"//rslib/cargo:flate2",
"//rslib/cargo:fluent",
"//rslib/cargo:fnv",


@ -100,3 +100,4 @@ unic-ucd-category = "0.9.0"
id_tree = "1.8.0"
zstd = { version="0.10.0", features=["zstdmt"] }
num_cpus = "1.13.1"
csv = "1.1.6"


@ -106,10 +106,15 @@ pub fn write_backend_proto_rs() {
"#[derive(strum::EnumIter)]",
)
.type_attribute("HelpPageLinkRequest.HelpPage", "#[derive(strum::EnumIter)]")
.type_attribute("CsvMetadata.Delimiter", "#[derive(strum::EnumIter)]")
.type_attribute(
"Preferences.BackupLimits",
"#[derive(Copy, serde_derive::Deserialize, serde_derive::Serialize)]",
)
.type_attribute(
"ImportCsvRequest.DupeResolution",
"#[derive(serde_derive::Deserialize, serde_derive::Serialize)]",
)
.compile_protos(paths.as_slice(), &[proto_dir])
.unwrap();
}


@ -66,6 +66,15 @@ alias(
],
)
alias(
name = "csv",
actual = "@raze__csv__1_1_6//:csv",
tags = [
"cargo-raze",
"manual",
],
)
alias(
name = "env_logger",
actual = "@raze__env_logger__0_9_0//:env_logger",


@ -66,6 +66,15 @@ alias(
],
)
alias(
name = "csv",
actual = "@raze__csv__1_1_6//:csv",
tags = [
"cargo-raze",
"manual",
],
)
alias(
name = "env_logger",
actual = "@raze__env_logger__0_9_0//:env_logger",


@ -66,6 +66,15 @@ alias(
],
)
alias(
name = "csv",
actual = "@raze__csv__1_1_6//:csv",
tags = [
"cargo-raze",
"manual",
],
)
alias(
name = "env_logger",
actual = "@raze__env_logger__0_9_0//:env_logger",


@ -66,6 +66,15 @@ alias(
],
)
alias(
name = "csv",
actual = "@raze__csv__1_1_6//:csv",
tags = [
"cargo-raze",
"manual",
],
)
alias(
name = "env_logger",
actual = "@raze__env_logger__0_9_0//:env_logger",


@ -6,11 +6,8 @@ use std::path::Path;
use super::{progress::Progress, Backend};
pub(super) use crate::backend_proto::importexport_service::Service as ImportExportService;
use crate::{
backend_proto::{self as pb, export_anki_package_request::Selector},
import_export::{
package::{import_colpkg, NoteLog},
ExportProgress, ImportProgress,
},
backend_proto::{self as pb, export_limit, ExportLimit},
import_export::{package::import_colpkg, ExportProgress, ImportProgress, NoteLog},
prelude::*,
search::SearchNode,
};
@ -55,19 +52,16 @@ impl ImportExportService for Backend {
fn import_anki_package(
&self,
input: pb::ImportAnkiPackageRequest,
) -> Result<pb::ImportAnkiPackageResponse> {
) -> Result<pb::ImportResponse> {
self.with_col(|col| col.import_apkg(&input.package_path, self.import_progress_fn()))
.map(Into::into)
}
fn export_anki_package(&self, input: pb::ExportAnkiPackageRequest) -> Result<pb::UInt32> {
let selector = input
.selector
.ok_or_else(|| AnkiError::invalid_input("missing oneof"))?;
self.with_col(|col| {
col.export_apkg(
&input.out_path,
SearchNode::from_selector(selector),
SearchNode::from(input.limit.unwrap_or_default()),
input.with_scheduling,
input.with_media,
input.legacy,
@ -77,15 +71,60 @@ impl ImportExportService for Backend {
})
.map(Into::into)
}
}
impl SearchNode {
fn from_selector(selector: Selector) -> Self {
match selector {
Selector::WholeCollection(_) => Self::WholeCollection,
Selector::DeckId(did) => Self::from_deck_id(did, true),
Selector::NoteIds(nids) => Self::from_note_ids(nids.note_ids),
}
fn get_csv_metadata(&self, input: pb::CsvMetadataRequest) -> Result<pb::CsvMetadata> {
let delimiter = input.delimiter.is_some().then(|| input.delimiter());
self.with_col(|col| {
col.get_csv_metadata(&input.path, delimiter, input.notetype_id.map(Into::into))
})
}
fn import_csv(&self, input: pb::ImportCsvRequest) -> Result<pb::ImportResponse> {
self.with_col(|col| {
let dupe_resolution = input.dupe_resolution();
col.import_csv(
&input.path,
input.metadata.unwrap_or_default(),
dupe_resolution,
self.import_progress_fn(),
)
})
.map(Into::into)
}
fn export_note_csv(&self, input: pb::ExportNoteCsvRequest) -> Result<pb::UInt32> {
self.with_col(|col| {
col.export_note_csv(
&input.out_path,
SearchNode::from(input.limit.unwrap_or_default()),
input.with_html,
input.with_tags,
self.export_progress_fn(),
)
})
.map(Into::into)
}
fn export_card_csv(&self, input: pb::ExportCardCsvRequest) -> Result<pb::UInt32> {
self.with_col(|col| {
col.export_card_csv(
&input.out_path,
SearchNode::from(input.limit.unwrap_or_default()),
input.with_html,
self.export_progress_fn(),
)
})
.map(Into::into)
}
fn import_json_file(&self, input: pb::String) -> Result<pb::ImportResponse> {
self.with_col(|col| col.import_json_file(&input.val, self.import_progress_fn()))
.map(Into::into)
}
fn import_json_string(&self, input: pb::String) -> Result<pb::ImportResponse> {
self.with_col(|col| col.import_json_string(&input.val, self.import_progress_fn()))
.map(Into::into)
}
}
@ -101,7 +140,7 @@ impl Backend {
}
}
impl From<OpOutput<NoteLog>> for pb::ImportAnkiPackageResponse {
impl From<OpOutput<NoteLog>> for pb::ImportResponse {
fn from(output: OpOutput<NoteLog>) -> Self {
Self {
changes: Some(output.changes.into()),
@ -109,3 +148,18 @@ impl From<OpOutput<NoteLog>> for pb::ImportAnkiPackageResponse {
}
}
}
impl From<ExportLimit> for SearchNode {
fn from(export_limit: ExportLimit) -> Self {
use export_limit::Limit;
let limit = export_limit
.limit
.unwrap_or(Limit::WholeCollection(pb::Empty {}));
match limit {
Limit::WholeCollection(_) => Self::WholeCollection,
Limit::DeckId(did) => Self::from_deck_id(did, true),
Limit::NoteIds(nids) => Self::from_note_ids(nids.note_ids),
Limit::CardIds(cids) => Self::from_card_ids(cids.cids),
}
}
}
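
(Reviewer note, not part of the diff: a minimal usage sketch of the new ExportLimit → SearchNode conversion above, assuming the generated proto struct carries only the `limit` oneof shown here.)

// Hypothetical usage: restrict an export to deck 1.
let limit = ExportLimit {
    limit: Some(export_limit::Limit::DeckId(1)),
};
let node: SearchNode = limit.into(); // same as SearchNode::from_deck_id(1, true)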

View File

@ -168,9 +168,15 @@ impl NotetypesService for Backend {
.map(Into::into)
})
}
fn change_notetype(&self, input: pb::ChangeNotetypeRequest) -> Result<pb::OpChanges> {
self.with_col(|col| col.change_notetype_of_notes(input.into()).map(Into::into))
}
fn get_field_names(&self, input: pb::NotetypeId) -> Result<pb::StringList> {
self.with_col(|col| col.storage.get_field_names(input.into()))
.map(Into::into)
}
}
impl From<pb::Notetype> for Notetype {

View File

@ -122,6 +122,7 @@ pub(super) fn progress_to_proto(progress: Option<Progress>, tr: &I18n) -> pb::Pr
ExportProgress::File => tr.exporting_exporting_file(),
ExportProgress::Media(n) => tr.exporting_processed_media_files(n),
ExportProgress::Notes(n) => tr.importing_processed_notes(n),
ExportProgress::Cards(n) => tr.importing_processed_cards(n),
ExportProgress::Gathering => tr.importing_gathering(),
}
.into(),

View File

@ -388,7 +388,7 @@ impl RowContext {
fn note_field_str(&self) -> String {
let index = self.notetype.config.sort_field_idx as usize;
html_to_text_line(&self.note.fields()[index]).into()
html_to_text_line(&self.note.fields()[index], true).into()
}
fn get_is_rtl(&self, column: Column) -> bool {
@ -426,6 +426,7 @@ impl RowContext {
} else {
&answer
},
true,
)
.to_string()
}
@ -545,7 +546,7 @@ impl RowContext {
}
fn question_str(&self) -> String {
html_to_text_line(&self.render_context.as_ref().unwrap().question).to_string()
html_to_text_line(&self.render_context.as_ref().unwrap().question, true).to_string()
}
fn get_row_font_name(&self) -> Result<String> {

View File

@ -262,7 +262,11 @@ impl Collection {
// write note, updating tags and generating missing cards
let ctx = genctx.get_or_insert_with(|| {
CardGenContext::new(&nt, self.get_last_deck_added_to_for_notetype(nt.id), usn)
CardGenContext::new(
nt.as_ref(),
self.get_last_deck_added_to_for_notetype(nt.id),
usn,
)
});
self.update_note_inner_generating_cards(
ctx, &mut note, &original, false, norm, true,

View File

@ -187,6 +187,12 @@ impl From<regex::Error> for AnkiError {
}
}
impl From<csv::Error> for AnkiError {
fn from(err: csv::Error) -> Self {
AnkiError::InvalidInput(err.to_string())
}
}
#[derive(Debug, PartialEq)]
pub struct CardTypeError {
pub notetype: String,
@ -209,6 +215,7 @@ pub enum ImportError {
Corrupt,
TooNew,
MediaImportFailed(String),
NoFieldColumn,
}
impl ImportError {
@ -217,6 +224,7 @@ impl ImportError {
ImportError::Corrupt => tr.importing_the_provided_file_is_not_a(),
ImportError::TooNew => tr.errors_collection_too_new(),
ImportError::MediaImportFailed(err) => tr.importing_failed_to_import_media_file(err),
ImportError::NoFieldColumn => tr.importing_file_must_contain_field_column(),
}
.into()
}

View File

@ -4,10 +4,18 @@
mod gather;
mod insert;
pub mod package;
pub mod text;
use std::marker::PhantomData;
use crate::prelude::*;
pub use crate::backend_proto::import_response::{Log as NoteLog, Note as LogNote};
use crate::{
prelude::*,
text::{
newlines_to_spaces, strip_html_preserving_media_filenames, truncate_to_char_boundary,
CowMapping,
},
};
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ImportProgress {
@ -24,6 +32,7 @@ pub enum ExportProgress {
File,
Gathering,
Notes(usize),
Cards(usize),
Media(usize),
}
@ -94,4 +103,28 @@ impl<'f, F: 'f + FnMut(usize) -> Result<()>> Incrementor<'f, F> {
}
(self.update_fn)(self.count)
}
pub(crate) fn count(&self) -> usize {
self.count
}
}
impl Note {
pub(crate) fn into_log_note(self) -> LogNote {
LogNote {
id: Some(self.id.into()),
fields: self
.into_fields()
.into_iter()
.map(|field| {
let mut reduced = strip_html_preserving_media_filenames(&field)
.map_cow(newlines_to_spaces)
.get_owned()
.unwrap_or(field);
truncate_to_char_boundary(&mut reduced, 80);
reduced
})
.collect(),
}
}
}

View File

@ -17,9 +17,7 @@ use zstd::stream::copy_decode;
use crate::{
collection::CollectionBuilder,
import_export::{
gather::ExchangeData,
package::{Meta, NoteLog},
ImportProgress, IncrementableProgress,
gather::ExchangeData, package::Meta, ImportProgress, IncrementableProgress, NoteLog,
},
prelude::*,
search::SearchNode,

View File

@ -13,14 +13,10 @@ use sha1::Sha1;
use super::{media::MediaUseMap, Context};
use crate::{
import_export::{
package::{media::safe_normalized_file_name, LogNote, NoteLog},
ImportProgress, IncrementableProgress,
package::media::safe_normalized_file_name, ImportProgress, IncrementableProgress, NoteLog,
},
prelude::*,
text::{
newlines_to_spaces, replace_media_refs, strip_html_preserving_media_filenames,
truncate_to_char_boundary, CowMapping,
},
text::replace_media_refs,
};
struct NoteContext<'a> {
@ -65,26 +61,6 @@ impl NoteImports {
}
}
impl Note {
fn into_log_note(self) -> LogNote {
LogNote {
id: Some(self.id.into()),
fields: self
.into_fields()
.into_iter()
.map(|field| {
let mut reduced = strip_html_preserving_media_filenames(&field)
.map_cow(newlines_to_spaces)
.get_owned()
.unwrap_or(field);
truncate_to_char_boundary(&mut reduced, 80);
reduced
})
.collect(),
}
}
}
#[derive(Debug, Clone, Copy)]
pub(crate) struct NoteMeta {
id: NoteId,

View File

@ -11,5 +11,4 @@ pub(crate) use colpkg::export::export_colpkg_from_data;
pub use colpkg::import::import_colpkg;
pub(self) use meta::{Meta, Version};
pub use crate::backend_proto::import_anki_package_response::{Log as NoteLog, Note as LogNote};
pub(self) use crate::backend_proto::{media_entries::MediaEntry, MediaEntries};

View File

@ -0,0 +1,159 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use std::{borrow::Cow, fs::File, io::Write};
use itertools::Itertools;
use lazy_static::lazy_static;
use regex::Regex;
use super::metadata::Delimiter;
use crate::{
import_export::{ExportProgress, IncrementableProgress},
notetype::RenderCardOutput,
prelude::*,
search::SortMode,
template::RenderedNode,
text::{html_to_text_line, CowMapping},
};
const DELIMITER: Delimiter = Delimiter::Tab;
impl Collection {
pub fn export_card_csv(
&mut self,
path: &str,
search: impl TryIntoSearch,
with_html: bool,
progress_fn: impl 'static + FnMut(ExportProgress, bool) -> bool,
) -> Result<usize> {
let mut progress = IncrementableProgress::new(progress_fn);
progress.call(ExportProgress::File)?;
let mut incrementor = progress.incrementor(ExportProgress::Cards);
let mut writer = file_writer_with_header(path)?;
let mut cards = self.search_cards(search, SortMode::NoOrder)?;
cards.sort_unstable();
for &card in &cards {
incrementor.increment()?;
writer.write_record(self.card_record(card, with_html)?)?;
}
writer.flush()?;
Ok(cards.len())
}
pub fn export_note_csv(
&mut self,
path: &str,
search: impl TryIntoSearch,
with_html: bool,
with_tags: bool,
progress_fn: impl 'static + FnMut(ExportProgress, bool) -> bool,
) -> Result<usize> {
let mut progress = IncrementableProgress::new(progress_fn);
progress.call(ExportProgress::File)?;
let mut incrementor = progress.incrementor(ExportProgress::Notes);
let mut writer = file_writer_with_header(path)?;
self.search_notes_into_table(search)?;
self.storage.for_each_note_in_search(|note| {
incrementor.increment()?;
writer.write_record(note_record(&note, with_html, with_tags))?;
Ok(())
})?;
writer.flush()?;
self.storage.clear_searched_notes_table()?;
Ok(incrementor.count())
}
fn card_record(&mut self, card: CardId, with_html: bool) -> Result<[String; 2]> {
let RenderCardOutput { qnodes, anodes, .. } = self.render_existing_card(card, false)?;
Ok([
rendered_nodes_to_record_field(&qnodes, with_html, false),
rendered_nodes_to_record_field(&anodes, with_html, true),
])
}
}
fn file_writer_with_header(path: &str) -> Result<csv::Writer<File>> {
let mut file = File::create(path)?;
write_header(&mut file)?;
Ok(csv::WriterBuilder::new()
.delimiter(DELIMITER.byte())
.flexible(true)
.from_writer(file))
}
fn write_header(writer: &mut impl Write) -> Result<()> {
write!(writer, "#separator:{}\n#html:true\n", DELIMITER.name())?;
Ok(())
}
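
(Reviewer note: with DELIMITER fixed to Tab above, the header consists of exactly two metadata lines, which get_csv_metadata() can parse back on import. A minimal sketch:)

let mut buf = Vec::new();
write_header(&mut buf).unwrap();
assert_eq!(
    String::from_utf8(buf).unwrap(),
    "#separator:tab\n#html:true\n"
);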
fn rendered_nodes_to_record_field(
nodes: &[RenderedNode],
with_html: bool,
answer_side: bool,
) -> String {
let text = rendered_nodes_to_str(nodes);
let mut text = strip_redundant_sections(&text);
if answer_side {
text = text.map_cow(strip_answer_side_question);
}
if !with_html {
text = text.map_cow(|t| html_to_text_line(t, false));
}
text.into()
}
fn rendered_nodes_to_str(nodes: &[RenderedNode]) -> String {
nodes
.iter()
.map(|node| match node {
RenderedNode::Text { text } => text,
RenderedNode::Replacement { current_text, .. } => current_text,
})
.join("")
}
fn note_record(note: &Note, with_html: bool, with_tags: bool) -> Vec<String> {
let mut fields: Vec<_> = note
.fields()
.iter()
.map(|f| field_to_record_field(f, with_html))
.collect();
if with_tags {
fields.push(note.tags.join(" "));
}
fields
}
fn field_to_record_field(field: &str, with_html: bool) -> String {
let mut text = strip_redundant_sections(field);
if !with_html {
text = text.map_cow(|t| html_to_text_line(t, false));
}
text.into()
}
fn strip_redundant_sections(text: &str) -> Cow<str> {
lazy_static! {
static ref RE: Regex = Regex::new(
r"(?isx)
<style>.*?</style> # style elements
|
\[\[type:[^]]+\]\] # type replacements
"
)
.unwrap();
}
RE.replace_all(text.as_ref(), "")
}
fn strip_answer_side_question(text: &str) -> Cow<str> {
lazy_static! {
static ref RE: Regex = Regex::new(r"(?is)^.*<hr id=answer>\n*").unwrap();
}
RE.replace_all(text.as_ref(), "")
}
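
(Reviewer note: a sketch of the answer-side stripping above; everything up to and including the `<hr id=answer>` divider is dropped, so only the back of the card remains.)

assert_eq!(
    strip_answer_side_question("front\n<hr id=answer>\nback"),
    "back"
);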

View File

@ -0,0 +1,354 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use std::{
fs::File,
io::{BufRead, BufReader, Read, Seek, SeekFrom},
};
use crate::{
import_export::{
text::{
csv::metadata::{CsvDeck, CsvMetadata, CsvNotetype, Delimiter},
DupeResolution, ForeignData, ForeignNote, NameOrId,
},
ImportProgress, NoteLog,
},
prelude::*,
};
impl Collection {
pub fn import_csv(
&mut self,
path: &str,
metadata: CsvMetadata,
dupe_resolution: DupeResolution,
progress_fn: impl 'static + FnMut(ImportProgress, bool) -> bool,
) -> Result<OpOutput<NoteLog>> {
let file = File::open(path)?;
let default_deck = metadata.deck()?.name_or_id();
let default_notetype = metadata.notetype()?.name_or_id();
let mut ctx = ColumnContext::new(&metadata)?;
let notes = ctx.deserialize_csv(file, metadata.delimiter())?;
ForeignData {
dupe_resolution,
default_deck,
default_notetype,
notes,
global_tags: metadata.global_tags,
updated_tags: metadata.updated_tags,
..Default::default()
}
.import(self, progress_fn)
}
}
impl CsvMetadata {
fn deck(&self) -> Result<&CsvDeck> {
self.deck
.as_ref()
.ok_or_else(|| AnkiError::invalid_input("deck oneof not set"))
}
fn notetype(&self) -> Result<&CsvNotetype> {
self.notetype
.as_ref()
.ok_or_else(|| AnkiError::invalid_input("notetype oneof not set"))
}
fn field_source_columns(&self) -> Result<Vec<Option<usize>>> {
Ok(match self.notetype()? {
CsvNotetype::GlobalNotetype(global) => global
.field_columns
.iter()
.map(|&i| (i > 0).then(|| i as usize))
.collect(),
CsvNotetype::NotetypeColumn(_) => {
let meta_columns = self.meta_columns();
(1..self.column_labels.len() + 1)
.filter(|idx| !meta_columns.contains(idx))
.map(Some)
.collect()
}
})
}
}
impl CsvDeck {
fn name_or_id(&self) -> NameOrId {
match self {
Self::DeckId(did) => NameOrId::Id(*did),
Self::DeckColumn(_) => NameOrId::default(),
}
}
fn column(&self) -> Option<usize> {
match self {
Self::DeckId(_) => None,
Self::DeckColumn(column) => Some(*column as usize),
}
}
}
impl CsvNotetype {
fn name_or_id(&self) -> NameOrId {
match self {
Self::GlobalNotetype(nt) => NameOrId::Id(nt.id),
Self::NotetypeColumn(_) => NameOrId::default(),
}
}
fn column(&self) -> Option<usize> {
match self {
Self::GlobalNotetype(_) => None,
Self::NotetypeColumn(column) => Some(*column as usize),
}
}
}
/// Column indices for the fields of a notetype.
type FieldSourceColumns = Vec<Option<usize>>;
// Column indices are 1-based.
struct ColumnContext {
tags_column: Option<usize>,
deck_column: Option<usize>,
notetype_column: Option<usize>,
/// Source column index for each field of the notetype, or `None` for fields
/// that are not populated by the file.
field_source_columns: FieldSourceColumns,
/// How fields are converted to strings. Used for escaping HTML if appropriate.
stringify: fn(&str) -> String,
}
impl ColumnContext {
fn new(metadata: &CsvMetadata) -> Result<Self> {
Ok(Self {
tags_column: (metadata.tags_column > 0).then(|| metadata.tags_column as usize),
deck_column: metadata.deck()?.column(),
notetype_column: metadata.notetype()?.column(),
field_source_columns: metadata.field_source_columns()?,
stringify: stringify_fn(metadata.is_html),
})
}
fn deserialize_csv(
&mut self,
mut reader: impl Read + Seek,
delimiter: Delimiter,
) -> Result<Vec<ForeignNote>> {
remove_tags_line_from_reader(&mut reader)?;
let mut csv_reader = csv::ReaderBuilder::new()
.has_headers(false)
.flexible(true)
.comment(Some(b'#'))
.delimiter(delimiter.byte())
.from_reader(reader);
self.deserialize_csv_reader(&mut csv_reader)
}
fn deserialize_csv_reader(
&mut self,
reader: &mut csv::Reader<impl Read>,
) -> Result<Vec<ForeignNote>> {
reader
.records()
.into_iter()
.map(|res| {
res.map_err(Into::into)
.map(|record| self.foreign_note_from_record(&record))
})
.collect()
}
fn foreign_note_from_record(&mut self, record: &csv::StringRecord) -> ForeignNote {
let notetype = self.gather_notetype(record).into();
let deck = self.gather_deck(record).into();
let tags = self.gather_tags(record);
let fields = self.gather_note_fields(record);
ForeignNote {
notetype,
fields,
tags,
deck,
..Default::default()
}
}
fn gather_notetype(&self, record: &csv::StringRecord) -> String {
self.notetype_column
.and_then(|i| record.get(i - 1))
.unwrap_or_default()
.to_string()
}
fn gather_deck(&self, record: &csv::StringRecord) -> String {
self.deck_column
.and_then(|i| record.get(i - 1))
.unwrap_or_default()
.to_string()
}
fn gather_tags(&self, record: &csv::StringRecord) -> Vec<String> {
self.tags_column
.and_then(|i| record.get(i - 1))
.unwrap_or_default()
.split_whitespace()
.filter(|s| !s.is_empty())
.map(ToString::to_string)
.collect()
}
fn gather_note_fields(&mut self, record: &csv::StringRecord) -> Vec<String> {
let stringify = self.stringify;
self.field_source_columns
.iter()
.map(|opt| opt.and_then(|idx| record.get(idx - 1)).unwrap_or_default())
.map(stringify)
.collect()
}
}
fn stringify_fn(is_html: bool) -> fn(&str) -> String {
if is_html {
ToString::to_string
} else {
htmlescape::encode_minimal
}
}
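
(Reviewer note: a sketch of the escaping choice above; fields from plain-text CSVs are entity-encoded so markup stays literal, matching the HTML-escape test further down in this file.)

assert_eq!(stringify_fn(false)("<br>"), "&lt;br&gt;");
assert_eq!(stringify_fn(true)("<br>"), "<br>");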
/// If the reader's first line starts with "tags:", which is allowed for historic
/// reasons, seek to the second line.
fn remove_tags_line_from_reader(reader: &mut (impl Read + Seek)) -> Result<()> {
let mut buf_reader = BufReader::new(reader);
let mut first_line = String::new();
buf_reader.read_line(&mut first_line)?;
let offset = if first_line.starts_with("tags:") {
first_line.as_bytes().len()
} else {
0
};
buf_reader
.into_inner()
.seek(SeekFrom::Start(offset as u64))?;
Ok(())
}
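
(Reviewer note: a minimal sketch of the seek behaviour above.)

let mut cursor = std::io::Cursor::new("tags:foo bar\nfront,back\n");
remove_tags_line_from_reader(&mut cursor).unwrap();
let mut rest = String::new();
cursor.read_to_string(&mut rest).unwrap();
assert_eq!(rest, "front,back\n"); // the legacy tags line was consumed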
#[cfg(test)]
mod test {
use std::io::Cursor;
use super::*;
use crate::backend_proto::import_export::csv_metadata::MappedNotetype;
macro_rules! import {
($metadata:expr, $csv:expr) => {{
let reader = Cursor::new($csv);
let delimiter = $metadata.delimiter();
let mut ctx = ColumnContext::new(&$metadata).unwrap();
ctx.deserialize_csv(reader, delimiter).unwrap()
}};
}
macro_rules! assert_imported_fields {
($metadata:expr, $csv:expr, $expected:expr) => {
let notes = import!(&$metadata, $csv);
let fields: Vec<_> = notes.into_iter().map(|note| note.fields).collect();
assert_eq!(fields, $expected);
};
}
impl CsvMetadata {
fn defaults_for_testing() -> Self {
Self {
delimiter: Delimiter::Comma as i32,
force_delimiter: false,
is_html: false,
force_is_html: false,
tags_column: 0,
global_tags: Vec::new(),
updated_tags: Vec::new(),
column_labels: vec!["".to_string(); 2],
deck: Some(CsvDeck::DeckId(1)),
notetype: Some(CsvNotetype::GlobalNotetype(MappedNotetype {
id: 1,
field_columns: vec![1, 2],
})),
}
}
}
#[test]
fn should_allow_missing_columns() {
let metadata = CsvMetadata::defaults_for_testing();
assert_imported_fields!(metadata, "foo\n", &[&["foo", ""]]);
}
#[test]
fn should_respect_custom_delimiter() {
let mut metadata = CsvMetadata::defaults_for_testing();
metadata.set_delimiter(Delimiter::Pipe);
assert_imported_fields!(metadata, "fr,ont|ba,ck\n", &[&["fr,ont", "ba,ck"]]);
}
#[test]
fn should_ignore_first_line_starting_with_tags() {
let metadata = CsvMetadata::defaults_for_testing();
assert_imported_fields!(metadata, "tags:foo\nfront,back\n", &[&["front", "back"]]);
}
#[test]
fn should_respect_column_remapping() {
let mut metadata = CsvMetadata::defaults_for_testing();
metadata
.notetype
.replace(CsvNotetype::GlobalNotetype(MappedNotetype {
id: 1,
field_columns: vec![3, 1],
}));
assert_imported_fields!(metadata, "front,foo,back\n", &[&["back", "front"]]);
}
#[test]
fn should_ignore_lines_starting_with_number_sign() {
let metadata = CsvMetadata::defaults_for_testing();
assert_imported_fields!(metadata, "#foo\nfront,back\n#bar\n", &[&["front", "back"]]);
}
#[test]
fn should_escape_html_entities_if_csv_is_html() {
let mut metadata = CsvMetadata::defaults_for_testing();
assert_imported_fields!(metadata, "<hr>\n", &[&["&lt;hr&gt;", ""]]);
metadata.is_html = true;
assert_imported_fields!(metadata, "<hr>\n", &[&["<hr>", ""]]);
}
#[test]
fn should_parse_tag_column() {
let mut metadata = CsvMetadata::defaults_for_testing();
metadata.tags_column = 3;
let notes = import!(metadata, "front,back,foo bar\n");
assert_eq!(notes[0].tags, &["foo", "bar"]);
}
#[test]
fn should_parse_deck_column() {
let mut metadata = CsvMetadata::defaults_for_testing();
metadata.deck.replace(CsvDeck::DeckColumn(1));
let notes = import!(metadata, "front,back\n");
assert_eq!(notes[0].deck, NameOrId::Name(String::from("front")));
}
#[test]
fn should_parse_notetype_column() {
let mut metadata = CsvMetadata::defaults_for_testing();
metadata.notetype.replace(CsvNotetype::NotetypeColumn(1));
metadata.column_labels.push("".to_string());
let notes = import!(metadata, "Basic,front,back\nCloze,foo,bar\n");
assert_eq!(notes[0].fields, &["front", "back"]);
assert_eq!(notes[0].notetype, NameOrId::Name(String::from("Basic")));
assert_eq!(notes[1].fields, &["foo", "bar"]);
assert_eq!(notes[1].notetype, NameOrId::Name(String::from("Cloze")));
}
}

View File

@ -0,0 +1,595 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use std::{
collections::{HashMap, HashSet},
fs::File,
io::{BufRead, BufReader},
};
use strum::IntoEnumIterator;
pub use crate::backend_proto::import_export::{
csv_metadata::{Deck as CsvDeck, Delimiter, MappedNotetype, Notetype as CsvNotetype},
CsvMetadata,
};
use crate::{
error::ImportError, import_export::text::NameOrId, notetype::NoteField, prelude::*,
text::is_html,
};
impl Collection {
pub fn get_csv_metadata(
&mut self,
path: &str,
delimiter: Option<Delimiter>,
notetype_id: Option<NotetypeId>,
) -> Result<CsvMetadata> {
let reader = BufReader::new(File::open(path)?);
self.get_reader_metadata(reader, delimiter, notetype_id)
}
fn get_reader_metadata(
&mut self,
reader: impl BufRead,
delimiter: Option<Delimiter>,
notetype_id: Option<NotetypeId>,
) -> Result<CsvMetadata> {
let mut metadata = CsvMetadata::default();
let line = self.parse_meta_lines(reader, &mut metadata)?;
maybe_set_fallback_delimiter(delimiter, &mut metadata, &line);
maybe_set_fallback_columns(&mut metadata, &line)?;
maybe_set_fallback_is_html(&mut metadata, &line)?;
self.maybe_set_fallback_notetype(&mut metadata, notetype_id)?;
self.maybe_init_notetype_map(&mut metadata)?;
self.maybe_set_fallback_deck(&mut metadata)?;
Ok(metadata)
}
/// Parses the meta head of the file, and returns the first content line.
fn parse_meta_lines(
&mut self,
mut reader: impl BufRead,
metadata: &mut CsvMetadata,
) -> Result<String> {
let mut line = String::new();
reader.read_line(&mut line)?;
if self.parse_first_line(&line, metadata) {
line.clear();
reader.read_line(&mut line)?;
while self.parse_line(&line, metadata) {
line.clear();
reader.read_line(&mut line)?;
}
}
Ok(line)
}
/// True if the line is a meta line, i.e. a comment or a line starting with 'tags:'.
fn parse_first_line(&mut self, line: &str, metadata: &mut CsvMetadata) -> bool {
if let Some(tags) = line.strip_prefix("tags:") {
metadata.global_tags = collect_tags(tags);
true
} else {
self.parse_line(line, metadata)
}
}
/// True if the line is a comment.
fn parse_line(&mut self, line: &str, metadata: &mut CsvMetadata) -> bool {
if let Some(l) = line.strip_prefix('#') {
if let Some((key, value)) = l.split_once(':') {
self.parse_meta_value(key, strip_line_ending(value), metadata);
}
true
} else {
false
}
}
fn parse_meta_value(&mut self, key: &str, value: &str, metadata: &mut CsvMetadata) {
match key.trim().to_ascii_lowercase().as_str() {
"separator" => {
if let Some(delimiter) = delimiter_from_value(value) {
metadata.delimiter = delimiter as i32;
metadata.force_delimiter = true;
}
}
"html" => {
if let Ok(is_html) = value.to_lowercase().parse() {
metadata.is_html = is_html;
metadata.force_is_html = true;
}
}
"tags" => metadata.global_tags = collect_tags(value),
"columns" => {
if let Ok(columns) = self.parse_columns(value, metadata) {
metadata.column_labels = columns;
}
}
"notetype" => {
if let Ok(Some(nt)) = self.notetype_by_name_or_id(&NameOrId::parse(value)) {
metadata.notetype = Some(CsvNotetype::new_global(nt.id));
}
}
"deck" => {
if let Ok(Some(did)) = self.deck_id_by_name_or_id(&NameOrId::parse(value)) {
metadata.deck = Some(CsvDeck::DeckId(did.0));
}
}
"notetype column" => {
if let Ok(n) = value.trim().parse() {
metadata.notetype = Some(CsvNotetype::NotetypeColumn(n));
}
}
"deck column" => {
if let Ok(n) = value.trim().parse() {
metadata.deck = Some(CsvDeck::DeckColumn(n));
}
}
_ => (),
}
}
fn parse_columns(&mut self, line: &str, metadata: &mut CsvMetadata) -> Result<Vec<String>> {
let delimiter = if metadata.force_delimiter {
metadata.delimiter()
} else {
delimiter_from_line(line)
};
map_single_record(line, delimiter, |record| {
record.iter().map(ToString::to_string).collect()
})
}
fn maybe_set_fallback_notetype(
&mut self,
metadata: &mut CsvMetadata,
notetype_id: Option<NotetypeId>,
) -> Result<()> {
if let Some(ntid) = notetype_id {
metadata.notetype = Some(CsvNotetype::new_global(ntid));
} else if metadata.notetype.is_none() {
metadata.notetype = Some(CsvNotetype::new_global(self.fallback_notetype_id()?));
}
Ok(())
}
fn maybe_set_fallback_deck(&mut self, metadata: &mut CsvMetadata) -> Result<()> {
if metadata.deck.is_none() {
metadata.deck = Some(CsvDeck::DeckId(
metadata
.notetype_id()
.and_then(|ntid| self.default_deck_for_notetype(ntid).transpose())
.unwrap_or_else(|| self.get_current_deck().map(|d| d.id))?
.0,
));
}
Ok(())
}
fn maybe_init_notetype_map(&mut self, metadata: &mut CsvMetadata) -> Result<()> {
let meta_columns = metadata.meta_columns();
if let Some(CsvNotetype::GlobalNotetype(ref mut global)) = metadata.notetype {
let notetype = self
.get_notetype(NotetypeId(global.id))?
.ok_or(AnkiError::NotFound)?;
global.field_columns = vec![0; notetype.fields.len()];
global.field_columns[0] = 1;
let column_len = metadata.column_labels.len();
if metadata.column_labels.iter().all(String::is_empty) {
map_field_columns_by_index(&mut global.field_columns, column_len, &meta_columns);
} else {
map_field_columns_by_name(
&mut global.field_columns,
&metadata.column_labels,
&meta_columns,
&notetype.fields,
);
}
ensure_first_field_is_mapped(&mut global.field_columns, column_len, &meta_columns)?;
}
Ok(())
}
fn fallback_notetype_id(&mut self) -> Result<NotetypeId> {
Ok(if let Some(notetype_id) = self.get_current_notetype_id() {
notetype_id
} else {
self.storage
.get_all_notetype_names()?
.first()
.ok_or(AnkiError::NotFound)?
.0
})
}
}
pub(super) fn collect_tags(txt: &str) -> Vec<String> {
txt.split_whitespace()
.filter(|s| !s.is_empty())
.map(ToString::to_string)
.collect()
}
fn map_field_columns_by_index(
field_columns: &mut [u32],
column_len: usize,
meta_columns: &HashSet<usize>,
) {
let mut field_columns = field_columns.iter_mut();
for index in 1..column_len + 1 {
if !meta_columns.contains(&index) {
if let Some(field_column) = field_columns.next() {
*field_column = index as u32;
} else {
break;
}
}
}
}
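
(Reviewer note: a sketch of the index-based mapping, mirroring the `#deck column:1` test at the bottom of this file: with four columns and column 1 reserved as a meta column, a two-field notetype is mapped to columns 2 and 3.)

let mut field_columns = vec![0u32; 2];
let meta_columns: HashSet<usize> = [1].into_iter().collect();
map_field_columns_by_index(&mut field_columns, 4, &meta_columns);
assert_eq!(field_columns, [2, 3]);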
fn map_field_columns_by_name(
field_columns: &mut [u32],
column_labels: &[String],
meta_columns: &HashSet<usize>,
note_fields: &[NoteField],
) {
let columns: HashMap<&str, usize> = HashMap::from_iter(
column_labels
.iter()
.enumerate()
.map(|(idx, s)| (s.as_str(), idx + 1))
.filter(|(_, idx)| !meta_columns.contains(idx)),
);
for (column, field) in field_columns.iter_mut().zip(note_fields) {
if let Some(index) = columns.get(field.name.as_str()) {
*column = *index as u32;
}
}
}
fn ensure_first_field_is_mapped(
field_columns: &mut [u32],
column_len: usize,
meta_columns: &HashSet<usize>,
) -> Result<()> {
if field_columns[0] == 0 {
field_columns[0] = (1..column_len + 1)
.find(|i| !meta_columns.contains(i))
.ok_or(AnkiError::ImportError(ImportError::NoFieldColumn))?
as u32;
}
Ok(())
}
fn maybe_set_fallback_columns(metadata: &mut CsvMetadata, line: &str) -> Result<()> {
if metadata.column_labels.is_empty() {
let columns = map_single_record(line, metadata.delimiter(), |r| r.len())?;
metadata.column_labels = vec![String::new(); columns];
}
Ok(())
}
fn maybe_set_fallback_is_html(metadata: &mut CsvMetadata, line: &str) -> Result<()> {
// TODO: should probably check more than one line; can reuse preview lines
// when it's implemented
if !metadata.force_is_html {
metadata.is_html =
map_single_record(line, metadata.delimiter(), |r| r.iter().any(is_html))?;
}
Ok(())
}
fn maybe_set_fallback_delimiter(
delimiter: Option<Delimiter>,
metadata: &mut CsvMetadata,
line: &str,
) {
if let Some(delim) = delimiter {
metadata.set_delimiter(delim);
} else if !metadata.force_delimiter {
metadata.set_delimiter(delimiter_from_line(line));
}
}
fn delimiter_from_value(value: &str) -> Option<Delimiter> {
let normed = value.to_ascii_lowercase();
for delimiter in Delimiter::iter() {
if normed.trim() == delimiter.name() || normed.as_bytes() == [delimiter.byte()] {
return Some(delimiter);
}
}
None
}
fn delimiter_from_line(line: &str) -> Delimiter {
// TODO: use smarter heuristic
for delimiter in Delimiter::iter() {
if line.contains(delimiter.byte() as char) {
return delimiter;
}
}
Delimiter::Space
}
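
(Reviewer note: the heuristic simply returns the first Delimiter variant whose byte occurs in the line, with Space as the final fallback. Sketch:)

assert_eq!(delimiter_from_line("foo\tbar"), Delimiter::Tab);
assert_eq!(delimiter_from_line("foobar"), Delimiter::Space); // fallback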
fn map_single_record<T>(
line: &str,
delimiter: Delimiter,
op: impl FnOnce(&csv::StringRecord) -> T,
) -> Result<T> {
csv::ReaderBuilder::new()
.delimiter(delimiter.byte())
.from_reader(line.as_bytes())
.headers()
.map_err(|_| AnkiError::ImportError(ImportError::Corrupt))
.map(op)
}
fn strip_line_ending(line: &str) -> &str {
line.strip_suffix("\r\n")
.unwrap_or_else(|| line.strip_suffix('\n').unwrap_or(line))
}
impl Delimiter {
pub fn byte(self) -> u8 {
match self {
Delimiter::Comma => b',',
Delimiter::Semicolon => b';',
Delimiter::Tab => b'\t',
Delimiter::Space => b' ',
Delimiter::Pipe => b'|',
Delimiter::Colon => b':',
}
}
pub fn name(self) -> &'static str {
match self {
Delimiter::Comma => "comma",
Delimiter::Semicolon => "semicolon",
Delimiter::Tab => "tab",
Delimiter::Space => "space",
Delimiter::Pipe => "pipe",
Delimiter::Colon => "colon",
}
}
}
impl CsvNotetype {
fn new_global(id: NotetypeId) -> Self {
Self::GlobalNotetype(MappedNotetype {
id: id.0,
field_columns: Vec::new(),
})
}
}
impl CsvMetadata {
fn notetype_id(&self) -> Option<NotetypeId> {
if let Some(CsvNotetype::GlobalNotetype(ref global)) = self.notetype {
Some(NotetypeId(global.id))
} else {
None
}
}
pub(super) fn meta_columns(&self) -> HashSet<usize> {
let mut columns = HashSet::new();
if let Some(CsvDeck::DeckColumn(deck_column)) = self.deck {
columns.insert(deck_column as usize);
}
if let Some(CsvNotetype::NotetypeColumn(notetype_column)) = self.notetype {
columns.insert(notetype_column as usize);
}
if self.tags_column > 0 {
columns.insert(self.tags_column as usize);
}
columns
}
}
impl NameOrId {
pub fn parse(s: &str) -> Self {
if let Ok(id) = s.parse() {
Self::Id(id)
} else {
Self::Name(s.to_string())
}
}
}
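
(Reviewer note: a sketch of the dual parsing above; anything that parses as an i64 is treated as an id, everything else as a name.)

assert_eq!(NameOrId::parse("1234"), NameOrId::Id(1234));
assert_eq!(NameOrId::parse("My Deck"), NameOrId::Name("My Deck".into()));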
#[cfg(test)]
mod test {
use super::*;
use crate::collection::open_test_collection;
macro_rules! metadata {
($col:expr,$csv:expr) => {
metadata!($col, $csv, None)
};
($col:expr,$csv:expr, $delim:expr) => {
$col.get_reader_metadata(BufReader::new($csv.as_bytes()), $delim, None)
.unwrap()
};
}
impl CsvMetadata {
fn unwrap_deck_id(&self) -> i64 {
match self.deck {
Some(CsvDeck::DeckId(did)) => did,
_ => panic!("no deck id"),
}
}
fn unwrap_notetype_id(&self) -> i64 {
match self.notetype {
Some(CsvNotetype::GlobalNotetype(ref nt)) => nt.id,
_ => panic!("no notetype id"),
}
}
}
#[test]
fn should_detect_deck_by_name_or_id() {
let mut col = open_test_collection();
let deck_id = col.get_or_create_normal_deck("my deck").unwrap().id.0;
assert_eq!(metadata!(col, "#deck:my deck\n").unwrap_deck_id(), deck_id);
assert_eq!(
metadata!(col, format!("#deck:{deck_id}\n")).unwrap_deck_id(),
deck_id
);
// fallback
assert_eq!(metadata!(col, "#deck:foo\n").unwrap_deck_id(), 1);
assert_eq!(metadata!(col, "\n").unwrap_deck_id(), 1);
}
#[test]
fn should_detect_notetype_by_name_or_id() {
let mut col = open_test_collection();
let basic_id = col.get_notetype_by_name("Basic").unwrap().unwrap().id.0;
assert_eq!(
metadata!(col, "#notetype:Basic\n").unwrap_notetype_id(),
basic_id
);
assert_eq!(
metadata!(col, &format!("#notetype:{basic_id}\n")).unwrap_notetype_id(),
basic_id
);
}
#[test]
fn should_detect_valid_delimiters() {
let mut col = open_test_collection();
assert_eq!(
metadata!(col, "#separator:comma\n").delimiter(),
Delimiter::Comma
);
assert_eq!(
metadata!(col, "#separator:\t\n").delimiter(),
Delimiter::Tab
);
// fallback
assert_eq!(
metadata!(col, "#separator:foo\n").delimiter(),
Delimiter::Space
);
assert_eq!(
metadata!(col, "#separator:♥\n").delimiter(),
Delimiter::Space
);
// pick up from first line
assert_eq!(metadata!(col, "foo\tbar\n").delimiter(), Delimiter::Tab);
// override with provided
assert_eq!(
metadata!(col, "#separator: \nfoo\tbar\n", Some(Delimiter::Pipe)).delimiter(),
Delimiter::Pipe
);
}
#[test]
fn should_enforce_valid_html_flag() {
let mut col = open_test_collection();
let meta = metadata!(col, "#html:true\n");
assert!(meta.is_html);
assert!(meta.force_is_html);
let meta = metadata!(col, "#html:FALSE\n");
assert!(!meta.is_html);
assert!(meta.force_is_html);
assert!(!metadata!(col, "#html:maybe\n").force_is_html);
}
#[test]
fn should_set_missing_html_flag_by_first_line() {
let mut col = open_test_collection();
let meta = metadata!(col, "<br/>\n");
assert!(meta.is_html);
assert!(!meta.force_is_html);
// HTML check is field-, not row-based
assert!(!metadata!(col, "<br,/>\n").is_html);
assert!(!metadata!(col, "#html:false\n<br>\n").is_html);
}
#[test]
fn should_detect_old_and_new_style_tags() {
let mut col = open_test_collection();
assert_eq!(metadata!(col, "tags:foo bar\n").global_tags, ["foo", "bar"]);
assert_eq!(
metadata!(col, "#tags:foo bar\n").global_tags,
["foo", "bar"]
);
// only in head
assert_eq!(
metadata!(col, "#\n#tags:foo bar\n").global_tags,
["foo", "bar"]
);
assert_eq!(metadata!(col, "\n#tags:foo bar\n").global_tags, [""; 0]);
// only on very first line
assert_eq!(metadata!(col, "#\ntags:foo bar\n").global_tags, [""; 0]);
}
#[test]
fn should_detect_column_number_and_names() {
let mut col = open_test_collection();
// detect from line
assert_eq!(metadata!(col, "foo;bar\n").column_labels.len(), 2);
// detect encoded
assert_eq!(
metadata!(col, "#separator:,\nfoo;bar\n")
.column_labels
.len(),
1
);
assert_eq!(
metadata!(col, "#separator:|\nfoo|bar\n")
.column_labels
.len(),
2
);
// override
assert_eq!(
metadata!(col, "#separator:;\nfoo;bar\n", Some(Delimiter::Pipe))
.column_labels
.len(),
1
);
// custom names
assert_eq!(
metadata!(col, "#columns:one,two\n").column_labels,
["one", "two"]
);
assert_eq!(
metadata!(col, "#separator:|\n#columns:one|two\n").column_labels,
["one", "two"]
);
}
impl CsvMetadata {
fn unwrap_notetype_map(&self) -> &[u32] {
match &self.notetype {
Some(CsvNotetype::GlobalNotetype(nt)) => &nt.field_columns,
_ => panic!("no notetype map"),
}
}
}
#[test]
fn should_map_default_notetype_fields_by_index_if_no_column_names() {
let mut col = open_test_collection();
let meta = metadata!(col, "#deck column:1\nfoo,bar,baz\n");
assert_eq!(meta.unwrap_notetype_map(), &[2, 3]);
}
#[test]
fn should_map_default_notetype_fields_by_given_column_names() {
let mut col = open_test_collection();
let meta = metadata!(col, "#columns:Back,Front\nfoo,bar,baz\n");
assert_eq!(meta.unwrap_notetype_map(), &[2, 1]);
}
}

View File

@ -0,0 +1,6 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
mod export;
mod import;
mod metadata;

View File

@ -0,0 +1,504 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use std::{borrow::Cow, collections::HashMap, mem, sync::Arc};
use super::NameOrId;
use crate::{
card::{CardQueue, CardType},
import_export::{
text::{
DupeResolution, ForeignCard, ForeignData, ForeignNote, ForeignNotetype, ForeignTemplate,
},
ImportProgress, IncrementableProgress, LogNote, NoteLog,
},
notetype::{CardGenContext, CardTemplate, NoteField, NotetypeConfig},
prelude::*,
text::strip_html_preserving_media_filenames,
};
impl ForeignData {
pub fn import(
self,
col: &mut Collection,
progress_fn: impl 'static + FnMut(ImportProgress, bool) -> bool,
) -> Result<OpOutput<NoteLog>> {
let mut progress = IncrementableProgress::new(progress_fn);
progress.call(ImportProgress::File)?;
col.transact(Op::Import, |col| {
let mut ctx = Context::new(&self, col)?;
ctx.import_foreign_notetypes(self.notetypes)?;
ctx.import_foreign_notes(
self.notes,
&self.global_tags,
&self.updated_tags,
&mut progress,
)
})
}
}
impl NoteLog {
fn new(dupe_resolution: DupeResolution, found_notes: u32) -> Self {
Self {
dupe_resolution: dupe_resolution as i32,
found_notes,
..Default::default()
}
}
}
struct Context<'a> {
col: &'a mut Collection,
/// Notetypes cached by name or id; the default key maps to the optional default notetype.
notetypes: HashMap<NameOrId, Option<Arc<Notetype>>>,
/// Deck ids cached by name or id; the default key maps to the optional default deck id.
deck_ids: HashMap<NameOrId, Option<DeckId>>,
usn: Usn,
normalize_notes: bool,
today: u32,
dupe_resolution: DupeResolution,
card_gen_ctxs: HashMap<(NotetypeId, DeckId), CardGenContext<Arc<Notetype>>>,
existing_notes: HashMap<(NotetypeId, u32), Vec<NoteId>>,
}
struct NoteContext {
note: Note,
dupes: Vec<Note>,
cards: Vec<Card>,
notetype: Arc<Notetype>,
deck_id: DeckId,
}
impl<'a> Context<'a> {
fn new(data: &ForeignData, col: &'a mut Collection) -> Result<Self> {
let usn = col.usn()?;
let normalize_notes = col.get_config_bool(BoolKey::NormalizeNoteText);
let today = col.timing_today()?.days_elapsed;
let mut notetypes = HashMap::new();
notetypes.insert(
NameOrId::default(),
col.notetype_by_name_or_id(&data.default_notetype)?,
);
let mut deck_ids = HashMap::new();
deck_ids.insert(
NameOrId::default(),
col.deck_id_by_name_or_id(&data.default_deck)?,
);
let existing_notes = col.storage.all_notes_by_type_and_checksum()?;
Ok(Self {
col,
usn,
normalize_notes,
today,
dupe_resolution: data.dupe_resolution,
notetypes,
deck_ids,
card_gen_ctxs: HashMap::new(),
existing_notes,
})
}
fn import_foreign_notetypes(&mut self, notetypes: Vec<ForeignNotetype>) -> Result<()> {
for foreign in notetypes {
let mut notetype = foreign.into_native();
notetype.usn = self.usn;
self.col
.add_notetype_inner(&mut notetype, self.usn, false)?;
}
Ok(())
}
fn notetype_for_note(&mut self, note: &ForeignNote) -> Result<Option<Arc<Notetype>>> {
Ok(if let Some(nt) = self.notetypes.get(&note.notetype) {
nt.clone()
} else {
let nt = self.col.notetype_by_name_or_id(&note.notetype)?;
self.notetypes.insert(note.notetype.clone(), nt.clone());
nt
})
}
fn deck_id_for_note(&mut self, note: &ForeignNote) -> Result<Option<DeckId>> {
Ok(if let Some(did) = self.deck_ids.get(&note.deck) {
*did
} else {
let did = self.col.deck_id_by_name_or_id(&note.deck)?;
self.deck_ids.insert(note.deck.clone(), did);
did
})
}
fn import_foreign_notes(
&mut self,
notes: Vec<ForeignNote>,
global_tags: &[String],
updated_tags: &[String],
progress: &mut IncrementableProgress<ImportProgress>,
) -> Result<NoteLog> {
let mut incrementor = progress.incrementor(ImportProgress::Notes);
let mut log = NoteLog::new(self.dupe_resolution, notes.len() as u32);
for foreign in notes {
incrementor.increment()?;
if foreign.first_field_is_empty() {
log.empty_first_field.push(foreign.into_log_note());
continue;
}
if let Some(notetype) = self.notetype_for_note(&foreign)? {
if let Some(deck_id) = self.deck_id_for_note(&foreign)? {
let ctx = self.build_note_context(foreign, notetype, deck_id, global_tags)?;
self.import_note(ctx, updated_tags, &mut log)?;
} else {
log.missing_deck.push(foreign.into_log_note());
}
} else {
log.missing_notetype.push(foreign.into_log_note());
}
}
Ok(log)
}
fn build_note_context(
&mut self,
foreign: ForeignNote,
notetype: Arc<Notetype>,
deck_id: DeckId,
global_tags: &[String],
) -> Result<NoteContext> {
let (mut note, cards) = foreign.into_native(&notetype, deck_id, self.today, global_tags);
note.prepare_for_update(&notetype, self.normalize_notes)?;
let dupes = self.find_duplicates(&notetype, &note)?;
Ok(NoteContext {
note,
dupes,
cards,
notetype,
deck_id,
})
}
fn find_duplicates(&mut self, notetype: &Notetype, note: &Note) -> Result<Vec<Note>> {
let checksum = note
.checksum
.ok_or_else(|| AnkiError::invalid_input("note unprepared"))?;
self.existing_notes
.get(&(notetype.id, checksum))
.map(|dupe_ids| self.col.get_full_duplicates(note, dupe_ids))
.unwrap_or_else(|| Ok(vec![]))
}
fn import_note(
&mut self,
ctx: NoteContext,
updated_tags: &[String],
log: &mut NoteLog,
) -> Result<()> {
match self.dupe_resolution {
_ if ctx.dupes.is_empty() => self.add_note(ctx, &mut log.new)?,
DupeResolution::Add => self.add_note(ctx, &mut log.first_field_match)?,
DupeResolution::Update => self.update_with_note(ctx, updated_tags, log)?,
DupeResolution::Ignore => log.first_field_match.push(ctx.note.into_log_note()),
}
Ok(())
}
fn add_note(&mut self, mut ctx: NoteContext, log_queue: &mut Vec<LogNote>) -> Result<()> {
self.col.canonify_note_tags(&mut ctx.note, self.usn)?;
ctx.note.usn = self.usn;
self.col.add_note_only_undoable(&mut ctx.note)?;
self.add_cards(&mut ctx.cards, &ctx.note, ctx.deck_id, ctx.notetype)?;
log_queue.push(ctx.note.into_log_note());
Ok(())
}
fn add_cards(
&mut self,
cards: &mut [Card],
note: &Note,
deck_id: DeckId,
notetype: Arc<Notetype>,
) -> Result<()> {
self.import_cards(cards, note.id)?;
self.generate_missing_cards(notetype, deck_id, note)
}
fn update_with_note(
&mut self,
mut ctx: NoteContext,
updated_tags: &[String],
log: &mut NoteLog,
) -> Result<()> {
self.prepare_note_for_update(&mut ctx.note, updated_tags)?;
for dupe in mem::take(&mut ctx.dupes) {
self.maybe_update_dupe(dupe, &mut ctx, log)?;
}
Ok(())
}
fn prepare_note_for_update(&mut self, note: &mut Note, updated_tags: &[String]) -> Result<()> {
note.tags.extend(updated_tags.iter().cloned());
self.col.canonify_note_tags(note, self.usn)?;
note.set_modified(self.usn);
Ok(())
}
fn maybe_update_dupe(
&mut self,
dupe: Note,
ctx: &mut NoteContext,
log: &mut NoteLog,
) -> Result<()> {
ctx.note.id = dupe.id;
if dupe.equal_fields_and_tags(&ctx.note) {
log.duplicate.push(dupe.into_log_note());
} else {
self.col.update_note_undoable(&ctx.note, &dupe)?;
log.first_field_match.push(dupe.into_log_note());
}
self.add_cards(&mut ctx.cards, &ctx.note, ctx.deck_id, ctx.notetype.clone())
}
fn import_cards(&mut self, cards: &mut [Card], note_id: NoteId) -> Result<()> {
for card in cards {
card.note_id = note_id;
self.col.add_card(card)?;
}
Ok(())
}
fn generate_missing_cards(
&mut self,
notetype: Arc<Notetype>,
deck_id: DeckId,
note: &Note,
) -> Result<()> {
let card_gen_context = self
.card_gen_ctxs
.entry((notetype.id, deck_id))
.or_insert_with(|| CardGenContext::new(notetype, Some(deck_id), self.usn));
self.col
.generate_cards_for_existing_note(card_gen_context, note)
}
}
impl Note {
fn first_field_stripped(&self) -> Cow<str> {
strip_html_preserving_media_filenames(&self.fields()[0])
}
}
impl Collection {
pub(super) fn deck_id_by_name_or_id(&mut self, deck: &NameOrId) -> Result<Option<DeckId>> {
match deck {
NameOrId::Id(id) => Ok(self.get_deck(DeckId(*id))?.map(|_| DeckId(*id))),
NameOrId::Name(name) => self.get_deck_id(name),
}
}
pub(super) fn notetype_by_name_or_id(
&mut self,
notetype: &NameOrId,
) -> Result<Option<Arc<Notetype>>> {
match notetype {
NameOrId::Id(id) => self.get_notetype(NotetypeId(*id)),
NameOrId::Name(name) => self.get_notetype_by_name(name),
}
}
fn get_full_duplicates(&mut self, note: &Note, dupe_ids: &[NoteId]) -> Result<Vec<Note>> {
let first_field = note.first_field_stripped();
dupe_ids
.iter()
.filter_map(|&dupe_id| self.storage.get_note(dupe_id).transpose())
.filter(|res| match res {
Ok(dupe) => dupe.first_field_stripped() == first_field,
Err(_) => true,
})
.collect()
}
}
impl ForeignNote {
fn into_native(
self,
notetype: &Notetype,
deck_id: DeckId,
today: u32,
extra_tags: &[String],
) -> (Note, Vec<Card>) {
// TODO: Handle new and learning cards
let mut note = Note::new(notetype);
note.tags = self.tags;
note.tags.extend(extra_tags.iter().cloned());
note.fields_mut()
.iter_mut()
.zip(self.fields.into_iter())
.for_each(|(field, value)| *field = value);
let cards = self
.cards
.into_iter()
.enumerate()
.map(|(idx, c)| c.into_native(NoteId(0), idx as u16, deck_id, today))
.collect();
(note, cards)
}
fn first_field_is_empty(&self) -> bool {
self.fields.get(0).map(String::is_empty).unwrap_or(true)
}
}
impl ForeignCard {
fn into_native(self, note_id: NoteId, template_idx: u16, deck_id: DeckId, today: u32) -> Card {
Card {
note_id,
template_idx,
deck_id,
due: self.native_due(today),
interval: self.interval,
ease_factor: (self.ease_factor * 1000.).round() as u16,
reps: self.reps,
lapses: self.lapses,
ctype: CardType::Review,
queue: CardQueue::Review,
..Default::default()
}
}
fn native_due(self, today: u32) -> i32 {
let remaining_secs = self.interval as i64 - TimestampSecs::now().0;
let remaining_days = remaining_secs / (60 * 60 * 24);
0.max(remaining_days as i32 + today as i32)
}
}
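
(Reviewer note: per the earlier "Map timestamp of ForeignCard to native due time" commit, `interval` here is read as an absolute unix timestamp of the next review. A worked sketch with made-up numbers:)

let now: i64 = 1_650_000_000; // stand-in for TimestampSecs::now().0
let interval: i64 = now + 3 * 86_400; // due three days from now
let today: i32 = 500; // days since collection creation
let due = 0.max(((interval - now) / 86_400) as i32 + today);
assert_eq!(due, 503); // far-past timestamps floor at 0 rather than going negative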
impl ForeignNotetype {
fn into_native(self) -> Notetype {
Notetype {
name: self.name,
fields: self.fields.into_iter().map(NoteField::new).collect(),
templates: self
.templates
.into_iter()
.map(ForeignTemplate::into_native)
.collect(),
config: if self.is_cloze {
NotetypeConfig::new_cloze()
} else {
NotetypeConfig::new()
},
..Notetype::default()
}
}
}
impl ForeignTemplate {
fn into_native(self) -> CardTemplate {
CardTemplate::new(self.name, self.qfmt, self.afmt)
}
}
impl Note {
fn equal_fields_and_tags(&self, other: &Self) -> bool {
self.fields() == other.fields() && self.tags == other.tags
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::collection::open_test_collection;
impl ForeignData {
fn with_defaults() -> Self {
Self {
default_notetype: NameOrId::Name("Basic".to_string()),
default_deck: NameOrId::Id(1),
..Default::default()
}
}
fn add_note(&mut self, fields: &[&str]) {
self.notes.push(ForeignNote {
fields: fields.iter().map(ToString::to_string).collect(),
..Default::default()
});
}
}
#[test]
fn should_always_add_note_if_dupe_mode_is_add() {
let mut col = open_test_collection();
let mut data = ForeignData::with_defaults();
data.add_note(&["same", "old"]);
data.dupe_resolution = DupeResolution::Add;
data.clone().import(&mut col, |_, _| true).unwrap();
data.import(&mut col, |_, _| true).unwrap();
assert_eq!(col.storage.notes_table_len(), 2);
}
#[test]
fn should_add_or_ignore_note_if_dupe_mode_is_ignore() {
let mut col = open_test_collection();
let mut data = ForeignData::with_defaults();
data.add_note(&["same", "old"]);
data.dupe_resolution = DupeResolution::Ignore;
data.clone().import(&mut col, |_, _| true).unwrap();
assert_eq!(col.storage.notes_table_len(), 1);
data.notes[0].fields[1] = "new".to_string();
data.import(&mut col, |_, _| true).unwrap();
let notes = col.storage.get_all_notes();
assert_eq!(notes.len(), 1);
assert_eq!(notes[0].fields()[1], "old");
}
#[test]
fn should_update_or_add_note_if_dupe_mode_is_update() {
let mut col = open_test_collection();
let mut data = ForeignData::with_defaults();
data.add_note(&["same", "old"]);
data.dupe_resolution = DupeResolution::Update;
data.clone().import(&mut col, |_, _| true).unwrap();
assert_eq!(col.storage.notes_table_len(), 1);
data.notes[0].fields[1] = "new".to_string();
data.import(&mut col, |_, _| true).unwrap();
assert_eq!(col.storage.get_all_notes()[0].fields()[1], "new");
}
#[test]
fn should_recognize_normalized_duplicate_only_if_normalization_is_enabled() {
let mut col = open_test_collection();
col.add_new_note_with_fields("Basic", &["", "old"]);
let mut data = ForeignData::with_defaults();
data.dupe_resolution = DupeResolution::Update;
data.add_note(&["", "new"]);
data.clone().import(&mut col, |_, _| true).unwrap();
assert_eq!(col.storage.get_all_notes()[0].fields(), &["", "new"]);
col.set_config_bool(BoolKey::NormalizeNoteText, false, false)
.unwrap();
data.import(&mut col, |_, _| true).unwrap();
let notes = col.storage.get_all_notes();
assert_eq!(notes[0].fields(), &["", "new"]);
assert_eq!(notes[1].fields(), &["", "new"]);
}
#[test]
fn should_add_global_tags() {
let mut col = open_test_collection();
let mut data = ForeignData::with_defaults();
data.add_note(&["foo"]);
data.notes[0].tags = vec![String::from("bar")];
data.global_tags = vec![String::from("baz")];
data.import(&mut col, |_, _| true).unwrap();
assert_eq!(col.storage.get_all_notes()[0].tags, ["bar", "baz"]);
}
}

View File

@ -0,0 +1,30 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use crate::{
import_export::{text::ForeignData, ImportProgress, NoteLog},
prelude::*,
};
impl Collection {
pub fn import_json_file(
&mut self,
path: &str,
mut progress_fn: impl 'static + FnMut(ImportProgress, bool) -> bool,
) -> Result<OpOutput<NoteLog>> {
progress_fn(ImportProgress::Gathering, false);
let slice = std::fs::read(path)?;
let data: ForeignData = serde_json::from_slice(&slice)?;
data.import(self, progress_fn)
}
pub fn import_json_string(
&mut self,
json: &str,
mut progress_fn: impl 'static + FnMut(ImportProgress, bool) -> bool,
) -> Result<OpOutput<NoteLog>> {
progress_fn(ImportProgress::Gathering, false);
let data: ForeignData = serde_json::from_str(json)?;
data.import(self, progress_fn)
}
}

View File

@ -0,0 +1,87 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
pub mod csv;
mod import;
mod json;
use serde_derive::{Deserialize, Serialize};
use super::LogNote;
use crate::backend_proto::import_csv_request::DupeResolution;
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[serde(default)]
pub struct ForeignData {
dupe_resolution: DupeResolution,
default_deck: NameOrId,
default_notetype: NameOrId,
notes: Vec<ForeignNote>,
notetypes: Vec<ForeignNotetype>,
global_tags: Vec<String>,
updated_tags: Vec<String>,
}
#[derive(Debug, Clone, PartialEq, Default, Serialize, Deserialize)]
#[serde(default)]
pub struct ForeignNote {
fields: Vec<String>,
tags: Vec<String>,
notetype: NameOrId,
deck: NameOrId,
cards: Vec<ForeignCard>,
}
#[derive(Debug, Clone, Copy, PartialEq, Default, Serialize, Deserialize)]
#[serde(default)]
pub struct ForeignCard {
pub due: i32,
pub interval: u32,
pub ease_factor: f32,
pub reps: u32,
pub lapses: u32,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct ForeignNotetype {
name: String,
fields: Vec<String>,
templates: Vec<ForeignTemplate>,
#[serde(default)]
is_cloze: bool,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct ForeignTemplate {
name: String,
qfmt: String,
afmt: String,
}
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(untagged)]
pub enum NameOrId {
Id(i64),
Name(String),
}
impl Default for NameOrId {
fn default() -> Self {
NameOrId::Name(String::new())
}
}
impl From<String> for NameOrId {
fn from(s: String) -> Self {
Self::Name(s)
}
}
impl ForeignNote {
pub(crate) fn into_log_note(self) -> LogNote {
LogNote {
id: None,
fields: self.fields,
}
}
}

View File

@ -80,7 +80,7 @@ impl Collection {
.get_notetype(note.notetype_id)?
.ok_or_else(|| AnkiError::invalid_input("missing note type"))?;
let last_deck = col.get_last_deck_added_to_for_notetype(note.notetype_id);
let ctx = CardGenContext::new(&nt, last_deck, col.usn()?);
let ctx = CardGenContext::new(nt.as_ref(), last_deck, col.usn()?);
let norm = col.get_config_bool(BoolKey::NormalizeNoteText);
col.add_note_inner(&ctx, note, did, norm)
})
@ -334,7 +334,7 @@ impl Collection {
pub(crate) fn add_note_inner(
&mut self,
ctx: &CardGenContext,
ctx: &CardGenContext<&Notetype>,
note: &mut Note,
did: DeckId,
normalize_text: bool,
@ -397,7 +397,7 @@ impl Collection {
.get_notetype(note.notetype_id)?
.ok_or_else(|| AnkiError::invalid_input("missing note type"))?;
let last_deck = self.get_last_deck_added_to_for_notetype(note.notetype_id);
let ctx = CardGenContext::new(&nt, last_deck, self.usn()?);
let ctx = CardGenContext::new(nt.as_ref(), last_deck, self.usn()?);
let norm = self.get_config_bool(BoolKey::NormalizeNoteText);
self.update_note_inner_generating_cards(&ctx, note, &existing_note, true, norm, true)?;
Ok(())
@ -405,7 +405,7 @@ impl Collection {
pub(crate) fn update_note_inner_generating_cards(
&mut self,
ctx: &CardGenContext,
ctx: &CardGenContext<&Notetype>,
note: &mut Note,
original: &Note,
mark_note_modified: bool,
@ -508,7 +508,7 @@ impl Collection {
if out.generate_cards {
let ctx = genctx.get_or_insert_with(|| {
CardGenContext::new(
&nt,
nt.as_ref(),
self.get_last_deck_added_to_for_notetype(nt.id),
usn,
)

View File

@ -40,7 +40,7 @@ impl Collection {
/// Saves in the undo queue, and commits to DB.
/// No validation, card generation or normalization is done.
pub(super) fn update_note_undoable(&mut self, note: &Note, original: &Note) -> Result<()> {
pub(crate) fn update_note_undoable(&mut self, note: &Note, original: &Note) -> Result<()> {
self.save_undo(UndoableNoteChange::Updated(Box::new(original.clone())));
self.storage.update_note(note)?;

View File

@ -1,7 +1,10 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use std::collections::{HashMap, HashSet};
use std::{
collections::{HashMap, HashSet},
ops::Deref,
};
use itertools::Itertools;
use rand::{rngs::StdRng, Rng, SeedableRng};
@ -38,9 +41,9 @@ pub(crate) struct SingleCardGenContext {
/// Info required to determine which cards should be generated when note added/updated,
/// and where they should be placed.
pub(crate) struct CardGenContext<'a> {
pub(crate) struct CardGenContext<N: Deref<Target = Notetype>> {
pub usn: Usn,
pub notetype: &'a Notetype,
pub notetype: N,
/// The last deck that was added to with this note type
pub last_deck: Option<DeckId>,
cards: Vec<SingleCardGenContext>,
@ -53,20 +56,21 @@ pub(crate) struct CardGenCache {
deck_configs: HashMap<DeckId, DeckConfig>,
}
impl CardGenContext<'_> {
pub(crate) fn new(nt: &Notetype, last_deck: Option<DeckId>, usn: Usn) -> CardGenContext<'_> {
impl<N: Deref<Target = Notetype>> CardGenContext<N> {
pub(crate) fn new(nt: N, last_deck: Option<DeckId>, usn: Usn) -> CardGenContext<N> {
let cards = nt
.templates
.iter()
.map(|tmpl| SingleCardGenContext {
template: tmpl.parsed_question(),
target_deck_id: tmpl.target_deck_id(),
})
.collect();
CardGenContext {
usn,
last_deck,
notetype: nt,
cards: nt
.templates
.iter()
.map(|tmpl| SingleCardGenContext {
template: tmpl.parsed_question(),
target_deck_id: tmpl.target_deck_id(),
})
.collect(),
cards,
}
}
@ -209,7 +213,7 @@ pub(crate) fn extract_data_from_existing_cards(
impl Collection {
pub(crate) fn generate_cards_for_new_note(
&mut self,
ctx: &CardGenContext,
ctx: &CardGenContext<impl Deref<Target = Notetype>>,
note: &Note,
target_deck_id: DeckId,
) -> Result<()> {
@ -224,7 +228,7 @@ impl Collection {
pub(crate) fn generate_cards_for_existing_note(
&mut self,
ctx: &CardGenContext,
ctx: &CardGenContext<impl Deref<Target = Notetype>>,
note: &Note,
) -> Result<()> {
let existing = self.storage.existing_cards_for_note(note.id)?;
@ -233,7 +237,7 @@ impl Collection {
fn generate_cards_for_note(
&mut self,
ctx: &CardGenContext,
ctx: &CardGenContext<impl Deref<Target = Notetype>>,
note: &Note,
existing: &[AlreadyGeneratedCardInfo],
target_deck_id: Option<DeckId>,
@ -246,7 +250,10 @@ impl Collection {
self.add_generated_cards(note.id, &cards, target_deck_id, cache)
}
pub(crate) fn generate_cards_for_notetype(&mut self, ctx: &CardGenContext) -> Result<()> {
pub(crate) fn generate_cards_for_notetype(
&mut self,
ctx: &CardGenContext<impl Deref<Target = Notetype>>,
) -> Result<()> {
let existing_cards = self.storage.existing_cards_for_notetype(ctx.notetype.id)?;
let by_note = group_generated_cards_by_note(existing_cards);
let mut cache = CardGenCache::default();

View File

@ -0,0 +1,7 @@
.cloze {
font-weight: bold;
color: blue;
}
.nightMode .cloze {
color: lightblue;
}

View File

@ -53,6 +53,7 @@ use crate::{
define_newtype!(NotetypeId, i64);
pub(crate) const DEFAULT_CSS: &str = include_str!("styling.css");
pub(crate) const DEFAULT_CLOZE_CSS: &str = include_str!("cloze_styling.css");
pub(crate) const DEFAULT_LATEX_HEADER: &str = include_str!("header.tex");
pub(crate) const DEFAULT_LATEX_FOOTER: &str = r"\end{document}";
lazy_static! {
@ -88,16 +89,29 @@ impl Default for Notetype {
usn: Usn(0),
fields: vec![],
templates: vec![],
config: NotetypeConfig {
css: DEFAULT_CSS.into(),
latex_pre: DEFAULT_LATEX_HEADER.into(),
latex_post: DEFAULT_LATEX_FOOTER.into(),
..Default::default()
},
config: NotetypeConfig::new(),
}
}
}
impl NotetypeConfig {
pub(crate) fn new() -> Self {
NotetypeConfig {
css: DEFAULT_CSS.into(),
latex_pre: DEFAULT_LATEX_HEADER.into(),
latex_post: DEFAULT_LATEX_FOOTER.into(),
..Default::default()
}
}
pub(crate) fn new_cloze() -> Self {
let mut config = Self::new();
config.css += DEFAULT_CLOZE_CSS;
config.kind = NotetypeKind::Cloze as i32;
config
}
}
impl Notetype {
pub fn new_note(&self) -> Note {
Note::new(self)

View File

@ -255,7 +255,7 @@ impl Collection {
.get_notetype(new_notetype_id)?
.ok_or(AnkiError::NotFound)?;
let last_deck = self.get_last_deck_added_to_for_notetype(notetype.id);
let ctx = CardGenContext::new(&notetype, last_deck, usn);
let ctx = CardGenContext::new(notetype.as_ref(), last_deck, usn);
for nid in note_ids {
let mut note = self.storage.get_note(*nid)?.ok_or(AnkiError::NotFound)?;

View File

@ -1,7 +1,7 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
use super::NotetypeKind;
use super::NotetypeConfig;
use crate::{
backend_proto::stock_notetype::Kind,
config::{ConfigEntry, ConfigKey},
@ -112,6 +112,7 @@ pub(crate) fn basic_optional_reverse(tr: &I18n) -> Notetype {
pub(crate) fn cloze(tr: &I18n) -> Notetype {
let mut nt = Notetype {
name: tr.notetypes_cloze_name().into(),
config: NotetypeConfig::new_cloze(),
..Default::default()
};
let text = tr.notetypes_text_field();
@ -121,15 +122,5 @@ pub(crate) fn cloze(tr: &I18n) -> Notetype {
let qfmt = format!("{{{{cloze:{}}}}}", text);
let afmt = format!("{}<br>\n{{{{{}}}}}", qfmt, back_extra);
nt.add_template(nt.name.clone(), qfmt, afmt);
nt.config.kind = NotetypeKind::Cloze as i32;
nt.config.css += "
.cloze {
font-weight: bold;
color: blue;
}
.nightMode .cloze {
color: lightblue;
}
";
nt
}

View File

@ -173,6 +173,23 @@ impl super::SqliteStorage {
.collect()
}
/// Returns all note ids, keyed by (notetype id, first-field checksum).
/// The caller should strip the first fields and compare them to check
/// whether the notes actually match.
pub(crate) fn all_notes_by_type_and_checksum(
&self,
) -> Result<HashMap<(NotetypeId, u32), Vec<NoteId>>> {
let mut map = HashMap::new();
let mut stmt = self.db.prepare("SELECT mid, csum, id FROM notes")?;
let mut rows = stmt.query([])?;
while let Some(row) = rows.next()? {
map.entry((row.get(0)?, row.get(1)?))
.or_insert_with(Vec::new)
.push(row.get(2)?);
}
Ok(map)
}
/// Return total number of notes. Slow.
pub(crate) fn total_notes(&self) -> Result<u32> {
self.db
@ -296,6 +313,24 @@ impl super::SqliteStorage {
Ok(())
}
/// Notes will arrive in note id order, not search order.
pub(crate) fn for_each_note_in_search(
&self,
mut func: impl FnMut(Note) -> Result<()>,
) -> Result<()> {
let mut stmt = self.db.prepare_cached(concat!(
include_str!("get.sql"),
" WHERE id IN (SELECT nid FROM search_nids)"
))?;
let mut rows = stmt.query([])?;
while let Some(row) = rows.next()? {
let note = row_to_note(row)?;
func(note)?
}
Ok(())
}
pub(crate) fn note_guid_map(&mut self) -> Result<HashMap<String, NoteMeta>> {
self.db
.prepare("SELECT guid, id, mod, mid FROM notes")?
@@ -313,6 +348,11 @@ impl super::SqliteStorage {
.collect::<Result<_>>()
.unwrap()
}
#[cfg(test)]
pub(crate) fn notes_table_len(&mut self) -> usize {
self.db_scalar("SELECT COUNT(*) FROM notes").unwrap()
}
}
fn row_to_note(row: &Row) -> Result<Note> {


@@ -374,4 +374,11 @@ impl SqliteStorage {
self.db.execute("update col set models = ?", [json])?;
Ok(())
}
pub(crate) fn get_field_names(&self, notetype_id: NotetypeId) -> Result<Vec<String>> {
self.db
.prepare_cached("SELECT name FROM fields WHERE ntid = ? ORDER BY ord")?
.query_and_then([notetype_id], |row| Ok(row.get(0)?))?
.collect()
}
}


@@ -41,6 +41,13 @@ impl Collection {
note
}
pub(crate) fn add_new_note_with_fields(&mut self, notetype: &str, fields: &[&str]) -> Note {
let mut note = self.new_note(notetype);
*note.fields_mut() = fields.iter().map(ToString::to_string).collect();
self.add_note(&mut note, DeckId(1)).unwrap();
note
}
pub(crate) fn get_all_notes(&mut self) -> Vec<Note> {
self.storage.get_all_notes()
}


@@ -172,11 +172,19 @@ lazy_static! {
"#).unwrap();
}
pub fn html_to_text_line(html: &str) -> Cow<str> {
pub fn is_html(text: &str) -> bool {
HTML.is_match(text)
}
pub fn html_to_text_line(html: &str, preserve_media_filenames: bool) -> Cow<str> {
PERSISTENT_HTML_SPACERS
.replace_all(html, " ")
.map_cow(|s| UNPRINTABLE_TAGS.replace_all(s, ""))
.map_cow(strip_html_preserving_media_filenames)
.map_cow(if preserve_media_filenames {
strip_html_preserving_media_filenames
} else {
strip_html
})
.trim()
}


@@ -70,3 +70,7 @@ samp {
background-position: left 0.75rem center;
}
}
.night-mode .form-select:disabled {
background-color: var(--disabled);
}


@@ -0,0 +1,47 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import { pageTheme } from "../sveltelib/theme";
export let id: string | undefined;
export let value: boolean;
export let disabled = false;
</script>
<div class="form-check form-switch">
<input
{id}
type="checkbox"
class="form-check-input"
class:nightMode={$pageTheme.isDark}
bind:checked={value}
{disabled}
/>
</div>
<style lang="scss">
.form-switch {
/* bootstrap adds a default 2.5em left pad, which causes */
/* text to wrap prematurely */
padding-left: 0.5em;
}
.form-check-input {
-webkit-appearance: none;
height: 1.6em;
/* otherwise the switch circle renders slightly off-center */
margin-top: 0;
.form-switch & {
width: 3em;
margin-left: 1.5em;
}
}
.nightMode:not(:checked) {
background-color: var(--frame-bg);
border-color: var(--border);
}
</style>

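For reference, a minimal sketch of mounting the relocated Switch component imperatively, assuming the standard Svelte 3 client-side component API; the target element and prop values here are hypothetical, not part of the diff:

import Switch from "../components/Switch.svelte";

// mount the switch into the page and seed its exported props
const htmlSwitch = new Switch({
    target: document.body,
    props: { id: "is-html", value: false, disabled: false },
});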

@@ -5,9 +5,9 @@
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import Switch from "../components/Switch.svelte";
import Label from "./Label.svelte";
import RevertButton from "./RevertButton.svelte";
import Switch from "./Switch.svelte";
import TooltipLabel from "./TooltipLabel.svelte";
export let value: boolean;


@@ -0,0 +1,20 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
/**
* Trivial wrapper to silence Svelte deprecation warnings
*/
export function execCommand(
command: string,
showUI?: boolean | undefined,
value?: string | undefined,
): void {
document.execCommand(command, showUI, value);
}
/**
* Trivial wrapper to silence Svelte deprecation warnings
*/
export function queryCommandState(command: string): boolean {
return document.queryCommandState(command);
}


@@ -1,6 +1,7 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
export * from "./content-editable";
export * from "./location";
export * from "./move-nodes";
export * from "./place-caret";


@@ -26,13 +26,17 @@ _ts_deps = [
"//ts/lib",
"//ts/domlib",
"//ts/sveltelib",
"//ts/tag-editor",
"@npm//@fluent",
"@npm//@types/codemirror",
"@npm//codemirror",
"@npm//svelte",
]
compile_svelte(deps = _ts_deps)
compile_svelte(
visibility = ["//visibility:public"],
deps = _ts_deps,
)
typescript(
name = "editor",
@@ -100,6 +104,7 @@ svelte_check(
"//sass:button_mixins_lib",
"//sass/bootstrap",
"//ts/components",
"//ts/tag-editor",
"//ts/editable:editable_ts",
"@npm//@types/bootstrap",
"@npm//@types/codemirror",


@@ -39,7 +39,9 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import Absolute from "../components/Absolute.svelte";
import Badge from "../components/Badge.svelte";
import StickyContainer from "../components/StickyContainer.svelte";
import { bridgeCommand } from "../lib/bridgecommand";
import { TagEditor } from "../tag-editor";
import { ChangeTimer } from "./change-timer";
import DecoratedElements from "./DecoratedElements.svelte";
import { clearableArray } from "./destroyable";
@@ -59,7 +61,6 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import PlainTextBadge from "./PlainTextBadge.svelte";
import { editingInputIsRichText, RichTextInput } from "./rich-text-input";
import RichTextBadge from "./RichTextBadge.svelte";
import { TagEditor } from "./tag-editor";
function quoteFontFamily(fontFamily: string): string {
// generic families (e.g. sans-serif) must not be quoted
@@ -380,7 +381,9 @@ the AddCards dialog) should be implemented in the user of this component.
</Fields>
</FieldsEditor>
<TagEditor {tags} on:tagsupdate={saveTags} />
<StickyContainer --gutter-block="0.1rem" --sticky-borders="1px 0 0" class="d-flex">
<TagEditor {tags} on:tagsupdate={saveTags} />
</StickyContainer>
</div>
<style lang="scss">


@@ -14,10 +14,10 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import IconButton from "../../components/IconButton.svelte";
import Shortcut from "../../components/Shortcut.svelte";
import WithDropdown from "../../components/WithDropdown.svelte";
import { execCommand } from "../../domlib";
import { getListItem } from "../../lib/dom";
import * as tr from "../../lib/ftl";
import { getPlatformString } from "../../lib/shortcuts";
import { execCommand } from "../helpers";
import { context } from "../NoteEditor.svelte";
import { editingInputIsRichText } from "../rich-text-input";
import CommandIconButton from "./CommandIconButton.svelte";


@@ -6,8 +6,8 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import IconButton from "../../components/IconButton.svelte";
import Shortcut from "../../components/Shortcut.svelte";
import WithState from "../../components/WithState.svelte";
import { execCommand, queryCommandState } from "../../domlib";
import { getPlatformString } from "../../lib/shortcuts";
import { execCommand, queryCommandState } from "../helpers";
import { context as noteEditorContext } from "../NoteEditor.svelte";
import { editingInputIsRichText } from "../rich-text-input";


@@ -1,24 +1,6 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
/**
* Trivial wrapper to silence Svelte deprecation warnings
*/
export function execCommand(
command: string,
showUI?: boolean | undefined,
value?: string | undefined,
): void {
document.execCommand(command, showUI, value);
}
/**
* Trivial wrappers to silence Svelte deprecation warnings
*/
export function queryCommandState(command: string): boolean {
return document.queryCommandState(command);
}
function isFontElement(element: Element): element is HTMLFontElement {
return element.tagName === "FONT";
}


@@ -2,8 +2,8 @@
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import { updateAllState } from "../components/WithState.svelte";
import { execCommand } from "../domlib";
import { filterHTML } from "../html-filter";
import { execCommand } from "./helpers";
export function pasteHTML(
html: string,


@@ -6,9 +6,7 @@
"mathjax-overlay/*",
"plain-text-input/*",
"rich-text-input/*",
"editor-toolbar/*",
"tag-editor/*",
"tag-editor/tag-options-button/*"
"editor-toolbar/*"
],
"references": [
{ "path": "../components" },
@@ -16,6 +14,7 @@
{ "path": "../lib" },
{ "path": "../sveltelib" },
{ "path": "../editable" },
{ "path": "../html-filter" }
{ "path": "../html-filter" },
{ "path": "../tag-editor" }
]
}

ts/import-csv/BUILD.bazel (new file)

@@ -0,0 +1,81 @@
load("//ts:prettier.bzl", "prettier_test")
load("//ts:eslint.bzl", "eslint_test")
load("//ts/svelte:svelte.bzl", "compile_svelte", "svelte_check")
load("//ts:esbuild.bzl", "esbuild")
load("//ts:generate_page.bzl", "generate_page")
load("//ts:compile_sass.bzl", "compile_sass")
load("//ts:typescript.bzl", "typescript")
generate_page(page = "import-csv")
compile_sass(
srcs = ["import-csv-base.scss"],
group = "base_css",
visibility = ["//visibility:public"],
deps = [
"//sass:base_lib",
"//sass:scrollbar_lib",
"//sass/bootstrap",
],
)
_ts_deps = [
"//ts/components",
"//ts/lib",
"//ts/sveltelib",
"@npm//@fluent",
"@npm//@types/jest",
"@npm//lodash-es",
"@npm//svelte",
"@npm//marked",
]
compile_svelte(deps = _ts_deps)
typescript(
name = "index",
deps = _ts_deps + [
":svelte",
],
)
esbuild(
name = "import-csv",
args = {
"globalName": "anki",
"loader": {".svg": "text"},
},
entry_point = "index.ts",
output_css = "import-csv.css",
visibility = ["//visibility:public"],
deps = [
":base_css",
":index",
":svelte",
"//ts/tag-editor",
"@npm//@mdi",
"@npm//bootstrap-icons",
],
)
# Tests
################
prettier_test()
eslint_test()
svelte_check(
name = "svelte_check",
srcs = glob([
"*.ts",
"*.svelte",
]) + [
"//sass:button_mixins_lib",
"//sass/bootstrap",
"@npm//@types/bootstrap",
"@npm//@types/lodash-es",
"@npm//@types/marked",
"//ts/components",
],
)


@@ -0,0 +1,27 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import * as tr from "../lib/ftl";
import type { Decks } from "../lib/proto";
export let deckNameIds: Decks.DeckNameId[];
export let deckId: number;
</script>
<Row --cols={2}>
<Col --col-size={1}>
{tr.decksDeck()}
</Col>
<Col --col-size={1}>
<!-- svelte-ignore a11y-no-onchange -->
<select class="form-select" bind:value={deckId}>
{#each deckNameIds as { id, name }}
<option value={id}>{name}</option>
{/each}
</select>
</Col>
</Row>


@@ -0,0 +1,37 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import * as tr from "../lib/ftl";
import { ImportExport } from "../lib/proto";
export let delimiter: ImportExport.CsvMetadata.Delimiter;
export let disabled: boolean;
const Delimiter = ImportExport.CsvMetadata.Delimiter;
const delimiters = [
{ value: Delimiter.TAB, label: tr.importingTab() },
{ value: Delimiter.PIPE, label: tr.importingPipe() },
{ value: Delimiter.SEMICOLON, label: tr.importingSemicolon() },
{ value: Delimiter.COLON, label: tr.importingColon() },
{ value: Delimiter.COMMA, label: tr.importingComma() },
{ value: Delimiter.SPACE, label: tr.studyingSpace() },
];
</script>
<Row --cols={2}>
<Col --col-size={1}>
{tr.importingFieldSeparator()}
</Col>
<Col --col-size={1}>
<!-- svelte-ignore a11y-no-onchange -->
<select class="form-select" bind:value={delimiter} {disabled}>
{#each delimiters as { value, label }}
<option {value}>{label}</option>
{/each}
</select>
</Col>
</Row>


@@ -0,0 +1,41 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import * as tr from "../lib/ftl";
import { ImportExport } from "../lib/proto";
export let dupeResolution: ImportExport.ImportCsvRequest.DupeResolution;
const dupeResolutions = [
{
value: ImportExport.ImportCsvRequest.DupeResolution.UPDATE,
label: tr.importingUpdate(),
},
{
value: ImportExport.ImportCsvRequest.DupeResolution.ADD,
label: tr.importingDuplicate(),
},
{
value: ImportExport.ImportCsvRequest.DupeResolution.IGNORE,
label: tr.importingPreserve(),
},
];
</script>
<Row --cols={2}>
<Col --col-size={1}>
{tr.importingExistingNotes()}
</Col>
<Col --col-size={1}>
<!-- svelte-ignore a11y-no-onchange -->
<select class="form-select" bind:value={dupeResolution}>
{#each dupeResolutions as { label, value }}
<option {value}>{label}</option>
{/each}
</select>
</Col>
</Row>


@@ -0,0 +1,31 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Spacer from "../components/Spacer.svelte";
import * as tr from "../lib/ftl";
import type { ImportExport } from "../lib/proto";
import type { ColumnOption } from "./lib";
import { getNotetypeFields } from "./lib";
import MapperRow from "./MapperRow.svelte";
export let columnOptions: ColumnOption[];
export let tagsColumn: number;
export let globalNotetype: ImportExport.CsvMetadata.MappedNotetype | null;
</script>
{#if globalNotetype}
{#await getNotetypeFields(globalNotetype.id) then fieldNames}
{#each fieldNames as label, idx}
<!-- the first field is treated specially, because it must be assigned some column -->
<MapperRow
{label}
columnOptions={idx === 0 ? columnOptions.slice(1) : columnOptions}
bind:value={globalNotetype.fieldColumns[idx]}
/>
{/each}
{/await}
{/if}
<Spacer --height="1.5rem" />
<MapperRow label={tr.editingTags()} {columnOptions} bind:value={tagsColumn} />

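The slice for the first field above drops the leading "Nothing" entry (value 0) from the selectable columns, which is what enforces the rule that the first field must be mapped. A small sketch of the effect, using a hypothetical options array (the real label text comes from FTL strings):

const columnOptions = [
    { label: "Nothing", value: 0, disabled: false }, // assumed label text
    { label: 'Column "Front"', value: 1, disabled: false },
    { label: 'Column "Back"', value: 2, disabled: false },
];
// the first note field may only pick a real column
const firstFieldOptions = columnOptions.slice(1);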

@@ -0,0 +1,21 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import StickyContainer from "../components/StickyContainer.svelte";
export let heading: string;
</script>
<StickyContainer --sticky-border="var(--border)" --sticky-borders="0px 0 1px">
<h1>
{heading}
</h1>
</StickyContainer>
<style lang="scss">
h1 {
padding-top: 0.5em;
}
</style>


@@ -0,0 +1,22 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import Switch from "../components/Switch.svelte";
import * as tr from "../lib/ftl";
export let isHtml: boolean;
export let disabled: boolean;
</script>
<Row --cols={2}>
<Col --col-size={1}>
{tr.importingAllowHtmlInFields()}
</Col>
<Col --col-size={1} --col-justify="flex-end">
<Switch id={undefined} bind:value={isHtml} {disabled} />
</Col>
</Row>


@@ -0,0 +1,109 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Container from "../components/Container.svelte";
import Row from "../components/Row.svelte";
import Spacer from "../components/Spacer.svelte";
import * as tr from "../lib/ftl";
import { Decks, ImportExport, importExport, Notetypes } from "../lib/proto";
import DeckSelector from "./DeckSelector.svelte";
import DelimiterSelector from "./DelimiterSelector.svelte";
import DupeResolutionSelector from "./DupeResolutionSelector.svelte";
import FieldMapper from "./FieldMapper.svelte";
import Header from "./Header.svelte";
import HtmlSwitch from "./HtmlSwitch.svelte";
import { getColumnOptions, getCsvMetadata } from "./lib";
import NotetypeSelector from "./NotetypeSelector.svelte";
import StickyFooter from "./StickyFooter.svelte";
import Tags from "./Tags.svelte";
export let path: string;
export let notetypeNameIds: Notetypes.NotetypeNameId[];
export let deckNameIds: Decks.DeckNameId[];
export let delimiter: ImportExport.CsvMetadata.Delimiter;
export let forceDelimiter: boolean;
export let forceIsHtml: boolean;
export let isHtml: boolean;
export let globalTags: string[];
export let updatedTags: string[];
export let columnLabels: string[];
export let tagsColumn: number;
// Protobuf oneofs. Exactly one of these pairs is expected to be set.
export let notetypeColumn: number | null;
export let globalNotetype: ImportExport.CsvMetadata.MappedNotetype | null;
export let deckId: number | null;
export let deckColumn: number | null;
let dupeResolution: ImportExport.ImportCsvRequest.DupeResolution;
let lastNotetypeId = globalNotetype?.id;
$: columnOptions = getColumnOptions(columnLabels, notetypeColumn, deckColumn);
$: getCsvMetadata(path, delimiter).then((meta) => {
columnLabels = meta.columnLabels;
});
$: if (globalNotetype?.id !== lastNotetypeId) {
lastNotetypeId = globalNotetype?.id;
getCsvMetadata(path, delimiter, globalNotetype?.id).then((meta) => {
globalNotetype = meta.globalNotetype ?? null;
});
}
async function onImport(): Promise<void> {
await importExport.importCsv(
ImportExport.ImportCsvRequest.create({
path,
dupeResolution,
metadata: ImportExport.CsvMetadata.create({
delimiter,
forceDelimiter,
isHtml,
forceIsHtml,
globalTags,
updatedTags,
columnLabels,
tagsColumn,
notetypeColumn,
globalNotetype,
deckColumn,
deckId,
}),
}),
);
}
</script>
<Container --gutter-inline="0.75rem" --gutter-block="0.25rem">
<Row --cols={2}>
<Col --col-size={1} breakpoint="md">
<Container>
<Header heading={tr.importingImportOptions()} />
<Spacer --height="1.5rem" />
{#if globalNotetype}
<NotetypeSelector
{notetypeNameIds}
bind:notetypeId={globalNotetype.id}
/>
{/if}
{#if deckId}
<DeckSelector {deckNameIds} bind:deckId />
{/if}
<DupeResolutionSelector bind:dupeResolution />
<DelimiterSelector bind:delimiter disabled={forceDelimiter} />
<HtmlSwitch bind:isHtml disabled={forceIsHtml} />
<Tags bind:globalTags bind:updatedTags />
</Container>
</Col>
<Col --col-size={1} breakpoint="md">
<Container>
<Header heading={tr.importingFieldMapping()} />
<Spacer --height="1.5rem" />
<FieldMapper {columnOptions} bind:globalNotetype bind:tagsColumn />
</Container>
</Col>
</Row>
<StickyFooter {path} {onImport} />
</Container>


@@ -0,0 +1,27 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import type { ColumnOption } from "./lib";
export let label: string;
export let columnOptions: ColumnOption[];
export let value: number;
</script>
<Row --cols={2}>
<Col --col-size={1}>
{label}
</Col>
<Col --col-size={1}>
<!-- svelte-ignore a11y-no-onchange -->
<select class="form-select" bind:value>
{#each columnOptions as { label, value, disabled }}
<option {value} {disabled}>{label}</option>
{/each}
</select>
</Col>
</Row>


@@ -0,0 +1,27 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import * as tr from "../lib/ftl";
import type { Notetypes } from "../lib/proto";
export let notetypeNameIds: Notetypes.NotetypeNameId[];
export let notetypeId: number;
</script>
<Row --cols={2}>
<Col --col-size={1}>
{tr.notetypesNotetype()}
</Col>
<Col --col-size={1}>
<!-- svelte-ignore a11y-no-onchange -->
<select class="form-select" bind:value={notetypeId}>
{#each notetypeNameIds as { id, name }}
<option value={id}>{name}</option>
{/each}
</select>
</Col>
</Row>


@@ -0,0 +1,52 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import ButtonGroup from "../components/ButtonGroup.svelte";
import Col from "../components/Col.svelte";
import LabelButton from "../components/LabelButton.svelte";
import Row from "../components/Row.svelte";
import Shortcut from "../components/Shortcut.svelte";
import * as tr from "../lib/ftl";
import { getPlatformString } from "../lib/shortcuts";
export let path: string;
export let onImport: () => void;
const keyCombination = "Control+Enter";
</script>
<div style:flex-grow="1" />
<div class="sticky-footer">
<Row --cols={5}
><Col --col-size={4}>{path}</Col><Col --col-justify="end">
<ButtonGroup size={2}>
<LabelButton
theme="primary"
tooltip={getPlatformString(keyCombination)}
on:click={onImport}
--border-left-radius="5px"
--border-right-radius="5px">{tr.actionsImport()}</LabelButton
>
<Shortcut {keyCombination} on:action={onImport} />
</ButtonGroup></Col
></Row
>
</div>
<style lang="scss">
.sticky-footer {
position: sticky;
bottom: 0;
z-index: 10;
margin: 0.75rem;
padding: 0.25rem;
background: var(--window-bg);
border-style: solid none none;
border-color: var(--border);
border-width: thin;
}
</style>

ts/import-csv/Tags.svelte (new file)

@@ -0,0 +1,38 @@
<!--
Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import { writable } from "svelte/store";
import Col from "../components/Col.svelte";
import Row from "../components/Row.svelte";
import * as tr from "../lib/ftl";
import TagEditor from "../tag-editor/TagEditor.svelte";
export let globalTags: string[];
export let updatedTags: string[];
const globalTagsWritable = writable<string[]>(globalTags);
const updatedTagsWritable = writable<string[]>(updatedTags);
</script>
<Row --cols={2}>
<Col>{tr.importingTagAllNotes()}</Col>
<Col>
<TagEditor
tags={globalTagsWritable}
on:tagsupdate={({ detail }) => (globalTags = detail.tags)}
keyCombination={"Control+T"}
/></Col
>
</Row>
<Row --cols={2}>
<Col>{tr.importingTagUpdatedNotes()}</Col>
<Col>
<TagEditor
tags={updatedTagsWritable}
on:tagsupdate={({ detail }) => (updatedTags = detail.tags)}
/></Col
>
</Row>

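TagEditor takes its tags prop as a Svelte store rather than a plain array, so the component above wraps each incoming array in a writable and mirrors edits back out through the tagsupdate event. A minimal sketch of that bridge pattern, with hypothetical names and values:

import { writable } from "svelte/store";

// wrap the plain array in a store for the store-based TagEditor API
let globalTags: string[] = ["verbs", "chapter-1"];
const globalTagsWritable = writable<string[]>(globalTags);

// edits come back via the component's tagsupdate event payload
function onTagsUpdate(detail: { tags: string[] }): void {
    globalTags = detail.tags;
}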

@@ -0,0 +1,34 @@
@use "sass/vars";
@use "sass/bootstrap-dark";
@import "sass/base";
@import "sass/bootstrap/scss/alert";
@import "sass/bootstrap/scss/buttons";
@import "sass/bootstrap/scss/button-group";
@import "sass/bootstrap/scss/close";
@import "sass/bootstrap/scss/grid";
@import "sass/bootstrap-forms";
.night-mode {
@include bootstrap-dark.night-mode;
}
body {
width: min(100vw, 70em);
margin: 0 auto;
}
html {
overflow-x: hidden;
}
#main {
padding: 0.5em 0.5em 1em 0.5em;
height: 100vh;
}
// override the default down arrow colour in <select> elements
.night-mode select {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23FFFFFF' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e");
}

ts/import-csv/index.ts (new file)

@@ -0,0 +1,76 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import "./import-csv-base.css";
import { ModuleName, setupI18n } from "../lib/i18n";
import { checkNightMode } from "../lib/nightmode";
import {
Decks,
decks as decksService,
empty,
notetypes as notetypeService,
} from "../lib/proto";
import ImportCsvPage from "./ImportCsvPage.svelte";
import { getCsvMetadata } from "./lib";
const gettingNotetypes = notetypeService.getNotetypeNames(empty);
const gettingDecks = decksService.getDeckNames(
Decks.GetDeckNamesRequest.create({
skipEmptyDefault: false,
includeFiltered: false,
}),
);
const i18n = setupI18n({
modules: [
ModuleName.ACTIONS,
ModuleName.CHANGE_NOTETYPE,
ModuleName.DECKS,
ModuleName.EDITING,
ModuleName.IMPORTING,
ModuleName.KEYBOARD,
ModuleName.NOTETYPES,
ModuleName.STUDYING,
],
});
export async function setupImportCsvPage(path: string): Promise<ImportCsvPage> {
const gettingMetadata = getCsvMetadata(path);
const [notetypes, decks, metadata] = await Promise.all([
gettingNotetypes,
gettingDecks,
gettingMetadata,
i18n,
]);
checkNightMode();
return new ImportCsvPage({
target: document.body,
props: {
path: path,
deckNameIds: decks.entries,
notetypeNameIds: notetypes.entries,
delimiter: metadata.delimiter,
forceDelimiter: metadata.forceDelimiter,
isHtml: metadata.isHtml,
forceIsHtml: metadata.forceIsHtml,
globalTags: metadata.globalTags,
updatedTags: metadata.updatedTags,
columnLabels: metadata.columnLabels,
tagsColumn: metadata.tagsColumn,
globalNotetype: metadata.globalNotetype ?? null,
// Unset oneof numbers default to 0, which also means n/a here,
// but it's vital to differentiate between unset and 0 when reserializing.
notetypeColumn: metadata.notetypeColumn ? metadata.notetypeColumn : null,
deckId: metadata.deckId ? metadata.deckId : null,
deckColumn: metadata.deckColumn ? metadata.deckColumn : null,
},
});
}
/* // use #testXXXX where XXXX is notetype ID to test
if (window.location.hash.startsWith("#test")) {
const ntid = parseInt(window.location.hash.substr("#test".length), 10);
setupCsvImportPage(ntid, ntid);
} */

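The unset-oneof convention noted in the props above can be captured in a tiny helper; this is a sketch only, and the helper below is hypothetical, not part of the diff:

function oneofOrNull(value: number): number | null {
    // protobuf oneof members read back as 0 when unset; since 0 is never a
    // valid column index or deck id here, 0 can safely signal "not set"
    return value ? value : null;
}

// e.g. notetypeColumn: oneofOrNull(metadata.notetypeColumn)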
ts/import-csv/lib.ts (new file)

@@ -0,0 +1,69 @@
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import * as tr from "../lib/ftl";
import {
ImportExport,
importExport,
Notetypes,
notetypes as notetypeService,
} from "../lib/proto";
export interface ColumnOption {
label: string;
value: number;
disabled: boolean;
}
export function getColumnOptions(
columnLabels: string[],
notetypeColumn: number | null,
deckColumn: number | null,
): ColumnOption[] {
return [{ label: tr.changeNotetypeNothing(), value: 0, disabled: false }].concat(
columnLabels.map((label, index) => {
index += 1;
if (index === notetypeColumn) {
return columnOption(tr.notetypesNotetype(), true, index);
} else if (index === deckColumn) {
return columnOption(tr.decksDeck(), true, index);
} else if (label === "") {
return columnOption(index, false, index);
} else {
return columnOption(`"${label}"`, false, index);
}
}),
);
}
function columnOption(
label: string | number,
disabled: boolean,
index: number,
): ColumnOption {
return {
label: tr.importingColumn({ val: label }),
value: index,
disabled,
};
}
export async function getNotetypeFields(notetypeId: number): Promise<string[]> {
return notetypeService
.getFieldNames(Notetypes.NotetypeId.create({ ntid: notetypeId }))
.then((list) => list.vals);
}
export async function getCsvMetadata(
path: string,
delimiter?: ImportExport.CsvMetadata.Delimiter,
notetypeId?: number,
): Promise<ImportExport.CsvMetadata> {
return importExport.getCsvMetadata(
ImportExport.CsvMetadataRequest.create({
path,
delimiter,
notetypeId,
}),
);
}

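For illustration, here is roughly what getColumnOptions yields for a three-column file whose second column carries the notetype; the exact label text depends on the FTL strings, so the comments below are assumptions:

import { getColumnOptions } from "./lib";

const options = getColumnOptions(["Front", "Notetype", ""], 2, null);
// options[0]: the "Nothing" entry, value 0, always selectable
// options[1]: labelled with the quoted header ("Front"), value 1
// options[2]: the notetype column, value 2, disabled so it cannot double as a field
// options[3]: no header, so it falls back to its 1-based index, value 3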

@@ -0,0 +1,12 @@
{
"extends": "../tsconfig.json",
"include": ["*"],
"references": [
{ "path": "../lib" },
{ "path": "../sveltelib" },
{ "path": "../components" }
],
"compilerOptions": {
"types": ["jest"]
}
}


@@ -15,6 +15,7 @@ import DeckConfig = anki.deckconfig;
import Decks = anki.decks;
import Generic = anki.generic;
import I18n = anki.i18n;
import ImportExport = anki.import_export;
import Notes = anki.notes;
import Notetypes = anki.notetypes;
import Scheduler = anki.scheduler;
@@ -54,6 +55,8 @@ async function serviceCallback(
}
}
export const decks = Decks.DecksService.create(serviceCallback as RPCImpl);
export { DeckConfig };
export const deckConfig = DeckConfig.DeckConfigService.create(
serviceCallback as RPCImpl,
@@ -62,6 +65,11 @@ export const deckConfig = DeckConfig.DeckConfigService.create(
export { I18n };
export const i18n = I18n.I18nService.create(serviceCallback as RPCImpl);
export { ImportExport };
export const importExport = ImportExport.ImportExportService.create(
serviceCallback as RPCImpl,
);
export { Notetypes };
export const notetypes = Notetypes.NotetypesService.create(serviceCallback as RPCImpl);

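With the service registered, frontend code can call the new import/export endpoints like any other generated service. A sketch mirroring what ts/import-csv/lib.ts does (the function name is hypothetical):

import { ImportExport, importExport } from "../lib/proto";

async function probeCsv(path: string): Promise<ImportExport.CsvMetadata> {
    // ask the backend to sniff the delimiter, HTML flag, and column labels
    return importExport.getCsvMetadata(
        ImportExport.CsvMetadataRequest.create({ path }),
    );
}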

@@ -3,7 +3,7 @@ Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import { pageTheme } from "../../sveltelib/theme";
import { pageTheme } from "../sveltelib/theme";
export let selected = false;
export let active = false;

ts/tag-editor/BUILD.bazel (new file)

@@ -0,0 +1,43 @@
load("//ts/svelte:svelte.bzl", "compile_svelte", "svelte_check")
load("//ts:prettier.bzl", "prettier_test")
load("//ts:eslint.bzl", "eslint_test")
load("//ts:typescript.bzl", "typescript")
_ts_deps = [
"//ts/components",
"//ts/lib",
"//ts/domlib",
"//ts/sveltelib",
"@npm//@fluent",
"@npm//svelte",
]
compile_svelte(
visibility = ["//visibility:public"],
deps = _ts_deps,
)
typescript(
name = "tag-editor",
deps = _ts_deps + [
":svelte",
],
)
# Tests
################
prettier_test()
eslint_test()
svelte_check(
name = "svelte_check",
srcs = glob([
"**/*.ts",
"**/*.svelte",
]) + _ts_deps + [
"//sass:button_mixins_lib",
"@npm//@types/bootstrap",
],
)


@@ -5,7 +5,7 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
<script lang="ts">
import { createEventDispatcher, onMount } from "svelte";
import { pageTheme } from "../../sveltelib/theme";
import { pageTheme } from "../sveltelib/theme";
let className: string = "";
export { className as class };


@@ -3,7 +3,7 @@ Copyright: Ankitects Pty Ltd and contributors
License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
-->
<script lang="ts">
import Badge from "../../components/Badge.svelte";
import Badge from "../components/Badge.svelte";
import { deleteIcon } from "./icons";
let className: string = "";


@@ -7,9 +7,8 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import type { Writable } from "svelte/store";
import { writable } from "svelte/store";
import StickyContainer from "../../components/StickyContainer.svelte";
import { Tags, tags as tagsService } from "../../lib/proto";
import { execCommand } from "../helpers";
import { execCommand } from "../domlib";
import { Tags, tags as tagsService } from "../lib/proto";
import { TagOptionsButton } from "./tag-options-button";
import TagEditMode from "./TagEditMode.svelte";
import TagInput from "./TagInput.svelte";
@@ -24,6 +23,7 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
import WithAutocomplete from "./WithAutocomplete.svelte";
export let tags: Writable<string[]>;
export let keyCombination: string = "Control+Shift+T";
let tagTypes: TagType[];
function tagsToTagTypes(tags: string[]): void {
@@ -381,112 +381,106 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
$: anyTagsSelected = tagTypes.some((tag) => tag.selected);
</script>
<StickyContainer
--gutter-block="0.1rem"
--sticky-borders="1px 0 0"
bind:height
class="d-flex"
>
<div class="tag-editor-area" on:focusout={deselectIfLeave}>
<TagOptionsButton
bind:badgeHeight
tagsSelected={anyTagsSelected}
on:tagselectall={selectAllTags}
on:tagcopy={copySelectedTags}
on:tagdelete={deleteSelectedTags}
on:tagappend={appendEmptyTag}
/>
<div class="tag-editor-area" on:focusout={deselectIfLeave} bind:offsetHeight={height}>
<TagOptionsButton
bind:badgeHeight
tagsSelected={anyTagsSelected}
on:tagselectall={selectAllTags}
on:tagcopy={copySelectedTags}
on:tagdelete={deleteSelectedTags}
on:tagappend={appendEmptyTag}
{keyCombination}
/>
{#each tagTypes as tag, index (tag.id)}
<div class="tag-relative" class:hide-tag={index === active}>
<TagEditMode
class="ms-0"
name={index === active ? activeName : tag.name}
tooltip={tag.name}
active={index === active}
shorten={shortenTags}
bind:flash={tag.flash}
bind:selected={tag.selected}
on:tagedit={() => {
active = index;
deselect();
}}
on:tagselect={() => select(index)}
on:tagrange={() => selectRange(index)}
on:tagdelete={() => {
deselect();
deleteTagAt(index);
saveTags();
}}
/>
{#each tagTypes as tag, index (tag.id)}
<div class="tag-relative" class:hide-tag={index === active}>
<TagEditMode
class="ms-0"
name={index === active ? activeName : tag.name}
tooltip={tag.name}
active={index === active}
shorten={shortenTags}
bind:flash={tag.flash}
bind:selected={tag.selected}
on:tagedit={() => {
active = index;
deselect();
}}
on:tagselect={() => select(index)}
on:tagrange={() => selectRange(index)}
on:tagdelete={() => {
deselect();
deleteTagAt(index);
saveTags();
}}
/>
{#if index === active}
<WithAutocomplete
{suggestionsPromise}
{show}
on:update={updateSuggestions}
on:select={({ detail }) => onAutocomplete(detail.selected)}
on:choose={({ detail }) => {
onAutocomplete(detail.chosen);
splitTag(index, detail.chosen.length, detail.chosen.length);
{#if index === active}
<WithAutocomplete
{suggestionsPromise}
{show}
on:update={updateSuggestions}
on:select={({ detail }) => onAutocomplete(detail.selected)}
on:choose={({ detail }) => {
onAutocomplete(detail.chosen);
splitTag(index, detail.chosen.length, detail.chosen.length);
}}
let:createAutocomplete
let:hide
>
<TagInput
id={tag.id}
class="position-absolute start-0 top-0 bottom-0 ps-2 py-0"
disabled={autocompleteDisabled}
bind:name={activeName}
bind:input={activeInput}
on:focus={() => {
activeName = tag.name;
autocomplete = createAutocomplete();
}}
let:createAutocomplete
let:hide
>
<TagInput
id={tag.id}
class="position-absolute start-0 top-0 bottom-0 ps-2 py-0"
disabled={autocompleteDisabled}
bind:name={activeName}
bind:input={activeInput}
on:focus={() => {
activeName = tag.name;
autocomplete = createAutocomplete();
}}
on:keydown={onKeydown}
on:keyup={() => {
if (activeName.length === 0) {
hide?.();
}
}}
on:taginput={() => updateTagName(tag)}
on:tagsplit={({ detail }) =>
splitTag(index, detail.start, detail.end)}
on:tagadd={() => insertTagKeepFocus(index)}
on:tagdelete={() => deleteTagAt(index)}
on:tagselectall={async () => {
if (tagTypes.length <= 1) {
// Noop if no other tags exist
return;
}
on:keydown={onKeydown}
on:keyup={() => {
if (activeName.length === 0) {
hide?.();
}
}}
on:taginput={() => updateTagName(tag)}
on:tagsplit={({ detail }) =>
splitTag(index, detail.start, detail.end)}
on:tagadd={() => insertTagKeepFocus(index)}
on:tagdelete={() => deleteTagAt(index)}
on:tagselectall={async () => {
if (tagTypes.length <= 1) {
// Noop if no other tags exist
return;
}
activeInput.blur();
// Ensure blur events are processed first
await tick();
activeInput.blur();
// Ensure blur events are processed first
await tick();
selectAllTags();
}}
on:tagjoinprevious={() => joinWithPreviousTag(index)}
on:tagjoinnext={() => joinWithNextTag(index)}
on:tagmoveprevious={() => moveToPreviousTag(index)}
on:tagmovenext={() => moveToNextTag(index)}
on:tagaccept={() => {
deleteTagIfNotUnique(tag, index);
if (tag) {
updateTagName(tag);
}
saveTags();
decideNextActive();
}}
/>
</WithAutocomplete>
{/if}
</div>
{/each}
selectAllTags();
}}
on:tagjoinprevious={() => joinWithPreviousTag(index)}
on:tagjoinnext={() => joinWithNextTag(index)}
on:tagmoveprevious={() => moveToPreviousTag(index)}
on:tagmovenext={() => moveToNextTag(index)}
on:tagaccept={() => {
deleteTagIfNotUnique(tag, index);
if (tag) {
updateTagName(tag);
}
saveTags();
decideNextActive();
}}
/>
</WithAutocomplete>
{/if}
</div>
{/each}
<TagSpacer on:click={appendEmptyTag} />
</div>
</StickyContainer>
<TagSpacer on:click={appendEmptyTag} />
</div>
<style lang="scss">
.tag-editor-area {


@@ -5,7 +5,7 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
<script lang="ts">
import { createEventDispatcher, onMount, tick } from "svelte";
import { registerShortcut } from "../../lib/shortcuts";
import { registerShortcut } from "../lib/shortcuts";
import {
delimChar,
normalizeTagname,


@@ -5,9 +5,9 @@ License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html
<script lang="ts">
import { createEventDispatcher } from "svelte";
import WithTooltip from "../../components/WithTooltip.svelte";
import { controlPressed, shiftPressed } from "../../lib/keys";
import { pageTheme } from "../../sveltelib/theme";
import WithTooltip from "../components/WithTooltip.svelte";
import { controlPressed, shiftPressed } from "../lib/keys";
import { pageTheme } from "../sveltelib/theme";
import Tag from "./Tag.svelte";
import { delimChar } from "./tags";

Some files were not shown because too many files have changed in this diff.