anki/ftl/core/importing.ftl

importing-failed-debug-info = Import failed. Debugging info:
importing-aborted = Aborted: { $val }
importing-added-duplicate-with-first-field = Added duplicate with first field: { $val }
importing-allow-html-in-fields = Allow HTML in fields
importing-anki-files-are-from-a-very = .anki files are from a very old version of Anki. You can import them with add-on 175027074 or with Anki 2.0, available on the Anki website.
importing-anki2-files-are-not-directly-importable = .anki2 files are not directly importable - please import the .apkg or .zip file you have received instead.
importing-appeared-twice-in-file = Appeared twice in file: { $val }
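# Note (editorial, assuming standard Fluent behaviour): the "\t" in the message
# below is shown to the user literally; Fluent does not interpret backslash
# escapes in message text, so it should be kept as typed.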
importing-by-default-anki-will-detect-the = By default, Anki will detect the character between fields, such as a tab, comma, and so on. If Anki is detecting the character incorrectly, you can enter it here. Use \t to represent tab.
importing-change = Change
importing-colon = Colon
importing-comma = Comma
importing-empty-first-field = Empty first field: { $val }
importing-field-mapping = Field mapping
importing-field-of-file-is = Field <b>{ $val }</b> of file is:
importing-fields-separated-by = Fields separated by: { $val }
importing-file-version-unknown-trying-import-anyway = File version unknown, trying import anyway.
importing-first-field-matched = First field matched: { $val }
importing-identical = Identical
importing-ignore-field = Ignore field
importing-ignore-lines-where-first-field-matches = Ignore lines where first field matches existing note
importing-ignored = <ignored>
importing-import-even-if-existing-note-has = Import even if existing note has same first field
importing-import-options = Import options
importing-importing-complete = Importing complete.
importing-invalid-file-please-restore-from-backup = Invalid file. Please restore from backup.
importing-map-to = Map to { $val }
importing-map-to-tags = Map to Tags
importing-mapped-to = mapped to <b>{ $val }</b>
importing-mapped-to-tags = mapped to <b>Tags</b>
importing-mnemosyne-20-deck-db = Mnemosyne 2.0 Deck (*.db)
importing-multicharacter-separators-are-not-supported-please = Multi-character separators are not supported. Please enter one character only.
importing-notes-added-from-file = Notes added from file: { $val }
importing-notes-found-in-file = Notes found in file: { $val }
importing-notes-skipped-as-theyre-already-in = Notes skipped, as they're already in your collection: { $val }
importing-notes-that-could-not-be-imported = Notes that could not be imported as note type has changed: { $val }
importing-notes-updated-as-file-had-newer = Notes updated, as file had newer version: { $val }
importing-packaged-anki-deckcollection-apkg-colpkg-zip = Packaged Anki Deck/Collection (*.apkg *.colpkg *.zip)
importing-pauker-18-lesson-paugz = Pauker 1.8 Lesson (*.pau.gz)
importing-rows-had-num1d-fields-expected-num2d = '{ $row }' had { $found } fields, expected { $expected }
importing-selected-file-was-not-in-utf8 = Selected file was not in UTF-8 format. Please see the importing section of the manual.
importing-semicolon = Semicolon
importing-skipped = Skipped
importing-supermemo-xml-export-xml = Supermemo XML export (*.xml)
importing-tab = Tab
importing-tag-modified-notes = Tag modified notes:
importing-text-separated-by-tabs-or-semicolons = Text separated by tabs or semicolons (*)
importing-the-first-field-of-the-note = The first field of the note type must be mapped.
importing-the-provided-file-is-not-a = The provided file is not a valid .apkg file.
importing-this-file-does-not-appear-to = This file does not appear to be a valid .apkg file. If you're getting this error from a file downloaded from AnkiWeb, chances are that your download failed. Please try again, and if the problem persists, please try again with a different browser.
importing-this-will-delete-your-existing-collection = This will delete your existing collection and replace it with the data in the file you're importing. Are you sure?
importing-unable-to-import-from-a-readonly = Unable to import from a read-only file.
importing-unknown-file-format = Unknown file format.
importing-update-existing-notes-when-first-field = Update existing notes when first field matches
importing-updated = Updated
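# The messages below use Fluent select expressions keyed on { $count } to pick a
# CLDR plural category: [one] covers the singular form, and *[other] is the
# required default variant used for all remaining counts.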
importing-note-added =
    { $count ->
        [one] { $count } note added
       *[other] { $count } notes added
    }
importing-note-imported =
    { $count ->
        [one] { $count } note imported.
       *[other] { $count } notes imported.
    }
importing-note-unchanged =
    { $count ->
        [one] { $count } note unchanged
       *[other] { $count } notes unchanged
    }
importing-note-updated =
    { $count ->
        [one] { $count } note updated
       *[other] { $count } notes updated
    }
importing-processed-media-file =
    { $count ->
        [one] Imported { $count } media file
       *[other] Imported { $count } media files
    }
importing-importing-collection = Importing collection...
importing-failed-to-import-media-file = Failed to import media file: { $debugInfo }