Commit Graph

14 Commits

RumovZ
855dc9d75b
Add Rust bin to deprecate unused ftl entries (#2364)
* Add Rust bin to deprecate unused ftl entries

* Align function names with bin names

* Support passing in multiple ftl roots

* Use source instead of JSON files for deprecating

* Fix CargoRun not working more than once (dae)

* Add ftl:deprecate (dae)

* Deprecate some strings (dae)

This is not all of the strings that are currently unused

* Check json files before deprecating; add allowlist (dae)

The scheduler messages we'll probably want to reuse for the v2->v3
transition, so I'd prefer to keep them undeprecated for now.

* Deprecate old bury options (dae)

* Support gathering usages from Kotlin files for AnkiDroid (dae)

* Update json scripts (dae)

* Remove old deprecation headers

* Parameterize JSON roots to keep

* Tweak deprecation message (dae)
2023-02-07 11:56:14 +10:00
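
A minimal sketch of the approach this bin takes, assuming a simple line-based reading of Fluent files; the helper names and the crude usage heuristic are illustrative, not the actual implementation:

```rust
use std::collections::HashSet;
use std::fs;
use std::path::Path;

/// Collect message keys from a Fluent file (a message definition starts
/// at column 0 as `key = value`).
fn ftl_keys(ftl_file: &Path) -> HashSet<String> {
    fs::read_to_string(ftl_file)
        .unwrap_or_default()
        .lines()
        .filter(|line| !line.starts_with(' ') && !line.starts_with('#'))
        .filter_map(|line| line.split_once('=').map(|(k, _)| k.trim().to_string()))
        .filter(|k| !k.is_empty())
        .collect()
}

/// Report keys with no occurrence in any source file: candidates for a
/// deprecation header. (A real scan would also map kebab-case keys to the
/// identifiers generated for each language.)
fn unused_keys(keys: &HashSet<String>, sources: &[String]) -> Vec<String> {
    keys.iter()
        .filter(|key| !sources.iter().any(|src| src.contains(key.as_str())))
        .cloned()
        .collect()
}
```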
RumovZ
c521753057
Refactor error handling (#2136)
* Add crate snafu

* Replace all inline structs in AnkiError

* Derive Snafu on AnkiError

* Use snafu for card type errors

* Use snafu's `Whatever` error for InvalidInput

* Use snafu for NotFoundError and improve message

* Use snafu for FileIoError to attach context

Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.

* Add more context-attaching io helpers

* Add message, context and backtrace to new snafus

* Utilize error context and backtrace on frontend

* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.

* Rename localized(_description) -> message

* Remove accidentally committed experimental trait

* invalid_input_context -> ok_or_invalid

* ensure_valid_input! -> require!

* Always return `Err` from `invalid_input!`

Instead of producing a Result to unwrap, the macro now accepts a source error.

* new_tempfile_in_parent -> new_tempfile_in_parent_of

* ok_or_not_found -> or_not_found

* ok_or_invalid -> or_invalid

* Add crate convert_case

* Use unqualified lowercase type name

* Remove uses of snafu::ensure

* Allow public construction of InvalidInputErrors (dae)

Needed to port the AnkiDroid changes.

* Make into_protobuf() public (dae)

Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
2022-10-21 18:02:12 +10:00
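
To make the pattern above concrete, a condensed sketch of snafu-style errors with context selectors, an `or_not_found`-style helper, and a `require!`-style macro; the variants, fields, and helpers are illustrative, not the real AnkiError:

```rust
use snafu::{Backtrace, OptionExt, Snafu};
use std::collections::HashMap;

#[derive(Debug, Snafu)]
enum Error {
    #[snafu(display("invalid input: {message}"))]
    InvalidInput { message: String, backtrace: Backtrace },
    #[snafu(display("{type_name} with id {id} not found"))]
    NotFound {
        type_name: String,
        id: i64,
        backtrace: Backtrace,
    },
}

/// A helper in the spirit of `or_not_found`: turn an Option into a Result
/// carrying context (and an implicitly captured backtrace).
fn find_card(cards: &HashMap<i64, String>, id: i64) -> Result<&String, Error> {
    cards.get(&id).context(NotFoundSnafu { type_name: "card", id })
}

/// A helper in the spirit of `require!`: bail with InvalidInput when a
/// condition fails.
macro_rules! require {
    ($cond:expr, $msg:expr) => {
        if !$cond {
            return InvalidInputSnafu { message: $msg }.fail();
        }
    };
}

fn set_due(days: i64) -> Result<(), Error> {
    require!(days >= 0, "days must be non-negative");
    Ok(())
}
```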
RumovZ
8478492190
Check ids when gathering data (#1928)
This will throw an error if a card, note or revlog id from the future
is found during apkg import or export.
2022-06-24 13:56:52 +10:00
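
A hedged sketch of the check: card, note, and revlog ids are epoch-millisecond timestamps, so an id greater than the current time can only come from a skewed clock or bad data. The function name and error type are illustrative:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Reject any id that claims to have been created in the future.
fn check_ids(ids: &[i64]) -> Result<(), String> {
    let now_ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before 1970")
        .as_millis() as i64;
    match ids.iter().max() {
        Some(&max) if max > now_ms => Err(format!("id {max} is from the future")),
        _ => Ok(()),
    }
}
```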
Damien Elmes
4687620f5e Expand normalization checks on import/export
The old Python code was only checking for NFC encoding, but we should
check for other issues like special filenames on Windows (e.g. con.mp3).

- On export, the user is told to use Check Media if their media has
invalid filenames.
- On import, legacy packages will be transparently normalized. Since we're
doing the checks on export as well, any invalid names in a v3 package
are an error.
2022-03-17 17:31:19 +10:00
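
A minimal sketch of the two checks, assuming the unicode-normalization crate; the reserved-name list is abbreviated (Windows also reserves COM1-COM9 and LPT1-LPT9):

```rust
use unicode_normalization::is_nfc;

/// Names like `con.mp3` map to the legacy CON device on Windows.
const RESERVED_STEMS: &[&str] = &["con", "prn", "aux", "nul"];

fn filename_is_safe(name: &str) -> bool {
    let stem = name.split('.').next().unwrap_or("").to_ascii_lowercase();
    is_nfc(name) && !RESERVED_STEMS.contains(&stem.as_str())
}
```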
RumovZ
f3c8857421
Backups (#1685)
* Add zstd dep

* Implement backend backup with zstd

* Implement backup thinning

* Write backup meta

* Use new file extension anki21b

* Asynchronously backup on collection close in Rust

* Revert "Add zstd dep"

This reverts commit 3fcb2141d2be15f907269d13275c41971431385c.

* Add zstd again

* Take backup col path from col struct

* Fix formatting

* Implement backup restoring on backend

* Normalize restored media file names

* Refactor `extract_legacy_data()`

A bit cumbersome due to borrowing rules.

* Refactor

* Make thinning calendar-based and gradual

* Consider last kept backups of previous stages

* Import full apkgs and colpkgs with backend

* Expose new backup settings

* Test `BackupThinner` and make it deterministic

* Make backup_path optional when closing

* Delete leaky timer

* Add progress updates for restoring media

* Write restored collection to tempfile first

* Do collection compression in the background thread

This has us currently storing an uncompressed and compressed copy of
the collection in memory (not ideal), but means the collection can be
closed without waiting for compression to complete. On a large collection,
this takes a close and reopen from about 0.55s to about 0.07s. The old
backup code for comparison: about 0.35s for compression off, about
8.5s for zip compression.

* Use multithreading in zstd compression

On my system, this reduces the compression time of a large collection
from about 0.55s to 0.08s.

* Stream compressed collection data into zip file

* Tweak backup explanation

+ Fix incorrect tab order for ignore accents option

* Decouple restoring backup and full import

In the first case, no profile is opened unless the new collection
loads successfully.
In the second case, either the old collection is reloaded or the new one
is loaded.

* Fix number gap in Progress message

* Don't revert the backup when media restore fails, but report it

* Tweak error flow

* Remove native BackupLimits enum

* Fix type annotation

* Add thinning test for whole year

* Satisfy linter

* Await async backup to finish

* Move restart disclaimer out of backup tab

Should be visible regardless of the current tab.

* Write restored collection in chunks

* Refactor

* Write media in chunks and refactor

* Log error if removing file fails

* join_backup_task -> await_backup_completion

* Refactor backup.rs

* Refactor backup meta and collection extraction

* Fix wrong error being returned

* Call sync_all() on new collection

* Add ImportError

* Store logger in Backend, instead of creating one on demand

init_backend() accepts a Logger rather than a log file, to allow other
callers to customize the logger if they wish.

In the future we may want to explore using the tracing crate as an
alternative; it's a bit more ergonomic, as a logger doesn't need to be
passed around, and it plays more nicely with async code.

* Sync file contents prior to rename; sync folder after rename.

* Limit backup creation to once per 30 min

* Use zstd::stream::copy_decode

* Make importing abortable

* Don't revert if backup media is aborted

* Set throttle implicitly

* Change force flag to minimum_backup_interval

* Don't attempt to open folders on Windows

* Join last backup thread before starting new one

Also refactor.

* Disable auto sync and backup when restoring again

* Force backup on full download

* Include the reason why a media file import failed, and the file path

- Introduce a FileIoError that contains a string representation of
the underlying I/O error, and an associated path. There are a few
places in the code where we're currently manually including the filename
in a custom error message, and this is a step towards a more consistent
approach (but we may be better served with a more general approach in
the future similar to Anyhow's .context())
- Move the error message into importing.ftl, as it's a bit neater
when error messages live in the same file as the rest of the messages
associated with some functionality.

* Fix importing of media files

* Minor wording tweaks

* Save an allocation

I18n strings with replacements are already strings, so we can skip the
extra allocation. Not that it matters here at all.

* Terminate import if file missing from archive

If a third-party tool is creating invalid archives, the user should know
about it. This should be rare, so I did not attempt to make it
translatable.

* Skip multithreaded compression on small collections

Co-authored-by: Damien Elmes <gpg@ankiweb.net>
2022-03-07 15:11:31 +10:00
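
A hedged sketch of the compression strategy described above, assuming the zstd crate (with its `zstdmt` feature) and the num_cpus crate; the size threshold is illustrative, not Anki's value:

```rust
use std::io::{self, Read, Write};

/// Compress the collection with zstd, enabling worker threads only when
/// the input is big enough for them to pay off.
fn compress_collection<R: Read, W: Write>(mut src: R, dst: W, len: u64) -> io::Result<W> {
    let mut encoder = zstd::Encoder::new(dst, 0)?; // 0 = zstd's default level
    if len > 10 * 1024 * 1024 {
        // requires the zstd crate's `zstdmt` feature
        encoder.multithread(num_cpus::get() as u32)?;
    }
    io::copy(&mut src, &mut encoder)?;
    encoder.finish()
}

/// Restoring mirrors the commit's use of zstd::stream::copy_decode.
fn decompress_collection<R: Read, W: Write>(src: R, dst: W) -> io::Result<()> {
    zstd::stream::copy_decode(src, dst)
}
```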
RumovZ
3e0c9dc866
New TTS/AV tag handling (#1559)
* Add new `card_rendering` mod

Parses text containing av/tts tags and strips or extracts the tags.

* Replace old `extract_av_tags` and `strip_av_tags`

... with new `card_rendering` mod

* ressource -> resource

* Add AV prettifier for use in browser table

* Accept String in av tag routines

... and avoid redundant writes if no changes need to be made.

* add benchmarking with criterion; make links test optional (dae)

cargo install cargo-criterion, then run ./bench.sh

* performance comparison: creating HashMap up front (dae)

the previous solution:

anki_tag_parse          time:   [1.8401 us 1.8437 us 1.8476 us]

this solution:

anki_tag_parse          time:   [2.2420 us 2.2447 us 2.2477 us]
                        change: [+21.477% +21.770% +22.066%] (p = 0.00 < 0.05)
                        Performance has regressed.

* Revert "performance comparison: creating HashMap up front" (dae)

This reverts commit f19126a2f15b729b825825a49283f63ab13474d0.

* add missing header

* Write error message if tts lang is missing

* `Tag` -> `Directive`
2021-12-17 19:04:42 +10:00
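
A sketch of a criterion benchmark matching the output above; the parse function is a stand-in for the real `card_rendering` parser, and the sample input follows Anki's `[anki:tts ...]` tag format:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

/// Stand-in for the real parser: just count tts tag openings.
fn parse(text: &str) -> usize {
    text.matches("[anki:tts").count()
}

fn bench(c: &mut Criterion) {
    c.bench_function("anki_tag_parse", |b| {
        b.iter(|| parse(black_box("foo [anki:tts lang=en_US]bar[/anki:tts] baz")))
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```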
Damien Elmes
5e10087aae handle notes with missing cards in browser
https://forums.ankiweb.net/t/2-1-45-release-candidate/11362/30
2021-07-22 14:58:57 +10:00
Damien Elmes
1f2567e04c add notetype changing to backend 2021-06-09 20:56:52 +10:00
Damien Elmes
0f5627bb7a limit custom study to 100 tags
The hard limit from sqlite may be larger, but things slow down as more
tags are selected.

https://forums.ankiweb.net/t/unable-to-create-custom-test/10467

There are a number of things that could be improved here:

- we should show a live count so users are aware of the limit
- we should be filling in the parent tags when they're not explicitly
listed on a card
- we should reconsider disabling the 'tags to include' by default

It may make sense to defer these changes until we can move this screen
into Svelte/handle the processing in the backend.
2021-06-02 11:15:39 +10:00
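
A hedged sketch of the cap, with a hypothetical constant and helpers; whether exceeding the limit errors or silently truncates is a UI decision:

```rust
/// Mirrors the commit's limit of 100 tags.
const MAX_CUSTOM_STUDY_TAGS: usize = 100;

fn check_tag_count(tags: &[String]) -> Result<(), String> {
    if tags.len() > MAX_CUSTOM_STUDY_TAGS {
        Err(format!("select at most {MAX_CUSTOM_STUDY_TAGS} tags"))
    } else {
        Ok(())
    }
}

/// Build the search string once the count is within bounds; each extra
/// tag adds another OR clause, which is why the query slows down.
fn tag_search(tags: &[String]) -> String {
    tags.iter()
        .map(|t| format!("tag:\"{t}\""))
        .collect::<Vec<_>>()
        .join(" OR ")
}
```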
Damien Elmes
f55fe6e518 i18n error shown when attempting to rebuild normal deck 2021-04-01 22:55:10 +10:00
Damien Elmes
32af54cd4d catch attempts to nest under a filtered deck; don't show traceback 2021-03-01 09:58:12 +10:00
RumovZ
ef925a88d6 Add filtered deck error localisation on backend 2021-02-26 11:32:26 +01:00
RumovZ
8e43b29816 Localise RenameDeckError 2021-02-24 13:57:44 +01:00
Damien Elmes
a8ddb65e1c add ability to force interval reset
- use trailing ! to force a reset
- use - instead of ..
- tweak i18n messages and error handling
2021-02-08 22:33:27 +10:00
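
A hedged sketch of the syntax described: a plain number sets the due date, `min-max` picks from a range, and a trailing `!` also resets the interval. The struct and function are illustrative, not Anki's actual types:

```rust
struct DueDateSpec {
    min_days: u32,
    max_days: u32,
    reset_interval: bool,
}

fn parse_due_date(input: &str) -> Option<DueDateSpec> {
    // trailing `!` forces an interval reset
    let (range, reset_interval) = match input.strip_suffix('!') {
        Some(rest) => (rest, true),
        None => (input, false),
    };
    // `-` separates a range; a bare number means min == max
    let (min_s, max_s) = range.split_once('-').unwrap_or((range, range));
    let min_days: u32 = min_s.parse().ok()?;
    let max_days: u32 = max_s.parse().ok()?;
    (min_days <= max_days).then(|| DueDateSpec {
        min_days,
        max_days,
        reset_interval,
    })
}
```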