The old Python code was only checking for NFC normalization, but we should
check for other issues, like special filenames on Windows (e.g. con.mp3).
- On export, the user is told to use Check Media if their media has
invalid filenames.
- On import, legacy packages will be transparently normalized. Since we're
doing the checks on export as well, any invalid names in a v3 package
are an error.
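A rough sketch of the kind of check this adds, assuming the unicode-normalization crate (names and the exact reserved-name logic are illustrative, not Anki's actual code):
```
use unicode_normalization::is_nfc;

/// Windows reserves device names such as CON, PRN and LPT1 regardless of
/// extension, so "con.mp3" is just as invalid as "con".
fn is_reserved_windows_name(name: &str) -> bool {
    let stem = name.split('.').next().unwrap_or(name).to_ascii_lowercase();
    matches!(stem.as_str(), "con" | "prn" | "aux" | "nul")
        || (stem.len() == 4
            && (stem.starts_with("com") || stem.starts_with("lpt"))
            && stem.ends_with(|c: char| c.is_ascii_digit()))
}

fn filename_is_valid(name: &str) -> bool {
    is_nfc(name) && !is_reserved_windows_name(name)
}
```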
* Fix legacy colpkg import; disable v3 import/export; add roundtrip test
The test revealed that we weren't decompressing the media files on v3
import. That's easy to fix, but means all files need decompressing
even when they already exist, which is not ideal - it would be better
to store size/checksum in the metadata instead.
* Switch media and meta to protobuf; re-enable v3 import/export
- Fixed media not being decompressed on import
- The uncompressed size and checksum are now included for each media
entry, so that we can quickly check whether a given file needs to be extracted.
We're still just doing a naive size comparison on colpkg import at the
moment, but we may want to use a checksum in the future, and will need
a checksum for apkg imports.
- Checksums can't be efficiently encoded in JSON, so the media list
has been switched to protobuf to reduce the space requirements.
- The meta file has been switched to protobuf as well, for consistency.
This means any colpkg files exported with beta 7 will be
unreadable.
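A rough prost-style sketch of the per-entry metadata described above (field names and tags are illustrative, not the actual .proto definition):
```
/// One media file in the package's media list.
#[derive(Clone, PartialEq, prost::Message)]
pub struct MediaEntry {
    #[prost(string, tag = "1")]
    pub name: String,
    /// Uncompressed size, used for the quick "does this need extracting?"
    /// comparison on colpkg import.
    #[prost(uint32, tag = "2")]
    pub size: u32,
    /// Raw checksum bytes; unlike JSON, protobuf stores these without
    /// hex/base64 overhead.
    #[prost(bytes = "vec", tag = "3")]
    pub sha1: Vec<u8>,
}
```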
* Avoid integer version comparisons
* Re-enable v3 test
* Apply suggestions from code review
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Add export_colpkg() method to Collection
More discoverable, and easier to call from unit tests
* Split import/export code out into separate folders
Currently colpkg/*.rs contain some routines that will be useful for
apkg import/export as well; in the future we can refactor them into a
separate file in the parent module.
* Return a proper error when media import fails
This tripped me up when writing the earlier unit test - I had called
the equivalent of import_colpkg()?, and it was returning a string error
that I didn't notice. In practice this should result in the same text
being shown in the UI, but just skips the tooltip.
* Automatically create media folder on import
* Move roundtrip test into separate file; check collection too
* Remove zstd version suffix
Prevents a warning shown each time Rust Analyzer is used to check the
code.
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Implement colpkg exporting on backend
* Use exporting logic in backup.rs
* Refactor exporting.rs
* Add backend function to export collection
* Refactor backend/collection.rs
* Use backend for colpkg exporting
* Don't use default zip compression for media
* Add exporting progress
* Refactor media file writing
* Write dummy collections
* Localize dummy collection note
* Minimize dummy db size
* Use `NamedTempFile::new()` instead of `new_in`
* Drop redundant v2 dummy collection
* COLLECTION_VERSION -> PACKAGE_VERSION
* Split `lock_collection()` into two to drop flag
* Expose new colpkg in GUI
* Improve dummy collection message
* Please type checker
* importing-colpkg-too-new -> exporting-...
* Compress the media map in the v3 package (dae)
On collections with lots of media, it can grow into megabytes.
Also return an error in extract_media_file_names(), instead of masking
it as an optional.
* Store media map as a vector in the v3 package (dae)
This compresses better (e.g. 280KB original, 100KB hashmap, 42KB vec).
In the colpkg import case we don't need random access. When importing
an apkg, we will need to be able to fetch file data for a given media
filename, but the existing map doesn't help us there, as we need
filename->index, not index->filename.
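A minimal sketch of the access patterns described above (types are illustrative):
```
use std::collections::HashMap;

struct MediaEntry {
    name: String,
    // uncompressed size, checksum, ...
}

/// colpkg import walks the vector in order (the zip member name is just
/// the index), while apkg import can build name -> index on demand.
fn index_by_name(entries: &[MediaEntry]) -> HashMap<&str, usize> {
    entries
        .iter()
        .enumerate()
        .map(|(idx, entry)| (entry.name.as_str(), idx))
        .collect()
}
```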
* Ensure folders in the media dir don't break the file mapping (dae)
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement still, but further work might be better
done once we look into decoupling deck limits from deck presets.
Tags and available cards are fetched prior to showing the dialog now,
and will show a progress dialog if things take a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
* Add forget prompt with options
- Restore original position
- Reset reps and lapses
* Restore position when resetting for export
* Add config context to avoid passing keys
* Add routine to fetch defaults; use method-specific enum (dae)
* Keep original position by default (dae)
* Fix code completion for forget dialog (dae)
Needs to be a symbolic link to the generated file
* Add zstd dep
* Implement backend backup with zstd
* Implement backup thinning
* Write backup meta
* Use new file ending anki21b
* Asynchronously backup on collection close in Rust
* Revert "Add zstd dep"
This reverts commit 3fcb2141d2be15f907269d13275c41971431385c.
* Add zstd again
* Take backup col path from col struct
* Fix formatting
* Implement backup restoring on backend
* Normalize restored media file names
* Refactor `extract_legacy_data()`
A bit cumbersome due to borrowing rules.
* Refactor
* Make thinning calendar-based and gradual
* Consider last kept backups of previous stages
* Import full apkgs and colpkgs with backend
* Expose new backup settings
* Test `BackupThinner` and make it deterministic
* Make `backup_path` optional when closing
* Delete leaky timer
* Add progress updates for restoring media
* Write restored collection to tempfile first
* Do collection compression in the background thread
This has us currently storing an uncompressed and compressed copy of
the collection in memory (not ideal), but means the collection can be
closed without waiting for compression to complete. On a large collection,
this takes a close and reopen from about 0.55s to about 0.07s. The old
backup code for comparison: about 0.35s for compression off, about
8.5s for zip compression.
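A minimal sketch of the idea, assuming the zstd crate (not the actual implementation):
```
use std::io;
use std::thread::{self, JoinHandle};

/// Hand the serialized collection to a background thread, so closing can
/// return immediately; the caller joins the handle before writing the
/// compressed bytes out. Both copies live in memory until then.
fn compress_in_background(data: Vec<u8>) -> JoinHandle<io::Result<Vec<u8>>> {
    thread::spawn(move || zstd::encode_all(&data[..], 0))
}
```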
* Use multithreading in zstd compression
On my system, this reduces the compression time of a large collection
from about 0.55s to 0.08s.
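Roughly, with the zstd crate this is a one-line change (requires its zstdmt feature; the worker count here is illustrative):
```
use std::io::{self, Write};

fn multithreaded_encoder<W: Write>(writer: W) -> io::Result<zstd::Encoder<'static, W>> {
    let mut encoder = zstd::Encoder::new(writer, 0)?;
    encoder.multithread(4)?; // spread compression over 4 worker threads
    Ok(encoder)
}
```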
* Stream compressed collection data into zip file
* Tweak backup explanation
+ Fix incorrect tab order for ignore accents option
* Decouple restoring backup and full import
In the first case, no profile is opened unless the new collection
loads successfully.
In the second case, either the old collection is reloaded or the new one
is loaded.
* Fix number gap in Progress message
* Don't revert backup when media import fails, but report it
* Tweak error flow
* Remove native BackupLimits enum
* Fix type annotation
* Add thinning test for whole year
* Satisfy linter
* Await async backup to finish
* Move restart disclaimer out of backup tab
Should be visible regardless of the current tab.
* Write restored collection in chunks
* Refactor
* Write media in chunks and refactor
* Log error if removing file fails
* join_backup_task -> await_backup_completion
* Refactor backup.rs
* Refactor backup meta and collection extraction
* Fix wrong error being returned
* Call sync_all() on new collection
* Add ImportError
* Store logger in Backend, instead of creating one on demand
init_backend() accepts a Logger rather than a log file, to allow other
callers to customize the logger if they wish.
In the future we may want to explore using the tracing crate as an
alternative; it's a bit more ergonomic, as a logger doesn't need to be
passed around, and it plays more nicely with async code.
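A hypothetical sketch of the new shape (Anki used slog at the time; the actual signatures differ):
```
use slog::Logger;

pub struct Backend {
    log: Logger,
    // collection state, etc.
}

/// The caller constructs the Logger, so embedders can customize it
/// before the backend exists.
pub fn init_backend(log: Logger) -> Backend {
    Backend { log }
}
```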
* Sync file contents prior to rename; sync folder after rename.
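This is the classic durable-rename pattern; a Unix-flavoured sketch (opening a directory to fsync it doesn't work on Windows):
```
use std::fs::{self, File};
use std::io::{self, Write};
use std::path::Path;

fn durable_replace(tmp: &Path, dest: &Path, contents: &[u8]) -> io::Result<()> {
    let mut file = File::create(tmp)?;
    file.write_all(contents)?;
    file.sync_all()?; // sync file contents prior to rename
    fs::rename(tmp, dest)?;
    if let Some(dir) = dest.parent() {
        File::open(dir)?.sync_all()?; // sync folder after rename
    }
    Ok(())
}
```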
* Limit backup creation to once per 30 min
* Use zstd::stream::copy_decode
* Make importing abortable
* Don't revert if backup media is aborted
* Set throttle implicitly
* Change force flag to minimum_backup_interval
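A minimal sketch of that throttle (names are illustrative):
```
use std::time::{Duration, SystemTime};

fn should_backup(
    last_backup: Option<SystemTime>,
    minimum_backup_interval: Duration,
) -> bool {
    match last_backup {
        // if the clock went backwards, err on the side of backing up
        Some(last) => last
            .elapsed()
            .map(|elapsed| elapsed >= minimum_backup_interval)
            .unwrap_or(true),
        None => true,
    }
}
```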
* Don't attempt to open folders on Windows
* Join last backup thread before starting new one
Also refactor.
* Disable auto sync and backup when restoring again
* Force backup on full download
* Include the reason why a media file import failed, and the file path
- Introduce a FileIoError that contains a string representation of
the underlying I/O error, and an associated path. There are a few
places in the code where we're currently manually including the filename
in a custom error message, and this is a step towards a more consistent
approach (though in the future we may be better served by a more general
approach, similar to anyhow's .context()).
- Move the error message into importing.ftl, as it's a bit neater
when error messages live in the same file as the rest of the messages
associated with some functionality.
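An illustrative shape for such an error (the real definition may differ):
```
use std::path::PathBuf;

#[derive(Debug)]
pub struct FileIoError {
    /// String form of the underlying I/O error.
    pub info: String,
    /// The file the operation failed on, so messages can name it.
    pub path: PathBuf,
}

impl FileIoError {
    pub fn new(err: std::io::Error, path: impl Into<PathBuf>) -> Self {
        Self {
            info: err.to_string(),
            path: path.into(),
        }
    }
}
```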
* Fix importing of media files
* Minor wording tweaks
* Save an allocation
I18n strings with replacements are already strings, so we can skip the
extra allocation. Not that it matters here at all.
* Terminate import if file missing from archive
If a third-party tool is creating invalid archives, the user should know
about it. This should be rare, so I did not attempt to make it
translatable.
* Skip multithreaded compression on small collections
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
Protobuf 3.15 introduced support for marking scalar fields like
uint32 as optional, and all of our tooling appears to support it
now. This allows us to use simple optional/null checks in our Rust/
TypeScript code, without having to resort to an inner message.
I had to apply a minor patch to protobufjs to get this working with
the json-module output; this has also been submitted upstream:
https://github.com/protobufjs/protobuf.js/pull/1693
I've modified CardStatsResponse as an example of the new syntax.
One thing to note: while the Rust and TypeScript bindings use optional/
null fields, as that is the norm in those languages, Google's Python
bindings are not very Pythonic. Referencing an optional field that is
missing will yield the default value, and a separate HasField() call
is required, eg:
```
>>> from anki.stats_pb2 import CardStatsResponse as R
... msg = R.FromString(b"")
... print(msg.first_review)
... print(msg.HasField("first_review"))
0
False
```
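For comparison, a rough sketch of the Rust side with prost, where the optional scalar becomes an Option (field type and tag are illustrative, not the actual CardStatsResponse):
```
#[derive(Clone, PartialEq, prost::Message)]
pub struct CardStatsResponse {
    // `optional uint32` in the .proto maps to Option<u32>, so a plain
    // `if let`/`is_some()` check replaces Python's HasField() dance.
    #[prost(uint32, optional, tag = "1")]
    pub first_review: Option<u32>,
}
```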
Tokio has had to be pinned, because the 1.17 release introduces
a dependency on windows_sys, which fails to build on Windows on
Bazel.
The issue appears to be the build script of a subcrate - it is using
CARGO_MANIFEST_DIR to update the linking path so windows.lib can be
found (it's contained in that crate), but the path is set incorrectly.
dfc25285a2/crates/targets/x86_64_msvc/build.rs
One way we might be able to work around it is to add to the link path
in our own build script.
The previous change in 1871b57663 failed
to consider the browser refreshing case, as reported here:
https://forums.ankiweb.net/t/anki-2-1-50-beta-3-4/17501/30
I previously attempted to solve this by having SetFlag skip the queue
rebuild, then mutating the captured mtimes in the queues. That didn't
work correctly when undoing, as the queue mutations weren't recorded.
This approach combines that attempt and the previous change: flag
setting is an undoable operation again, but does not change the card's
modification time, so it can be applied/undone without a queue build
being required. Instead of special-casing flag changes in the review
screen, we now just redraw the flag on changes.card, as any other card
op will have triggered a queue rebuild.
* Replace Card.data with .original_position
* Use and update original position in v3
* Show original position in card info
* Revert restoring original position for now
* Fix pb card to/from pylib card
* Try original_position as the last pb field
* minor wording tweaks (dae)
This is not ideal, but I struggled to come up with a better solution.
Background:
- The scheduler records the mtime of cards as it's building the queues,
and will throw an error in get_queued_cards() if the card on the DB
has a different mtime. This is to catch bugs - any operation that modifies
cards should be triggering a queue rebuild, or should adjust the queues
appropriately.
- The review screen skips the usual queue rebuild redraw, and directly
updates the flag icon. This is because a rebuild could cause a different
card to appear, or the answer side to switch back to the question side,
neither of which the user expects when they flag a card.
The current behaviour was broken: the queue rebuilding was still happening
on the backend, and the frontend was just failing to reflect it.
I initially tried to special-case Op::SetFlag, having it skip the queue
rebuild, and having set_card_flag() update the mtimes in the active
queue. But those mutations weren't captured by the undo log, so they
didn't get undone when undoing the set flag operation. We could perhaps
work around it by adding a separate undo entry to capture the mutation,
but it started to feel like it would be a pain to maintain moving forward.
By skipping the undo queue and retaining the same mtime, no queue
rebuild is required. Because we're setting usn, the cards will still
sync, but as mtime is not bumped, in the case of a conflict, an older
unsynced change from another client may revert the flag change.
Fixes https://forums.ankiweb.net/t/anki-2-1-50-beta-1-2/15608/145
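An illustrative sketch of the approach (not Anki's code; flags live in the low bits of the card's flags field):
```
struct Card {
    flags: u8,
    usn: i32,
    mtime_secs: i64,
}

fn set_flag_without_bumping_mtime(card: &mut Card, flag: u8, current_usn: i32) {
    card.flags = (card.flags & !0b0111) | (flag & 0b0111);
    // bumping usn lets the change sync; leaving mtime alone means the
    // scheduler's captured snapshot of the card stays valid.
    card.usn = current_usn;
}
```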
* avoid repinning Rust deps by default
* add id_tree dependency
* Respect intermediate child limits in v3
* Test new behaviour of v3 counts
* Rework v3 queue building to respect parent limits
* Add missing did field to SQL query
* Fix `LimitTreeMap::is_exhausted()`
* Rework tree building logic
https://github.com/ankitects/anki/pull/1638#discussion_r798328734
* Add timer for build_queues()
* `is_exhausted()` -> `limit_reached()`
* Move context and limits into `QueueBuilder`
This allows for moving more logic into QueueBuilder, so less passing
around of arguments. Unfortunately, some tests will require additional
work to set up.
* Fix stop condition in new_cards_by_position
* Fix gather order of new cards by deck
* Add scheduler/queue/builder/burying.rs
* Fix bad tree due to unsorted child decks
* Fix comment
* Fix `cap_new_to_review_rec()`
* Add test for new card gathering
* Always sort `child_decks()`
* Fix deck removal in `cap_new_to_review_rec()`
* Fix sibling ordering in new card gathering
* Remove limits for deck total count with children
* Add random gather order
* Remove bad sibling order handling
All routines ensure ascending order now.
Also do some other minor refactoring.
* Remove queue truncating
All routines stop now as soon as the root limit is reached.
* Move deck fetching into `QueueBuilder::new()`
* Rework new card gather and sort options
https://github.com/ankitects/anki/pull/1638#issuecomment-1032173013
* Disable new sort order choices ...
depending on the selected gather order.
* Use enum instead of numbers
* Ensure valid sort order setting
* Update new gather and sort order tooltips
* Warn about random insertion order with v3
* Revert "Add timer for build_queues()"
This reverts commit c9f5fc6ebe87953c17a0c842990b009b5596c69c.
* Update rslib/src/storage/card/mod.rs (dae)
* minor wording tweaks to the tooltips (dae)
+ move legacy strings to bottom
+ consistent capitalization (our leech action still needs fixing,
but that will require introducing a new 'suspend card' string as the
existing one is used elsewhere as well)
* Avoid rebuilding regex in field search
* Special case search in all fields
* Don't repeat mid nodes in field search sql
Gives a small speed gain for searches like `*:re:foo`, and reduces the SQL
tree depth if a lot of field names of the same notetype match.
* Add sql function to match fields with regex
* Optimise used field search algorithm
- Searching in all fields is a special case.
- Using native SQL comparison is preferred.
- For Regex, use newly added SQL function.
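One possible shape for the newly added SQL function with rusqlite (requires its functions feature; names are illustrative, and the regex is compiled once up front, per the earlier commit about avoiding rebuilding it):
```
use regex::Regex;
use rusqlite::functions::FunctionFlags;
use rusqlite::{Connection, Result};

fn register_field_regexp(db: &Connection, pattern: &str) -> Result<()> {
    // compile once and move into the SQL function, rather than per row
    let re = Regex::new(pattern)
        .map_err(|e| rusqlite::Error::UserFunctionError(Box::new(e)))?;
    db.create_scalar_function(
        "regexp_field",
        1,
        FunctionFlags::SQLITE_UTF8 | FunctionFlags::SQLITE_DETERMINISTIC,
        move |ctx| {
            let field_text: String = ctx.get(0)?;
            Ok(re.is_match(&field_text))
        },
    )
}
```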
* Please clippy
* Avoid pyramid of doom
* nt_fields -> matched_fields
* Add tests for regex and all field searches
* minor tweaks for readability (dae)
* Implement custom study on backend
* Switch frontend to backend custom study
* Skip typecheck for new pb classes
* Build tag search string on backend
Also fixes escaping of special characters in tag names.
* `cram.cards` -> `cram.card_limit`
* Assign more meaningful names in `TagLimit`
* Broaden rustfmt glob
* Use `invalid_input()` helper
* Assign `FilteredDeckForUpdate` to temp var
* Implement `SearchBuilder`
* Rewrite `custom_study()` with `SearchBuilder`
* Replace match macros with `SearchBuilder`
* Remove `into_nodes_list` & `concatenate_searches`
* Fix new preview card's position being interpreted as a date
Can be reproduced by opening the Card Info screen of a new preview card
that hasn't been answered yet.
* Update rslib/src/stats/card.rs
* Add new `card_rendering` mod
Parses text containing AV/TTS tags, and strips or extracts the tags.
* Replace old `extract_av_tags` and `strip_av_tags`
... with new `card_rendering` mod
* ressource -> resource
* Add AV prettifier for use in browser table
* Accept String in av tag routines
... and avoid redundant writes if no changes need to be made.
* add benchmarking with criterion; make links test optional (dae)
cargo install cargo-criterion, then run ./bench.sh
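A minimal criterion benchmark of the shape used here (the parsing function is a stand-in for the real tag parser):
```
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn parse_av_tags(text: &str) -> usize {
    // stand-in workload
    text.matches("[anki:").count()
}

fn bench_tag_parse(c: &mut Criterion) {
    c.bench_function("anki_tag_parse", |b| {
        b.iter(|| parse_av_tags(black_box("foo [anki:play:a:0] bar")))
    });
}

criterion_group!(benches, bench_tag_parse);
criterion_main!(benches);
```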
* performance comparison: creating HashMap up front (dae)
the previous solution:
anki_tag_parse time: [1.8401 us 1.8437 us 1.8476 us]
this solution:
anki_tag_parse time: [2.2420 us 2.2447 us 2.2477 us]
change: [+21.477% +21.770% +22.066%] (p = 0.00 < 0.05)
Performance has regressed.
* Revert "performance comparison: creating HashMap up front" (dae)
This reverts commit f19126a2f15b729b825825a49283f63ab13474d0.
* add missing header
* Write error message if tts lang is missing
* `Tag` -> `Directive`
* Make hard repeat the current step's interval in v3
Except for the first step, to avoid an interval identical to Again's.
* Make Hard repeat the current step's interval in v2
* Adjust test to new Hard behaviour
* Fix steps being mistaken for seconds
* Cap steps at `u32::MAX` seconds
* Fix overflow of steps in Rust
* Prevent overflow of `IntervalKind`
* Prevent overflow in `revlog/mod.rs`
Also replace some `as` with `from` and `try_from` as is recommended to
highlight potential issues.
* Ensure v2 doesn't store overflowing revlog ivls
* Lower steps cap in deck options
Whereas large card intervals are converted to days, revlog intervals use
i32s to store large numbers of seconds.
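An illustrative sketch of the capping rationale (not the actual code):
```
/// Steps are entered in minutes, but revlog intervals are stored as i32
/// seconds, so the effective cap is i32::MAX seconds.
fn capped_step_seconds(minutes: f64) -> i32 {
    (minutes * 60.0).clamp(0.0, i32::MAX as f64) as i32
}
```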
* Format
This brings the behaviour a bit closer to the default ordering of new
cards when they are reset, and is better than an undefined template
order. But it's a stopgap solution, and in the long run, filtered decks
need a bit of a rethink with the improved ordering that v3 has brought.
This was broken by an SQLite upgrade - previously we received the rows
in ix_cards_sched order, but recent versions use a table scan for that
query when the order is unspecified. Solved by being explicit about the
order in which we expect results to arrive.
https://forums.ankiweb.net/t/skipping-new-cards/15410
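A hypothetical illustration of the fix: spell the order out instead of relying on SQLite happening to walk ix_cards_sched (the exact query is illustrative):
```
const NEW_CARDS_SQL: &str = "
SELECT id FROM cards
WHERE did = ?1 AND queue = ?2
ORDER BY due, id -- previously implied by the index scan, now explicit
";
```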