* Drop support for checkpoints
* Deprecate .flush()
* Remove .begin/.commit
* Remove rollback() and deprecate save/autosave/reset()
There's no need to commit anymore, as the Rust code is handling
transactions for us.
* Add safer transact() method
This will ensure add-on authors can't accidentally leave a transaction
open, leading to data loss.
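For illustration, a minimal Rust sketch of the closure-scoped pattern, with hypothetical types (the Python-facing API differs in detail):

```rust
// Minimal sketch with hypothetical types: the transaction lives only inside
// transact(), so it is always committed or rolled back before returning.
struct Storage;

impl Storage {
    fn begin(&mut self) -> Result<(), String> { Ok(()) }
    fn commit(&mut self) -> Result<(), String> { Ok(()) }
    fn rollback(&mut self) -> Result<(), String> { Ok(()) }
}

struct Collection { storage: Storage }

impl Collection {
    fn transact<T>(
        &mut self,
        func: impl FnOnce(&mut Collection) -> Result<T, String>,
    ) -> Result<T, String> {
        self.storage.begin()?;
        match func(self) {
            Ok(output) => { self.storage.commit()?; Ok(output) }
            // Any error rolls the transaction back before it reaches the caller.
            Err(err) => { let _ = self.storage.rollback(); Err(err) }
        }
    }
}

fn main() {
    let mut col = Collection { storage: Storage };
    let added = col.transact(|_col| Ok::<_, String>(1)).unwrap();
    println!("added {added} note(s)");
}
```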
---------
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
(for upgrading users, please see the notes at the bottom)
Bazel brought a lot of nice things to the table, such as rebuilds based on
content changes instead of modification times, caching of build products,
detection of incorrect build rules via a sandbox, and so on. Rewriting the build
in Bazel was also an opportunity to improve on the Makefile-based build we had
prior, which was pretty poor: most dependencies were external or not pinned, and
the build graph was poorly defined and mostly serialized. It was not uncommon
for fresh checkouts to fail due to floating dependencies, or for things to break
when trying to switch to an older commit.
For day-to-day development, I think Bazel served us reasonably well - we could
generally switch between branches while being confident that builds would be
correct and reasonably fast, and not require full rebuilds (except on Windows,
where the lack of a sandbox and the TS rules would cause build breakages when TS
files were renamed/removed).
Bazel achieves that reliability by defining rules for each programming language
that define how source files should be turned into outputs. For the rules to
work with Bazel's sandboxing approach, they often have to reimplement or
partially bypass the standard tools that each programming language provides. The
Rust rules call Rust's compiler directly for example, instead of using Cargo,
and the Python rules extract each PyPi package into a separate folder that gets
added to sys.path.
These separate language rules allow proper declaration of inputs and outputs,
and offer some advantages such as caching of build products and fine-grained
dependency installation. But they also bring some downsides:
- The rules don't always support use-cases/platforms that the standard language
tools do, meaning they need to be patched to be used. I've had to contribute a
number of patches to the Rust, Python and JS rules to unblock various issues.
- The dependencies we use with each language sometimes make assumptions that do
not hold in Bazel, meaning they either need to be pinned or patched, or the
language rules need to be adjusted to accommodate them.
I was hopeful that after the initial setup work, things would be relatively
smooth sailing. Unfortunately, that has not proved to be the case. Things
frequently broke when dependencies or the language rules were updated, and I
began to get frustrated at the amount of Anki development time I was instead
spending on build system upkeep. It's now about two years since we switched
to Bazel, and I think it's time to cut our losses and switch to something
that's a better fit.
The new build system is based on a small build tool called Ninja, and some
custom Rust code in build/. This means that to build Anki, Bazel is no longer
required, but Ninja and Rust need to be installed on your system. Python and
Node toolchains are automatically downloaded like in Bazel.
This new build system should result in faster builds in some cases:
- Because we're using cargo to build now, Rust builds are able to take advantage
of pipelining and incremental debug builds, which we didn't have with Bazel.
It's also easier to override the default linker on Linux/macOS, which can
further improve speeds.
- External Rust crates are now built with opt=1 (in Cargo terms, a profile
override along the lines of `[profile.dev.package."*"] opt-level = 1`), which
improves the performance of debug builds.
- Esbuild is now used to transpile TypeScript, instead of invoking the TypeScript
compiler. This results in faster builds, by deferring typechecking to test/check
time, and by allowing more work to happen in parallel.
As an example of the differences, when testing with the mold linker on Linux,
adding a new message to tags.proto (which triggers a recompile of the bulk of
the Rust and TypeScript code) results in a compile that goes from about 22s on
Bazel to about 7s in the new system. With the standard linker, it's about 9s.
Some other changes of note:
- Our Rust workspace now uses cargo-hakari to ensure all packages agree on
available features, preventing unnecessary rebuilds.
- pylib/anki is now a PEP420 implicit namespace, avoiding the need to merge
source files and generated files into a single folder for running. By telling
VSCode about the extra search path, code completion now works with generated
files without needing to symlink them into the source folder.
- qt/aqt can't use PEP420 as it's difficult to get rid of aqt/__init__.py.
Instead, the generated files are now placed in a separate _aqt package that's
added to the path.
- ts/lib is now exposed as @tslib, so the source code and generated code can be
provided under the same namespace without a merging step.
- MyPy and PyLint are now invoked once for the entire codebase.
- dprint will be used to format TypeScript/JSON files in the future instead of
the slower prettier (currently turned off to avoid causing conflicts). It can
automatically defer to prettier when formatting Svelte files.
- svelte-check is now used for typechecking our Svelte code, which revealed a
few typing issues that went undetected with the old system.
- The Jest unit tests now work on Windows as well.
If you're upgrading from Bazel, updated usage instructions are in docs/development.md and docs/build.md. A summary of the changes:
- please remove node_modules and .bazel
- install rustup (https://rustup.rs/)
- install rsync if not already installed (on Windows, use pacman - see docs/windows.md)
- install Ninja (unzip from https://github.com/ninja-build/ninja/releases/tag/v1.11.1 and
place on your path, or from your distro/homebrew if it's 1.10+)
- update .vscode/settings.json from .vscode.dist
* Add crate csv
* Add start of csv importing on backend
* Add Mnemosyne serializer
* Add csv and json importing on backend
* Add plaintext importing on frontend
* Add csv metadata extraction on backend
* Add csv importing with GUI
* Fix missing dfa file in build
Added compile_data_attr, then re-ran cargo/update.py.
* Don't use doubly buffered reader in csv
* Escape HTML entities if CSV is not HTML
Also use name 'is_html' consistently.
* Use decimal number as foreign ease (like '2.5')
* ForeignCard.ivl → ForeignCard.interval
* Only allow fixed set of CSV delimiters
* Map timestamp of ForeignCard to native due time
* Don't trim CSV records
* Document use of empty strings for defaults
* Avoid creating CardGenContexts for every note
This requires CardGenContext to be generic, so it works both with an
owned and borrowed notetype.
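One way to express this (a sketch only; the actual bound may differ) is to be generic over `std::borrow::Borrow`, which both `Notetype` and `&Notetype` satisfy:

```rust
use std::borrow::Borrow;

struct Notetype { name: String }

// N can be an owned Notetype or a &Notetype; both satisfy Borrow<Notetype>,
// so a single context type can be reused across all notes without cloning.
struct CardGenContext<N: Borrow<Notetype>> {
    notetype: N,
}

impl<N: Borrow<Notetype>> CardGenContext<N> {
    fn notetype_name(&self) -> &str {
        &self.notetype.borrow().name
    }
}

fn main() {
    let owned = CardGenContext { notetype: Notetype { name: "Basic".into() } };
    let nt = Notetype { name: "Cloze".into() };
    let borrowed = CardGenContext { notetype: &nt };
    println!("{} / {}", owned.notetype_name(), borrowed.notetype_name());
}
```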
* Show all accepted file types in import file picker
* Add import_json_file()
* factor → ease_factor
* delimter_from_value → delimiter_from_value
* Map columns to fields, not the other way around
* Fall back to current config for csv metadata
* Add start of new import csv screen
* Temporary fix for compilation issue on Linux/Mac
* Disable jest bazel action for import-csv
Jest fails with an error code if no tests are available, but this would
not be noticeable on Windows, as Jest is not run there.
* Fix field mapping issue
* Revert "Temporary fix for compilation issue on Linux/Mac"
This reverts commit 21f8a261408cdae49ec031aa21a1b659c4f66d82.
* Add HtmlSwitch and move Switch to components
* Fix spacing and make selectors consistent
* Fix shortcut tooltip
* Place import button at the top with path
* Fix meta column indices
* Remove NotetypeForString
* Fix queue and type of foreign cards
* Support different dupe resolution strategies
* Allow dupe resolution selection when importing CSV
* Test import of unnormalized text
Close #1863.
* Fix logging of foreign notes
* Implement CSV exports
* Use db_scalar() in notes_table_len()
* Rework CSV metadata
- Notetypes and decks are either defined by a global id or by a column.
- If a notetype id is provided, its field map must also be specified.
- If a notetype column is provided, fields are now mapped by index
instead of name at import time. So the first non-meta column is used for
the first field of every note, regardless of notetype. This makes
importing easier and should improve compatibility with files without a
notetype column.
- Ensure first field can be mapped to a column.
- Meta columns must be defined as `#[meta name]:[column index]`
(e.g. `#notetype column:1`) instead of in the `#columns` tag.
- Column labels contain the raw names defined by the file and must be
prettified by the frontend.
* Adjust frontend to new backend column mapping
* Add force flags for is_html and delimiter
* Detect if CSV is HTML by field content
* Update dupe resolution labels
* Simplify selectors
* Fix coalescence of oneofs in TS
* Disable meta columns from selection
Plus a lot of refactoring.
* Make import button stick to the bottom
* Write delimiter and html flag into csv
* Refetch field map after notetype change
* Fix log labels for csv import
* Log notes whose deck/notetype was missing
* Fix hiding of empty log queues
* Implement adding tags to all notes of a csv
* Fix dupe resolution not being set in log
* Implement adding tags to updated notes of a csv
* Check first note field is not empty
* Temporary fix for build on Linux/Mac
* Fix inverted html check (dae)
* Remove unused ftl string
* Delimiter → Separator
* Remove commented-out line
* Don't accept .json files
* Tweak tag ftl strings
* Remove redundant blur call
* Strip sound and add spaces in csv export
* Export HTML by default
* Fix unset deck in Mnemosyne import
Also accept both numbers and strings for notetypes and decks in JSON.
* Make DupeResolution::Update the default
* Fix missing dot in extension
* Make column indices 1-based
* Remove StickContainer from TagEditor
Fixes line breaking, border and z index on ImportCsvPage.
* Assign different key combos to tag editors
* Log all updated duplicates
Add a log field for the true number of found notes.
* Show identical notes as skipped
* Split tag-editor into separate ts module (dae)
* Add progress for CSV export
* Add progress for text import
* Tidy-ups after tag-editor split (dae)
- import-csv no longer depends on editor
- remove some commented lines
* Add apkg export on backend
* Filter out missing media-paths at write time
* Make TagMatcher::new() infallible
* Gather export data instead of copying directly
* Revert changes to rslib/src/tags/
* Reuse filename_is_safe/check_filename_safe()
* Accept func to produce MediaIter in export_apkg()
* Only store file folder once in MediaIter
* Use temporary tables for gathering
export_apkg() now accepts a search instead of a deck id. Decks are
gathered according to the matched notes' cards.
* Use schedule_as_new() to reset cards
* ExportData → ExchangeData
* Ignore ascii case when filtering system tags
* search_notes_cards_into_table → search_cards_of_notes_into_table
* Start on apkg importing on backend
* Fix due dates in days for apkg export
* Refactor import-export/package
- Move media and meta code into appropriate modules.
- Normalize/check for normalization when deserializing media entries.
* Add SafeMediaEntry for deserialized MediaEntries
* Prepare media based on checksums
- Ensure all existing media files are hashed.
- Hash incoming files during preparation to detect conflicts.
- Uniquify names of conflicting files with hash (not notetype id); a sketch
of this step follows the list.
- Mark media files as used while importing notes.
- Finally copy used media.
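For illustration, the uniquifying step with a hypothetical helper:

```rust
// Hypothetical helper: make a conflicting media filename unique by
// embedding the content hash before the extension.
fn uniquified_name(fname: &str, hash: &[u8; 20]) -> String {
    let hex: String = hash.iter().map(|b| format!("{b:02x}")).collect();
    match fname.rsplit_once('.') {
        Some((stem, ext)) => format!("{stem}-{hex}.{ext}"),
        None => format!("{fname}-{hex}"),
    }
}

fn main() {
    let hash = [0xab_u8; 20];
    // -> "photo-<40 hex chars>.jpg"
    println!("{}", uniquified_name("photo.jpg", &hash));
}
```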
* Handle encoding in `replace_media_refs()`
* Add trait to keep down cow boilerplate
* Add notetypes immediately instead of preparing
* Move target_col into Context
* Add notes immediately instead of preparing
* Note id, not guid of conflicting notes
* Add import_decks()
* decks_configs → deck_configs
* Add import_deck_configs()
* Add import_cards(), import_revlog()
* Use dyn instead of generic for media_fn
Otherwise, we would have to pass None with a type annotation in the
default case.
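For illustration, a hypothetical signature showing the difference:

```rust
use std::path::PathBuf;

// With a generic `media_fn: Option<F>`, passing no function would force
// callers to write something like `None::<fn() -> Vec<PathBuf>>`.
// A dyn reference keeps the call sites simple.
fn export_apkg(media_fn: Option<&dyn Fn() -> Vec<PathBuf>>) {
    let files = media_fn.map(|f| f()).unwrap_or_default();
    println!("exporting {} media file(s)", files.len());
}

fn main() {
    export_apkg(None); // no turbofish needed
    export_apkg(Some(&|| vec![PathBuf::from("img.jpg")]));
}
```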
* Fix signature of import_apkg()
* Fix search_cards_of_notes_into_table()
* Test new functions in text.rs
* Add roundtrip test for apkg (stub)
* Keep source id of imported cards (or skip)
* Keep source ids of imported revlog (or skip)
* Try to keep source ids of imported notes
* Make adding notetype with id undoable
* Wrap apkg import in transaction
* Keep source ids of imported deck configs (or skip)
* Handle card due dates and original due/did
* Fix importing cards/revlog
Card ids are manually uniquified.
* Factor out card importing
* Refactor card and revlog importing
* Factor out card importing
Also handle missing parents.
* Factor out note importing
* Factor out media importing
* Maybe upgrade scheduler of apkg
* Fix parent deck gathering
* Unconditionally import static media
* Fix deck importing edge cases
Test those edge cases, and add some global test helpers.
* Test note importing
* Let import_apkg() take a progress func
* Expand roundtrip apkg test
* Use fat pointer to avoid propagating generics
* Fix progress_fn type
* Expose apkg export/import on backend
* Return note log when importing apkg
* Fix archived collection name on apkg import
* Add CollectionOpWithBackendProgress
* Fix wrong Interrupted Exception being checked
* Add ClosedCollectionOp
* Add note ids to log and strip HTML
* Update progress when checking incoming media too
* Conditionally enable new importing in GUI
* Fix all_checksums() for media import
Entries of deleted files are nulled, not removed.
* Make apkg exporting on backend abortable
* Return number of notes imported from apkg
* Fix exception printing for QueryOp as well
* Add QueryOpWithBackendProgress
Also support backend exporting progress.
* Expose new apkg and colpkg exporting
* Open transaction in insert_data()
Was slowing down exporting by several orders of magnitude.
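For illustration, the pattern in question, assuming rusqlite and a hypothetical notes table - a single transaction around the whole batch avoids a commit (and fsync) per row:

```rust
// Sketch assuming the rusqlite crate; table and columns are illustrative.
fn insert_data(conn: &mut rusqlite::Connection, rows: &[(i64, String)]) -> rusqlite::Result<()> {
    let tx = conn.transaction()?;
    {
        let mut stmt = tx.prepare("INSERT INTO notes (id, flds) VALUES (?1, ?2)")?;
        for (id, flds) in rows {
            stmt.execute(rusqlite::params![id, flds])?;
        }
    } // the statement must be dropped before the transaction is consumed
    tx.commit()
}
```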
* Handle zstd-compressed apkg
* Add legacy arg to ExportAnkiPackage
Currently not exposed on the frontend
* Remove unused import in proto file
* Add symlink for typechecking of import_export_pb2
* Avoid kwargs in pb message creation, so typechecking is not lost
Protobuf's behaviour is rather subtle, and I had to dig through the docs
to figure it out: setting a field on a submessage automatically assigns
the submessage to the parent, and SetInParent() can be called to persist
a default version of the submessage you specified.
* Avoid re-exporting protobuf msgs we only use internally
* Stop after one test failure
mypy often fails much faster than pylint
* Avoid an extra allocation when extracting media checksums
* Update progress after prepare_media() finishes
Otherwise the bulk of the import ends up being shown as "Checked: 0"
in the progress window.
* Show progress of note imports
Note import is the slowest part, so showing progress here makes the UI
feel more responsive.
* Reset filtered decks at import time
Before this change, filtered decks exported with scheduling remained
filtered on import, and maybe_remove_from_filtered_deck() moved cards
into them as their home deck, leading to errors during review.
We may still want to provide a way to preserve filtered decks on import,
but to do that we'll need to ensure we don't rewrite the home decks of
cards, and we'll need to ensure the home decks are included as part of
the import (or give an error if they're not).
https://github.com/ankitects/anki/pull/1743/files#r839346423
* Fix a corner-case where due dates were shifted by a day
This issue existed in the old Python code as well. We need to include
the user's UTC offset in the exported file, or days_elapsed falls back
on the v1 cutoff calculation, which may be a day earlier or later than
the v2 calculation.
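A toy illustration of why the offset matters (not the real cutoff logic): the day bucket a timestamp falls into shifts with the offset applied before division.

```rust
// Toy illustration: the day a timestamp falls on depends on the UTC
// offset applied before bucketing into days.
fn day_number(timestamp_secs: i64, utc_offset_secs: i64) -> i64 {
    (timestamp_secs + utc_offset_secs).div_euclid(86_400)
}

fn main() {
    let ts = 86_390; // ten seconds before the UTC day rollover
    assert_eq!(day_number(ts, 0), 0);
    // With a +1h offset, the same instant already counts as the next day.
    assert_eq!(day_number(ts, 3_600), 1);
}
```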
* Log conflicting note in remapped nt case
* take_fields() → into_fields()
* Alias `[u8; 20]` with `Sha1Hash`
* Truncate logged fields
* Rework apkg note import tests
- Use macros for more helpful errors.
- Split monolith into unit tests.
- Fix some unknown error with the previous test along the way.
(Was failing after 969484de4388d225c9f17d94534b3ba0094c3568.)
* Fix sorting of imported decks
Also adjust the test, so it fails without the patch. It was only passing
before, because the parent deck happened to come before the
inconsistently capitalised child alphabetically. But we want all parent
decks to be imported before their child decks, so their children can
adopt their capitalisation.
* target[_id]s → existing_card[_id]s
* export_collection_extracting_media() → export_into_collection_file()
* target_already_exists → card_ordinal_already_exists
* Add search_cards_of_notes_into_table.sql
* Improve type of apkg export selector/limit
* Remove redundant call to mod_schema()
* Parent tooltips to mw
* Fix a crash when truncating note text
String::truncate() is a bit of a footgun, and I've hit this before
too :-)
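For context, `String::truncate()` panics when the new length doesn't fall on a UTF-8 character boundary. A sketch of a boundary-safe variant:

```rust
// String::truncate() panics if the new length does not fall on a UTF-8
// character boundary, so truncating user text by byte count can crash.
fn truncate_to_boundary(s: &mut String, max_bytes: usize) {
    if s.len() <= max_bytes {
        return;
    }
    // Walk back to the nearest character boundary at or below max_bytes.
    let mut end = max_bytes;
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    s.truncate(end);
}

fn main() {
    let mut text = "née".to_string(); // 'é' is two bytes in UTF-8
    truncate_to_boundary(&mut text, 2); // a plain text.truncate(2) would panic
    assert_eq!(text, "n");
}
```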
* Remove ExportLimit in favour of separate classes
* Remove OpWithBackendProgress and ClosedCollectionOp
Backend progress logic is now in ProgressManager. QueryOp can be used
for running on closed collection.
Also fix aborting of colpkg exports, which slipped through in #1817.
* Tidy up import log
* Avoid QDialog.exec()
* Default to excluding scheduling for deck list deck
* Use IncrementalProgress in whole import_export code
* Compare checksums when importing colpkgs
* Avoid registering changes if hashes are not needed
* ImportProgress::Collection → ImportProgress::File
* Make downgrading apkgs depend on meta version
* Generalise IncrementableProgress
And use it in entire import_export code instead.
* Fix type complexity lint
* Take count_map for IncrementableProgress::get_inner
* Replace import/export env with Shift click
* Accept all args from update() for backend progress
* Pass fields of ProgressUpdate explicitly
* Move update_interval into IncrementableProgress
* Outsource incrementing into Incrementor
* Mutate ProgressUpdate in progress_update callback
* Switch import/export legacy toggle to profile setting
Shift would have been nice, but the existing shortcuts complicate things.
If the user triggers an import with ctrl+shift+i, shift is unlikely to
have been released by the time our code runs, meaning the user accidentally
triggers the new code. We could potentially wait a while before bringing
up the dialog, but then we're forced to guess at how long it will take the
user to release the key.
One alternative would be to use alt instead of shift, but then we need to
trigger our shortcut when that key is pressed as well, and it could
potentially cause a conflict with an add-on that already uses that
combination.
* Show extension in export dialog
* Continue to provide separate options for schema 11+18 colpkg export
* Default to colpkg export when using File>Export
* Improve appearance of combo boxes when switching between apkg/colpkg
+ Deal with long deck names
* Convert newlines to spaces when showing fields from import
Ensures each imported note appears on a separate line
* Don't separate total note count from the other summary lines
This may come down to personal preference, but I feel the other counts
are equally important, and separating them out makes them a bit easier
to ignore.
* Fix 'deck not normal' error when importing a filtered deck for the 2nd time
* Fix [Identical] being shown on first import
* Revert "Continue to provide separate options for schema 11+18 colpkg export"
This reverts commit 8f0b2c175f4794d642823b60414d142a12768441.
Will use a different approach
* Move legacy support into a separate exporter option; add to apkg export
* Adjust 'too new' message to also apply to .apkg import case
* Show a better message when attempting to import new apkg into old code
Previously the user could end up seeing a message like:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 1: invalid start byte
Unfortunately we can't retroactively fix this for older clients.
* Hide legacy support option in older exporting screen
* Reflect change from paths to fnames in type & name
* Make imported decks normal at once
Then skip special casing in update_deck(). Also skip updating the
description if the new one is empty.
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
* Collection needs to be closed prior to backup even when not downgrading
* Backups -> BackupLimits
* Some improvements to backup_task
- backup_inner now returns the error instead of logging it, so that
the frontend can discover the issue when awaiting a backup (or creating
another one)
- start_backup() was acquiring backup_task twice, and if another thread
started a backup between the two locks, the task could have been accidentally
overwritten without awaiting it
* Backups no longer require a collection close
- Instead of closing the collection, we ensure there is no active
transaction, and flush the WAL to disk. This means the undo history
is no longer lost on backup, which will be particularly useful if we
add a periodic backup in the future. (A sketch of the WAL flush follows
this list.)
- Because a close is no longer required, backups are now achieved with
a separate command, instead of being included in CloseCollection().
- Full sync no longer requires an extra close+reopen step, and we now
wait for the backup to complete before proceeding.
- Create a backup before 'check db'
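For illustration, a sketch of the WAL flush, assuming the rusqlite crate:

```rust
// Sketch assuming rusqlite: with no transaction open, move the WAL
// contents into the main database file so it can be copied for backup.
fn flush_wal(conn: &rusqlite::Connection) -> rusqlite::Result<()> {
    assert!(conn.is_autocommit(), "a transaction is still open");
    conn.query_row("PRAGMA wal_checkpoint(TRUNCATE)", [], |_row| Ok(()))
}
```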
* Add File>Create Backup
https://forums.ankiweb.net/t/anki-mac-os-no-backup-on-sync/6157
* Defer checkpoint until we know we need it
When running periodic backups on a timer, we don't want to be fsync()ing
unnecessarily.
* Skip backup if modification time has not changed
We don't want the user leaving Anki open overnight, and coming back
to lots of identical backups.
* Periodic backups
Creates an automatic backup every 30 minutes if the collection has been
modified.
If there's a legacy checkpoint active, tries again 5 minutes later.
* Switch to a user-configurable backup duration
CreateBackup() now uses a simple force argument to determine whether
the user's limits should be respected or not, and only potentially
destructive ops (full download, check DB) override the user's configured
limit.
I considered having a separate limit for collection close and automatic
backups (eg keeping the previous 5 minute limit for collection close),
but that had two downsides:
- When the user closes their collection at the end of the day, they'd
get a recent backup. When they open the collection the next day, it
would get backed up again within 5 minutes, even though not much had
changed.
- Multiple limits are harder to communicate to users in the UI
Some remaining decisions I wasn't 100% sure about:
- If force is true but the collection has not been modified, the backup
will be skipped. If the user manually deleted their backups without
closing Anki, they wouldn't get a new one if the mtime hadn't changed.
- Force takes preference over the configured backup interval - should
we be ignoring the user here, or taking no backups at all?
Did a sneaky edit of the existing ftl string, as it hasn't been live
long.
* Move maybe_backup() into Collection
* Use a single method for manual and periodic backups
When manually creating a backup via the File menu, we no longer make
the user wait until the backup completes. As we continue waiting for
the backup in the background, if any errors occur, the user will get
notified about it fairly quickly.
* Show message to user if backup was skipped due to no changes
+ Don't incorrectly assert a backup will be created on force
* Add "automatic" to description
* Ensure we backup prior to importing colpkg if collection open
The backup doesn't happen when invoked from 'open backup' in the profile
screen, which matches Anki's previous behaviour. The user could
potentially clobber up to 30 minutes of their work if they exited to
the profile screen and restored a backup, but the alternative is we
create backups every time a backup is restored, which may happen a number
of times if the user is trying various ones. Or we could go back to a
separate throttle amount for this case, at the cost of more complexity.
* Remove the 0 special case on backup interval; minimum of 5 minutes
https://github.com/ankitects/anki/pull/1728#discussion_r830876833
* Write media files in chunks
* Test media file writing
* Add iter `ReadDirFiles`
* Remove ImportMediaError, fail fatally instead
Partially reverts commit f8ed4d89ba.
* Compare hashes of media files to be restored
* Improve `MediaCopier::copy()`
* Restore media files atomically with tempfile
* Make downgrade flag an enum
* Remove SchemaVersion::Latest in favour of Option
* Remove sha1 comparison again
* Remove unnecessary repr(u8) (dae)
* Fix legacy colpkg import; disable v3 import/export; add roundtrip test
The test has revealed we weren't decompressing the media files on v3
import. That's easy to fix, but means all files need decompressing
even when they already exist, which is not ideal - it would be better
to store size/checksum in the metadata instead.
* Switch media and meta to protobuf; re-enable v3 import/export
- Fixed media not being decompressed on import
- The uncompressed size and checksum is now included for each media
entry, so that we can quickly check if a given file needs to be extracted.
We're still just doing a naive size comparison on colpkg import at the
moment, but we may want to use a checksum in the future, and will need
a checksum for apkg imports.
- Checksums can't be efficiently encoded in JSON, so the media list
has been switched to protobuf to reduce the space requirements.
- The meta file has been switched to protobuf as well, for consistency.
This will mean any colpkg files exported with beta7 will be
unreadable.
* Avoid integer version comparisons
* Re-enable v3 test
* Apply suggestions from code review
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Add export_colpkg() method to Collection
More discoverable, and easier to call from unit tests
* Split import/export code out into separate folders
Currently colpkg/*.rs contain some routines that will be useful for
apkg import/export as well; in the future we can refactor them into a
separate file in the parent module.
* Return a proper error when media import fails
This tripped me up when writing the earlier unit test - I had called
the equivalent of import_colpkg()?, and it was returning a string error
that I didn't notice. In practice this should result in the same text
being shown in the UI, but just skips the tooltip.
* Automatically create media folder on import
* Move roundtrip test into separate file; check collection too
* Remove zstd version suffix
Prevents a warning shown each time Rust Analyzer is used to check the
code.
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Add zstd dep
* Implement backend backup with zstd
* Implement backup thinning
* Write backup meta
* Use new file ending anki21b
* Asynchronously backup on collection close in Rust
* Revert "Add zstd dep"
This reverts commit 3fcb2141d2be15f907269d13275c41971431385c.
* Add zstd again
* Take backup col path from col struct
* Fix formatting
* Implement backup restoring on backend
* Normalize restored media file names
* Refactor `extract_legacy_data()`
A bit cumbersome due to borrowing rules.
* Refactor
* Make thinning calendar-based and gradual
* Consider last kept backups of previous stages
* Import full apkgs and colpkgs with backend
* Expose new backup settings
* Test `BackupThinner` and make it deterministic
* Mark backup_path when closing optional
* Delete leaky timer
* Add progress updates for restoring media
* Write restored collection to tempfile first
* Do collection compression in the background thread
This has us currently storing an uncompressed and compressed copy of
the collection in memory (not ideal), but means the collection can be
closed without waiting for compression to complete. On a large collection,
this takes a close and reopen from about 0.55s to about 0.07s. The old
backup code for comparison: about 0.35s for compression off, about
8.5s for zip compression.
* Use multithreading in zstd compression
On my system, this reduces the compression time of a large collection
from about 0.55s to 0.08s.
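A sketch, assuming the `zstd` crate built with its `zstdmt` feature (API details may vary by version):

```rust
use std::io::Write;

// Sketch assuming the `zstd` crate with the `zstdmt` feature enabled.
fn compress(data: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut encoder = zstd::Encoder::new(Vec::new(), 0)?; // 0 = default level
    // Split compression across worker threads; small collections may be
    // better served single-threaded (see the later commit skipping this).
    encoder.multithread(4)?;
    encoder.write_all(data)?;
    encoder.finish()
}
```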
* Stream compressed collection data into zip file
* Tweak backup explanation
+ Fix incorrect tab order for ignore accents option
* Decouple restoring backup and full import
In the first case, no profile is opened unless the new collection loads
successfully.
In the second case, either the old collection is reloaded or the new one
is loaded.
* Fix number gap in Progress message
* Don't revert backup when restoring media fails, but report it
* Tweak error flow
* Remove native BackupLimits enum
* Fix type annotation
* Add thinning test for whole year
* Satisfy linter
* Await async backup to finish
* Move restart disclaimer out of backup tab
Should be visible regardless of the current tab.
* Write restored collection in chunks
* Refactor
* Write media in chunks and refactor
* Log error if removing file fails
* join_backup_task -> await_backup_completion
* Refactor backup.rs
* Refactor backup meta and collection extraction
* Fix wrong error being returned
* Call sync_all() on new collection
* Add ImportError
* Store logger in Backend, instead of creating one on demand
init_backend() accepts a Logger rather than a log file, to allow other
callers to customize the logger if they wish.
In the future we may want to explore using the tracing crate as an
alternative; it's a bit more ergonomic, as a logger doesn't need to be
passed around, and it plays more nicely with async code.
* Sync file contents prior to rename; sync folder after rename.
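For illustration, the durable-rename pattern this refers to (a sketch; the folder sync only works on Unix, hence the later commit that avoids opening folders on Windows):

```rust
use std::fs::{self, File};
use std::io::{self, Write};
use std::path::Path;

// Write to a temp file, fsync it, rename over the target, then fsync the
// parent folder so the rename itself survives a crash.
fn write_durably(path: &Path, data: &[u8]) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = File::create(&tmp)?;
    file.write_all(data)?;
    file.sync_all()?; // flush file contents to disk
    fs::rename(&tmp, path)?;
    if let Some(dir) = path.parent() {
        File::open(dir)?.sync_all()?; // persist the directory entry (Unix)
    }
    Ok(())
}
```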
* Limit backup creation to once per 30 min
* Use zstd::stream::copy_decode
* Make importing abortable
* Don't revert if backup media is aborted
* Set throttle implicitly
* Change force flag to minimum_backup_interval
* Don't attempt to open folders on Windows
* Join last backup thread before starting new one
Also refactor.
* Disable auto sync and backup when restoring again
* Force backup on full download
* Include the reason why a media file import failed, and the file path
- Introduce a FileIoError that contains a string representation of
the underlying I/O error, and an associated path. There are a few
places in the code where we're currently manually including the filename
in a custom error message, and this is a step towards a more consistent
approach (though we may be better served with a more general approach in
the future, similar to Anyhow's .context()). A sketch of the error's
shape follows this list.
- Move the error message into importing.ftl, as it's a bit neater
when error messages live in the same file as the rest of the messages
associated with some functionality.
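For illustration, the shape such an error might take (hypothetical fields and message):

```rust
use std::fmt;
use std::path::PathBuf;

// Hypothetical shape: the failing path plus a string rendering of the
// underlying I/O error.
#[derive(Debug)]
struct FileIoError {
    path: PathBuf,
    source: String,
}

impl fmt::Display for FileIoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to process {}: {}", self.path.display(), self.source)
    }
}

fn main() {
    let err = FileIoError {
        path: PathBuf::from("collection.media/img.jpg"),
        source: "permission denied".into(),
    };
    println!("{err}");
}
```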
* Fix importing of media files
* Minor wording tweaks
* Save an allocation
I18n strings with replacements are already strings, so we can skip the
extra allocation. Not that it matters here at all.
* Terminate import if file missing from archive
If a third-party tool is creating invalid archives, the user should know
about it. This should be rare, so I did not attempt to make it
translatable.
* Skip multithreaded compression on small collections
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
The enum changes should work on PyQt 5.x, and are required in PyQt 6.x.
They are not supported by the PyQt5 typings however, so we need to run
our tests with PyQt6.
This adds Python 3.9 and 3.10 typing syntax to files that import
annotations from __future__. Python 3.9 should be able to cope with
the 3.10 syntax, but Python 3.8 will no longer work.
On Windows/Mac, install the latest Python 3.9 version from python.org.
There are currently no orjson wheels for Python 3.10 on Windows/Mac,
which will break the build unless you have Rust installed separately.
On Linux, modern distros should have Python 3.9 available already. If
you're on an older distro, you'll need to build Python from source first.
I18n is not set up at init time, so the strings can't be generated
at import time.
@kelciour you have a few importing add-ons, so wanted to give you a
heads-up. The importing code is likely to change more in the coming
months, but for now this should be the only change.
The existing code was really difficult to reason about:
- The default notetype depended on the selected deck, and vice versa,
and this logic was buried in the deck and notetype choosing screens,
and models.py.
- Changes to the notetype were not passed back directly, but were fired
via a hook, which changed any screen in the app that had a notetype
selector.
It also wasn't great for performance, as the most recent deck and tags
were embedded in the notetype, which can be expensive to save and sync
for large notetypes.
To address these points:
- The current deck for a notetype, and notetype for a deck, are now
stored in separate config variables, instead of directly in the deck
or notetype. These are cheap to read and write, and we'll be able to
sync them individually once config syncing is updated in the future.
I seem to recall some users not wanting the tag saving
behaviour, so I've dropped that for now, but if people end up missing
it, it would be simple to add as an extra auxiliary config variable.
- The logic for getting the starting deck and notetype has been moved
into the backend. It should be the same as the older Python code, with
one exception: when "change deck depending on notetype" is enabled in
the preferences, it will start with the current notetype ("curModel"),
instead of first trying to get a deck-specific notetype.
- ModelChooser has been duplicated into notetypechooser.py, and it
has been updated to solely be concerned with keeping track of a selected
notetype - it no longer alters global state.
QTextEdit() will pin the CPU at 100% for seconds to minutes when
fed a large string to display - work around it by switching to
QPlainTextEdit().
Also strip HTML before showing the user - easier to read, and less
text to display. And turn off word wrap, as it makes it easier to skim,
and further reduces the work the widget needs to do.
https://forums.ankiweb.net/t/big-issue-where-anki-gets-slow-when-you-import-this-deck/7050