* First attempt at adding a directory for icons under //ts
* Fix image import
* Fix import order
* Add cloze button group
* Fix issue with toolbar.toolbar being dynamically slottable
* Change tooltip for repeating cloze deletion
* Fix repeat cloze button not working on macOS (dae)
* TemplateSaveError -> CardTypeError
* Don't show success tooltip if export fails
* Attach help page to error
Show help link if export fails due to card type error.
* Add type (dae)
* Add shared show_exception() (dae)
- Use a shared routine for printing standard backend errors, so that
we can take advantage of the help links in e.g. the card layout screen
as well.
- The truthiness check on help in showInfo() would have ignored the
enum 0 value.
- Close the exporting dialog on a documented failure as well
* Fix local variable help_page
* Add Deleted error and disable all bad browser rows
* Avoid error when opening the browse screen to a card with a missing note (dae)
* In cards mode, a missing note is NotFound, not Deleted (dae)
So we distinguish between a referential integrity error and explicit
deletion.
* Remove redundant try block
* Collection needs to be closed prior to backup even when not downgrading
* Backups -> BackupLimits
* Some improvements to backup_task
- backup_inner now returns the error instead of logging it, so that
the frontend can discover the issue when they await a backup (or create
another one)
- start_backup() was acquiring backup_task twice, and if another thread
started a backup between the two locks, the task could have been accidentally
overwritten without awaiting it
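A minimal sketch of the single-lock pattern described above; the names
backup_task and backup_inner are simplified stand-ins for the real code:

```rust
use std::sync::Mutex;
use std::thread::{self, JoinHandle};

struct Backend {
    backup_task: Mutex<Option<JoinHandle<Result<(), String>>>>,
}

impl Backend {
    fn start_backup(&self) -> Result<(), String> {
        // Hold the lock for the whole operation, so another thread can't
        // start a backup between awaiting the old task and storing the
        // new one.
        let mut task = self.backup_task.lock().unwrap();
        if let Some(handle) = task.take() {
            // backup_inner() returns its error rather than logging it,
            // so the awaiting caller discovers the failure here.
            handle
                .join()
                .map_err(|_| "backup thread panicked".to_string())??;
        }
        *task = Some(thread::spawn(backup_inner));
        Ok(())
    }
}

fn backup_inner() -> Result<(), String> {
    // ... write the backup file ...
    Ok(())
}
```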
* Backups no longer require a collection close
- Instead of closing the collection, we ensure there is no active
transaction, and flush the WAL to disk (see the sketch after this
list). This means the undo history is no longer lost on backup, which
will be particularly useful if we add a periodic backup in the future.
- Because a close is no longer required, backups are now achieved with
a separate command, instead of being included in CloseCollection().
- Full sync no longer requires an extra close+reopen step, and we now
wait for the backup to complete before proceeding.
- Create a backup before 'check db'
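A minimal sketch of the WAL flush mentioned in the first point,
assuming rusqlite; the actual code differs:

```rust
use rusqlite::Connection;

fn prepare_for_backup(db: &Connection) -> rusqlite::Result<()> {
    // A backup must not run while a transaction is open.
    assert!(db.is_autocommit(), "active transaction");
    // Move all committed pages from the WAL into the main database file,
    // so that copying the .anki2 file alone captures every change. The
    // in-memory undo history is untouched.
    db.execute_batch("PRAGMA wal_checkpoint(TRUNCATE);")?;
    Ok(())
}
```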
* Add File>Create Backup
https://forums.ankiweb.net/t/anki-mac-os-no-backup-on-sync/6157
* Defer checkpoint until we know we need it
When running periodic backups on a timer, we don't want to be fsync()ing
unnecessarily.
* Skip backup if modification time has not changed
We don't want the user leaving Anki open overnight, and coming back
to lots of identical backups.
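The mtime comparison could look roughly like this (illustrative, not
the actual implementation):

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::SystemTime;

/// Skip the backup if the collection file has not been modified since
/// the previous backup was taken.
fn backup_needed(col_path: &Path, last_backup_mtime: Option<SystemTime>) -> io::Result<bool> {
    let mtime = fs::metadata(col_path)?.modified()?;
    Ok(Some(mtime) != last_backup_mtime)
}
```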
* Periodic backups
Creates an automatic backup every 30 minutes if the collection has been
modified.
If there's a legacy checkpoint active, it tries again 5 minutes later.
* Switch to a user-configurable backup duration
CreateBackup() now uses a simple force argument to determine whether
the user's limits should be respected or not, and only potentially
destructive ops (full download, check DB) override the user's configured
limit.
I considered having a separate limit for collection close and automatic
backups (e.g. keeping the previous 5-minute limit for collection close),
but that had two downsides:
- When the user closes their collection at the end of the day, they'd
get a recent backup. When they open the collection the next day, it
would get backed up again within 5 minutes, even though not much had
changed.
- Multiple limits are harder to communicate to users in the UI
Some remaining decisions I wasn't 100% sure about:
- If force is true but the collection has not been modified, the backup
will be skipped. If the user manually deleted their backups without
closing Anki, they wouldn't get a new one if the mtime hadn't changed.
- Force takes precedence over the configured backup interval - should
we be ignoring the user here, or taking no backups at all? (See the
sketch below.)
Did a sneaky edit of the existing ftl string, as it hasn't been live
long.
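A sketch of the resulting decision logic under the assumptions above;
names are illustrative (the flag later became minimum_backup_interval):

```rust
use std::time::{Duration, Instant};

fn should_backup(
    modified_since_last_backup: bool,
    last_backup: Option<Instant>,
    configured_interval: Duration,
    force: bool,
) -> bool {
    // An unmodified collection is never backed up, even when forced.
    if !modified_since_last_backup {
        return false;
    }
    // Force (full download, check DB) overrides the configured limit.
    if force {
        return true;
    }
    match last_backup {
        Some(at) => at.elapsed() >= configured_interval,
        None => true,
    }
}
```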
* Move maybe_backup() into Collection
* Use a single method for manual and periodic backups
When manually creating a backup via the File menu, we no longer make
the user wait until the backup completes. As we continue waiting for
the backup in the background, if any errors occur, the user will get
notified about it fairly quickly.
* Show message to user if backup was skipped due to no changes
+ Don't incorrectly assert a backup will be created on force
* Add "automatic" to description
* Ensure we backup prior to importing colpkg if collection open
The backup doesn't happen when invoked from 'open backup' in the profile
screen, which matches Anki's previous behaviour. The user could
potentially clobber up to 30 minutes of their work if they exited to
the profile screen and restored a backup, but the alternative is we
create backups every time a backup is restored, which may happen a number
of times if the user is trying various ones. Or we could go back to a
separate throttle amount for this case, at the cost of more complexity.
* Remove the 0 special case on backup interval; minimum of 5 minutes
https://github.com/ankitects/anki/pull/1728#discussion_r830876833
* Write media files in chunks
* Test media file writing
* Add iter `ReadDirFiles`
* Remove ImportMediaError, fail fatally instead
Partially reverts commit f8ed4d89ba.
* Compare hashes of media files to be restored
* Improve `MediaCopier::copy()`
* Restore media files atomically with tempfile
* Make downgrade flag an enum
* Remove SchemaVersion::Latest in favour of Option
* Remove sha1 comparison again
* Remove unnecessary repr(u8) (dae)
* Fix legacy colpkg import; disable v3 import/export; add roundtrip test
The test has revealed we weren't decompressing the media files on v3
import. That's easy to fix, but means all files need decompressing
even when they already exist, which is not ideal - it would be better
to store size/checksum in the metadata instead.
* Switch media and meta to protobuf; re-enable v3 import/export
- Fixed media not being decompressed on import
- The uncompressed size and checksum are now included for each media
entry, so that we can quickly check if a given file needs to be extracted.
We're still just doing a naive size comparison on colpkg import at the
moment, but we may want to use a checksum in the future, and will need
a checksum for apkg imports. (See the sketch after this list.)
- Checksums can't be efficiently encoded in JSON, so the media list
has been switched to protobuf to reduce the space requirements.
- The meta file has been switched to protobuf as well, for consistency.
This will mean any colpkg files exported with beta7 will be
unreadable.
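The new media entry has roughly this shape, expressed as a prost-style
Rust struct; field names and tags are illustrative, not the actual
.proto definition:

```rust
#[derive(Clone, PartialEq, prost::Message)]
pub struct MediaEntry {
    #[prost(string, tag = "1")]
    pub name: String,
    /// Uncompressed size, so extraction can be skipped with a cheap
    /// size comparison.
    #[prost(uint32, tag = "2")]
    pub size: u32,
    /// Checksum of the uncompressed data, for future apkg imports.
    #[prost(bytes = "vec", tag = "3")]
    pub sha1: Vec<u8>,
}
```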
* Avoid integer version comparisons
* Re-enable v3 test
* Apply suggestions from code review
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Add export_colpkg() method to Collection
More discoverable, and easier to call from unit tests
* Split import/export code out into separate folders
Currently colpkg/*.rs contain some routines that will be useful for
apkg import/export as well; in the future we can refactor them into a
separate file in the parent module.
* Return a proper error when media import fails
This tripped me up when writing the earlier unit test - I had called
the equivalent of import_colpkg()?, and it was returning a string error
that I didn't notice. In practice this should result in the same text
being shown in the UI, but just skips the tooltip.
* Automatically create media folder on import
* Move roundtrip test into separate file; check collection too
* Remove zstd version suffix
Prevents a warning shown each time Rust Analyzer is used to check the
code.
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement here still, but further work might be
better done once we look into decoupling deck limits from deck presets.
Tags and available cards are now fetched prior to showing the dialog,
and a progress dialog is shown if this takes a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
tools/mypy-watch now prints .py paths relative to the workspace root,
which makes it easy to click on them to jump to the relevant file/line
in VS Code.
* Add forget prompt with options
- Restore original position
- Reset reps and lapses
* Restore position when resetting for export
* Add config context to avoid passing keys
* Add routine to fetch defaults; use method-specific enum (dae)
* Keep original position by default (dae)
* Fix code completion for forget dialog (dae)
Needs to be a symbolic link to the generated file
* Add zstd dep
* Implement backend backup with zstd
* Implement backup thinning
* Write backup meta
* Use new file ending anki21b
* Asynchronously backup on collection close in Rust
* Revert "Add zstd dep"
This reverts commit 3fcb2141d2be15f907269d13275c41971431385c.
* Add zstd again
* Take backup col path from col struct
* Fix formatting
* Implement backup restoring on backend
* Normalize restored media file names
* Refactor `extract_legacy_data()`
A bit cumbersome due to borrowing rules.
* Refactor
* Make thinning calendar-based and gradual
* Consider last kept backups of previous stages
* Import full apkgs and colpkgs with backend
* Expose new backup settings
* Test `BackupThinner` and make it deterministic
* Make backup_path optional when closing
* Delete leaky timer
* Add progress updates for restoring media
* Write restored collection to tempfile first
* Do collection compression in the background thread
This has us currently storing both an uncompressed and a compressed copy of
the collection in memory (not ideal), but means the collection can be
closed without waiting for compression to complete. On a large collection,
this takes a close and reopen from about 0.55s to about 0.07s. The old
backup code for comparison: about 0.35s for compression off, about
8.5s for zip compression.
* Use multithreading in zstd compression
On my system, this reduces the compression time of a large collection
from about 0.55s to 0.08s.
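With the zstd crate this is roughly a one-line change (requires the
crate's zstdmt feature; the worker count and level here are
illustrative):

```rust
use std::io::{self, Write};

fn compress_collection(data: &[u8]) -> io::Result<Vec<u8>> {
    // Level 0 selects zstd's default compression level.
    let mut encoder = zstd::Encoder::new(Vec::new(), 0)?;
    // Spread the compression work over four worker threads.
    encoder.multithread(4)?;
    encoder.write_all(data)?;
    encoder.finish()
}
```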
* Stream compressed collection data into zip file
* Tweak backup explanation
+ Fix incorrect tab order for ignore accents option
* Decouple restoring backup and full import
In the first case, no profile is opened, unless the new collection
loads successfully.
In the second case, either the old collection is reloaded or the new one
is loaded.
* Fix number gap in Progress message
* Don't revert backup when media fails but report it
* Tweak error flow
* Remove native BackupLimits enum
* Fix type annotation
* Add thinning test for whole year
* Satisfy linter
* Await async backup to finish
* Move restart disclaimer out of backup tab
Should be visible regardless of the current tab.
* Write restored collection in chunks
* Refactor
* Write media in chunks and refactor
* Log error if removing file fails
* join_backup_task -> await_backup_completion
* Refactor backup.rs
* Refactor backup meta and collection extraction
* Fix wrong error being returned
* Call sync_all() on new collection
* Add ImportError
* Store logger in Backend, instead of creating one on demand
init_backend() accepts a Logger rather than a log file, to allow other
callers to customize the logger if they wish.
In the future we may want to explore using the tracing crate as an
alternative; it's a bit more ergonomic, as a logger doesn't need to be
passed around, and it plays more nicely with async code.
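A rough sketch of the new setup, assuming the slog crate Anki used at
the time; the real signature differs:

```rust
use slog::Logger;

pub struct Backend {
    logger: Logger,
    // ... collection state, etc.
}

/// Callers construct the logger themselves, and may customize it
/// before passing it in.
pub fn init_backend(logger: Logger) -> Backend {
    Backend { logger }
}
```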
* Sync file contents prior to rename; sync folder after rename.
* Limit backup creation to once per 30 min
* Use zstd::stream::copy_decode
* Make importing abortable
* Don't revert if backup media is aborted
* Set throttle implicitly
* Change force flag to minimum_backup_interval
* Don't attempt to open folders on Windows
* Join last backup thread before starting new one
Also refactor.
* Disable auto sync and backup when restoring again
* Force backup on full download
* Include the reason why a media file import failed, and the file path
- Introduce a FileIoError that contains a string representation of
the underlying I/O error, and an associated path. There are a few
places in the code where we're currently manually including the filename
in a custom error message, and this is a step towards a more consistent
approach (but we may be better served with a more general approach in
the future similar to Anyhow's .context())
- Move the error message into importing.ftl, as it's a bit neater
when error messages live in the same file as the rest of the messages
associated with some functionality.
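The error described in the first point has roughly this shape
(illustrative sketch, not the actual definition):

```rust
use std::fmt;
use std::path::PathBuf;

#[derive(Debug)]
pub struct FileIoError {
    pub path: PathBuf,
    /// String representation of the underlying I/O error.
    pub error: String,
}

impl fmt::Display for FileIoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.path.display(), self.error)
    }
}
```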
* Fix importing of media files
* Minor wording tweaks
* Save an allocation
I18n strings with replacements are already strings, so we can skip the
extra allocation. Not that it matters here at all.
* Terminate import if file missing from archive
If a third-party tool is creating invalid archives, the user should know
about it. This should be rare, so I did not attempt to make it
translatable.
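With the zip crate, propagating the lookup error is enough to terminate
the import (sketch; names are illustrative):

```rust
use std::io::{Read, Seek};

/// Fail fatally when a file named in the manifest is missing from the
/// archive, instead of silently producing a partial import.
fn open_media_entry<'a, R: Read + Seek>(
    archive: &'a mut zip::ZipArchive<R>,
    name: &str,
) -> zip::result::ZipResult<zip::read::ZipFile<'a>> {
    // by_name() returns a FileNotFound error for missing entries;
    // propagating it aborts the import.
    archive.by_name(name)
}
```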
* Skip multithreaded compression on small collections
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
* Load fields_web from fields.py if appropriate flag is set
* Add FieldsPage as entry for new fields view
* Pass mypy
* Fix pylint
* Fix fields_web in Qt5 (dae)
May not be related to the CI error, but required for compatibility
with Qt5.
md_in_html imports fine when done manually; it is likely PyOxidizer
has not instrumented import_module().
File "aqt.addons", line 631, in addonConfigHelp
File "markdown.core", line 386, in markdown
File "markdown.core", line 96, in __init__
File "markdown.core", line 123, in registerExtensions
File "markdown.core", line 162, in build_extension
File "importlib", line 127, in import_module
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'md_in_html'
The previous change in 1871b57663 failed
to consider the browser refreshing case, as reported here:
https://forums.ankiweb.net/t/anki-2-1-50-beta-3-4/17501/30
I previously attempted to solve this by having SetFlag skip the queue
rebuild, then mutating the captured mtimes in the queues. That didn't
work correctly when undoing, as the queue mutations weren't recorded.
This approach combines that attempt and the previous change: flag
setting is an undoable operation again, but does not change the card's
modification time, so it can be applied/undone without a queue build
being required. Instead of special-casing flag changes in the review
screen, we now just redraw the flag on changes.card, as any other card
op will have triggered a queue rebuild.
* Move Trigger into its own file
* Try implementing HandlerList
* Implement new input handler and handler-list
* Use new refocus HandlerList in TextColorButton
* Fix TextColorButton on windows
* Move ColorPicker to editor-toolbar
* Change trigger behavior of overwriteSurround
* Fix mathjax-overlay flushCaret
* Insert image via bridgeCommand return value
* Fix invoking color picker with F8
* Have remove format work even when collapsed
* Satisfy formatter
* Insert media via callback resolved from python
* Replace print with web.eval
* Fix python formatting
* remove unused function (dae)
* Add progress.single_shot()
* Fix periodic garbage collection
* Properly cleanup mediasync timers
* Revert some replacements with `single_shot()`
These timers shouldn't fire if their widget is destroyed.
* Add timer docs explaining issues and alternatives
* Apply suggestions from code review
* Tweak docstrings
* Truncate long deck names to match AnkiWeb behavior
Prevent long deck names from obscuring deck stats in the main deck browser; match the behavior at https://ankiweb.net/decks/ for handling long deck names (truncate the name)
* Fix formatting
* Update CONTRIBUTORS
Add myself to contributors list
* Clarify some comments
* Don't destructure insertion trigger
* Make superscript and subscript use domlib/surround
* Create new {Text,Highlight}ColorButton
* Use domlib/surround for textcolor
- However, there's still a crucial bug: when breaking existing
colored spans while unsurrounding, their color is not restored
* Add underline format to removeFormats
* Simplify type of ElementMatcher and ElementClearer for end users
* Add some comments for normalize-insertion-ranges
* Split normalize-insertion-ranges into remove-adjacent and remove-within
* Factor out find-remove from unsurround.ts
* Rename merge-match, simplify remove-within
* Clarify some comments
* Refactor first reduce
* Refactor reduceRight
* Flatten functions in merge-ranges
* Move some functionality to merge-ranges and do not export
* Refactor merge-ranges
* Remove createInitialMergeMatch
* Finish refactoring of merge-ranges
* Refactor merge-ranges to minimal-ranges and add some unit testing
* Move more logic into text-node
* Remove most of the logic from remove-adjacent
- remove-adjacent is still part of the "merging" logic, as it increases
the scope of the child node ranges
* Add some tests for edge cases
* Merge remove-adjacent logic into minimal-ranges
* Refactor unnecessary list destructuring
* Add some TODOs
* Put removing nodes and adding new nodes into sequence
* Refactor MatchResult to MatchType and return clear from matcher
* Inline surround/helpers
* Shorten name of param
* Add another edge case test
* Add an example where commonAncestorContainer != normalization level
* Fix bug in find-adjacent when finding more than one nested node
* Allow comments for Along type
* Simplify find-adjacent by removing intermediate and/or curried functions
* Remove extend-adjacent
* Add more tests when find-adjacent finds by descension
* Fix find-adjacent descending into block-level elements
* Add clarifying comment to refusing to descend into block-level elements
* Move shifting logic into find-adjacent
* Rename file matcher to match-type
* Give a first implementation of TreeVertex
* Remove MatchType.ALONG
- findAdjacent now directly modifies the range
* Rename MatchType.MATCH into MatchType.REMOVE
* Implement a version of find-within that utilizes match-tree
* Turn child node range into a class
* Fix bug in new find-adjacent function
* Make all find-adjacent tests test for ranges
* Surrounding within farthestMatchingAncestor when available
* Fix an issue with negligable elements
- also rename "along" elements to "negligable"
* Add two TODOs to SurroundFormat interface
* Have a messy first implementation of the new tree-node algorithm
* Maintain whether formatting nodes are covered or within user selection
* Move covered and insideRange into TreeNode superclass
* Reimplement findAdjacent logic
* Add extension logic
* Add an evaluate method to nodes
* Introduce BlockNode
* Add a first evaluate implementation
* Add left shift and inner shift logic
* Implement SurroundFormatUser
* Allow pass in formatter, ascender and merger from outside
* Fix insideRange and covered switch-up
* Fix MatchNode.prototype.isAscendable
* Fix another switch-up of covered and insideRange...
* Remove a lot of old code
* Have surround functions only return the range
- I still cannot think of a good reason why we should return addedNodes
and removedNodes, except for testing.
* Create formatting-tree directory
* Create build-tree directory + Move find-above up to /domlib
* Remove range-anchors
* Move unsurround logic into no-splitting
* Fix extend-merge
* Fix inner shift being erroneously returned as left shift
* Fix oversight in SplitRange
* Redefine how ranges are recreated
* Rename covered to insideMatch and put as fourth parameter instead of third
* Keep track of match holes and match leaves
* Rename ChildNodeRange to FlatRange
* Change signature of matcher
* Fix bug in extend-merge
* Improve Match class
* Utilize cache in TextColorButton
* Implement getBaseSurrounder for TextColorButton
* Add matchAncestors field to FormattingNode
* Introduce matchAncestors and getCache
* Do clearing during parsing already
- This way, you know whether elements will be removed before getting to
FormattingNodes
* Make HighlightColorButton use our surround mechanism
* Fix a bug with calling .removeAttribute and .hasAttribute
* Add side button to RemoveFormat button
* Add disabled to remove format side button
* Expose remove formats on RemoveFormat button
* Reinvent editor/surround as Surrounder class
* Fix split-text when working with insert trigger
* Try counteracting the contenteditable's auto surrounding
* Remove matching elements before normalizing
* Rewrite match-type
* Move setting match leaves into build
* Change editing strings
- So that color strings match bold/italic strings better
* Fix border radius of List options menu
* Implement extensions functionality
* Remove some unnecessary code
* Fix split range endOffset
* Type MatchType
* Reformat MatchType + add docs
* Fix domlib/surround/apply
* Satisfy last tests
* Register Surrounder as package
* Clarify some comments
* Correctly implement reformat
* Reformat with inactive eraser formats
* Clear empty spans with RemoveFormatButton
* Fix Super/Subscript button
* Use ftl string for hardcoded tooltip
* Adjust wording
* Improve type annotations in studydeck.py
* Use `super()`
* Explicitly delete widget after closing dialog if parent is None
* Consolidate common cleanup tasks into a function
..., and connect it to `finished` signal
* Fix new deck not being selected
* Remove redundant assignments
* Fix wrong hook being torn down
* Fix item models not being destroyed
* Add missing gc for FilteredDeckConfigDialog
* Add missing type annotation
* Pass calling widget as parent to QTimer
Implicitly passing `self.mw` as the parent means that the QTimer won't
get destroyed before quitting the app, which also thwarts garbage
collection of any data captured by a passed closure.
* Make `Editor._links` an instance variable
Browser is inserting a closure into this dict capturing itself. As a class
variable, it won't get destroyed, so neither will the browser.
* Make `Editor._links` funcs take instance again
* Deprecate calling progress.timer() without parent
* show caller location when printing deprecation warning (dae)
* Call StudyDeck with callback
* StudyDeck w/ callback, remove redundant assignment
* Replace exec() with show() for various dialogs
* Update super init args for Models.__init__
* Make StudyDialog ApplicationModal
* Remove .exec() from various dialogs and menus
* Add main view menu
* Add browser view menu
* Use standard keys for zooming and full screen
* Capitalise menu item names
* Toggle Showing Cards/Notes -> Toggle Cards/Notes
* Explicitly set linux full screen key
on_toggle_fullscreen -> on_toggle_full_screen
* Show buried until daily limits in overview screen
This explains differences between the counts shown in the deck tree and
those shown in the overview screen.
Closes #1633.
* interday learning cards can be buried too (dae)
* add 'buried' tooltip to bury counts; generate row in helper fn (dae)
* Use grey for buried counts
* Use submodule imports in aqt
* Use submodule imports in pylib
* More submodule imports in pylib
These required removing some direct imports to get rid of import cycles.
On a Linux machine here, the tests consistently fail when two copies
of black are run at once:
% bazel test //qt:format_check //pylib:format_check --cache_test_results=no
==================== Test output for //qt:format_check:
Process SyncManager-1:
Traceback (most recent call last):
File "/home/dae/.cache/bazel/_bazel_dae/fc22e40cbbf8b7d16ac57a00991b1ef1/external/python/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/dae/.cache/bazel/_bazel_dae/fc22e40cbbf8b7d16ac57a00991b1ef1/external/python/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dae/.cache/bazel/_bazel_dae/fc22e40cbbf8b7d16ac57a00991b1ef1/external/python/lib/python3.9/multiprocessing/managers.py", line 583, in _run_server
server = cls._Server(registry, address, authkey, serializer)
File "/home/dae/.cache/bazel/_bazel_dae/fc22e40cbbf8b7d16ac57a00991b1ef1/external/python/lib/python3.9/multiprocessing/managers.py", line 156, in __init__
self.listener = Listener(address=address, backlog=16)
File "/home/dae/.cache/bazel/_bazel_dae/fc22e40cbbf8b7d16ac57a00991b1ef1/external/python/lib/python3.9/multiprocessing/connection.py", line 453, in __init__
self._listener = SocketListener(address, family, backlog)
File "/home/dae/.cache/bazel/_bazel_dae/fc22e40cbbf8b7d16ac57a00991b1ef1/external/python/lib/python3.9/multiprocessing/connection.py", line 596, in __init__
self._socket.bind(address)
OSError: [Errno 98] Address already in use
I dug briefly into Black's code, but suspect this is actually an issue
with the multiprocessing library. Didn't have time to investigate it
further; this workaround will do for now.
(One day I'll get around to merging those separate scripts into a single
one. One day. :-))