* Pack FSRS data into card.data
* Update FSRS card data when preset or weights change
* Show FSRS stats in card stats
* Show a warning when there's a limited review history
* Add some translations; tweak UI
* Fix default requested retention
* Add browser columns; fix calculation of R (see the sketch after this commit list)
* Property searches, e.g. prop:d>0.1
* Integrate FSRS into reviewer
* Warn about long learning steps
* Hide minimum interval when FSRS is on
* Don't apply interval multiplier to FSRS intervals
* Expose memory state to Python
* Don't set memory state on new cards
* Port Jarrett's new tests; add some helpers to make tests more compact
https://github.com/open-spaced-repetition/fsrs-rs/pull/64
* Fix learning cards not being given memory state
* Require update to v3 scheduler
* Don't exclude single learning step when calculating memory state
* Use relearning step when learning steps unavailable
* Update docstring
* Fix single_card_revlog_to_items (#2656)
* No need to check the review_kind for unique_dates
* Add email address to CONTRIBUTORS
* Fix last first-learn & keep early reviews
* cargo fmt
* cargo clippy --fix
* Add Jarrett to about screen
* Fix fsrs_memory_state being initialized to default in get_card()
* Set initial memory state on graduate
* Update to latest FSRS
* Fix experiment.log being empty
* Fix broken colpkg imports
Introduced by "Update FSRS card data when preset or weights change"
* Update memory state during (re)learning; use FSRS for graduating intervals
* Reset memory state when cards are manually rescheduled as new
* Add difficulty graph; hide eases when FSRS enabled
* Add retrievability graph
* Derive memory_state from revlog when it's missing and shouldn't be
---------
Co-authored-by: Jarrett Ye <jarrett.ye@outlook.com>
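As background for the R (retrievability) calculation mentioned in the commit list above: a minimal sketch of the FSRS-4 power forgetting curve, assuming R is derived from a card's stability and the days elapsed since its last review. The function name and signature are illustrative, not Anki's actual API:

```rust
/// Illustrative sketch of the FSRS-4 power forgetting curve: the probability
/// of recall after `elapsed_days` for a card with the given stability.
/// Not Anki's actual implementation; names and signature are hypothetical.
fn retrievability(stability: f32, elapsed_days: f32) -> f32 {
    // R(t, S) = (1 + t / (9 * S)) ^ -1, so R = 0.9 when t == S.
    (1.0 + elapsed_days / (9.0 * stability)).powi(-1)
}

fn main() {
    // A card with 10 days of stability, reviewed 10 days later: R = 0.9.
    assert!((retrievability(10.0, 10.0) - 0.9).abs() < 1e-6);
    // The same card 30 days later has dropped to about 75% recall odds,
    // which is the kind of value a property search compares against.
    println!("{:.3}", retrievability(10.0, 30.0)); // prints 0.750
}
```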
* Support searching for deck configs by name
* Integrate FSRS optimizer into Anki
* Hack in a rough implementation of evaluate_weights()
* Interrupt calculation if user closes dialog
* Fix interrupted error check
* Report log_loss/RMSE from weight evaluation (see the sketch below)
* Update to latest fsrs commit; add progress info to weight evaluation
* Fix progress not appearing when pretrain takes a while
* Update to latest commit
Also fix minilints declaring a stamp it wasn't creating. The same
approach is necessary with archives now too, as it no longer executes
under a standard "runner run".
For now, rustls is hard-coded - we could pass the desired TLS impl in
from the ./ninja script, but the runner is not recompiled frequently
anyway.
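For reference, the log_loss/RMSE metrics mentioned above measure how well the predicted recall probabilities match actual review outcomes. A minimal sketch of the two formulas; this is not the fsrs-rs implementation, and the names and signatures are illustrative:

```rust
/// Binary log loss (cross-entropy) over predicted recall probabilities `p`
/// and actual outcomes `y` (1.0 = recalled, 0.0 = forgotten). Lower is better.
fn log_loss(p: &[f64], y: &[f64]) -> f64 {
    let eps = 1e-15; // clamp predictions to avoid ln(0)
    p.iter()
        .zip(y)
        .map(|(&p, &y)| {
            let p = p.clamp(eps, 1.0 - eps);
            -(y * p.ln() + (1.0 - y) * (1.0 - p).ln())
        })
        .sum::<f64>()
        / p.len() as f64
}

/// Root-mean-square error between predictions and outcomes.
fn rmse(p: &[f64], y: &[f64]) -> f64 {
    let mse = p
        .iter()
        .zip(y)
        .map(|(&p, &y)| (p - y).powi(2))
        .sum::<f64>()
        / p.len() as f64;
    mse.sqrt()
}
```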
Workspace deps were introduced in Rust 1.64. They don't cover all the
cases that Hakari did unfortunately, but they are simpler to maintain,
and they avoid a couple of issues that Hakari had:
- It sometimes made updating dependencies harder due to the locked versions,
so you had to disable Hakari, do the updates, and then re-generate
(e.g. 943dddf28f)
- The current Hakari config was breaking AnkiDroid's build, as it was
stopping a cross-compile from functioning correctly.
It will be handy to use in our other scripts in the future too - thanks
Rumo!
Results of benchmarking ./run before and after these crate splits:
- Touching a proto file leads to a slight increase: about +90ms
- Touching an rslib file leads to a bigger decrease, as there's less to
recompile: about -700ms
And ./ninja test is even better: about +200ms and -3800ms.
A couple of motivations for this:
- genbackend.py was somewhat messy, and difficult to change with the
lack of types. The mobile clients used it as a base for their generation,
so improving it will make life easier for them too, once they're ported.
- It will make it easier to write a .ts generator in the future
- We currently implement a bunch of helper methods on protobuf types
which don't allow us to compile the protobuf types until we compile
the Anki crate. If we change this in the future, we will be able to
do more of the compilation up-front.
We no longer need to record the services in the proto file, as we can
extract the service order from the compiled protos. Support for map types
has also been added.
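For illustration, extracting the service order from compiled protos can be as simple as decoding the descriptor set with prost-types. A rough sketch assuming the prost and prost-types crates; this is not the actual generator code:

```rust
use prost::Message;
use prost_types::FileDescriptorSet;

/// List every service method from a compiled descriptor set (the bytes protoc
/// emits with --descriptor_set_out), in declaration order. Illustrative only.
fn list_services(descriptor_bytes: &[u8]) -> Result<Vec<String>, prost::DecodeError> {
    let set = FileDescriptorSet::decode(descriptor_bytes)?;
    let mut out = Vec::new();
    for file in &set.file {
        for service in &file.service {
            for method in &service.method {
                out.push(format!(
                    "{}.{}",
                    service.name.as_deref().unwrap_or(""),
                    method.name.as_deref().unwrap_or("")
                ));
            }
        }
    }
    Ok(out)
}
```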
* Migrate check_copyright to Rust
* Add a new lint to check accidental usages of /// in ts/svelte comments (see the sketch after this list)
* Fix a bunch of incorrect JSDoc comments
* Move contributor check into minilints
Will allow users to detect the issue locally with './ninja check'
before pushing to CI.
* Make Cargo.toml consistent with other crates
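A rough sketch of what the /// lint mentioned above can look like; this is hypothetical code, not the actual minilints implementation:

```rust
use std::path::Path;

/// Flag Rust-style /// doc comments in TypeScript/Svelte sources, where
/// JSDoc (/** ... */) is expected instead. Hypothetical sketch of the lint.
fn check_triple_slash(path: &Path, source: &str) -> Vec<String> {
    source
        .lines()
        .enumerate()
        .filter(|(_, line)| line.trim_start().starts_with("///"))
        .map(|(idx, _)| {
            format!(
                "{}:{}: /// is not a valid doc comment here; use /** ... */",
                path.display(),
                idx + 1
            )
        })
        .collect()
}
```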
This PR replaces the existing Python-driven sync server with a new one in Rust.
The new server supports both collection and media syncing, and is compatible
with both the new protocol mentioned below, and older clients. A setting has
been added to the preferences screen to point Anki to a local server, and a
similar setting is likely to come to AnkiMobile soon.
Documentation is available here: <https://docs.ankiweb.net/sync-server.html>
In addition to the new server and refactoring, this PR also makes changes to the
sync protocol. The existing sync protocol places payloads and metadata inside a
multipart POST body, which causes a few headaches:
- Legacy clients build the request in a non-deterministic order, meaning the
entire request needs to be scanned to extract the metadata.
- Reqwest's multipart API directly writes the multipart body, without exposing
the resulting stream to us, making it harder to track the progress of the
transfer. We've been relying on a patched version of reqwest for timeouts,
which is a pain to keep up to date.
To address these issues, the metadata is now sent in an HTTP header, with the
data payload sent directly in the body. Instead of the slower gzip, we now
use zstd. The old timeout handling code has been replaced with a new implementation
that wraps the request and response body streams to track progress, allowing us
to drop the git dependencies for reqwest, hyper-timeout and tokio-io-timeout.
The main other change to the protocol is that one-way syncs no longer need to
downgrade the collection to schema 11 prior to sending.
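To make the new request shape concrete, here is a hedged sketch of a client call: JSON metadata in a header, with a zstd-compressed payload as the raw body. The header name, metadata shape, and helper function are placeholders, not the exact wire format:

```rust
use reqwest::Client;

/// Send a request in the style of the new protocol: metadata travels in a
/// header, and the payload is zstd-compressed in the body. The header name
/// and the metadata JSON shape here are placeholders.
async fn sync_request(
    client: &Client,
    url: &str,
    metadata_json: &str,
    payload: &[u8],
) -> reqwest::Result<Vec<u8>> {
    // zstd replaces the slower gzip used by the old protocol.
    let body = zstd::encode_all(payload, 0).expect("compression failed");
    let resp = client
        .post(url)
        .header("anki-sync", metadata_json) // placeholder header name
        .body(body)
        .send()
        .await?
        .error_for_status()?;
    Ok(resp.bytes().await?.to_vec())
}
```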
The existing architecture serializes all cards and revlog entries in
the search range into a protobuf message, which the web frontend needs
to decode and then process. The thinking at the time was that this would
make it easier for add-ons to add extra graphs, but in the ~2.5 years
since the new graphs were introduced, no add-ons appear to have taken
advantage of it.
The cards and revlog entries can grow quite large on large collections -
on a collection I tested with approximately 2.5M reviews, the serialized
data is about 110MB, which is a lot to have to deserialize in JavaScript.
This commit shifts the preliminary processing of the data to the Rust end,
which means the data is able to be processed faster, and less needs to
be sent to the frontend. On the test collection above, this reduces the
serialized data from about 110MB to about 160KB, resulting in a more
than 2x performance improvement, and reducing frontend memory usage from
about 400MB to about 40MB.
This also makes #2043 more feasible - while it is still about 50-100%
slower than protobufjs, with the much smaller message size, the difference
is only about 10ms.
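The shift described above boils down to aggregating on the backend and shipping summaries instead of raw rows. A toy sketch of the idea, not the actual graph code:

```rust
use std::collections::HashMap;

/// Toy model of the change: instead of serializing every review for the
/// frontend to process, collapse them into per-day counts on the Rust side.
struct Review {
    day: i32, // day of the review, relative to today
}

fn reviews_per_day(revlog: &[Review]) -> HashMap<i32, u32> {
    let mut counts = HashMap::new();
    for review in revlog {
        *counts.entry(review.day).or_insert(0) += 1;
    }
    // Millions of rows reduce to at most one entry per day, which is why the
    // serialized payload shrinks from megabytes of entries to kilobytes.
    counts
}
```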
(for upgrading users, please see the notes at the bottom)
Bazel brought a lot of nice things to the table, such as rebuilds based on
content changes instead of modification times, caching of build products,
detection of incorrect build rules via a sandbox, and so on. Rewriting the build
in Bazel was also an opportunity to improve on the Makefile-based build we had
prior, which was pretty poor: most dependencies were external or not pinned, and
the build graph was poorly defined and mostly serialized. It was not uncommon
for fresh checkouts to fail due to floating dependencies, or for things to break
when trying to switch to an older commit.
For day-to-day development, I think Bazel served us reasonably well - we could
generally switch between branches while being confident that builds would be
correct and reasonably fast, and not require full rebuilds (except on Windows,
where the lack of a sandbox and the TS rules would cause build breakages when TS
files were renamed/removed).
Bazel achieves that reliability by defining rules for each programming language
that define how source files should be turned into outputs. For the rules to
work with Bazel's sandboxing approach, they often have to reimplement or
partially bypass the standard tools that each programming language provides. The
Rust rules call Rust's compiler directly for example, instead of using Cargo,
and the Python rules extract each PyPi package into a separate folder that gets
added to sys.path.
These separate language rules allow proper declaration of inputs and outputs,
and offer some advantages such as caching of build products and fine-grained
dependency installation. But they also bring some downsides:
- The rules don't always support use-cases/platforms that the standard language
tools do, meaning they need to be patched to be used. I've had to contribute a
number of patches to the Rust, Python and JS rules to unblock various issues.
- The dependencies we use with each language sometimes make assumptions that do
not hold in Bazel, meaning they either need to be pinned or patched, or the
language rules need to be adjusted to accommodate them.
I was hopeful that after the initial setup work, things would be relatively
smooth sailing. Unfortunately, that has not proved to be the case. Things
frequently broke when dependencies or the language rules were updated, and I
began to get frustrated at the amount of Anki development time I was instead
spending on build system upkeep. It's now about 2 years since switching to
Bazel, and I think it's time to cut losses, and switch to something else that's
a better fit.
The new build system is based on a small build tool called Ninja, and some
custom Rust code in build/. This means that to build Anki, Bazel is no longer
required, but Ninja and Rust need to be installed on your system. Python and
Node toolchains are automatically downloaded like in Bazel.
This new build system should result in faster builds in some cases:
- Because we're using cargo to build now, Rust builds are able to take advantage
of pipelining and incremental debug builds, which we didn't have with Bazel.
It's also easier to override the default linker on Linux/macOS, which can
further improve speeds.
- External Rust crates are now built with opt=1, which improves performance
of debug builds.
- Esbuild is now used to transpile TypeScript, instead of invoking the TypeScript
compiler. This results in faster builds, by deferring typechecking to test/check
time, and by allowing more work to happen in parallel.
As an example of the differences, when testing with the mold linker on Linux,
adding a new message to tags.proto (which triggers a recompile of the bulk of
the Rust and TypeScript code) results in a compile that goes from about 22s on
Bazel to about 7s in the new system. With the standard linker, it's about 9s.
Some other changes of note:
- Our Rust workspace now uses cargo-hakari to ensure all packages agree on
available features, preventing unnecessary rebuilds.
- pylib/anki is now a PEP420 implicit namespace, avoiding the need to merge
source files and generated files into a single folder for running. By telling
VSCode about the extra search path, code completion now works with generated
files without needing to symlink them into the source folder.
- qt/aqt can't use PEP420 as it's difficult to get rid of aqt/__init__.py.
Instead, the generated files are now placed in a separate _aqt package that's
added to the path.
- ts/lib is now exposed as @tslib, so the source code and generated code can be
provided under the same namespace without a merging step.
- MyPy and PyLint are now invoked once for the entire codebase.
- dprint will be used to format TypeScript/json files in the future instead of
the slower prettier (currently turned off to avoid causing conflicts). It can
automatically defer to prettier when formatting Svelte files.
- svelte-check is now used for typechecking our Svelte code, which revealed a
few typing issues that went undetected with the old system.
- The Jest unit tests now work on Windows as well.
If you're upgrading from Bazel, updated usage instructions are in docs/development.md and docs/build.md. A summary of the changes:
- please remove node_modules and .bazel
- install rustup (https://rustup.rs/)
- install rsync if not already installed (on Windows, use pacman - see docs/windows.md)
- install Ninja (unzip from https://github.com/ninja-build/ninja/releases/tag/v1.11.1 and
place on your path, or from your distro/homebrew if it's 1.10+)
- update .vscode/settings.json from .vscode.dist
* Add crate snafu
* Replace all inline structs in AnkiError
* Derive Snafu on AnkiError
* Use snafu for card type errors
* Use snafu whatever error for InvalidInput
* Use snafu for NotFoundError and improve message
* Use snafu for FileIoError to attach context (see the sketch at the end of this list)
Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.
* Add more context-attaching io helpers
* Add message, context and backtrace to new snafus
* Utilize error context and backtrace on frontend
* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.
* Rename localized(_description) -> message
* Remove accidentally committed experimental trait
* invalid_input_context -> ok_or_invalid
* ensure_valid_input! -> require!
* Always return `Err` from `invalid_input!`
The macro now accepts a source error, instead of returning a Result to unwrap.
* new_tempfile_in_parent -> new_tempfile_in_parent_of
* ok_or_not_found -> or_not_found
* ok_or_invalid -> or_invalid
* Add crate convert_case
* Use unqualified lowercase type name
* Remove uses of snafu::ensure
* Allow public construction of InvalidInputErrors (dae)
Needed to port the AnkiDroid changes.
* Make into_protobuf() public (dae)
Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
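As an illustration of the context-attaching pattern these commits describe, a minimal snafu sketch; the error type, its fields, and the helper are illustrative rather than Anki's actual definitions:

```rust
use std::{fs, path::PathBuf};

use snafu::{ResultExt, Snafu};

/// Illustrative error in the style of FileIoError: the failing path travels
/// with the underlying io::Error instead of being lost.
#[derive(Debug, Snafu)]
#[snafu(display("failed to read {}: {}", path.display(), source))]
struct ReadFileError {
    path: PathBuf,
    source: std::io::Error,
}

/// A context-attaching helper in the spirit of the ones added above: callers
/// get an error that already knows which file was involved.
fn read_file(path: &std::path::Path) -> Result<Vec<u8>, ReadFileError> {
    fs::read(path).context(ReadFileSnafu { path })
}
```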