- In corner cases, enabling the new timezone handling later can cause
reviews to shift forward or back a day, so it's best to have it on
by default.
- The change tracked in https://github.com/ankidroid/Anki-Android/issues/5805
has not landed in a stable release yet, but will hopefully not be too
far off by the time 2.1.41 is released.
- Existing users will be unaffected, as the upgrade prompt in the previous
commit asks them if they use AnkiDroid.
- Users starting on AnkiDroid will be unaffected, as their collections
will still be on V1.
- The error message AnkiWeb gives when syncing an older AnkiDroid
with the new timezone enabled has been updated to direct users to the
preferences screen.
- Rework the V2 upgrade so that it no longer resets cards in learning or
empties filtered decks.
- V1 users will receive a message at the top of the deck list
encouraging them to upgrade, and they can upgrade directly from that
screen.
- The setting in the preferences screen has been removed, so users
will need to use an older Anki version if they wish to switch back to
V1.
- Prevent V2 exports with scheduling from being importable into a V1
collection - the code previously allowed this when it shouldn't have.
- New collections still default to v1 at the moment.
Also add a helper to get a map of decks and their deck configs, as this
was needed in a few places in the codebase.
Not plugged into the Python code yet. Still a work in progress.
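As a rough sketch of the shape such a helper could take (the types and
field names below are made up for illustration, not the actual Anki
structs):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real deck and config types.
#[derive(Clone)]
struct DeckConfig {
    id: i64,
    name: String,
}

struct Deck {
    id: i64,
    config_id: i64,
    name: String,
}

/// Map each deck id to its deck and config, so callers don't need to
/// look the two up separately. Decks with a missing config are skipped.
fn decks_and_configs(
    decks: Vec<Deck>,
    configs: &[DeckConfig],
) -> HashMap<i64, (Deck, DeckConfig)> {
    let configs_by_id: HashMap<i64, &DeckConfig> =
        configs.iter().map(|c| (c.id, c)).collect();
    decks
        .into_iter()
        .filter_map(|deck| {
            configs_by_id
                .get(&deck.config_id)
                .map(|conf| (deck.id, (deck, (*conf).clone())))
        })
        .collect()
}
```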
Other changes:
- Move a number of From implementations out of the giant backend/mod.rs
file into separate submodules.
- Reorder backend methods to match the proto order.
- Fix some clippy lints.
The previous approach worked when the user pushed the due date back, or
moved it forward a little, but broke down if they rescheduled shortly
after the previous answer - a card that had only just been answered
would have an effective delay of 0, causing its interval to be reset,
which is not great.
I considered limiting interval reductions, but that would make the
behaviour inconsistent when sending a card forward and then moving it
back again.
We could apply a cap to how much the interval is reduced, but that
would mean either doing something like dividing it by 2 (which breaks
down when the action is performed repeatedly - e.g. a 100 day interval
would drop to 50, then 25, and so on), or looking up the review log to
try to determine the previous interval we should not go below.
One other option we might want to consider in the future is using the
revlog to calculate the actual elapsed time at answer time instead of
at reschedule time, falling back to the existing behaviour when the
revlog doesn't match or is missing.
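A minimal sketch of that future option, assuming we can pull the last
review timestamp out of the revlog (the names below are hypothetical,
not the current implementation):

```rust
/// Days elapsed since the last review, based on the revlog entry when
/// one is available and sane, otherwise falling back to the existing
/// calculation.
fn elapsed_days_at_answer(
    last_review_secs: Option<i64>,
    now_secs: i64,
    fallback_days: u32,
) -> u32 {
    match last_review_secs {
        // Only trust the revlog entry if it predates "now".
        Some(secs) if secs <= now_secs => ((now_secs - secs) / 86_400) as u32,
        // Revlog entry missing or inconsistent: keep existing behaviour.
        _ => fallback_days,
    }
}
```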
- SearchTerm -> SearchNode
- Operator -> Joiner; share between messages
- build_search_string() supports specifying AND/OR as a convenience
- group_searches() makes it easier to negate
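As a rough, hypothetical sketch of the node/joiner idea (these are not
the real Anki types or method signatures):

```rust
// Hypothetical node and joiner types, just to illustrate the idea.
enum Joiner {
    And,
    Or,
}

enum Node {
    Text(String),
    Group(Vec<Node>, Joiner),
    Not(Box<Node>),
}

/// Render a node tree into a search string, parenthesising groups so
/// that negation and mixed AND/OR nesting stay unambiguous.
fn write_node(node: &Node) -> String {
    match node {
        Node::Text(s) => s.clone(),
        Node::Group(children, joiner) => {
            let sep = match joiner {
                Joiner::And => " AND ",
                Joiner::Or => " OR ",
            };
            let parts: Vec<String> = children.iter().map(write_node).collect();
            format!("({})", parts.join(sep))
        }
        Node::Not(inner) => format!("-{}", write_node(inner)),
    }
}
```

With types like these, a negated group of two OR'd terms renders as
-(is:due OR deck:current), without callers having to build each
fragment as a string first.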
While implementing the overdue search, I realised it would be nice to
be able to construct a search string with OR and NOT searches without
having to build each part individually with build_search_string().
Changes:
- Extend SearchTerm to support a text search, which will be parsed
by the backend. This allows us to do things like wrap text in a group
or NOT node.
- Because the SearchTerm->Node conversion can now fail with a parsing
error, it has been switched over to TryFrom.
- Switch concatenate_searches and replace_search_term to use SearchTerms,
so that they too don't require separate string building steps.
- Remove the unused normalize_search()
- Remove negate_search, as this is now an operation on a Node, and
users can wrap their search in SearchTerm(negated=...)
- Remove the match_any and negate args from build_search_string
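To show why the conversion becomes fallible, here is a hypothetical
sketch (not the actual SearchTerm/Node definitions): the text variant
has to go through the search parser, which can reject invalid input:

```rust
use std::convert::TryFrom;

// Hypothetical message and node types.
struct SearchTerm {
    text: String,
    negated: bool,
}

enum Node {
    Text(String),
    Not(Box<Node>),
}

#[derive(Debug)]
struct ParseError(String);

// Stand-in for the real search parser.
fn parse_search(input: &str) -> Result<Node, ParseError> {
    if input.trim().is_empty() {
        Err(ParseError("empty search".into()))
    } else {
        Ok(Node::Text(input.to_string()))
    }
}

impl TryFrom<SearchTerm> for Node {
    type Error = ParseError;

    fn try_from(term: SearchTerm) -> Result<Self, Self::Error> {
        // The text has to go through the parser, so conversion can fail.
        let node = parse_search(&term.text)?;
        Ok(if term.negated {
            Node::Not(Box::new(node))
        } else {
            node
        })
    }
}
```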
Having done all this work, I've just realised that perhaps the original
JSON idea was more feasible than I first thought - if we wrote it out
to a string and re-parsed it, we would be able to leverage the existing
checks that occur at the parsing stage.
I was a bit too enthusiastic about using borrowed values in structs
earlier on in the Rust porting. In this case, any performance gains are
dwarfed by the cost of querying the DB; using owned values here
simplifies the code, and will make it easier to parse a fragment in
the From<SearchTerm> impl.
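As a generic illustration of the trade-off (not the actual Anki types):
a struct that borrows from the query string forces a lifetime through
every signature, while an owned version costs a couple of small
allocations and is much easier to build from a parsed fragment:

```rust
// Borrowed version: cheap, but the lifetime leaks into every signature
// that touches the type, and building one from a freshly parsed
// fragment is awkward because the source string must outlive the node.
struct BorrowedNode<'a> {
    field: &'a str,
    text: &'a str,
}

// Owned version: a couple of small allocations per node, which is
// negligible next to a database query, and it can be built and
// returned freely.
struct OwnedNode {
    field: String,
    text: String,
}

// Parsing into the owned form needs no lifetime plumbing at all.
fn parse_fragment(input: &str) -> Option<OwnedNode> {
    let (field, text) = input.split_once(':')?;
    Some(OwnedNode {
        field: field.to_string(),
        text: text.to_string(),
    })
}
```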