- mtime is tracked on each key individually, which will allow
merging of config changes when syncing in the future
- added col.(get|set|remove)_config()
- in order to support existing code that was mutating returned
values (eg col.conf["something"]["another"] = 5), the returned list/dict
will be automatically wrapped so that when the value is dropped, it
will save the mutated item back to the DB if it has changed. Code that
fetches lists/dicts from the config like so:

    col.conf["foo"]["bar"] = baz
    col.setMod()

will continue to work in most cases, but should be gradually updated to:

    conf = col.get_config("foo")
    conf["bar"] = baz
    col.set_config("foo", conf)
The option to skip the schema downgrade on close is disabled for now;
when enabled, it will allow faster collection opens and closes in the
normal case, while continuing to downgrade when exporting or doing a
full sync.
Also, when downgrading is disabled, the journal mode is no longer
changed back to delete.
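At the SQLite level, the journal mode toggle looks roughly like this
(a sketch using Python's sqlite3, assuming WAL is the mode used while
the collection is open):

    import sqlite3

    db = sqlite3.connect("collection.anki2")
    # faster journaling while the collection is in use
    db.execute("pragma journal_mode = wal")
    # with downgrading enabled, closing would switch back so older
    # clients that expect a delete-journal file can open it
    db.execute("pragma journal_mode = delete")
    db.close()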
- tag list stored in a separate DB table
- non-wildcard searches now do full unicode case folding
(eg tag:masse matches 'Maße'; see the example below)
- wildcard matches do simple unicode case folding
- some functions haven't been updated yet, so ascii folding will
continue to be used in some operations
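The difference is easy to see with Python's built-in string methods,
which roughly correspond to full and simple folding here:

    # full case folding expands ß to ss, so a non-wildcard search
    # for "masse" can match the tag "Maße"
    assert "Maße".casefold() == "masse"

    # lowercasing (akin to simple folding) leaves ß alone, which is
    # why wildcard matches are less thorough
    assert "Maße".lower() == "maße"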
The progress handling code needs a rethink, as we now have two separate
ways to flag that the media sync should abort. In the future, it may
make sense to switch to polling the backend for progress, instead of
passing a callback in.
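Polling might look something like this (media_sync_active,
latest_progress and abort_media_sync are hypothetical names, not the
current API):

    import time

    def monitor_media_sync(backend, update_ui, want_abort):
        # drive progress from this side by polling the backend,
        # instead of the backend invoking a callback
        while backend.media_sync_active():
            update_ui(backend.latest_progress())
            if want_abort():
                backend.abort_media_sync()
            time.sleep(0.1)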
- on collection load, the schema is upgraded to 12
- on collection close, the changes are reversed so older clients
can continue to open the collection (sketched below)
- in the future, we could potentially skip the reversal except
when exporting/doing a full sync
- the same approach should work for decks, note types and tags in the
future too
- the deck list code needs updating to cache the deck confs for the
life of the call
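The open/close flow described above, in rough pseudocode (the version
numbers match this change; upgrade_to_current and downgrade_to_legacy
are stand-ins for the real migration code):

    SCHEMA_LEGACY = 11
    SCHEMA_CURRENT = 12

    def upgrade_to_current(db):
        ...  # stand-in: apply the schema 12 changes

    def downgrade_to_legacy(db):
        ...  # stand-in: reverse them again

    def open_collection(db):
        # files from older clients sit at the legacy version
        if db.execute("select ver from col").fetchone()[0] == SCHEMA_LEGACY:
            upgrade_to_current(db)

    def close_collection(db, downgrade=True):
        # reversing the changes keeps the file readable by older
        # clients; in the future this could be skipped except when
        # exporting or doing a full sync
        if downgrade:
            downgrade_to_legacy(db)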
This is safer than just dropping the backend, as .close() will
block if something else is holding the mutex. It also means we can
drop the extra I18nBackend code.
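In Python terms, the idea is roughly this (the real mutex lives in
the Rust backend):

    import threading

    class Backend:
        def __init__(self):
            self._lock = threading.Lock()

        def run_op(self, fn):
            # every backend operation holds the lock while it runs
            with self._lock:
                return fn()

        def close(self):
            # blocks until any in-flight operation releases the
            # lock, rather than yanking the backend out from under it
            with self._lock:
                ...  # release DB handles and other resources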
Media syncing still needs fixing.
Some initial testing with orjson indicates performance varies from
slightly better than pysqlite to about 2x slower depending on the type
of query.
Performance could be improved by building the Python list in rspy
instead of sending back json that needs to be decoded, but it may make
more sense to rewrite the hotspots in Rust instead. More testing is
required in any case.
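A rough way to compare the two paths (a self-contained toy benchmark;
backend_query_json stands in for the rspy call, and real numbers will
vary with the query):

    import sqlite3
    import time

    import orjson

    def backend_query_json(sql):
        # placeholder for the Rust backend call that runs the query
        # and returns the rows as a json string
        return b"[]"

    db = sqlite3.connect(":memory:")
    db.execute("create table notes (id integer, flds text)")
    db.executemany(
        "insert into notes values (?, ?)",
        [(i, "front\x1fback") for i in range(1000)],
    )

    def bench(label, fn, repeats=100):
        start = time.perf_counter()
        for _ in range(repeats):
            fn()
        print(label, time.perf_counter() - start)

    bench("pysqlite", lambda: db.execute("select * from notes").fetchall())
    bench("json", lambda: orjson.loads(backend_query_json("select * from notes")))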
Committing the Protobuf implementation for posterity; it will be
replaced with json, as Protobuf measures about 6x slower for some
workloads like 'select * from notes'.