e759885734
* Implement colpkg exporting on backend
* Use exporting logic in backup.rs
* Refactor exporting.rs
* Add backend function to export collection
* Refactor backend/collection.rs
* Use backend for colpkg exporting
* Don't use default zip compression for media
* Add exporting progress
* Refactor media file writing
* Write dummy collections
* Localize dummy collection note
* Minimize dummy db size
* Use `NamedTempFile::new()` instead of `new_in`
* Drop redundant v2 dummy collection
* COLLECTION_VERSION -> PACKAGE_VERSION
* Split `lock_collection()` into two to drop flag
* Expose new colpkg in GUI
* Improve dummy collection message
* Please type checker
* importing-colpkg-too-new -> exporting-...
* Compress the media map in the v3 package (dae)

  On collections with lots of media, it can grow into megabytes. Also return an error in extract_media_file_names(), instead of masking it as an optional.

* Store media map as a vector in the v3 package (dae)

  This compresses better (eg 280kb original, 100kb hashmap, 42kb vec). In the colpkg import case we don't need random access. When importing an apkg, we will need to be able to fetch file data for a given media filename, but the existing map doesn't help us there, as we need filename->index, not index->filename.

* Ensure folders in the media dir don't break the file mapping (dae)
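The vector-based media map described in the last bullets can be sketched as follows. This is a minimal illustration, not Anki's actual API: the type alias and function names are hypothetical. The idea is that a file's position in the vector *is* its index (the zip entries are named `0`, `1`, ...), so no keys need to be stored; for the apkg case, the filename->index lookup is rebuilt by inverting the vector once.

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the v3 media map: a plain vector of filenames,
/// where position in the vector doubles as the media file's index inside
/// the archive. Storing only the names compresses better than a map,
/// since the redundant index keys disappear.
type MediaEntries = Vec<String>;

/// Colpkg import: entries are consumed by index, so the vector suffices.
fn filename_for_index(entries: &MediaEntries, idx: usize) -> Option<&str> {
    entries.get(idx).map(|s| s.as_str())
}

/// Apkg import: we need filename -> index, so invert the vector once
/// up front instead of storing that mapping in the package.
fn index_by_filename(entries: &MediaEntries) -> HashMap<&str, usize> {
    entries
        .iter()
        .enumerate()
        .map(|(idx, name)| (name.as_str(), idx))
        .collect()
}
```

The trade-off matches the commit note: sequential access costs nothing, and the one-time inversion is only paid on the import path that actually needs random access by name.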
* core
* qt
* usage
* BUILD.bazel
* duplicate-string.py
* extract-strings.py
* format_check.py
* format.py
* README.md
* remove-unused.sh
* sync.py
* transform-string.py
* update-ankimobile-usage.sh
* update-desktop-usage.sh
Files related to Anki's translations.