anki/rslib/cargo/BUILD.bazel

"""
@generated
cargo-raze generated Bazel file.
DO NOT EDIT! Replaced on runs of cargo-raze
"""
package(default_visibility = ["//visibility:public"])

licenses([
    "notice",  # See individual crates for specific licenses
])
# Aliased targets
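# Each alias below maps a stable, unversioned label (e.g. ":serde") onto the
# versioned external repository that cargo-raze generated for that crate
# (e.g. "@raze__serde__1_0_136//:serde"), so dependent BUILD files need no
# updates when a pinned crate version changes. The "manual" tag keeps these
# aliases out of wildcard target patterns such as //...:all.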
alias(
    name = "ammonia",
    actual = "@raze__ammonia__3_1_4//:ammonia",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "async_trait",
    actual = "@raze__async_trait__0_1_52//:async_trait",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "blake3",
    actual = "@raze__blake3__1_3_1//:blake3",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "bytes",
    actual = "@raze__bytes__1_1_0//:bytes",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "chrono",
    actual = "@raze__chrono__0_4_19//:chrono",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "coarsetime",
    actual = "@raze__coarsetime__0_1_21//:coarsetime",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "env_logger",
    actual = "@raze__env_logger__0_9_0//:env_logger",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "flate2",
    actual = "@raze__flate2__1_0_22//:flate2",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "fluent",
    actual = "@raze__fluent__0_16_0//:fluent",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "fluent_bundle",
    actual = "@raze__fluent_bundle__0_15_2//:fluent_bundle",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "fluent_syntax",
    actual = "@raze__fluent_syntax__0_11_0//:fluent_syntax",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "fnv",
    actual = "@raze__fnv__1_0_7//:fnv",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "futures",
    actual = "@raze__futures__0_3_21//:futures",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "hex",
    actual = "@raze__hex__0_4_3//:hex",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "htmlescape",
    actual = "@raze__htmlescape__0_3_1//:htmlescape",
    tags = [
        "cargo-raze",
        "manual",
    ],
)
alias(
    name = "id_tree",
    actual = "@raze__id_tree__1_8_0//:id_tree",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "inflections",
    actual = "@raze__inflections__1_1_1//:inflections",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "intl_memoizer",
    actual = "@raze__intl_memoizer__0_5_1//:intl_memoizer",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "itertools",
    actual = "@raze__itertools__0_10_3//:itertools",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "lazy_static",
    actual = "@raze__lazy_static__1_4_0//:lazy_static",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "linkcheck",
    actual = "@raze__linkcheck__0_4_1_alpha_0//:linkcheck",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "nom",
    actual = "@raze__nom__7_1_0//:nom",
    tags = [
        "cargo-raze",
        "manual",
    ],
)
alias(
    name = "num_cpus",
    actual = "@raze__num_cpus__1_13_1//:num_cpus",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "num_enum",
    actual = "@raze__num_enum__0_5_6//:num_enum",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "num_format",
    actual = "@raze__num_format__0_4_0//:num_format",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "num_integer",
    actual = "@raze__num_integer__0_1_44//:num_integer",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "once_cell",
    actual = "@raze__once_cell__1_9_0//:once_cell",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "pct_str",
    actual = "@raze__pct_str__1_1_0//:pct_str",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "phf",
    actual = "@raze__phf__0_10_1//:phf",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "pin_project",
    actual = "@raze__pin_project__1_0_10//:pin_project",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "proc_macro_nested",
    actual = "@raze__proc_macro_nested__0_1_7//:proc_macro_nested",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "prost",
    actual = "@raze__prost__0_9_0//:prost",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "prost_build",
    actual = "@raze__prost_build__0_9_0//:prost_build",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "pulldown_cmark",
    actual = "@raze__pulldown_cmark__0_8_0//:pulldown_cmark",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "pyo3",
    actual = "@raze__pyo3__0_15_1//:pyo3",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "rand",
    actual = "@raze__rand__0_8_5//:rand",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "regex",
    actual = "@raze__regex__1_5_4//:regex",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "reqwest",
    actual = "@raze__reqwest__0_11_3//:reqwest",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "rusqlite",
    actual = "@raze__rusqlite__0_26_3//:rusqlite",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "scopeguard",
    actual = "@raze__scopeguard__1_1_0//:scopeguard",
    tags = [
        "cargo-raze",
        "manual",
    ],
)
alias(
    name = "serde",
    actual = "@raze__serde__1_0_136//:serde",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "serde_aux",
    actual = "@raze__serde_aux__3_0_1//:serde_aux",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "serde_derive",
    actual = "@raze__serde_derive__1_0_136//:serde_derive",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "serde_json",
    actual = "@raze__serde_json__1_0_79//:serde_json",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "serde_repr",
    actual = "@raze__serde_repr__0_1_7//:serde_repr",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "serde_tuple",
    actual = "@raze__serde_tuple__0_5_0//:serde_tuple",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "sha1",
    actual = "@raze__sha1__0_6_1//:sha1",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "slog",
    actual = "@raze__slog__2_7_0//:slog",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "slog_async",
    actual = "@raze__slog_async__2_7_0//:slog_async",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "slog_envlogger",
    actual = "@raze__slog_envlogger__2_2_0//:slog_envlogger",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "slog_term",
    actual = "@raze__slog_term__2_9_0//:slog_term",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "strum",
    actual = "@raze__strum__0_23_0//:strum",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "tempfile",
    actual = "@raze__tempfile__3_3_0//:tempfile",
    tags = [
        "cargo-raze",
        "manual",
    ],
)
alias(
    name = "tokio",
    actual = "@raze__tokio__1_17_0//:tokio",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "tokio_util",
    actual = "@raze__tokio_util__0_6_9//:tokio_util",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "unic_langid",
    actual = "@raze__unic_langid__0_9_0//:unic_langid",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "unic_ucd_category",
    actual = "@raze__unic_ucd_category__0_9_0//:unic_ucd_category",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "unicase",
    actual = "@raze__unicase__2_6_0//:unicase",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "unicode_normalization",
    actual = "@raze__unicode_normalization__0_1_19//:unicode_normalization",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "utime",
    actual = "@raze__utime__0_3_1//:utime",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "walkdir",
    actual = "@raze__walkdir__2_3_2//:walkdir",
    tags = [
        "cargo-raze",
        "manual",
    ],
)

alias(
    name = "zip",
    actual = "@raze__zip__0_5_13//:zip",
    tags = [
        "cargo-raze",
        "manual",
    ],
)
alias(
    name = "zstd",
    actual = "@raze__zstd__0_10_0_zstd_1_5_2//:zstd",
    tags = [
        "cargo-raze",
        "manual",
    ],
)
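# Illustrative consumer of these aliases (a sketch, not taken from this repo:
# the target name, srcs, and edition are assumptions; the package path follows
# this file's location at rslib/cargo):
#
# rust_library(
#     name = "anki_rslib",
#     srcs = glob(["src/**/*.rs"]),
#     edition = "2018",
#     deps = [
#         "//rslib/cargo:serde",
#         "//rslib/cargo:tokio",
#     ],
# )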