Commit Graph

37 Commits

RumovZ
5a53da23ca
Deck scoped dupe check (#2372)
* Support limiting dupe check to deck

* Expose deck limiting dupe check on frontend

* Make CSV dupe options configurable with headers

* Rename duplicate file headers

* Change dupe check limit to enum
2023-02-16 17:53:36 +10:00
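A minimal sketch of the "dupe check limit as enum" idea from this PR. All names below are invented for illustration and are not the PR's actual definitions:

```rust
// Hypothetical names: the duplicate check can consider the whole
// collection, or be limited to a single deck.
pub struct DeckId(pub i64);

pub enum DupeCheckLimit {
    WholeCollection,
    Deck(DeckId),
}

/// Decide whether a candidate note (by its deck) is in scope for the check.
fn in_scope(candidate_deck: i64, limit: &DupeCheckLimit) -> bool {
    match limit {
        DupeCheckLimit::WholeCollection => true,
        DupeCheckLimit::Deck(DeckId(id)) => candidate_deck == *id,
    }
}
```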
Damien Elmes
4142de57e2 Fix clean build failure due to protoc change
da7d4dd2fc changed the name of the env
var in .cargo/config.toml, causing the check in setup_protoc() to think
a custom path had been provided, which skipped the download and extract
step.
2023-01-26 09:33:39 +10:00
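The failure mode is easier to see in code. A rough reconstruction of the check described above, with assumed function and variable names (not copied from the build script):

```rust
use std::env;
use std::path::PathBuf;

// If the env var is set, the build treats it as a user-provided protoc
// and skips the download step. Renaming the variable in
// .cargo/config.toml made this branch fire on clean builds where no
// binary had actually been downloaded.
fn setup_protoc() -> PathBuf {
    match env::var("PROTOC") {
        Ok(custom_path) => PathBuf::from(custom_path),
        Err(_) => download_and_extract_protoc(),
    }
}

fn download_and_extract_protoc() -> PathBuf {
    // stub: fetch a pinned protoc release and return its path
    PathBuf::from("out/protoc/bin/protoc")
}
```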
Mani
da7d4dd2fc
Use a ninja variable for Protoc binary (#2345)
* Use a ninja variable for Protoc binary

* fix whitespace
2023-01-23 20:44:47 +10:00
Damien Elmes
ded805b504
Switch Rust import style (#2330)
* Prepare to switch Rust import style

* Run nightly format

Closes #2320

* Clean up a few imports

* Enable comment wrapping

* Wrap comments
2023-01-18 21:39:55 +10:00
Damien Elmes
5e0a761b87
Move away from Bazel (#2202)
(for upgrading users, please see the notes at the bottom)

Bazel brought a lot of nice things to the table, such as rebuilds based on
content changes instead of modification times, caching of build products,
detection of incorrect build rules via a sandbox, and so on. Rewriting the build
in Bazel was also an opportunity to improve on the Makefile-based build we had
prior, which was pretty poor: most dependencies were external or not pinned, and
the build graph was poorly defined and mostly serialized. It was not uncommon
for fresh checkouts to fail due to floating dependencies, or for things to break
when trying to switch to an older commit.

For day-to-day development, I think Bazel served us reasonably well - we could
generally switch between branches while being confident that builds would be
correct and reasonably fast, and not require full rebuilds (except on Windows,
where the lack of a sandbox and the TS rules would cause build breakages when TS
files were renamed/removed).

Bazel achieves that reliability by defining rules for each programming language
that define how source files should be turned into outputs. For the rules to
work with Bazel's sandboxing approach, they often have to reimplement or
partially bypass the standard tools that each programming language provides. The
Rust rules call Rust's compiler directly for example, instead of using Cargo,
and the Python rules extract each PyPi package into a separate folder that gets
added to sys.path.

These separate language rules allow proper declaration of inputs and outputs,
and offer some advantages such as caching of build products and fine-grained
dependency installation. But they also bring some downsides:

- The rules don't always support use-cases/platforms that the standard language
tools do, meaning they need to be patched to be used. I've had to contribute a
number of patches to the Rust, Python and JS rules to unblock various issues.
- The dependencies we use with each language sometimes make assumptions that do
not hold in Bazel, meaning they either need to be pinned or patched, or the
language rules need to be adjusted to accommodate them.

I was hopeful that after the initial setup work, things would be relatively
smooth sailing. Unfortunately, that has not proved to be the case. Things
frequently broke when dependencies or the language rules were updated, and I
began to get frustrated at the amount of Anki development time I was instead
spending on build system upkeep. It's now about 2 years since switching to
Bazel, and I think it's time to cut our losses and switch to something else that's
a better fit.

The new build system is based on a small build tool called Ninja, and some
custom Rust code in build/. This means that to build Anki, Bazel is no longer
required, but Ninja and Rust need to be installed on your system. Python and
Node toolchains are automatically downloaded like in Bazel.

This new build system should result in faster builds in some cases:

- Because we're using cargo to build now, Rust builds are able to take advantage
of pipelining and incremental debug builds, which we didn't have with Bazel.
It's also easier to override the default linker on Linux/macOS, which can
further improve speeds.
- External Rust crates are now built with opt=1, which improves performance
of debug builds.
- Esbuild is now used to transpile TypeScript, instead of invoking the TypeScript
compiler. This results in faster builds, by deferring typechecking to test/check
time, and by allowing more work to happen in parallel.

As an example of the differences, when testing with the mold linker on Linux,
adding a new message to tags.proto (which triggers a recompile of the bulk of
the Rust and TypeScript code) results in a compile that goes from about 22s on
Bazel to about 7s in the new system. With the standard linker, it's about 9s.

Some other changes of note:

- Our Rust workspace now uses cargo-hakari to ensure all packages agree on
available features, preventing unnecessary rebuilds.
- pylib/anki is now a PEP420 implicit namespace, avoiding the need to merge
source files and generated files into a single folder for running. By telling
VSCode about the extra search path, code completion now works with generated
files without needing to symlink them into the source folder.
- qt/aqt can't use PEP420 as it's difficult to get rid of aqt/__init__.py.
Instead, the generated files are now placed in a separate _aqt package that's
added to the path.
- ts/lib is now exposed as @tslib, so the source code and generated code can be
provided under the same namespace without a merging step.
- MyPy and PyLint are now invoked once for the entire codebase.
- dprint will be used to format TypeScript/json files in the future instead of
the slower prettier (currently turned off to avoid causing conflicts). It can
automatically defer to prettier when formatting Svelte files.
- svelte-check is now used for typechecking our Svelte code, which revealed a
few typing issues that went undetected with the old system.
- The Jest unit tests now work on Windows as well.

If you're upgrading from Bazel, updated usage instructions are in docs/development.md and docs/build.md. A summary of the changes:

- please remove node_modules and .bazel
- install rustup (https://rustup.rs/)
- install rsync if not already installed (on Windows, use pacman - see docs/windows.md)
- install Ninja (unzip from https://github.com/ninja-build/ninja/releases/tag/v1.11.1 and
  place on your path, or from your distro/homebrew if it's 1.10+)
- update .vscode/settings.json from .vscode.dist
2022-11-27 15:24:20 +10:00
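For readers unfamiliar with Ninja: the custom Rust code in build/ generates a build.ninja file, which Ninja then executes. A very rough sketch of that idea — all names here are invented for illustration, not the actual code in build/:

```rust
use std::fmt::Write;

// Emit a Ninja rule and a build edge into a build.ninja buffer.
// Ninja's file format really is this simple: rules declare commands,
// and build edges map inputs to outputs via a rule.
fn ninja_rule(buf: &mut String, name: &str, command: &str) {
    writeln!(buf, "rule {name}\n  command = {command}").unwrap();
}

fn ninja_build(buf: &mut String, output: &str, rule: &str, inputs: &[&str]) {
    writeln!(buf, "build {output}: {rule} {}", inputs.join(" ")).unwrap();
}

fn main() {
    let mut ninja = String::new();
    ninja_rule(&mut ninja, "copy", "cp $in $out");
    ninja_build(&mut ninja, "out/file.txt", "copy", &["src/file.txt"]);
    print!("{ninja}");
}
```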
dobefore
0c9cecd7a3
Fix error while compiling rslib (#2187)
* protobuf generation error

pass --experimental_allow_proto3_optional to protoc
add missing derive trait to struct Daylimit

* delete invalid derive trait Eq

* remove argument from protoc

--experimental_allow_proto3_optional
2022-11-09 12:36:23 +10:00
RumovZ
c521753057
Refactor error handling (#2136)
* Add crate snafu

* Replace all inline structs in AnkiError

* Derive Snafu on AnkiError

* Use snafu for card type errors

* Use snafu whatever error for InvalidInput

* Use snafu for NotFoundError and improve message

* Use snafu for FileIoError to attach context

Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.

* Add more context-attaching io helpers

* Add message, context and backtrace to new snafus

* Utilize error context and backtrace on frontend

* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.

* Rename localized(_description) -> message

* Remove accidentally committed experimental trait

* invalid_input_context -> ok_or_invalid

* ensure_valid_input! -> require!

* Always return `Err` from `invalid_input!`

Instead of a Result to unwrap, the macro accepts a source error now.

* new_tempfile_in_parent -> new_tempfile_in_parent_of

* ok_or_not_found -> or_not_found

* ok_or_invalid -> or_invalid

* Add crate convert_case

* Use unqualified lowercase type name

* Remove uses of snafu::ensure

* Allow public construction of InvalidInputErrors (dae)

Needed to port the AnkiDroid changes.

* Make into_protobuf() public (dae)

Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
2022-10-21 18:02:12 +10:00
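To make the shape of the refactor concrete, here is a small sketch in the spirit of the changes above: a Snafu-derived error enum plus an Option helper like `or_not_found`. These are illustrative definitions, not Anki's actual ones:

```rust
use snafu::{OptionExt, Snafu};

// Illustrative only: variants carry context and a display message,
// replacing inline structs in the error enum.
#[derive(Debug, Snafu)]
pub enum AnkiError {
    #[snafu(display("invalid input: {message}"))]
    InvalidInput { message: String },
    #[snafu(display("{what} not found"))]
    NotFound { what: String },
}

pub type Result<T, E = AnkiError> = std::result::Result<T, E>;

// Hypothetical context-attaching helper in the spirit of `or_not_found`.
pub trait OrNotFound<T> {
    fn or_not_found(self, what: impl Into<String>) -> Result<T>;
}

impl<T> OrNotFound<T> for Option<T> {
    fn or_not_found(self, what: impl Into<String>) -> Result<T> {
        self.context(NotFoundSnafu { what: what.into() })
    }
}
```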
RumovZ
b8c294bf4d
Explicitly evaluate symlink on Windows (#2135) 2022-10-19 20:08:58 +10:00
Damien Elmes
fb9c934ef2 Use protoc from Bazel if missing from path
Closes #2134
2022-10-17 09:58:51 +10:00
RumovZ
cc929687ae
Deck-specific Limits (#1955)
* Add deck-specific limits to DeckNormal

* Add deck-specific limits to schema11

* Add DeckLimitsDialog

* deck_limits_qt6.py needs to be a symlink

* Clear duplicate deck setting keys on downgrade

* Export deck limits when exporting with scheduling

* Revert "deck_limits_qt6.py needs to be a symlink"

This reverts commit 4ee7be1e10c4e8c49bb20de3bf45ac18b5e2d4f6.

* Revert "Add DeckLimitsDialog"

This reverts commit eb0e2a62d33df0b518d9204a27b09e97966ce82a.

* Add day limits to DeckNormal

* Add deck and day limits mock to deck options

* Revert "Add deck and day limits mock to deck options"

This reverts commit 0775814989e8cb486483d06727b1af266bb4513a.

* Add Tabs component for daily limits

* Add borders to tabs component

* Revert "Add borders to tabs component"

This reverts commit aaaf5538932540f944d92725c63bb04cfe97ea14.

* Implement tabbed limits properly

* Add comment to translations

* Update rslib/src/decks/limits.rs

Co-authored-by: Damien Elmes <dae@users.noreply.github.com>

* Fix camel case in clear_other_duplicates()

* day_limit → current_limit

* Also import day limits

* Remember last used day limits

* Add day limits to schema 11

* Tweak comment (dae)

* Exclude day limit in export (dae)

* Tweak tab wording (dae)

* Update preset limits on preset change

* Explain tabs in tooltip (dae)

* Omit deck and today limits if v2 is enabled

* Preserve deck limit when switching to today limit
2022-07-19 18:27:25 +10:00
Damien Elmes
5ea78e1c8e Since DupeResolution is in CsvMetadata, we don't need to pass it separately
Follow-up to #1930
2022-06-27 17:15:54 +10:00
RumovZ
42cbe42f06
Plaintext import/export (#1850)
* Add crate csv

* Add start of csv importing on backend

* Add Mnemosyne serializer

* Add csv and json importing on backend

* Add plaintext importing on frontend

* Add csv metadata extraction on backend

* Add csv importing with GUI

* Fix missing dfa file in build

Added compile_data_attr, then re-ran cargo/update.py.

* Don't use doubly buffered reader in csv

* Escape HTML entities if CSV is not HTML

Also use name 'is_html' consistently.

* Use decimal number as foreign ease (like '2.5')

* ForeignCard.ivl → ForeignCard.interval

* Only allow fixed set of CSV delimiters

* Map timestamp of ForeignCard to native due time

* Don't trim CSV records

* Document use of empty strings for defaults

* Avoid creating CardGenContexts for every note

This requires CardGenContext to be generic, so it works both with an
owned and borrowed notetype.

* Show all accepted file types in import file picker

* Add import_json_file()

* factor → ease_factor

* delimter_from_value → delimiter_from_value

* Map columns to fields, not the other way around

* Fall back to current config for csv metadata

* Add start of new import csv screen

* Temporary fix for compilation issue on Linux/Mac

* Disable jest bazel action for import-csv

Jest fails with an error code if no tests are available, but this would
not be noticeable on Windows as Jest is not run there.

* Fix field mapping issue

* Revert "Temporary fix for compilation issue on Linux/Mac"

This reverts commit 21f8a261408cdae49ec031aa21a1b659c4f66d82.

* Add HtmlSwitch and move Switch to components

* Fix spacing and make selectors consistent

* Fix shortcut tooltip

* Place import button at the top with path

* Fix meta column indices

* Remove NotetypeForString

* Fix queue and type of foreign cards

* Support different dupe resolution strategies

* Allow dupe resolution selection when importing CSV

* Test import of unnormalized text

Closes #1863.

* Fix logging of foreign notes

* Implement CSV exports

* Use db_scalar() in notes_table_len()

* Rework CSV metadata

- Notetypes and decks are either defined by a global id or by a column.
- If a notetype id is provided, its field map must also be specified.
- If a notetype column is provided, fields are now mapped by index
instead of name at import time. So the first non-meta column is used for
the first field of every note, regardless of notetype. This makes
importing easier and should improve compatibility with files without a
notetype column.
- Ensure first field can be mapped to a column.
- Meta columns must be defined as `#[meta name]:[column index]` instead
of in the `#columns` tag.
- Column labels contain the raw names defined by the file and must be
prettified by the frontend.

* Adjust frontend to new backend column mapping

* Add force flags for is_html and delimiter

* Detect if CSV is HTML by field content

* Update dupe resolution labels

* Simplify selectors

* Fix coalescence of oneofs in TS

* Disable meta columns from selection

Plus a lot of refactoring.

* Make import button stick to the bottom

* Write delimiter and html flag into csv

* Refetch field map after notetype change

* Fix log labels for csv import

* Log notes whose deck/notetype was missing

* Fix hiding of empty log queues

* Implement adding tags to all notes of a csv

* Fix dupe resolution not being set in log

* Implement adding tags to updated notes of a csv

* Check first note field is not empty

* Temporary fix for build on Linux/Mac

* Fix inverted html check (dae)

* Remove unused ftl string

* Delimiter → Separator

* Remove commented-out line

* Don't accept .json files

* Tweak tag ftl strings

* Remove redundant blur call

* Strip sound and add spaces in csv export

* Export HTML by default

* Fix unset deck in Mnemosyne import

Also accept both numbers and strings for notetypes and decks in JSON.

* Make DupeResolution::Update the default

* Fix missing dot in extension

* Make column indices 1-based

* Remove StickContainer from TagEditor

Fixes line breaking, border, and z-index on ImportCsvPage.

* Assign different key combos to tag editors

* Log all updated duplicates

Add a log field for the true number of found notes.

* Show identical notes as skipped

* Split tag-editor into separate ts module (dae)

* Add progress for CSV export

* Add progress for text import

* Tidy-ups after tag-editor split (dae)

- import-csv no longer depends on editor
- remove some commented lines
2022-06-01 20:26:16 +10:00
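A small sketch of parsing the `#[meta name]:[column index]` header convention described in "Rework CSV metadata", with 1-based indices as in "Make column indices 1-based". The function name and exact behaviour are illustrative, not the importer's actual code:

```rust
// Illustrative parser for meta lines such as "#notetype column:1".
fn parse_meta_column(line: &str) -> Option<(&str, usize)> {
    let rest = line.strip_prefix('#')?;
    let (name, value) = rest.split_once(':')?;
    let index: usize = value.trim().parse().ok()?;
    // column indices are 1-based after this PR
    (index >= 1).then_some((name.trim(), index))
}

fn main() {
    assert_eq!(parse_meta_column("#notetype column:1"), Some(("notetype column", 1)));
    assert_eq!(parse_meta_column("#deck column:2"), Some(("deck column", 2)));
    assert_eq!(parse_meta_column("no hash"), None);
}
```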
Damien Elmes
4515c41d2c
Backup improvements (#1728)
* Collection needs to be closed prior to backup even when not downgrading

* Backups -> BackupLimits

* Some improvements to backup_task

- backup_inner now returns the error instead of logging it, so that
the frontend can discover the issue when they await a backup (or create
another one)
- start_backup() was acquiring backup_task twice, and if another thread
started a backup between the two locks, the task could have been accidentally
overwritten without awaiting it

* Backups no longer require a collection close

- Instead of closing the collection, we ensure there is no active
transaction, and flush the WAL to disk. This means the undo history
is no longer lost on backup, which will be particularly useful if we
add a periodic backup in the future.
- Because a close is no longer required, backups are now achieved with
a separate command, instead of being included in CloseCollection().
- Full sync no longer requires an extra close+reopen step, and we now
wait for the backup to complete before proceeding.
- Create a backup before 'check db'

* Add File>Create Backup

https://forums.ankiweb.net/t/anki-mac-os-no-backup-on-sync/6157

* Defer checkpoint until we know we need it

When running periodic backups on a timer, we don't want to be fsync()ing
unnecessarily.

* Skip backup if modification time has not changed

We don't want the user leaving Anki open overnight, and coming back
to lots of identical backups.

* Periodic backups

Creates an automatic backup every 30 minutes if the collection has been
modified.

If there's a legacy checkpoint active, tries again 5 minutes later.

* Switch to a user-configurable backup duration

CreateBackup() now uses a simple force argument to determine whether
the user's limits should be respected or not, and only potentially
destructive ops (full download, check DB) override the user's configured
limit.

I considered having a separate limit for collection close and automatic
backups (eg keeping the previous 5 minute limit for collection close),
but that had two downsides:

- When the user closes their collection at the end of the day, they'd
get a recent backup. When they open the collection the next day, it
would get backed up again within 5 minutes, even though not much had
changed.
- Multiple limits are harder to communicate to users in the UI

Some remaining decisions I wasn't 100% sure about:

- If force is true but the collection has not been modified, the backup
will be skipped. If the user manually deleted their backups without
closing Anki, they wouldn't get a new one if the mtime hadn't changed.
- Force takes precedence over the configured backup interval - should
we be ignoring the user here, or taking no backups at all?

Did a sneaky edit of the existing ftl string, as it hasn't been live
long.

* Move maybe_backup() into Collection

* Use a single method for manual and periodic backups

When manually creating a backup via the File menu, we no longer make
the user wait until the backup completes. As we continue waiting for
the backup in the background, if any errors occur, the user will get
notified about it fairly quickly.

* Show message to user if backup was skipped due to no changes

+ Don't incorrectly assert a backup will be created on force

* Add "automatic" to description

* Ensure we back up prior to importing colpkg if collection open

The backup doesn't happen when invoked from 'open backup' in the profile
screen, which matches Anki's previous behaviour. The user could
potentially clobber up to 30 minutes of their work if they exited to
the profile screen and restored a backup, but the alternative is we
create backups every time a backup is restored, which may happen a number
of times if the user is trying various ones. Or we could go back to a
separate throttle amount for this case, at the cost of more complexity.

* Remove the 0 special case on backup interval; minimum of 5 minutes

https://github.com/ankitects/anki/pull/1728#discussion_r830876833
2022-03-21 19:40:42 +10:00
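A condensed sketch of the skip logic described above: back up only if the collection changed since the last backup, and (unless forced) only if the user-configured minimum interval has elapsed. Names and types are assumptions, not the actual implementation:

```rust
use std::time::{Duration, SystemTime};

// State about the previous backup (hypothetical shape).
struct LastBackup {
    taken_at: SystemTime,
    col_mtime_then: SystemTime,
}

fn should_backup(
    col_mtime: SystemTime,
    last: Option<&LastBackup>,
    min_interval: Duration,
    force: bool,
) -> bool {
    let Some(last) = last else { return true };
    // even a forced backup is skipped if nothing changed
    let modified = col_mtime > last.col_mtime_then;
    if force {
        return modified;
    }
    let elapsed = last.taken_at.elapsed().unwrap_or(Duration::MAX);
    modified && elapsed >= min_interval
}
```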
RumovZ
f3c8857421
Backups (#1685)
* Add zstd dep

* Implement backend backup with zstd

* Implement backup thinning

* Write backup meta

* Use new file ending anki21b

* Asynchronously backup on collection close in Rust

* Revert "Add zstd dep"

This reverts commit 3fcb2141d2be15f907269d13275c41971431385c.

* Add zstd again

* Take backup col path from col struct

* Fix formatting

* Implement backup restoring on backend

* Normalize restored media file names

* Refactor `extract_legacy_data()`

A bit cumbersome due to borrowing rules.

* Refactor

* Make thinning calendar-based and gradual

* Consider last kept backups of previous stages

* Import full apkgs and colpkgs with backend

* Expose new backup settings

* Test `BackupThinner` and make it deterministic

* Make backup_path optional when closing

* Delete leaky timer

* Add progress updates for restoring media

* Write restored collection to tempfile first

* Do collection compression in the background thread

This has us currently storing an uncompressed and compressed copy of
the collection in memory (not ideal), but means the collection can be
closed without waiting for compression to complete. On a large collection,
this takes a close and reopen from about 0.55s to about 0.07s. The old
backup code for comparison: about 0.35s for compression off, about
8.5s for zip compression.

* Use multithreading in zstd compression

On my system, this reduces the compression time of a large collection
from about 0.55s to 0.08s.

* Stream compressed collection data into zip file

* Tweak backup explanation

+ Fix incorrect tab order for ignore accents option

* Decouple restoring backup and full import

In the first case, no profile is opened unless the new collection
loads successfully.
In the second case, either the old collection is reloaded or the new one
is loaded.

* Fix number gap in Progress message

* Don't revert backup when media fails but report it

* Tweak error flow

* Remove native BackupLimits enum

* Fix type annotation

* Add thinning test for whole year

* Satisfy linter

* Await async backup to finish

* Move restart disclaimer out of backup tab

Should be visible regardless of the current tab.

* Write restored collection in chunks

* Refactor

* Write media in chunks and refactor

* Log error if removing file fails

* join_backup_task -> await_backup_completion

* Refactor backup.rs

* Refactor backup meta and collection extraction

* Fix wrong error being returned

* Call sync_all() on new collection

* Add ImportError

* Store logger in Backend, instead of creating one on demand

init_backend() accepts a Logger rather than a log file, to allow other
callers to customize the logger if they wish.

In the future we may want to explore using the tracing crate as an
alternative; it's a bit more ergonomic, as a logger doesn't need to be
passed around, and it plays more nicely with async code.

* Sync file contents prior to rename; sync folder after rename.

* Limit backup creation to once per 30 min

* Use zstd::stream::copy_decode

* Make importing abortable

* Don't revert if backup media is aborted

* Set throttle implicitly

* Change force flag to minimum_backup_interval

* Don't attempt to open folders on Windows

* Join last backup thread before starting new one

Also refactor.

* Disable auto sync and backup when restoring again

* Force backup on full download

* Include the reason why a media file import failed, and the file path

- Introduce a FileIoError that contains a string representation of
the underlying I/O error, and an associated path. There are a few
places in the code where we're currently manually including the filename
in a custom error message, and this is a step towards a more consistent
approach (but we may be better served with a more general approach in
the future similar to Anyhow's .context())
- Move the error message into importing.ftl, as it's a bit neater
when error messages live in the same file as the rest of the messages
associated with some functionality.

* Fix importing of media files

* Minor wording tweaks

* Save an allocation

I18n strings with replacements are already strings, so we can skip the
extra allocation. Not that it matters here at all.

* Terminate import if file missing from archive

If a third-party tool is creating invalid archives, the user should know
about it. This should be rare, so I did not attempt to make it
translatable.

* Skip multithreaded compression on small collections

Co-authored-by: Damien Elmes <gpg@ankiweb.net>
2022-03-07 15:11:31 +10:00
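The zstd crate APIs mentioned in this commit make for a short sketch. This is not Anki's backup code, just the general shape: multithreaded compression via the encoder (skipped for small collections, per the final commit above), and restore via `zstd::stream::copy_decode`:

```rust
use std::fs::File;
use std::io;

fn compress_collection(src: &str, dst: &str) -> io::Result<()> {
    let mut input = File::open(src)?;
    let output = File::create(dst)?;
    let mut encoder = zstd::Encoder::new(output, 0)?; // 0 = default level
    // worker threads; requires the crate's multithreading support
    encoder.multithread(4)?;
    io::copy(&mut input, &mut encoder)?;
    encoder.finish()?;
    Ok(())
}

fn restore_collection(src: &str, dst: &str) -> io::Result<()> {
    let input = File::open(src)?;
    let mut output = File::create(dst)?;
    zstd::stream::copy_decode(input, &mut output)
}
```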
RumovZ
9aca778a93
Backend Custom Study (#1600)
* Implement custom study on backend

* Switch frontend to backend custom study

* Skip typecheck for new pb classes

* Build tag search string on backend

Also fixes escaping of special characters in tag names.

* `cram.cards` -> `cram.card_limit`

* Assign more meaningful names in `TagLimit`

* Broaden rustfmt glob

* Use `invalid_input()` helper

* Assign `FilteredDeckForUpdate` to temp var

* Implement `SearchBuilder`

* Rewrite `custom_study()` with `SearchBuilder`

* Replace match macros with `SearchBuilder`

* Remove `into_nodes_list` & `concatenate_searches`
2022-01-20 14:25:22 +10:00
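As a sketch of "Build tag search string on backend" and the escaping fix it mentions — illustrative only; the escaped character set and quoting here are assumptions, not Anki's actual search grammar:

```rust
// Escape characters that are special in the search syntax (assumed set).
fn escape_tag(tag: &str) -> String {
    let mut out = String::with_capacity(tag.len());
    for ch in tag.chars() {
        if matches!(ch, '"' | '*' | '_' | '\\') {
            out.push('\\');
        }
        out.push(ch);
    }
    out
}

// OR together tag: terms for the selected tags.
fn tag_search(tags: &[&str]) -> String {
    let terms: Vec<String> = tags
        .iter()
        .map(|t| format!("\"tag:{}\"", escape_tag(t)))
        .collect();
    format!("({})", terms.join(" OR "))
}

fn main() {
    assert_eq!(tag_search(&["hard", "a_b"]), r#"("tag:hard" OR "tag:a\_b")"#);
}
```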
RumovZ
f2f19e8b45 Remove native HelpPage enum
Also remove oneof from pb enum and handle strs in Python.
2021-07-22 16:32:49 +02:00
Damien Elmes
f649f6c92a minor tidyup in protobuf build script 2021-07-12 16:15:38 +10:00
Damien Elmes
616db33c0e refactor protobuf handling for split/import
In order to split backend.proto into a more manageable size, the protobuf
handling needed to be updated. This took more time than I would have
liked, as each language handles protobuf differently:

- The Python Protobuf code ignores "package" directives, and relies
solely on how the files are laid out on disk. While it would have been
nice to keep the generated files in a private subpackage, Protobuf gets
confused if the files are located in a location that does not match
their original .proto layout, so the old approach of storing them in
_backend/ will not work. They now clutter up pylib/anki instead. I'm
rather annoyed by that, but alternatives seem to be having to add an extra
level to the Protobuf path, making the other languages suffer, or trying
to hack around the issue by munging sys.modules.
- Protobufjs fails to expose packages if they don't start with a capital
letter, despite the fact that lowercase packages are the norm in most
languages :-( This required a patch to fix.
- Rust was the easiest, as Prost is relatively straightforward compared
to Google's tools.

The Protobuf files are now stored in /proto/anki, with a separate package
for each file. I've split backend.proto into a few files as a test, but
the majority of that work is still to come.

The Python Protobuf building is a bit of a hack at the moment, hard-coding
"proto" as the top level folder, but it seems to get the job done for now.

Also changed the workspace name, as there seems to be a number of Bazel
repos moving away from the more awkward reverse DNS naming style.
2021-07-10 19:17:05 +10:00
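On the Rust side, the commit notes Prost was the easiest to adapt. A build-script sketch of compiling per-package files under proto/ (the file names are placeholders):

```rust
// build.rs sketch: compile .proto files with prost-build, using the
// proto/ root as the include path so `package anki.*` imports resolve.
fn main() -> std::io::Result<()> {
    prost_build::compile_protos(
        &["proto/anki/backend.proto", "proto/anki/fluent.proto"],
        &["proto"],
    )
}
```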
Damien Elmes
80b98e0db8 move protobuf into separate folder in preparation for multiple files 2021-07-09 21:02:40 +10:00
Damien Elmes
64ebc32b3d tidy up Rust imports
rustfmt can do this automatically, but only when run with a nightly
toolchain, so it needs to be manually done for now - see rslib/rustfmt.toml
2021-04-18 18:38:54 +10:00
Damien Elmes
c4b3ab62c8 embed deck messages 2021-04-04 21:41:16 +10:00
Damien Elmes
e73359510d move filtered deck labels to backend
- use strum to generate an iterator for the protobuf enum so we don't
forget to add new labels if extending in the future
- no add-ons appear to be using dynOrderLabels(), so it has been removed

@RumovZ perhaps a similar approach might work for listing the available
browser columns as well?
2021-04-01 23:53:38 +10:00
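The strum approach mentioned above, sketched with placeholder variants (not the actual filtered-deck order list):

```rust
use strum::IntoEnumIterator;
use strum_macros::EnumIter;

// Deriving an iterator over the enum means a newly added variant is
// automatically included when building the label list.
#[derive(Debug, EnumIter)]
enum FilteredDeckOrder {
    OldestReviewedFirst,
    Random,
    IntervalsAscending,
}

fn order_labels() -> Vec<String> {
    FilteredDeckOrder::iter()
        .map(|order| format!("{order:?}")) // real code would map to ftl strings
        .collect()
}

fn main() {
    println!("{:?}", order_labels());
}
```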
Damien Elmes
7df128a103 fix changes to .ftl and .proto files not being picked up by 'cargo check' 2021-04-01 22:29:54 +10:00
Damien Elmes
094e4ad461 crate::err -> crate::error 2021-04-01 16:07:13 +10:00
Damien Elmes
9aece2a7b8 rework translation handling
Instead of generating a fluent.proto file with a giant enum, create
a .json file representing the translations that downstream consumers
can use for code generation.

This enables the generation of a separate method for each translation,
with a docstring that shows the actual text, and any required arguments
listed in the function signature.

The codebase is still using the old enum for now; updating it will need
to come in future commits, and the old enum will need to be kept
around, as add-ons are referencing it.

Other changes:

- move translation code into a separate crate
- store the translations on a per-file/module basis, which will allow
us to avoid sending 1000+ strings on each JS page load in the future
- drop the undocumented support for external .ftl files, which we weren't
using
- duplicate strings in translation files are now checked for at build
time
- fix i18n test failing when run outside Bazel
- drop slog dependency in i18n module
2021-03-26 09:41:32 +10:00
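The payoff of the per-translation codegen described above is methods like the following sketch; the method name, key, and signature are invented for illustration:

```rust
pub struct I18n;

impl I18n {
    fn translate(&self, key: &str, args: &[(&str, String)]) -> String {
        // stub: look up the Fluent message for `key` and format `args`
        let _ = (key, args);
        String::new()
    }

    /// "Studied {$count} cards today" — the docstring shows the source
    /// text, and required arguments appear in the signature.
    pub fn studied_cards_today(&self, count: i64) -> String {
        self.translate("statistics-studied-today", &[("count", count.to_string())])
    }
}
```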
Damien Elmes
4bd120cc4b split out remaining rpc methods
@david-allison-1 note this also changes the method index to start at
0 instead of 1
2021-03-11 17:04:32 +10:00
Damien Elmes
1b8d6c6e85 split out sync, notetypes and config code 2021-03-11 15:47:31 +10:00
Damien Elmes
5df684fa6b rework backend codegen to support multiple services; split out sched
Rust requires all methods of a trait impl to live in a single file, which
means we had a giant backend/mod.rs covering all exposed methods. By
using separate service definitions for the separate areas, and updating
the code generation, we can split it into more manageable chunks -
this commit starts with the scheduling code.

In the long run, we'll probably want to split up the protobuf file into
multiple files as well.

Also dropped want_release_gil() from rsbridge, and the associated method
enum. While it allows us to skip the thread save/restore and mutex unlock/
lock, it looks to only be buying about 2.5% extra performance in the
best case (tested with timeit+format_timespan), and the majority of
the backend methods deal with I/O, and thus were already releasing the
GIL.
2021-03-11 14:51:29 +10:00
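A sketch of the service split (service and method names hypothetical): one generated trait per area, each implementable in its own file, with the dispatcher routing on service and method indices:

```rust
pub struct Backend;

// One trait per service area; impl blocks can live in separate files.
pub trait SchedulerService {
    fn congrats_info(&self) -> Vec<u8>;
}

pub trait ConfigService {
    fn get_config_json(&self) -> Vec<u8>;
}

impl SchedulerService for Backend {
    fn congrats_info(&self) -> Vec<u8> {
        Vec::new() // stub
    }
}

impl ConfigService for Backend {
    fn get_config_json(&self) -> Vec<u8> {
        Vec::new() // stub
    }
}

// Generated dispatcher: route on (service, method) indices.
fn run_method(backend: &Backend, service: u32, method: u32) -> Option<Vec<u8>> {
    match (service, method) {
        (0, 0) => Some(backend.congrats_info()),
        (1, 0) => Some(backend.get_config_json()),
        _ => None,
    }
}
```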
Arthur Milchior
8b5ae7d7c5 NF: add AGPL licence missing in some files
I noticed this when looking at some files now used in AnkiDroid, wanting to be sure we clearly indicate that we have
AGPLv3 code linked in the app
2021-01-31 21:50:21 +01:00
Damien Elmes
ebeae9a5a0 don't pass BUILDINFO into build script
It was causing the build script to be recompiled each time a commit was
made, even though buildinfo.txt was not changing.
2020-12-21 16:04:29 +10:00
Damien Elmes
d7cded4ae1 fix compilation of rslib outside Bazel
fixes code completion
2020-11-24 18:51:19 +10:00
Damien Elmes
fcb3283a9d move ftl into top level ftl/ folder; make it source of truth for aqt
This avoids the need to modify the external repo before new strings
can be used in aqt.
2020-11-18 16:20:58 +10:00
Damien Elmes
f25af77122 fixes for consuming rust lib from external repo 2020-11-04 19:20:49 +10:00
Damien Elmes
3c12cb1600 update to latest fluent libs, and integrate maximum digit handling
We now limit number of digits in our formatter, instead of relying
on an upstream patch.
2020-11-03 14:10:45 +10:00
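A standalone sketch of the digit limiting; the real version lives inside the Fluent-based number formatter, this just shows the idea:

```rust
// Format with at most `max_digits` fractional digits, then trim
// trailing zeros and a dangling decimal point.
fn format_number(n: f64, max_digits: usize) -> String {
    let s = format!("{n:.max_digits$}");
    if s.contains('.') {
        s.trim_end_matches('0').trim_end_matches('.').to_string()
    } else {
        s
    }
}

fn main() {
    assert_eq!(format_number(2.5000001, 2), "2.5");
    assert_eq!(format_number(3.0, 2), "3");
}
```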
Damien Elmes
d36162bd49 clippy lint 2020-11-01 16:19:08 +10:00
Damien Elmes
0cf964b16d trailing newline .ftl check can happen at build time
Removes the need to build ripgrep for CI
2020-11-01 14:59:45 +10:00
Damien Elmes
aea0a6fcc6 initial Bazel conversion
Running and testing should be working on the three platforms, but
there's still a fair bit that needs to be done:

- Wheel building + testing in a venv still needs to be implemented.
- Python requirements still need to be compiled with pip-tools and pinned;
need to compile on all platforms, then merge
- Cargo deps in cargo/ and rslib/ need to be cleaned up, and ideally
unified into one place
- Currently using rustls to work around openssl compilation issues
on Linux, but this will break corporate proxies with custom SSL
authorities; need to conditionally use openssl or use
https://github.com/seanmonstar/reqwest/pull/1058
- Makefiles and docs still need cleaning up
- It may make sense to reparent ts/* to the top level, as we don't
nest the other modules under a specific language.
- rspy and pylib must always be updated in lock-step, so merging
rspy into pylib as a private module would simplify things.
- Merging desktop-ftl and mobile-ftl into the core ftl would make
managing and updating translations easier.
- Obsolete scripts need removing.
- And probably more.
2020-11-01 14:26:58 +10:00