Yes, and what I’m saying is that it would be expensive compared to not having to do it.
Doing OCR on a very specific format, in a small specific area, with a character set of only 9 characters and a list of all possible results, is not really the same problem at all.
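To make the difference concrete, here is a toy sketch (the alphabet and candidate list are invented, not from any real system): with a tiny alphabet and a finite list of valid results, most of the work is snapping noisy output to the nearest known candidate, which general-purpose OCR cannot assume.

```python
import difflib

# Hypothetical constrained-OCR post-processing.
ALLOWED = set("123456789")  # the "only 9 characters" from above

def snap_to_candidate(raw_ocr: str, candidates: list[str]) -> str | None:
    """Drop impossible characters, then pick the closest known result."""
    cleaned = "".join(ch for ch in raw_ocr if ch in ALLOWED)
    matches = difflib.get_close_matches(cleaned, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else None

# e.g. a misread "7l23" (stray "l") still maps to a valid result
print(snap_to_candidate("7l23", ["7123", "4567", "8912"]))  # -> "7123"
```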
How many billion times do you generally do that, and how is battery life after?
Cryptographically signed documents and Matrix?
I think you are replying to the wrong person?
I did not say it helps with accuracy. I did not say LLMs will get better. I did not even say we should use LLMs.
But even if I did, none of your points are relevant for the Firefox use case.
Wikipedia is no less reliable than other content. There’s even academic research about it (no, I will not dig for sources now, so feel free to not believe it). But factual correctness only matters for models that deal with facts: for e.g. a translation model it does not matter.
Reddit has a massive amount of user-generated content it owns, e.g. comments. Again, the factual correctness only matters in some contexts, not all.
I’m not sure why you keep mentioning LLMs since that is not what is being discussed. Firefox has no plans to use some LLM to generate content where facts play an important role.
At horrendous expense, yes. Using it for OCR makes little sense. And compared to just sending the text directly, even OCR is expensive.
The issue is not sending, it is receiving. With a fax you need to do some OCR to extract the text, which you can then feed into e.g. an AI.
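A minimal sketch of that receiving side, assuming pytesseract and a fax page saved as an image (the filename is a placeholder):

```python
from PIL import Image    # pip install pillow
import pytesseract       # pip install pytesseract (requires the tesseract binary)

# A received fax is just a raster image; OCR turns it back into text.
page = Image.open("received_fax.png")     # placeholder filename
text = pytesseract.image_to_string(page)

# It is this string, not the image, that you would feed into an AI.
print(text)
```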
What do you mean “full set of data”?
Obviously you cannot train on 100% of material ever created, so you pick a subset. There is a lot of permissively licensed content (e.g. Wikipedia) and content you can license (e.g. Reddit). While not sufficient for an advanced LLM, it certainly is for smaller models that do not need wide knowledge.
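As a toy illustration of “picking a subset” (the record format and license allow-list here are invented for the example):

```python
# Invented corpus format: real pipelines track license metadata per document.
PERMISSIVE = {"CC-BY-SA-4.0", "CC0-1.0", "MIT"}  # hypothetical allow-list

corpus = [
    {"text": "An encyclopedia article.", "license": "CC-BY-SA-4.0"},   # Wikipedia-like
    {"text": "A licensed comment.", "license": "reddit-license"},      # content you licensed
    {"text": "A random scraped page.", "license": "all-rights-reserved"},
]

ALLOWED_LICENSES = PERMISSIVE | {"reddit-license"}
training_subset = [doc for doc in corpus if doc["license"] in ALLOWED_LICENSES]
print(len(training_subset))  # 2: only the content you may actually train on
```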
I’d say the main differences are at least
Feel free to assume that, but don’t claim an assumption as a fact.
You recommended using native package managers. How many of them have been audited?
You know what else we shouldn’t assume? That it doesn’t have a security feature. And we then additionally shouldn’t go around posting that incorrect assumption as if it were a fact. You know, like you did.
There is no general copyright issue with AIs. It depends entirely on the training material (if even that), so it’s not possible to make blanket statements like that. Banning a technology because a particular implementation is problematic makes no sense.
I’m confused why you think it would be anything else, and why you are so dead set on this. Repos include a signing key. There is an option to skip signature checking. And you think that signature checking is not used during downloads, despite this?
Ok, here are a few issues related to signatures being checked by default when downloading:
https://github.com/flatpak/flatpak/issues/4836
https://github.com/flatpak/flatpak/issues/5657
https://github.com/flatpak/flatpak/issues/3769
https://github.com/flatpak/flatpak/issues/5246
https://askubuntu.com/questions/1433512/flatpak-cant-check-signature-public-key-not-found
https://stackoverflow.com/questions/70839691/flatpak-not-working-apparently-gpg-issue
Flatpak repos are signed and the signature is checked when downloading.
It’s OK to be wrong. Dying on this hill seems pretty weird to me.
From the page:
It is recommended that OSTree repositories are verified using GPG whenever they are used. However, if you want to disable GPG verification, the --no-gpg-verify option can be used when a remote is added.
That is talking about downloading as well. Yes, you can turn it off, but you can usually do the same with native package managers, e.g. pacman: https://wiki.archlinux.org/title/Pacman/Package_signing
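For reference, both opt-outs side by side (the remote name and URL are placeholders; the pacman lines come from its config file):

```sh
# Flatpak: GPG verification is on by default; disabling it is an explicit opt-out.
flatpak remote-add --no-gpg-verify example-repo https://example.com/repo

# pacman: signature checking is controlled by SigLevel in /etc/pacman.conf.
# SigLevel = Required DatabaseOptional   # the usual default
# SigLevel = Never                       # disables verification entirely
```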
That doesn’t seem to be true? https://flatpak-testing.readthedocs.io/en/latest/distributing-applications.html#gpg-signatures
In what way don’t they “securely download”?
Do you happen to know where? Searching seems to give no results.
In theory, if you have the inputs, you have reproducible outputs, modulo perhaps some small deviations due to non-deterministic parallelism. But if those effects are large enough to make your model perform differently, you already have bigger issues, no different than if a piece of software behaved differently each time it was compiled.
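For instance, in PyTorch you can pin the controllable sources of randomness (a sketch; some parallel GPU kernels can still introduce the small deviations mentioned above):

```python
import torch

# Same seed + same inputs => reproducible outputs, modulo nondeterministic kernels.
torch.manual_seed(0)
torch.use_deterministic_algorithms(True)  # raise on known-nondeterministic ops

x = torch.randn(8, 3)
layer = torch.nn.Linear(3, 2)
print(layer(x).sum().item())  # identical across runs on the same versions/hardware
```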
They can be, and are being made. E.g. the state of accessibility in GNOME.