this post was submitted on 03 Aug 2025
549 points (93.6% liked)
you are viewing a single comment's thread
Both your take and the author's suggest no understanding of how LLMs work. At all.
At some point, yes, an LLM has to process cleartext tokens. There's no getting around that. Anyone who builds a 30-billion-parameter model that can run inference on encrypted input will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally.

Lumo has limitations, but it goes farther than duck.ai at respecting privacy. Your threat model and your equipment mean YOU make the decision for YOUR needs. This is an option; it's not trying to be one size fits all. You don't HAVE to use it. It's not being forced down your throat like Gemini or Copilot.
And their LLMs: Mistral, OpenHands, and OLMo, all open source. It's in their documentation, so this article is straight-up lying about that. Like... did Google write this article? It's simply propaganda.
Also, Proton does have some circumstances where it lets you decrypt your own email locally; otherwise it would be basically impossible to search your email for text in the message body. They already had that as an option, and if users want AI assistants, that's obviously the bridge. But it's not a default setup; it's an option you have to turn on yourself. It's not for everyone, some users want it, and it's not forced on anyone. Chill TF out.
If an AI can work on encrypted data, it's not encrypted.
SMH
No one is saying it's encrypted when processed, because that's not a thing that exists.
End-to-end encryption of an interaction with a chatbot would mean the company never decrypts your messages to it: it operates on the encrypted text, gets an encrypted response that only you can decrypt, and sends that back to you. You then decrypt the response.

So yes, it would require operating on encrypted data.
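To make that concrete, here's a toy sketch (a throwaway XOR cipher standing in for real encryption, not any actual product's scheme): once the prompt is encrypted client-side, the server holds only meaningless bytes, so a tokenizer or model can do nothing useful with it unless it first gets the key.

```python
import secrets

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR each byte with a random key byte.
    # XOR is its own inverse, so the same function also decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

prompt = b"What is the capital of France?"
key = secrets.token_bytes(len(prompt))  # key stays on the client
ciphertext = xor_encrypt(prompt, key)

# This is all the server would ever see under true end-to-end encryption:
# random-looking bytes with no recoverable words for a tokenizer to use.
print(ciphertext)

# Only the client, holding the key, can recover the plaintext.
assert xor_encrypt(ciphertext, key) == prompt
```

Computing on the ciphertext itself is the domain of homomorphic encryption, which is nowhere near practical for a 30B-parameter model — which is the whole point being made above.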
The documentation says traffic is TLS-encrypted up to the LLM's context window; the model processes it, and the context-window output goes back to you over TLS.

As long as only Proton's servers terminate the TLS tunnel, the LLM runs on their servers, and, much like a VPN, they keep no logs, I don't see what the actual problem is here.
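A minimal sketch of that flow (the function names here are illustrative, not Proton's API, and hex encoding stands in for TLS): encryption protects the prompt in transit, but the server end of the tunnel must decrypt, so plaintext necessarily exists on the server while the model runs.

```python
# Toy model of the described flow: TLS protects data in transit,
# but the receiving end of the tunnel decrypts before the model runs.

def tls_send(plaintext: str) -> str:
    # Stand-in for TLS: encrypt for transit, decrypt at the far end.
    ciphertext = plaintext.encode().hex()       # "encrypted" in transit
    return bytes.fromhex(ciphertext).decode()   # decrypted at the endpoint

def run_llm(context_window: str) -> str:
    # The model only ever operates on decrypted text in its context window.
    return f"echo: {context_window}"

prompt = "summarize my notes"
server_side_plaintext = tls_send(prompt)   # plaintext exists server-side here
response = run_llm(server_side_plaintext)
reply_to_client = tls_send(response)       # protected again for the trip back
print(reply_to_client)  # -> "echo: summarize my notes"
```

So the privacy guarantee rests entirely on trusting what happens between the two `tls_send` calls — which is exactly the no-logs trust model of a VPN.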