I'm a little curious how the AI version manages any real accuracy. A 1M-token context window sounds like a lot, but the whole book has to be tokenized and fed into the prompt behind the scenes with every question. Every time you ask something, the LLM isn't just processing your question; it gets the entire prompt plus the book, plus all of your previous conversation for the session. If you're asking it questions about a Dostoevsky novel, you'll probably fill the context window fast, if not immediately, and then it just starts hallucinating answers because it can't process all that context.
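For a rough sense of scale (back-of-envelope only, assuming the common ~4-characters-per-token heuristic for English; the book length is a ballpark figure, not a measurement):

```python
# Naive "chat with your book" loop: the whole book plus the full
# conversation history gets resent with every turn.

CHARS_PER_TOKEN = 4  # rough heuristic for English text

def estimate_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

# The Brothers Karamazov is on the order of 2 million characters.
book = "x" * 2_000_000               # stand-in for the full book text
history: list[tuple[str, str]] = []  # (question, answer) pairs this session

def prompt_tokens(question: str) -> int:
    past = sum(estimate_tokens(q) + estimate_tokens(a) for q, a in history)
    return estimate_tokens(book) + past + estimate_tokens(question)

print(prompt_tokens("Who killed Fyodor Karamazov?"))  # ~500,000 tokens
```

That's half of a 1M-token window spent before the first answer even comes back, and every answer gets appended to `history` and resent on the next turn too.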
If they're doing something fancy with tokenization, or some kind of memory scheme, then it seems like that belongs in a standalone program. But it also says they're using local LLMs on your computer? Those are going to have small context windows for sure. It seems like bloat as well: to run those models locally you have to download the weights, and those are several GB in size. I don't want my e-book library management software doing that.
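The usual way around the window problem (and I'm only guessing that's what "something fancy" would mean here) is retrieval: chunk the book, score the chunks against the question, and send only the best few. A minimal sketch, using naive keyword overlap where a real system would use vector embeddings:

```python
def chunk(text: str, size: int = 2000) -> list[str]:
    # Split the book into fixed-size character chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(chunk_text: str, question: str) -> int:
    # Crude relevance: how many question words appear in the chunk.
    q_words = set(question.lower().split())
    return sum(1 for w in chunk_text.lower().split() if w in q_words)

def build_prompt(book: str, question: str, top_k: int = 4) -> str:
    best = sorted(chunk(book), key=lambda c: score(c, question), reverse=True)
    context = "\n---\n".join(best[:top_k])
    # ~8k characters (~2k tokens) of book text per question,
    # no matter how long the book is.
    return f"Context:\n{context}\n\nQuestion: {question}"
```

That keeps each prompt small enough for the tiny context windows local models actually have, but it's a whole retrieval pipeline, which is exactly why it feels out of scope for library management software.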
It looks like it can just interface with a different program that someone would have to set up, find weights for, and get working themselves.
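Which would make it a thin client: the e-book manager just speaks HTTP to whatever local inference server you've already set up (llama.cpp's server and Ollama both expose an OpenAI-compatible endpoint). The URL and model name below are placeholders for whatever you configured yourself:

```python
import json
import urllib.request

# Assumed local server address; llama.cpp's server defaults to :8080,
# Ollama to :11434. Adjust to your own setup.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local-model",  # placeholder for whatever weights you fetched
        "messages": [{"role": "user", "content": "Summarize chapter 3."}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```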