this post was submitted on 14 Apr 2025
35 points (100.0% liked)
technology
24038 readers
131 users here now
On the road to fully automated luxury gay space communism.
Spreading Linux propaganda since 2020
- Ways to run Microsoft/Adobe and more on Linux
- The Ultimate FOSS Guide For Android
- Great libre software on Windows
- Hey you, the lib still using Chrome. Read this post!
Rules:
- 1. Obviously abide by the sitewide code of conduct. Bigotry will be met with an immediate ban
- 2. This community is about technology. Offtopic is permitted as long as it is kept in the comment sections
- 3. Although this is not /c/libre, FOSS related posting is tolerated, and even welcome in the case of effort posts
- 4. We believe technology should be liberating. As such, avoid promoting proprietary and/or bourgeois technology
- 5. Explanatory posts to correct the potential mistakes a comrade made in a post of their own are allowed, as long as they remain respectful
- 6. No crypto (Bitcoin, NFT, etc.) speculation, unless it is purely informative and not too cringe
- 7. Absolutely no tech bro shit. If you have a good opinion of Silicon Valley billionaires please manifest yourself so we can ban you.
founded 5 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
This was why, in spite of it all, I had a brief glimmer of hope for DeepSeek -- it's designed to reveal both its sources and the process by which it reaches its regurgitated conclusions, since it was meant to be an open-source research aid rather than a proprietary black-box chatbot.
Anthropic's latest research shows the chain of thought reasoning shown isn't trustworthy anyway. It's for our benefit, and doesn't match the actual reasoning used internally 1:1.
As you say, hallucinating can be solved by adding meta-awareness. Seems likely to me we'll be able to patch the problem eventually. We're just starting to understand why these models hallucinate in the first place.