this post was submitted on 06 Apr 2026
technology
So we're back to expert systems with a sprinkling of LLM/Transformer architecture...
The tech bros will hate this because by its design it can never become "AGI", since it's applying the model to a hyper specific domain. Not that their architecture could ever actually achieve independent "intelligence", but it's easier to sell when the model is trained to appear as a generalized problem solver instead of a domain specific one.
I don't get the impression that the goal is to apply the model to a hyperspecific domain; rather, the idea seems to be to use a symbolic logic engine within a dynamic context created by the LLM. Traditionally, the problem with symbolic AI has been creating the ontologies. You obviously can't have a comprehensive ontology of the world, because it's inherently context dependent, and there are infinitely many ways to contextualize things. What neurosymbolics does is use LLMs for what they're good at, which is classifying noisy data from the outside world and building a dynamic context. Once that's done, it's perfectly possible to use a logic engine to solve problems within that context. The goal here is to optimize a particular set of tasks which can be expressed as a set of logical steps.
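To make the idea concrete, here's a minimal sketch of that pipeline. Everything here is illustrative: the `classify` function is a keyword stub standing in for the LLM stage (a real system would call a model to extract facts from noisy text), and the rules are a tiny hand-written ontology fragment fed to a forward-chaining logic engine.

```python
# Toy neurosymbolic pipeline:
#   1. "LLM" stage turns noisy text into symbolic facts
#   2. a forward-chaining rule engine reasons over those facts

def classify(text):
    """Stand-in for the LLM stage: map noisy input to symbolic facts.
    A real system would prompt a language model here."""
    facts = set()
    text = text.lower()
    if "meow" in text or "whiskers" in text:
        facts.add(("is", "subject", "cat"))
    if "bark" in text:
        facts.add(("is", "subject", "dog"))
    return facts

# Hypothetical ontology fragment for this narrow context:
# (premise, conclusion) pairs for the forward chainer.
RULES = [
    (("is", "subject", "cat"), ("is", "subject", "mammal")),
    (("is", "subject", "dog"), ("is", "subject", "mammal")),
    (("is", "subject", "mammal"), ("is", "subject", "animal")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts are derived (fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain(classify("It meowed at the door"), RULES)
print(("is", "subject", "animal") in facts)  # the engine derived cat -> mammal -> animal
```

The division of labor is the point: the fuzzy, context-dependent part (going from raw text to symbols) is handled by the neural side, while the deterministic derivation happens in a logic engine you can actually inspect and trust.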
That's super cool. I've always thought it was backwards that we're using LLMs to add complexity to prompts. They should be used to reduce complexity by recognizing and factoring out patterns.
yeah exactly