this post was submitted on 06 Apr 2026
28 points (100.0% liked)
technology
I really think that an LLM coupled with a logic engine running in a REPL environment could be an amazing thing.
I definitely see the value in verifiable, programmable logic code as part of the LLM "thinking" loop, which I think is probably one of the more valuable discoveries of LLM usage.
Taking the embodied form of language (billions of parameters) and coupling it with some Prolog thing, so that the "mental" logic is sound rather than mere linguistic interlocution, could lead to interesting stuff.
I was always partial to the symbolic AI folks, they were just early in my book.
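To make the division of labor concrete, here's a toy sketch in Python of what that loop might look like: the LLM's job would be translating messy language into atoms and Horn-style rules, and a tiny forward chainer (standing in here for a real Prolog engine) does the actual deduction, so every conclusion is mechanically checkable. The engine and all the names are purely illustrative, not anyone's actual implementation.

```python
def forward_chain(facts, rules):
    """Derive every atom reachable from `facts` via `rules`.

    facts: a set of ground atoms, e.g. {"human(socrates)"}
    rules: a list of (premises, conclusion) pairs, where premises is a
           set of atoms that must all hold for the conclusion to fire.
    Fires rules repeatedly until no new atom can be derived (fixpoint).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Pretend the LLM translated a word problem into this symbolic form:
facts = {"human(socrates)"}
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)"}, "will_die(socrates)"),
]
print(sorted(forward_chain(facts, rules)))
# ['human(socrates)', 'mortal(socrates)', 'will_die(socrates)']
```

The point is that the derivation itself is deterministic and inspectable: if a conclusion is wrong, you can trace exactly which rule fired, which is the part the LLM alone can't give you.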
That's my thinking as well. The LLM is basically an interface to the world that can handle ambiguity and novel contexts, while symbolic AI provides a really solid foundation for actual reasoning. And LLMs solve the core problem that's been the main roadblock for symbolic engines: building ontologies on the fly.

The really exciting part about using symbolic logic is that you can actually ask the model how it arrived at a solution, tell it that a specific step is wrong and have it change it, and have it learn things in a reliable way.

It would be really neat if the LLM could spin up little VMs for a particular context, train the logic engine to solve that problem, and then save the result in a library of skills for later use. Then when it encounters a similar problem, it could dust off an existing skill and apply it. The LLM side of the engine could also handle stuff like transfer learning, normalizing inputs from different contexts into a common format for the symbolic engine. There are just so many possibilities here.
I expect to see cool new repos at https://git.sr.ht/~yogthos/ in the near future comrade
haha if I come up with anything nifty, I'll be sure to share here :)