this post was submitted on 16 Jan 2026
Technology
It's actually kinda easy. Neural networks are just weirder than the usual logic gate circuits. You can program them just the same and insert explicit controlled logic and deterministic behavior. Somebody who doesn't know the details of LLM training wouldn't be able to tell much of a difference: it would be packaged as a bundle of node weights and work through the same interfaces and all.
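As a minimal sketch of what "programming a network" means (plain numpy, with weights I've picked by hand for illustration, not taken from any real model), here's a tiny feed-forward net whose node weights deterministically compute XOR. Nothing is learned; the logic is just encoded in the weight matrices and served through the usual matrix-multiply interface.

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)  # hard threshold -> deterministic behavior

# Hidden layer: one neuron fires on (a OR b), the other on (a AND b)
W1 = np.array([[1.0, 1.0],    # OR-ish neuron
               [1.0, 1.0]])   # AND-ish neuron
b1 = np.array([-0.5, -1.5])

# Output layer: OR minus AND = XOR
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(a, b):
    x = np.array([a, b], dtype=float)
    h = step(x @ W1.T + b1)
    return int(step(h @ W2 + b2))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints the XOR truth table
```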
The reason this doesn't work well when you try to insert strict logic into a traditional LLM, even though the node properties are well known, is how intricately interwoven and mutually dependent all the different parts of the network are (that's why it's a LARGE language model). You can't just arbitrarily edit weights, insert more nodes, or replace logic, because you don't know what you might break. It's easier to place the inserted logic outside of the LLM network and train the model to interact with it ("tool use").
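A toy sketch of that "tool use" pattern, assuming a hypothetical `fake_llm` stand-in for the model call (not a real API): the strict, deterministic logic lives entirely outside the network, and the model only has to emit a structured request for it.

```python
import json

def calculator(expression: str) -> str:
    # Strict, deterministic logic the model itself can't reliably reproduce.
    # Toy only: never eval untrusted input in real code.
    return str(eval(expression, {"__builtins__": {}}))

def fake_llm(prompt: str) -> str:
    # Stand-in for the model deciding it needs the calculator tool.
    return json.dumps({"tool": "calculator", "args": {"expression": "123 * 457"}})

TOOLS = {"calculator": calculator}

def run(prompt: str) -> str:
    reply = fake_llm(prompt)
    request = json.loads(reply)
    if request.get("tool") in TOOLS:
        result = TOOLS[request["tool"]](**request["args"])
        # In a real system this result is fed back into the model's context
        # so it can compose the final answer.
        return f"Tool result: {result}"
    return reply

print(run("What is 123 * 457?"))
```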