this post was submitted on 20 Jan 2026
687 points (98.4% liked)
Fuck AI
you are viewing a single comment's thread
view the rest of the comments
"just tell your LLM not to do that"
You ever ask an LLM to modify a picture and tell it "don't change anything else"? It's going to change other things anyway.
Case in point: https://youtu.be/XnWOVQ7Gtzw
That's why you always add "and no mistakes"
Also "don't hallucinate"
And "don't become self arrest"
You're mixing up two kinds of AI: LLMs and diffusion models.
It's much harder for a diffusion model to leave the rest of the picture untouched. The first step of a diffusion model is a lossy compression that turns the picture into a soup of digits the model can understand.
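A minimal sketch of that lossy round trip, assuming the Hugging Face `diffusers` package, the public `stabilityai/sd-vae-ft-mse` VAE checkpoint, and a placeholder `photo.png` (none of which come from this thread):

```python
# Sketch only: encode a picture into a diffusion model's latent space and
# decode it straight back, with zero edits in between.
# "photo.png" is a placeholder for any 512x512-ish image.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

img = Image.open("photo.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0  # scale pixels to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                        # shape (1, 3, 512, 512)

with torch.no_grad():
    latents = vae.encode(x).latent_dist.sample()  # the "soup of digits": (1, 4, 64, 64)
    recon = vae.decode(latents).sample            # back to pixel space

# Even without asking for any change, the round trip isn't pixel-perfect.
print("max pixel difference:", (recon - x).abs().max().item())
```

Any edit happens in that small latent grid, so "everything else" gets re-rendered on the way back out rather than copied through.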
An LLM, on the other hand, converts the prompt into a bunch of tokens it can understand.
Tokenization is lossless: you can convert the tokens straight back into the original text.
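And a sketch of the lossless side, assuming the `tiktoken` package and OpenAI's cl100k_base vocabulary as an example tokenizer:

```python
# Sketch only: tokenize a prompt and decode it back; nothing is lost.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "modify the picture and don't change anything else"
tokens = enc.encode(prompt)    # text -> list of integer token ids
restored = enc.decode(tokens)  # token ids -> text

print(tokens)
assert restored == prompt      # exact round trip
```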
This isn't about saying "return the original text"; it's about assuming LLMs understand language, and they don't. Telling an LLM "don't do these things" will be about as effective as telling it "don't hallucinate" or asking it "how many 'r's are in 'strawberry'?"
To affirm or refute that, we'd first need to define "understanding".
The example you gave can be explained in ways other than "it doesn't understand".
Take "how many 'r's in strawberry": LLMs see tokens, and their training data doesn't contain much information about which letters are inside each token.