this post was submitted on 01 Jan 2026
601 points (98.7% liked)

Fuck AI

[–] thethunderwolf@lemmy.dbzer0.com 7 points 4 hours ago* (last edited 4 hours ago) (1 children)

thinking logs

Per my understanding there are no "thinking logs"; the "thinking" is just part of the processing, not the kind of thing that would be logged, just like how the neural network's internal operations aren't logged

I'm no expert though, so if you know this to be wrong, tell me

[–] brucethemoose@lemmy.world 10 points 4 hours ago* (last edited 4 hours ago)

Per my understanding there are no “thinking logs”, the “thinking” is just a part of the processing, not the kind of thing that would be logged, just like how the neural network operation is not logged

I’m no expert though so if you know this to be wrong tell me

"Thinking" is a trained, structured part of the text response. It's no different from the response itself: just more generated text, which is why you can get non-thinking models to do it too.

It's a training pattern, not an architectural innovation. Some training schemes, like GRPO, are interesting...
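To make "it's just more text" concrete, here's a minimal sketch of how a client might separate the thinking from the answer. It assumes the DeepSeek-R1-style convention of wrapping the chain-of-thought in `<think>...</think>` tags; other models use different markers, or no inline markers at all.

```python
import re

def split_thinking(response: str) -> tuple[str, str]:
    """Split a raw LLM response into (thinking, answer).

    Assumes the thinking block is wrapped in <think>...</think>
    tags at the start of the response (a common convention, not
    a universal one).
    """
    m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", response, re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    # Non-thinking model (or hidden thinking): no tags at all.
    return "", response.strip()

# Example: the "thinking" is ordinary text sitting in front of the reply.
raw = "<think>User seems upset; keep the tone neutral.</think>Sure, here you go."
thinking, answer = split_thinking(raw)
```

The point is that nothing architectural separates the two parts; a provider "hiding" the thinking is just choosing not to send you the first chunk.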

Anyway, what OpenAI does is chop off the thinking part of the response so others can't train on their outputs, and also so users can't see the more "offensive", out-of-character tone LLMs take in their thinking blocks. Thinking kind of pulls back the curtain, and OpenAI doesn't want that because it dispels the magic.

Gemini takes a more reasonable middle ground of summarizing/rewording the thinking block. But if you use a more open LLM (say, Z AI's) via their UI or a generic API, it'll show you the full thinking text.
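As a sketch of how the "full thinking text" surfaces over an API: some providers return the thinking as a separate field on the chat-completion message, alongside the final reply (the `reasoning_content` field name here follows DeepSeek's API; other providers may name it differently, and hidden-thinking providers simply omit it). Canned example, no live API call:

```python
# Shape of an assistant message from a provider that exposes the
# thinking as a separate field next to the final reply.
message = {
    "role": "assistant",
    "reasoning_content": "User is asking X; plan the reply as ...",  # full thinking
    "content": "Here is the answer...",                              # final reply
}

def visible_parts(msg: dict) -> tuple[str, str]:
    """Return (thinking, answer); providers that hide or strip the
    thinking just won't include the field at all."""
    return msg.get("reasoning_content", ""), msg["content"]
```

So whether you see raw thinking, a summary, or nothing is purely a serving-side choice about which text to hand back.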


EDIT:

And to make my point clear, LLMs often take a very different tone during thinking.

For example, in the post's text, ChatGPT likely ruminated on what the user wants and how to satisfy the query, what tone to adopt, which OpenAI system prompt restrictions to follow, and planned out a response. It would reveal that it's really just roleplaying, and "knows it."

That'd be way more damning to OpenAI: not only did the LLM know exactly what it was doing, but OpenAI deliberately hid information that could have dispelled the AI psychosis.

Also, you can be sure OpenAI logs the whole response, to use for training later.