[–] kadu@scribe.disroot.org 13 points 11 hours ago (2 children)

I love how complex and confident-sounding some of the replies are, and then you click to see the "reasoning" and it's something like:

Alright, I'm diving into the concept of numbers. First, I need to understand what a digit is. Digits are the protrusions that are often found at the edge of a human hand. Wait, that is incorrect, digits are mathematical symbols. I'm making progress, my search results suggest that digits can be both mathematical unitary symbols and human anatomy terms. The user asked for the result of 1+1, I'll invoke the Python agent and code the operation, analyze the input, and re-frame the answer. It appears the Python agent returned with a malformed output, I'll check the logs. I'm frustrated - the code is clean and the operation should have worked. I've found the error! The output "NameError" clearly indicates that I've accidentally mixed data types in Python, I've been crunching through a fix and am confident the calculations will proceed smoothly. Writing final answer, factoring in the user recently asked about the job market in 2026.

Based on the current job market and listings found on online sources like LinkedIn, you will appreciate that the answer to the expression 1+1 is 2, would you like me to create a graph showcasing this discovery so you can boost engagement on your LinkedIn Profile?

The inefficiency of each query is bizarre.

[–] natecox@programming.dev 5 points 11 hours ago

If you want to add 1 and 1 together, you must first invent the universe.

[–] snooggums@piefed.world 0 points 9 hours ago (1 children)

The explanation is a separate query and doesn't necessarily have anything to do with how it presented the answer initially.

[–] kadu@scribe.disroot.org 4 points 9 hours ago* (last edited 9 hours ago)

You're absolutely incorrect. The explanation is not made after the fact; it comes from a technique called chain of thought, where the LLM is prompted to emit this kind of "reasoning" text throughout the generation process, because doing so has been shown to reduce the error rate on complicated queries.

The only parts that come from separate queries are the title it gives the conversation and the little one-sentence "progress" updates shown in certain UIs, like Gemini (others just display a default "Thinking...").
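
To make the distinction concrete, here's a minimal sketch in Python. The `generate()` function is a hypothetical stand-in for whatever completion API the UI actually calls, and the prompt wording is purely illustrative, not any vendor's real implementation:

```python
def generate(prompt: str) -> str:
    """Placeholder for a single LLM completion call (swap in a real client)."""
    raise NotImplementedError


def answer_with_chain_of_thought(question: str) -> str:
    # Chain of thought: the model is asked to write out its intermediate
    # "reasoning" tokens in the SAME generation pass that produces the answer.
    # The text you expand in the UI is this output, not a reconstruction
    # generated afterwards.
    prompt = (
        "Think step by step, writing out your reasoning, "
        "then give the final answer after the line 'Answer:'.\n\n"
        f"Question: {question}"
    )
    full_output = generate(prompt)
    reasoning, _, final_answer = full_output.partition("Answer:")
    return final_answer.strip()


def title_for_conversation(first_message: str) -> str:
    # By contrast, the conversation title (and the one-line progress blurbs
    # some UIs show) typically comes from a separate, much smaller query.
    return generate(f"Summarize this in at most five words: {first_message}")
```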