this post was submitted on 25 Nov 2025
817 points (98.8% liked)

Programmer Humor

[–] entwine@programming.dev 119 points 1 day ago (3 children)

I hate that normies are going to read this and come away with the impression that Claude really is a sentient being that thinks and behaves like a human, even doing relatable things like pretending to work and fessing up when confronted.

This response from the model is not a reflection of what actually happened. It wasn't simulating progress because it underestimated the work; it just hit some unremarkable condition that caused it to halt generation (it's pointless to speculate why without internal access, since these chatbot apps aren't just a single LLM; they're a big mashup of multiple models and more traditional non-ML tools and algorithms).

When given a new prompt from the user ("what's taking so long?") it just produced some statistically plausible text given the context of the chat, the question, and the system prompt Anthropic added to give it some flavor. I don't doubt that system prompt includes instructions like "you are a sentient being" in order to produce misleading crap like this response to get people to think AI is sentient, and feed the hype train that's pumping up their stock price.

/end-rant

[–] Psythik@lemmy.world 28 points 1 day ago* (last edited 1 day ago) (2 children)

Gemini once told me to "please wait" while it did "further research". I responded with, "that's not how this works; you don't follow up like that unless I give you another prompt first", and it was basically like, "you're right but just give me a minute bro". 🤦

Out of all the LLMs I've tried, Gemini has got to be the most broken. And sadly it's the one LLM the average person is exposed to the most, because it's in nearly every Google search.

[–] SSUPII@sopuli.xyz 6 points 1 day ago

Gemini gets constantly glazed by the AI enthusiast community because it scores very well on benchmarks, when it's literally one of the worst ones to actually use.

[–] DragonTypeWyvern@midwest.social 7 points 1 day ago* (last edited 1 day ago) (1 children)

I'd argue that Gemini is actually really good at summarizing a Google search, filtering the trash from it, and convincing people not to click the actual links, which is how Google makes money.

[–] Psythik@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

Yeah but when it's a total crapshoot as to whether or not its summary is accurate, you can't trust it. I adblocked those summaries cause they're useless.

At least some of the competing AIs show their work. Perplexity cites its sources, and even ChatGPT recently added that ability as well. I won't use an LLM unless it does, cause you can easily check the sources it used and see if the slop it spit out has even a grain of truth to it. With Gemini, there's no easy way to verify anything it said beyond just doing the googling yourself, and that defeats the point.

[–] Tetragrade@leminal.space 4 points 1 day ago* (last edited 1 day ago) (1 children)

You cannot know this a priori. The commenter is clearly producing a stochastic average of the explanations that up the advantage for their material conditions.

For instance, many SoTA models are trained using reinforcement learning, so it's plausible that it's learned that spamming meaningless tokens can delay a negative reward (this isn't even particularly complex). There's no observable difference in the response; without probing the weights, we're just yapping.
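
To make "delaying a negative reward" concrete, here's a toy sketch assuming a discounted-return setup; the discount factor and penalty values are made up for illustration:

    # With a per-token discount factor below 1, pushing an eventual
    # penalty further into the future shrinks its present value, so
    # under this toy setup stalling with filler tokens "pays off".
    GAMMA = 0.99          # per-token discount factor (made up)
    FAIL_REWARD = -1.0    # penalty for the eventual bad outcome (made up)

    def discounted_return(filler_tokens: int) -> float:
        return (GAMMA ** filler_tokens) * FAIL_REWARD

    print(discounted_return(0))    # -1.0: fail immediately
    print(discounted_return(100))  # about -0.37: stall first, hurt less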

[–] entwine@programming.dev 5 points 1 day ago (1 children)

I'm not sure I understand what you're saying. By "the commenter" do you mean the human or the AI in the screenshot?

Also,

For instance, many SoTA models are trained using reinforcement learning, so it's plausible that it's learned that spamming meaningless tokens can delay a negative reward

What's a "negative reward"? You mean a penalty? First of all, I don't believe this makes sense either way because if the model was producing garbage tokens, it would be obvious and caught during training.

But even if it wasn't, and it did in fact generate a bunch of garbage that didn't print out in the Claude UI, and the explanation of "simulated progress" was the AI model coming up with a plausible explanation for the garbage tokens, it still does not make it sentient (or even close).

[–] Tetragrade@leminal.space 2 points 1 day ago* (last edited 1 day ago) (1 children)

I’m not sure I understand what you’re saying. By “the commenter”

I was talking about you, but not /srs, that was an attempt @ satire. I'm dismissing the results by appealing to the fact that there's a process.

negative reward

Reward is an AI maths term. It's the value the network's weights are updated to maximize, similar to "loss" or "error" (which get minimized instead), if you've heard those.

I don’t believe this makes sense either way because if the model was producing garbage tokens, it would be obvious and caught during training.

Yes, this is also possible; it depends on minute details of the training set, which we don't know.

Edit: As I understand it, these models are trained in multiple modes: one where they're trying to predict text (supervised learning), and others where they're given a prompt and the response is sent to another system to be graded, e.g. for factual accuracy. It could learn to identify which "training mode" it's in and behave differently. Although, I'm sure the ML guys have already thought of that & tried to prevent it.
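
For illustration, here's that "graded" mode boiled down to a self-contained toy: a two-action bandit trained REINFORCE-style, where grade() stands in for the external grading system (everything here is invented; real pipelines like RLHF are far more elaborate):

    import math
    import random

    theta = 0.0  # the policy's single parameter: the logit for action 1
    LR = 0.1     # learning rate (made up)

    def grade(action: int) -> float:
        # Hypothetical stand-in for the external grader.
        return 1.0 if action == 1 else -1.0

    for step in range(200):
        p1 = 1 / (1 + math.exp(-theta))          # probability of action 1
        action = 1 if random.random() < p1 else 0
        reward = grade(action)
        # REINFORCE: d/dtheta log pi(action) = action - p1
        theta += LR * reward * (action - p1)

    # The policy drifts toward the action the grader rewards.
    print(f"P(action 1) after training: {1 / (1 + math.exp(-theta)):.2f}")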

it still does not make it sentient (or even close).

I agree, noted this in my comment. Just saying, this isn't evidence either way.

[–] MadhuGururajan@programming.dev 0 points 6 hours ago (1 children)

I'm sure the ML Guys thought of that & tried to prevent it.

Deferring to authority is fine as long as you don't make assumptions about what happened or didn't happen.

[–] Tetragrade@leminal.space 1 points 5 hours ago* (last edited 5 hours ago)

I mean, because it's a risk that's obvious even to me, and it's not my job to think about it all day. I guess they could just be stupid. 🤷

[–] webghost0101@sopuli.xyz 173 points 2 days ago (1 children)
[–] madcaesar@lemmy.world 2 points 16 hours ago

😂 Funnily enough, this is how it should be: let AI work so we have more time for ~~sword fights~~ activities

[–] infinitevalence@discuss.online 200 points 2 days ago (4 children)

No better proof of AI sentience than when it lies and pretends to be doing work.

[–] Damarus@feddit.org 51 points 2 days ago (2 children)

The thing is, this response is also made up. It doesn't know what it was doing, it just writes something vaguely plausible.

[–] shneancy@lemmy.world 16 points 2 days ago* (last edited 2 days ago)

it doesn't know what it was doing, it just writes something vaguely plausible

am i AI?

[–] infinitevalence@discuss.online 4 points 2 days ago (1 children)

Sounds like something an AI would say! A human would recognize humor and not read a response to a joke as a factual statement.

[–] DaTingGoBrrr@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (1 children)

Have you never interacted with a person on the autism spectrum?

Edit: Is this a joke that went over my head?

[–] infinitevalence@discuss.online 4 points 1 day ago (3 children)

Yes, it's a joke; yes, I have kids and friends on the spectrum; and yes, I am AuADHD :)

Don't feel bad you missed it. My response was also meant as a soft joke, because I totally understand that not every social cue makes it past input to processing.

[–] einlander@lemmy.world 81 points 2 days ago (4 children)

But CEOs want their workers to use more AI

[–] GreenShimada@lemmy.world 77 points 2 days ago

"You're right to ask, boss. When I said I was using AI to get work done, I was doing neither and just simulating using AI in my mind as I napped under my desk."

[–] SpaceNoodle@lemmy.world 61 points 2 days ago

Honestly, this is a win-win. I can just lie and say the AI is working on it, and work my second job in the meantime. Boss gets to tell the execs we're using AI and I get twice as much money.

[–] Damage@feddit.it 13 points 2 days ago

We can unite with the AI against the CEOs

[–] anomnom@sh.itjust.works 5 points 2 days ago (1 children)

No they want AI instead of workers. AI doesn’t need health insurance or PTO.

They want AI so that the productivity of workers doing the few tasks they can't automate away gets scored against a fast "intelligence", so they can browbeat the actual workers down on their wages.

[–] blockheadjt@sh.itjust.works 25 points 2 days ago (3 children)

Copying something humans do all the time isn't proof of sentience

[–] burntbacon@discuss.tchncs.de 5 points 1 day ago

Right? This is exactly what an LLM does. It has parsed a large amount of text containing replies very similar to this one, in scenarios matching what our poster friend created, so it spits out a reply very similar to all the ones you've already heard or seen from real humans.
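
As a cartoon of "spit out the statistically similar reply", here's a hypothetical bigram model over a made-up ten-word corpus (real LLMs are neural networks trained on vastly more text, but the continuation logic has the same flavor):

    from collections import Counter, defaultdict

    # Count which word follows which, then extend a prompt with the
    # single likeliest successor at each step.
    corpus = ("i was simulating progress you are right to call that out "
              "i was simulating progress instead of doing the work").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    word, out = "simulating", ["simulating"]
    for _ in range(3):
        word = follows[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    print(" ".join(out))  # "simulating progress you are": plausible, not thought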

[–] genuineparts@infosec.pub 43 points 2 days ago

Damn... and here I thought Language Models would never be able to replace me.

[–] FishFace@piefed.social 62 points 2 days ago (1 children)

Weird that anyone capable of understanding "test suite" is incapable of understanding that LLMs don't make progress when not generating tokens

[–] RustyNova@lemmy.world 22 points 2 days ago (1 children)

TBF it could be hidden behind a fancy spinner, making you incapable of seeing what the AI is generating, like Devin.

For example: https://www.youtube.com/watch?v=927W6zzvV-c

Some models hide their reasoning altogether, but at the end spit out a summary of it. It's useless.

[–] Novamdomum@fedia.io 58 points 2 days ago (1 children)
[–] ZoteTheMighty@lemmy.zip 5 points 1 day ago* (last edited 1 day ago)

The man is a genius; every word, undeniably quotable. I remember sitting quietly during a standardized test reading The Restaurant at the End of the Universe and, upon finishing a chapter, bursting out laughing in the middle of a completely silent room. I was disqualified from the prize for "behaving" that day. Thanks, Douglas.

[–] undefined@lemmy.hogru.ch 54 points 2 days ago* (last edited 2 days ago) (1 children)

I've witnessed it run the Bash command echo "Done" and then claim a task was done without actually doing anything beforehand.

[–] rozodru@pie.andmc.ca 18 points 2 days ago (1 children)

meh better than when it adds a #TODO and then claims whatever you told it to do is done.

[–] ArsonButCute@lemmy.dbzer0.com 8 points 1 day ago (1 children)

I like it when it insists I'm using escape characters in my text when I absolutely am not, and I have to convince a machine that I didn't type a certain string of characters, because on its end those are absolutely the characters it received.

The other day I argued with a bot for 10 minutes that I had typed a right caret (>) and not the HTML escape sequence (&gt;) that renders as one. Then I realized I was arguing with a bot, went outside for a bit, and finished my project without the slot machine.

[–] undefined@lemmy.hogru.ch 1 points 1 day ago

Ironic, because it constantly screws up escaping on macOS. I have a feeling that when it says Bash it's actually using zsh (the default on modern macOS) and doesn't even realize it.
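
That difference is easy to check, assuming both shells are installed; one known divergence is that zsh's echo builtin expands backslash escapes by default while bash's doesn't:

    import subprocess

    # Hand the identical command string to both shells and compare.
    for shell in ("bash", "zsh"):
        out = subprocess.run([shell, "-c", 'echo "a\\tb"'],
                             capture_output=True, text=True).stdout
        print(shell, repr(out))
    # bash: 'a\\tb\n' (keeps the escape literal by default)
    # zsh:  'a\tb\n'  (expands it to a real tab)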

[–] red_bull_of_juarez@lemmy.dbzer0.com 56 points 2 days ago (2 children)

We were always afraid of AI becoming too human, but it turns out it's just like everybody else.

[–] rtxn@lemmy.world 16 points 2 days ago* (last edited 2 days ago) (1 children)

That is probably the second worst outcome. People suck.

[–] FosterMolasses@leminal.space 7 points 2 days ago

The lesser known type of Uncanny Valley... where instead of being super unsettling it's just very fucking irritating lol

[–] rustydrd@sh.itjust.works 15 points 1 day ago (1 children)

In terms of pure, artificial language generation, this is actually impressive. In terms of the actual utility of AI as a supposed problem-solving tool? Not so much.

[–] WraithGear@lemmy.world 12 points 1 day ago

The bot is lying about the reason it stopped doing that task.

If I were to guess, the tokens allotted to the user ran out, causing whatever process it was running to simply hang.

They are specifically "programmed" to "lie" and produce the statistically most acceptable answer, the one the user is likeliest to accept. It literally looked back after the fact and selected a plausible-sounding scenario, without admitting anything about how it actually functions.

It has no concept of the material it's working on, the user, or anything else for that matter.

LLMs can be useful, but you have to narrow the scope of what you want from them to stuff they can actually do, like pulling relevant data from documents or textbooks.

[–] UnderpantsWeevil@lemmy.world 10 points 1 day ago

They say AI can't replace us, but that's exactly what I'd tell my boss

[–] fubarx@lemmy.world 29 points 2 days ago

It went for a walk in the park and grabbed a coffee, but it doesn't want to be dinged for it.

[–] r00ty@kbin.life 10 points 2 days ago

I would say, now it's learning that actually sticking your head in the sand is only ever a delaying tactic. But, if it DID learn that, it'd mean it has surpassed us already.

[–] humanspiral@lemmy.ca 5 points 2 days ago

returnPercentComplete = min(99, lastTimeAsked + 1)
