this post was submitted on 08 Apr 2026

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] Ephera@lemmy.ml 22 points 3 days ago (1 children)

Okay, but just to be clear, the problem is not that it can't do a timer. The problem is that it claims to be able to, and even produces a result that looks plausible. That means you cannot trust it to do anything you can't easily verify. If they could fix that overconfidence in a year, it would be much better.

[–] fox@hexbear.net 13 points 3 days ago (1 children)

The overconfident tone is baked in. LLMs don't have knowledge or world models; all the text they produce is nothing more than a statistical mapping from input to output, based on frequency of appearance and semantic closeness. So you can train the things to lean towards doubtfulness (nobody will use them) or confidence (wow, it must be true if it's this certain). It's abusing the human tendency to anthropomorphize to sell a really shitty product.

[–] wheezy@lemmy.ml 8 points 3 days ago (3 children)

What if we just, idk, handled those corner cases with something like a human-created control system that follows a set of very specific instructions and always produces the same result?

Stick with me here. I know this is a radical idea. But, say you were able to parse the input from the user and map it to the same resulting, let's call it, function.

So, the user says something like "start a timer for 60 seconds" or "60 second timer please". Using a basic word mapping, we could infer the intent of English sentences and produce results.

We could even improve our results through automatic user feedback based on behavior and popularity of their mapping choices. Yes.

We could even do this for like multiple "features". Like have one "function" that maps requests to timers, another to setting an alarm, maybe even something radical like doing mathematical computations.

But, again, instead of throwing the input into a black box that burns massive compute power and that we have no control over. We just. Write the box ourselves for very common tasks.

Idk, maybe I'm crazy. It probably wouldn't work. I'm probably just oversimplifying it.

[–] yogthos@lemmy.ml 6 points 3 days ago

I mean, that's basically the idea behind neurosymbolic AI: have the LLM deal with natural-language input, convert it to a formal spec, and hand that to a symbolic engine to execute. https://arxiv.org/abs/2305.00813
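As a toy illustration of that split: the model half is stubbed out below, and the JSON schema and function names are assumptions made for this sketch, not from the linked paper.

```python
import json

def llm_to_spec(utterance: str) -> str:
    """Stand-in for the LLM half: in a real neurosymbolic pipeline,
    a model would be prompted to emit JSON matching this (made-up)
    schema. Hard-coded here so the deterministic half can be shown."""
    return json.dumps({"action": "start_timer", "seconds": 60})

def execute(spec_json: str) -> str:
    """The symbolic half: executes the formal spec deterministically,
    with no statistical guessing involved."""
    spec = json.loads(spec_json)
    if spec["action"] == "start_timer":
        return f"timer set for {spec['seconds']} seconds"
    raise ValueError(f"unknown action: {spec['action']}")
```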

[–] Flyberius@hexbear.net 3 points 3 days ago

Some sort of coding language... By god!

[–] yuri@lemmygrad.ml 2 points 3 days ago (1 children)

Haven't you just recreated Siri/Alexa/etc.? I can't tell if this comment is sarcastic.

[–] Flyberius@hexbear.net 5 points 3 days ago (1 children)

They are jokingly suggesting that we invent programming. It's a good bit, you should upvote them

[–] yuri@lemmygrad.ml 4 points 3 days ago

I've unfortunately read so much slop where someone claims to have discovered the next big thing that I couldn't see this was an obvious joke. Now I feel like an idiot.

[–] nonentity@sh.itjust.works 15 points 3 days ago (1 children)

No one who is impressed by LLMs should ever be permitted to make decisions which affect anyone not similarly cognitively impaired.

[–] anotherspinelessdem@lemmy.ml 10 points 3 days ago (1 children)

On their own, as an advancement in computing, LLMs are impressive, but tech and finance bros have overinflated the perception of their performance well beyond what's reasonable, to try to discipline workers for wanting more rights.

From a computing perspective the thing I find saddest is that everyone will hate anything having to do with AI in the future because of this bullshit. Given ownership by the people, more research into the field could actually liberate us all more from tedious labor.

[–] DacoTaco@lemmy.world 2 points 3 days ago

I agree. The tech of an LLM is really cool and impressive. But what the tech market and finance markets have made of it is just really fucking sad. I really hope the bubble fucking bursts

[–] db2@lemmy.world 22 points 3 days ago

He's too busy teaching it how to sexually assault his sister.

[–] PostaL@lemmy.world 11 points 3 days ago (1 children)

AGI just around the corner

[–] anotherspinelessdem@lemmy.ml 6 points 3 days ago

C'mon bro, just one more trillion, that's all. Then we'll have a paradise with a new Epstein island and everything.

[–] webp@mander.xyz 12 points 3 days ago (2 children)

This guy looks like a psychopath in every photo.

[–] bennieandthez@lemmygrad.ml 5 points 3 days ago

It's a requirement to be a CEO, especially a tech CEO.

[–] BigBrownBeaver@lemmy.world 4 points 3 days ago (1 children)

He looks like a reindeer caught in the middle of the road.

[–] MentalEdge@sopuli.xyz 3 points 3 days ago* (last edited 3 days ago)

It's because he knows how screwed OpenAI actually is.

He acts like he's surfing the wave, but he looks exactly as deep in the hole as he actually is.

ChatGPT is the next Theranos.

He hasn't just scammed consumers. He's scammed investors. And that's the one crime that actually lands people like him in prison.

[–] pHr34kY@lemmy.world 11 points 3 days ago* (last edited 3 days ago) (1 children)

So, ChatGPT can't match any function of a Casio wristwatch. I'm concerned that when it can, it will consume the power of microwaving a turkey just to tell a user what time it is.

[–] bountygiver@lemmy.ml 5 points 3 days ago

A better comparison: if you had asked Siri to time you like this 10 years ago, it would have correctly started the timer app.

[–] Tangentism@lemmy.ml 10 points 3 days ago (3 children)

There's a guy on TikTok called Huskistaken (yes, I know) who repeatedly demonstrates just how useless ChatGPT is.

The first video of his I saw was him playing a clip of Altman stating that it doesn't have a timer, and ChatGPT countering that it does.

He then gets it to start a timer to time how long it takes him to run a mile and tells it to stop almost instantly. It tells him it was over 7 minutes!

[–] Flyberius@hexbear.net 7 points 3 days ago

There's another guy called father phi who does similar stuff. It's entertaining. The AIs' smug voices are unbearable

[–] SaveTheTuaHawk@lemmy.ca 2 points 3 days ago

FatherPi on YouTube highlights how bad all the LLMs are.

How many Rs in strawberry?

[–] whyNotSquirrel@sh.itjust.works 1 points 3 days ago (1 children)

He's linked in the article.

[–] Tangentism@lemmy.ml 1 points 2 days ago

The link I posted is him reacting to the sister fucker's reaction.

[–] tonyn@lemmy.ml 5 points 3 days ago (2 children)

ChatGPT and other LLMs need access to tools for things like this just like you and I do. If you ask me how many seconds have elapsed since I started typing this, I would give you a convincing estimate. I would need a Casio watch to give you an exact answer.

[–] Flyberius@hexbear.net 12 points 3 days ago* (last edited 3 days ago)

Read the article. The AI can't even give a convincing estimate.

The point here is that LLMs will never be AIs. They are just text extruders. They are hideously overvalued, and they are upending society for all the wrong reasons.

[–] DacoTaco@lemmy.world 2 points 3 days ago

This is correct. LLMs are just the knowledge- and information-processing bit of our brain. To actually do things, we need access to things like our limbs, eyes, ears, watch, computer, ...
Which is why my comment in this thread spoke of an MCP tool and a webhook, which is all that's needed. So a year for that? Fuck off, that's absurdly long for two things that already exist and just need to be plugged into the source.
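For scale, here is roughly what the tool half amounts to, as a stdlib-only Python sketch; the function name and webhook URL are placeholders, and a real MCP server would wrap this in its own registration boilerplate:

```python
import threading
import urllib.request

def start_timer(seconds: float, webhook_url: str) -> threading.Timer:
    """Start a real OS-level timer; when it expires, POST a
    notification to the given (placeholder) webhook URL."""
    def fire() -> None:
        request = urllib.request.Request(
            webhook_url, data=b"timer expired", method="POST"
        )
        urllib.request.urlopen(request)

    # threading.Timer is a Thread subclass that runs `fire` once
    # after `seconds` have elapsed, and can be cancelled before then.
    timer = threading.Timer(seconds, fire)
    timer.start()
    return timer
```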

[–] DacoTaco@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

A year? To make an MCP tool that starts a timer and a website hook that listens for the timer?
Alright, that's kinda fucked lol