I Asked AI to Count My Carbs 27,000 Times. It Couldn’t Give Me the Same Answer Twice.
(www.diabettech.com)
OK I wonder if there's something wrong with the photo.

The photo: [the sandwich photo from the article]
WTF!!??
That's like estimating the carbs in 2 slices of standard sandwich bread! Of course not all bread has the same amount of sugar, but a reasonable range based on an average should be a dead easy answer.
I thought the headline sounded crazy, but read the article and it actually gets worse. I've said it many times before: these AI chatbots should not be legal; they put lives at risk.
To be fair there's no way of knowing what the filling is, so the AI may be guessing based on that too
Friendly reminder that LLMs don't do math; they guess what number should come next, just like they do with words.
It can probably link the image to the words "a photo of a sandwich on a plate", and interpret the question as "how many calories are in a sandwich", but from there it is just guessing at the syntax of an answer, not at finding any truth.
It knows sandwiches have calories and those tend to be 3-4 digit numbers, but also all numbers kinda look the same, so what's to say it's not 2, 5, or 12 digits?
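You can see this by just asking the same question over and over. A minimal sketch with the OpenAI Python SDK (the model name and prompt here are my own, purely for illustration):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = ("How many grams of carbohydrate are in a plain cheese sandwich? "
            "Answer with a single number.")

answers = []
for _ in range(10):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for this demo
        messages=[{"role": "user", "content": question}],
    )
    answers.append(resp.choices[0].message.content)

# With default sampling settings you typically get a spread of numbers,
# not one stable estimate.
print(answers)
```

That spread is basically the whole article in ten API calls.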
Tool-powered agents can do math, though. The issue is the fuzziness of trying to guess the carbs in the first place: it doesn't know the weight, the ingredients, or anything other than a picture. These tools can be useful, but not for this. Maybe one day, but not yet.
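To be clear about the "tool" half: the arithmetic itself is trivial and deterministic once you have numbers. A rough sketch (the nutrition values and gram weights are made up for illustration, roughly what a label would say):

```python
# Carbs per 100 g, roughly typical label figures (illustrative only).
CARBS_PER_100G = {"white bread": 49.0, "cheddar": 1.3, "butter": 0.1}

def estimate_carbs(ingredients: dict[str, float]) -> float:
    """ingredients maps ingredient name -> grams; returns total carbs in grams."""
    return sum(CARBS_PER_100G[name] * grams / 100 for name, grams in ingredients.items())

# Deterministic given the inputs: ~29.8 g for these guessed weights.
print(round(estimate_carbs({"white bread": 60, "cheddar": 30, "butter": 5}), 1))
```

The exact part is never the problem; the problem is that a photo doesn't tell the model it's 60 g of bread rather than 40 or 90, so the inputs are still guesses.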
Whoever claims an AI (LLM or agent) can do that, and charges their users for it, is lying to and defrauding them.
The apps are advertising that they can do this, though. Many of them are aggressively sponsoring YouTubers who claim you can basically just wave your phone over your food and it takes away all the "work" of traditional calorie-counting apps.
But the AI assumes it's infallible; at the very least it could ask...
That's true; it should ask follow-up questions, or at least clarify its assumptions.
Nope, Claude and Gemini both guessed fewer carbs than are in the bread.
What in the picture indicates any form of filling?
What you can see is cheese, and there is probably butter too, but those two have zero carbohydrates, so adding carbohydrates based on filling would be pure speculation.
There are no carbohydrates to see beyond the bread.
There is no evidence of any filling, as there is zero bulge in the bread.
The answer should be based on what can be seen, with a remark to that effect, and that there possibly could be more if it contains filling that isn't visible.
The AI could ask about a possible filling, instead of just making shit up with zero evidence.
To your point -
If a friend texted me the same picture and question, I would do exactly what you described. Try to give a calculated guess that wouldn't change.
Unless I was lazy and Googled it.
Google's carbohydrate tool says 8g, then the AI overview goes on to contradict that by saying "A standard cheese sandwich typically contains between 25 and 35g."
They put lives at risk the same way every single product at your local home improvement store does. When you misuse a tool for a purpose it wasn't intended for and isn't good at, you're going to get bad results.
This is an issue for the educational system, not the legal system.
What if the packaging on every tool at Home Depot grossly misrepresented its capabilities and/or purpose?
This chainsaw cures cancer? Hot damn somebody call RFK!
Concrete mix goes great with pancakes, etc.
Does OpenAI claim ChatGPT is fit for those purposes? No.
The concrete itself will happily mix into your pancakes.
I think the whole point of this discussion is that the various peddlers of AI in fact do make wild claims about their capability.
My observation is that it's largely the downstream AI consumers who repackage it irresponsibly. That said, I don't hang on the words of Sam Altman, and it's certain they're pushing the idea that AI is more capable than it is, but mostly what I see is them saying: we built this thing, it does neat stuff, it can probably do neat stuff for you, use your imagination.
I believe a lot of the folks developing these tools would be horrified at the irresponsible ways vendors and end users are using it.
Sam Altman is the face of OpenAI. He is responsible for misrepresenting the product he sells. If you're going to sling blame around, then you had better observe the words of Sam Altman.
This sick man is taken seriously in mainstream media and politics, and it's no exaggeration to say he has blood on his hands.
That's obviously bullshit, but he's not telling users they can develop time travel or something. That's the distinction I would draw: he's selling investment. That's not where the end users who are misusing ChatGPT are at.
It's the job of the company and especially the face and CEO of the company to sell the product. Compared to Sam Altman's promises, the use in this post is practically modest.
If you think this isn't the case, maybe you can point to some ChatGPT marketing that makes it clear what correct, and especially incorrect, usage would look like?
They don't. They say: we made this thing, see what you can do with it. They also put disclaimers on ChatGPT saying not to rely on it being correct.
One can infer from that that any use where you are relying on accuracy is incorrect use, which is why it's critical to have any output filtered through a domain-capable human.
"The thing that I think will be most impactful on that five to ten year timeframe is AI will actually discover new science." - Sam Altman
This is what the face of OpenAI explicitly says their product is for. Do you have anything more concrete? Or am I just to buy into this infinite good faith and assume that anything dumb ~~Trump~~ Sammy says is just hyperbole?
He's not selling anything specific, and not to end users. You're talking about something completely different. The way Sam and investors and corporate customers talk about AI is pretty misleading, but it's not misleading users. No one looks at AI replacing CSRs and inventing new sciences, whatever the fuck that means, and jumps to "it can unerringly diagnose a rash". And even if they did, the bot explicitly says not to trust it.
If some dirt farmer asks it how to avoid losing his family farm in a drought and takes ChatGPT's advice to plant chocolate chips and loses the farm anyway, I suggest that's a user error.
We might as well be discussing whether the tobacco industry has misled customers because they have a little disclaimer on their cartons.
Mainstream media publishes Sammy's statements uncritically. ChatGPT releases ads. It's extremely clear he is misleading the general public, his users. I don't know why you're in denial over this.
As others have pointed out, this is also a problem with how they are advertising it.
If duct tape were advertised as something you can use to hold your roof beams together, you'd have an issue with that.
And at the same time I wouldn't say "hey fuck that, duct tape is terrible! It doesn't hold beams together, I can't use it to tow a trailer, it's all just pretending to stick paper together because really every sliver of duct tape just sticks to the previous piece, etc etc" But that's the cool thing we do on Lemmy.
The ad is bad, duct tape ain't bad.
I have not seen OpenAI advertise ChatGPT as capable of medical diagnosis or therapy or anything like that. If you want therapy and you can't afford better (and I think we can agree that AI is terrible at it), then there should be a therapy app with explicit safety controls.
The problem is that someone created a screwdriver, which is handy for lots of screwdriver-shaped purposes, and someone else is trying to carve a ham with it.
Tools at home improvement stores were made to fulfill a specific purpose. GenAI still does not have a purpose it fulfills despite having hundreds of billions of dollars invested, not to mention all the other resources it's sucking up.
A pencil is a tool with a pretty wide open purpose within the writing ecosystem. It can be used to document history or remember a phone number or draw a picture.
You can also stab yourself in the eye with it or plan a murder.
Yes, a pencil can do a whole bunch of different things. GenAI cannot do things. It has no purpose. Pencils were made to write stuff. GenAI was made to ???. It is a technology in search of a problem to address, a niche to fill.

It has no purpose as it stands, yet it is supposedly the most important thing ever, to the point where the rich and wealthy are losing their minds investing in it on the vague hope that it'll do something. They've even got our government in on it; the US economy is being dangerously propped up by this industry that doesn't solve any problems or fulfill any purpose. All the things it does are novelties, and even then it does them poorly and unreliably.