I bought a small bag of cheap rice, and it didn't help me to connect to God!
When are people going to realize that an LLM is not a calculator and doesn't actually know anything?
That it's not a calculator and is terrible at determinism isn't debatable; however, its (very biased) vast store of knowledge is its core feature.
Well, first the AI corporations would have to stop advertising that AIs can do all of this.
Probably never. Just like people never realized how computers work, how networks work, how businesses work, how economies of scale work, how financial markets work, how…
We the people don’t give a shit about how anything works, for the most part. Exceptions include your narrowly focused expertise. We convince ourselves that we understand things, using top-down perspectives, because it’s easier than actually understanding things from a bottom-up perspective.
Even the strongest critics of AI can’t substantively explain how AI works. They use misnomers like “glorified autocomplete” to reason about its inaccuracy, rather than understanding the fundamental limitations of the approach used.
Imagine that: software that performs strictly language-specific operations can't do math.
They are non-deterministic by design.
LLMs are not deterministic like calculators. Wrong tool for the job.
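For anyone wondering what "by design" means here: generation usually ends with a sampling step over the next-token distribution. A minimal sketch with made-up numbers (a toy distribution, not a real model) of why greedy decoding is repeatable while temperature sampling is not:

```python
import numpy as np

# Toy next-token scores an LLM might assign after the prompt "2 + 2 = ".
# The tokens and logits are invented for illustration.
tokens = ["4", "5", "22", "four"]
logits = np.array([3.0, 1.0, 0.5, 1.5])

def sample(logits: np.ndarray, temperature: float, rng) -> int:
    """Pick a token index from the softmax distribution; any
    temperature > 0 makes the choice stochastic."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng()
# Greedy decoding (argmax) is deterministic: always "4".
print(tokens[int(np.argmax(logits))])
# Sampling at temperature 1.0 is not: rerun this and the list changes.
print([tokens[sample(logits, 1.0, rng)] for _ in range(8)])
```

Chat interfaces typically run with a nonzero temperature, which is part of why the same photo and question can come back with different numbers.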
And the US is about to, if they haven't already, put AI in charge of the Internal Revenue Service.
That should be fun.
I tried to build a deck with my smartphone, it couldn't drive a single nail.
Maybe get a stronger case. 🤷‍♂️😄
But the guy at the phone store told me it was practically indestructible, I used it practically and it destructable'd.
I'm starting to think this whole 'phone' thing is doomed to failure.
I'm basing this entirely on a single piece of anecdotal evidence and all of the other evidence I've selected because it confirms my worldview on the topic. I have done my own research (but not with a phone).
The issue is that there are apps promising you a calorie count from a photo.
There are pills promising to improve my love life too; I don't believe them either.
Waste of energy. It's like asking a person to estimate a non-trivial angle. Either use a model trained for that task, or don't bother.
The point is that:
- It is being used for this, even though it is obviously not capable of giving a reliable and realistic answer
- It allows this usage, even though it is dangerous and not within its capabilities
- Each model gives answers that vary wildly, which a human wouldn't do. A human wouldn't randomly give you answers that differ by 10x for the same question.
It’s the same photo, the same model, the same question. But you won’t get the same answer. Not even close — and the differences are large enough to cause a hypoglycaemic emergency.
OK I wonder if there's something wrong with the photo.
The photo:
[image: a sandwich on a plate]
WTF!!??
That's like estimating the carbs in 2 slices of standard sandwich bread! Of course not all bread has the same amount of sugar, but a reasonable range based on an average should be a dead easy answer: a typical slice of white sandwich bread runs about 13-15 g of carbohydrate, so two slices land around 26-30 g.
I thought the headline sounded crazy, but read the article and it actually gets worse. I have said it many times before: these AI chatbots should not be legal; they put lives at risk.
To be fair, there's no way of knowing what the filling is, so the AI may be guessing based on that too.
Friendly reminder that LLMs don't do math, they guess what number should come next, just like words.
It can probably link the image to the words "a photo of a sandwich on a plate" and interpret the question as "how many calories are in a sandwich", but from there it is just guessing at the syntax of an answer, not finding any truth.
It knows sandwiches have calories and those tend to be 3-4 digit numbers, but also all numbers kinda look the same, so what's to say it's not 2, 5, or 12 digits?
Tool-powered agents can do math, though. The issue is the fuzziness of trying to guess carbs: it doesn't know the weight, the ingredients, or anything other than a picture. These tools can be useful, but not for this. Maybe one day, but not yet.
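To illustrate what "tool-powered" means, here's a minimal sketch of the pattern; the JSON shape and the run_tool_call name are made up for illustration, not any vendor's actual API. The model's only job is to emit a structured request, and plain deterministic code does the arithmetic:

```python
import json
import operator

# Hypothetical calculator tool: the kind of structured call an agent
# framework routes to real code instead of letting the model guess
# digits token by token.
OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

def run_tool_call(message: str) -> float:
    """Execute a call like {"tool": "calculator", "op": "mul", "args": [2, 13.5]}."""
    call = json.loads(message)
    if call["tool"] != "calculator":
        raise ValueError(f"unknown tool: {call['tool']}")
    a, b = call["args"]
    return OPS[call["op"]](a, b)

# Pretend the model emitted this instead of free-text arithmetic:
model_output = '{"tool": "calculator", "op": "mul", "args": [2, 13.5]}'
print(run_tool_call(model_output))  # 27.0, exact every time
```

The math becomes exact, but the carb guess stays fuzzy: no tool can weigh a sandwich from a photo.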
Whoever claims an AI (LLM or agent) can do that and charges their users for it is lying and defrauding them.
The apps are advertising that they can do this, though. Many of them are aggressively sponsoring YouTubers who advertise that you can basically just wave your phone over the food and it takes away all the “work” of traditional calorie-counting apps.
Nope, Claude and Gemini both guessed fewer carbs than are in the bread.
But the AI assumes itself infallible; at the very least it could ask...
That's true, it should ask follow-up questions, or at least clarify its assumptions
Custom-built LLMs are awesome for specific purposes in terms of dealing with data and providing resources; however, chatbots ain't that.
Humans want to follow whatever makes sense to them; they use AI because it's confident. AI just replaced their god.
If you supplied humans with the same image and asked for the same estimate, I'd be curious to know the difference in results.
Mine would be "I have no idea", an answer the LLMs generally refuse to give by their nature (when they do decline, it's usually because something in the context indicates that refusing to answer is the proper text).
If you really pressed them, they'd probably google each thing and sum the results, so the estimates would be about as consistent as the first Google results.
LLMs have a tendency to emit a plausible answer without regard for the facts one way or the other. We try to steer them by stuffing the context with factual data, but if the context doesn't contain facts to steer the output, the output is driven purely by narrative consistency rather than data consistency. It may even do that sometimes when the context does have fact-based content in it.
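As a concrete sketch of that context-stuffing idea (the actual model call is omitted; build_prompt and the facts list are hypothetical):

```python
# Minimal sketch of grounding: prepend retrieved facts so the model has
# data to be consistent with, instead of just a plausible-sounding story.
facts = [
    "White sandwich bread: roughly 13-15 g carbohydrate per slice.",
    "The photo shows two slices of bread with an unknown filling.",
]

def build_prompt(question: str, facts: list[str]) -> str:
    """Stuff the context with facts before the actual question."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using only the facts below. If they are insufficient, "
        "say so.\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("Estimate the carbs in this sandwich.", facts))
# Without the facts block, the output is steered only by narrative
# consistency: whatever number reads plausibly after "carbs:".
```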