headline is inaccurate and downplays the incredible potential of ai. Google Gemini tried to kill this person AND their entire family
mods can you please ban "david gerard" or whatever his name really is. ai hate is already out of hand without people coming to push their agenda like this
unfortunately I am firmly in the pocket of the concept of fiat money, big small data, and whatever the opposite of a metaverse is
but also,
mods can you please ban “david gerard”
if I ever release an experimental electronic album I’m calling dibs on this track name
whatever the opposite of a metaverse is
Grass. It's grass.
Are we still on Mastodon? In that case, I have severe hayfever, shithead! Content warn your posts! ;)
(I'm obv joking here, and before somebody tries to honestly use this argument: I do have hayfever, and I have seen others post about this subject (aka: is saying 'touch grass' a 'slur' because it excludes people with allergies or disabilities?), and the consensus was that anybody who tries to make this argument really needs to touch grass).
Well I'm 100% covered because I have the worst hayfever in existence.
Like no kidding, I am allergic to every. single. thing that they had on what they call the "tree panel" and the "grass panel". I need to be on antihistamines for 75% of the year or I cannot function.
So I'm allowed to use the slur as I'm from the community. Contact me if you want the "g-word pass" I guess.
never trust AI
Statements from LLMs are to be seen as hallucinations unless proven otherwise by classic research.
We don't need a fancy word that makes it sound like AI is actually intelligent when we're talking about how AI is frequently wrong and unreliable. AI being wrong is like someone who misunderstood something, or took a joke literally, repeating it as fact.
When people are wrong, we don't call it hallucinating unless their senses are altered. AI doesn't have senses.
It's not a "fancy word" here, but a technical term. An AI making things up is actually called hallucination.
The Wikipedia page you linked to actually states that the term is being pushed by industry (Google, Meta, OpenAI) and that its use is criticized by some researchers.
So you're saying a technical term shouldn't be created by the people who actually develop the technology the term is used for?
You're confusing "developing" with "marketing".
oh but you see, it's "hallucination" when the LLM is wrong and it's hype-cycle fuel when it's correct. no, LLMs don't "hallucinate": that implies this state is peculiar, isolated, triggered by very specific circumstances. LLMs bullshit all the time; sometimes they are right, sometimes not, and the process that produces both types of response is the same. pushing for "hallucination" tries to obscure that. using "hallucination" also implies that LLMs know something, and they don't, by design. it just so happens that when they "get" things right, it's because it appeared in the training material enough times to make an impression on the model.
LLMs bullshit all the time
Bullshitting, to me, means making intentionally wrong statements. LLMs do not generate intentionally wrong statements; saying they do implies intelligence.
LLMs know nothing, nor are they intelligent. They are also neither right nor wrong; they generate output based on statistics.
"Hallucination" as a term for "AIs" making things up is used since the early 2000s (even if it's meaning has changed since then).
bullshitting as in giving a confident answer without regard for actual reality. as previously discussed there, LLMs do exactly that: they generate confident, authoritative-sounding text without regard for facts, because these things do not know facts, or anything for that matter.
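to make the "same process either way" point concrete, here's a toy sketch (the words and probabilities are made up for illustration, this is not how any real model is actually wired): the sampling step never consults facts, so a "correct" answer and a "hallucination" fall out of the exact same code path.

```python
# Toy sketch, not a real model: next-token sampling from a learned
# probability distribution. The point: the sampling step is identical
# whether the continuation happens to be factual or not.
import random

# Hypothetical probabilities for the word after "The capital of
# Australia is" -- co-occurrence statistics from training text,
# not a fact lookup. Numbers are invented for illustration.
next_word_probs = {
    "Canberra": 0.55,    # right, because it co-occurred often enough
    "Sydney": 0.35,      # wrong, but also common in training text
    "Melbourne": 0.10,   # wrong
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word, weighted by probability. No truth check anywhere."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same call produces "hallucinations" and correct answers alike.
for _ in range(5):
    print("The capital of Australia is", sample_next_word(next_word_probs))
```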
maybe it's high time to change terms then
bullshitting as in giving a confident answer without regard for actual reality.
So you say there could be different meanings of the same word? Like “bullshitting” or “hallucination”?
mod post: please desist, it's just tiresome now
Huh. I was making my own garlic oil this way (without advice from an LLM, mind you), and I was today years old when I learned this carries the risk of botulism (albeit a small one), so in a way, an LLM has potentially saved my life by causing the chain of events that taught me something new.
It’s slowly refining its approach. No-one went for the pizza glue or eating rocks, so…
Reddit still delivers sometimes.
I’ll see people responding to fucken lemmy comments with “i ran the question through gpt and...” like what the fuck?
It’s literally the same thing as saying “I asked some RANDOM dude and this is what he said. Also I have no reason to believe he’s even the slightest bit educated.”
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
This is coming from someone who hates google, but if this person’s entire family had died, I would put a LOT of that blame on them before google.
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
This is coming from someone who hates google, but if this person’s entire family had died, I would put a LOT of that blame on them before google.
That would really put the "uh oh" in your spaghettios
If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.
i have found I get .000000000006% less hallucination rate by throwing alphabet soup at the wall instead of spaghett, my preprint is on arXiv
I applaud your optimism that most people can do this without AI, but have you gone and met people? Most people are not that capable of producing torrents of shameless bullshit, since conscience or awareness of social and/or professional costs rears its head at some point.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community