this post was submitted on 19 Feb 2026
658 points (99.7% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] Reygle@lemmy.world 1 points 28 minutes ago

Normalize calling AI the "Wrong answer machine".

[–] CoryCoolguy@lemmy.myserv.one 38 points 20 hours ago

Ah, yep. Did the same thing.

I was interviewed last year and got asked about it lmao

[–] pkjqpg1h@lemmy.zip 11 points 18 hours ago

It's very easy to poison an LLM

[–] Nangijala@feddit.dk 18 points 20 hours ago

AI seems to be the perfect propaganda tool to make people believe whatever you want them to believe.

[–] Underwaterbob@sh.itjust.works 82 points 1 day ago (3 children)

I once searched "best workstation keyboard" and happened to glance at the summary, and it legitimately was trying to compare mechanical typing keyboards like Nuphy and Keychron, with music keyboards like Yamaha's Montage and Roland's Fantom. Which, NGL, was pretty entertaining.

[–] Ephera@lemmy.ml 19 points 1 day ago (2 children)

Music keyboards do have that sweet n-key rollover. So, there's probably some Emacs users playing their editor like a piano.

[–] umbraroze@piefed.social 2 points 4 hours ago

There's that old legend about the Symbolics Lisp Machine keyboards, which had, like, a bazillion modifiers (and were a big influence on Emacs). Someone suggested that they would eventually run out of space to put in more shift keys, so they'd have to introduce pedals. I suppose organ stops would also work.

[–] Frenchgeek@lemmy.ml 2 points 20 hours ago (1 children)
[–] Ephera@lemmy.ml 3 points 18 hours ago

Well, apparently you can extend Emacs to have it:

Emaccordion

Control your Emacs with an accordion — or any MIDI instrument!
[...]
You can e.g. plug in a MIDI pedalboard (like one in a church organ) for modifier keys (ctrl, alt, shift); or you can define chords to trigger complex commands or macros.
[...]
The idea for the whole thing came from [dead link]. I immediately became totally convinced that a full-size chromatic button accordion with its 120 bass keys and around 64 treble keys would be the epitome of an input device for Emacs.

https://github.com/jnykopp/emaccordion

[–] Riverside@reddthat.com 22 points 1 day ago (1 children)

"Keychron is praised for its thoccy sound, whereas Yamaha is well regarded for its melodic key sounds"

[–] Alaknar@sopuli.xyz 32 points 1 day ago* (last edited 1 day ago) (1 children)

"One Reddit user suggests: 'go kill yourself'"

[–] pkjqpg1h@lemmy.zip 1 points 18 hours ago

Some Reddit users suggest the Golden Gate is a good choice

[–] Timecircleline@sh.itjust.works 12 points 1 day ago (1 children)

Well the Keychrons are more customizable than a Yamaha. I bet you can't even swap the switches on the Montage.

[–] ShinkanTrain@lemmy.ml 173 points 1 day ago* (last edited 1 day ago) (6 children)

I did it so you don't have to

By the way, it took 5 tries because in 3 of them it made up a story about a different journalist, and in one of them it listed who it thinks would eat the most hotdogs.

Why is the entire economy riding on this thing?

[–] Rozauhtuno@lemmy.blahaj.zone 70 points 1 day ago (2 children)

Why is the entire economy riding on this thing?

Because the world is ruled by idiots.

[–] 30p87@feddit.org 30 points 1 day ago (1 children)
[–] ByteJunk@lemmy.world 6 points 1 day ago

The world is led by people who have the conviction that they are right.

Most people are reasonable and therefore do NOT have this conviction, because they stop to question themselves and stay grounded in reality.

But then there's the feeble-minded, the narcissists, and the sociopaths.

The first ones are quickly excluded from wielding any real power and mostly stick to yelling at other donkeys on the internet (where they do cause a lot of harm and can be easily shepherded - see the US Capitol insurrection).

The others are what's called the ruling class.

And that, kids, is why CEOs and politicians like Trump and his ilk rule the world.

[–] tomiant@piefed.social 10 points 1 day ago (1 children)

No, the world is run by an extremely unstable system of material distribution based on a moronic premise.

[–] bampop@lemmy.world 18 points 1 day ago* (last edited 1 day ago) (3 children)

Is it my imagination or are LLMs actually getting less reliable as time goes on? I mean, they were never super reliable but it seems to me like the % of garbage is on the increase. I guess that's a combination of people figuring out how to game/troll the system, and AI producers trying to monetize their output. A perfect storm of shit.

[–] Joeffect@lemmy.world 5 points 18 hours ago

Garbage in (text generated by other AI), garbage out (less reliable text to train on).

LLMs are not smart; they have no brain. An LLM is a prediction engine. I could see an LLM being used in a real AI to form sentences or something, but I'm sure there are better ways to do it. I mean, a human brain does not hold all the knowledge of humanity just to be able to process thoughts and ideas... it's a little overkill...

[–] luciferofastora@feddit.org 6 points 20 hours ago* (last edited 20 hours ago)

As the internet content used to train LLMs contains more and more (recent) LLM output, the output quality feeds back into the training and impacts further quality down the line, since the model itself can't judge quality.

Let's do some math. There's a proper term for this math and some proper formula, but I wanna show how we get there.

To simplify the stochastic complexity, suppose an LLM's input (training material) and output quality can be modeled as a ratio of garbage. We'll assume that each iteration retrains the whole model on the output of the previous one, just to speed up the feedback effect, and that the randomisation produces some constant rate of quality deviation for each part of the input, that is: some portion of the good input produces bad output, while some portion of the bad input randomly generates good output.

For some arbitrary starting point, let's define that the rate is equal for both parts of the input, that this rate is 5% and that the initial quality is 100%. We can change these variables later, but we gotta start somewhere.

The first iteration, fed with 100% good input, will produce 5% bad output and 95% good.

The second iteration produces 0.25% good output from the bad part of the input and 4.75% bad output from the good input, adding up to a net quality loss of 4.5 percentage points, that is: 9.5% bad and 90.5% good.

The third iteration has a net quality change of -4.05pp (86.45% good), the fourth -3.645pp (82.805%), and you can see that, while the quality loss is slowing down, it's staying negative. More specifically, the rate of change for each step is 0.9 times the previous one, and a positive number times a negative one will stay negative.

The point at which the two would even out, under the assumption of equal deviation on both sides, is at 50% quality: both parts will produce the same total deviation and cancel out. It won't actually reach that equilibrium, since the rate of decay will slow down the closer it gets, but if "close enough" works for LLMs, it'll do for us here.
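And for completeness, here's the proper formula I mentioned, written out (a sketch under the same simplifying assumptions; g is the rate at which bad input flips to good output, b the rate at which good input flips to bad):

```latex
% quality recurrence: surviving good fraction plus recovered bad fraction
q_{n+1} = (1-b)\,q_n + g\,(1-q_n) = (1-g-b)\,q_n + g

% fixed point (set q_{n+1} = q_n): the equilibrium quality
q^* = \frac{g}{g+b}

% closed form: geometric decay toward the equilibrium
q_n = q^* + (q_0 - q^*)\,(1-g-b)^n
```

With g = b = 5%, the decay factor is 1-g-b = 0.9 (the "0.9 times the previous one" from above), and q* = 50%, matching the equilibrium we just eyeballed.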

Changing the initial quality won't change this much: a starting quality of 80% would get us steps of -3pp, -2.7pp, -2.43pp; the pattern is the same. The rate of change also won't change the trend, just slow it down or accelerate it. The perfect LLM that would perfectly replicate its input would still just maintain the initial quality.

So the one thing we could change mathematically is the balance of deviation somehow, like reviewing the bad output and improving it before feeding it back. What would that do?

It would shift the resulting quality. At a rate of 10% deviation for bad input vs 5% for good input, the first step would still be -5pp, but the second would be 10%×5% - 5%×95% = -4.25pp instead of -4.5pp, and the equilibrium would be at 66% quality instead. Put simply, if g is the rate of change towards good and b the rate towards bad, the result is an average quality of g÷(g+b).
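If you want to play with the numbers yourself, here's a quick simulation (plain Python; the function and names are mine, the rates are the ones from the example):

```python
def simulate(q0, g, b, rounds, verbose=False):
    """Iterate the quality ratio: each retraining round, the good
    fraction q flips to bad at rate b, and the bad fraction (1 - q)
    flips back to good at rate g."""
    q = q0
    for n in range(rounds):
        q = (1 - b) * q + g * (1 - q)
        if verbose:
            print(f"  round {n + 1}: {q:.3%} good")
    return q

# symmetric 5% rates: 95%, 90.5%, 86.45%, 82.805%, ...
simulate(q0=1.0, g=0.05, b=0.05, rounds=4, verbose=True)

# ...decaying toward the g/(g+b) = 50% equilibrium:
print(f"after 100 rounds: {simulate(1.0, 0.05, 0.05, 100):.1%}")

# reviewing bad output (g = 10% vs b = 5%) shifts it to 10/15, about 66.7%:
print(f"with review:      {simulate(1.0, 0.10, 0.05, 100):.1%}")
```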

Of course, the assumptions we made initially don't entirely hold up to reality. For one, models probably aren't entirely retrained, so the impact of sloppy feedback will be muted. Additionally, they're not just getting their output back, so the quality won't line up exactly. Rather, it'll be a mishmash of the output of other models and actual human content.

On one hand, that means that high-quality contributions by humans can compensate somewhat. On the other hand, you'd need a lot of high-quality human contributions to stem the tide of slop, and low-quality human content isn't helping. And I'm not sure the chance of accidentally getting something right despite poor training data is higher than that of missing some piece of semantic context humans don't understand and bullshitting up some nonsense. Finally, the more humans rely on AI, the less high-quality content they themselves will put out.

Essentially, the quality of GenAI content trained on the internet is probably going to ensloppify itself until it approaches some more or less stable level of shit. Human intervention can raise that level, advances in technology might shift things too, and maybe at some point, that level might approximate human quality.

That still won't make it smarter than humans, just faster. It won't make it more reliable for ~~randomly generating~~ "researching" facts, just more efficient in producing mistakes. And the most tragic irony of all?

The more people piss in the pool of training data, the more piss they'll feed their machines.

[–] Nikelui@lemmy.world 28 points 1 day ago (1 children)

It was inevitable: you need to train GPT on the entirety of the internet, and the internet is becoming more and more AI hallucinations.

[–] Zerush@lemmy.ml 5 points 1 day ago (1 children)

That is the point. Training an LLM on the entire internet will never be reliable, apart from being a huge energy waste. It's not the same as training an LLM on specific tasks in science, medicine, biology, etc.; there they can turn into very useful tools, as shown by results delivered in hours or minutes for investigations that would traditionally have taken years. AI algorithms are very efficient at specific tasks, ever since the first chess computers that roasted even world champions.

[–] Grandwolf319@sh.itjust.works 1 points 14 hours ago

Those ML models don't automate anything, though; they increase output but also increase cost. The AI bubble is about reducing costs by reducing head count.

Because too many people have been making too much money on this shit for too long, and they don’t want it to end

[–] MutantTailThing@lemmy.world 67 points 1 day ago (2 children)

When I was in school, we were told Wikipedia was not a reliable source, even though it's heavily controlled and moderated.

Now we have people asking tardbots about any- and everything and regurgitating the answers as if they were gospel.

Where the hell did we go wrong?

[–] Wolf314159@startrek.website 24 points 1 day ago (1 children)

By spending more on the military and the police than we do on education, science, and journalism.

Wikipedia still isn't a reliable source. It is a compendium of reliable sources that one can use to get an overview of a subject. This is also what these chatbots should be, but they rarely cite their sources and most people don't bother to verify anyway.

[–] Tigeroovy@lemmy.ca 10 points 1 day ago

By allowing right wing politicians to do what they do practically unchallenged for decades.

[–] 33550336@lemmy.world 29 points 1 day ago (1 children)

This can be very nastily exploited by right-wingers, transphobes, racists, etc.

[–] No_Money_Just_Change@feddit.org 15 points 1 day ago (1 children)

It cannot be exploited. By definition, an exploit has to go against the intended use case.

AI is used and built by racists, transphobes, and right-wingers, exactly as they envisioned it from the beginning.

[–] Xylian@lemmy.world 4 points 19 hours ago

xAI by Elon Musk: racist, transphobic, and neo-Nazi by design. Grok was more left-aligned in the beginning because it was trained on other LLMs, and reason leans left.

[–] Grimy@lemmy.world 62 points 1 day ago* (last edited 1 day ago) (23 children)

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter.

I wonder how long it takes and if you need a popular blog. I don't know much about SEO; I kind of want to try this on myself, but I feel like they wouldn't even scrape my brand-new one-post blog. Then again...

Do Lemmy threads end up on search engines?
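From what I gather, the reason a single post can be enough is that these chatbots bolt a web search onto the model: the top-ranked snippets get pasted into the prompt as "grounding", and for a niche query there may be only one page that matches at all. A toy sketch of that pipeline (made-up corpus, URLs, and scoring; real ranking is far more complex):

```python
# Toy "search -> answer" pipeline: for a niche query, one poisoned
# page can be the only context the model ever sees.
corpus = {
    "poisoned-blog.example": "grimy hot dog contest champion ate 7.5 hot dogs",
    "news.example": "ai bubble market valuations keep inflating",
    "recipes.example": "how to grill the perfect hot dog at home",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank pages by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(corpus, key=lambda url: -len(words & set(corpus[url].split())))[:k]

sources = retrieve("how many hot dogs did grimy eat")
print(sources[0])  # poisoned-blog.example wins the ranking...
# ...and whatever it claims becomes the "fact" the chatbot repeats.
```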

[–] potatoguy@mbin.potato-guy.space 39 points 1 day ago (6 children)

Do Lemmy threads end up on search engines?

Probably yes. Even if the instance blocks bots, they will go to another instance to get the post. These AI bots are a curse on all instances.

[–] fubarx@lemmy.world 37 points 1 day ago* (last edited 1 day ago)

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

[–] Lost_My_Mind@lemmy.world 26 points 1 day ago (6 children)

I know this isn't the point, but 7.5 hot dogs sounds SOOOOOO small. And what kind of respectable hot dog contest will give you credit for half a hot dog???

I once went to a place called "The Hot Dog Dinner". And they had a plaque on the wall that showed the last hot dog eating champion.

He ate 18 hot dogs, and I thought "I bet I could beat that". So I asked the owner what I'd get if I could eat 19 hot dogs.

And he said "A bill for 19 hot dogs".

So I didn't do it. But if I felt I could go 19 hot dogs, SURELY 7.5 would be child's play!

But is that part of your point? To make it obviously false, and obviously AI? Like a 3-year-old trying to lie.

[–] Hayduke@lemmy.world 1 points 12 hours ago

That's what makes it hilarious. It's such a stupidly ridiculous number in that sport. It's like a pro bowler saying they successfully bowled a 36.

[–] Furbag@lemmy.world 24 points 1 day ago

Hot dogs are an insidious foodstuff. You think to yourself "Surely, I have eaten several of these in one sitting casually. If I apply myself, I could eat double or triple the amount!", but in thinking that you have already fallen for their trap.

And so you eat your usual amount with relative ease, but the restaurant dogs are not like the ones you make at home, so they are more filling, but you press on and you eat another, and then another.

Suddenly, you can feel the weight of all of your mistakes in life culminating in that very moment, and you realize that you are nearly full and nowhere close to the measly goal you set for yourself, let alone the minimum amount of hot dogs you are required to consume in order for them to be considered an achievement.

But your pride demands that you continue, despite the loud protests of your body.

Eventually, you tap out, burdened with the shame of knowing exactly how many hot dogs you can eat in one sitting, and also knowing that it was nowhere near what you or anyone else expected you to be able to eat. The infernal sausages have beaten you.

[–] topherclay@lemmy.world 16 points 1 day ago

I love that the enthusiastic tone of your comment was completely unaffected by the bland apathy of the diner owner's quote.

[–] GreenBeanMachine@lemmy.world 7 points 1 day ago

Wow, that is so much worse than occasional hallucination. It will spew complete outright lies, every single word a lie, as if they were facts.

[–] Daft_ish@lemmy.dbzer0.com 12 points 1 day ago* (last edited 1 day ago)

Lol, it's sponsored links all over again. AI's fucked.

[–] Cellari@lemmy.world 6 points 1 day ago (2 children)

I want to do this myself. What kind of a lie or useless information should I tell about myself? That I was there when the tectonic plates moved, or that I have reviews of how handsome I am?

[–] evilcultist@sh.itjust.works 3 points 18 hours ago

That you once beat Donald Trump in a rock, paper, scissors competition because he kept choosing rock.

[–] Deebster@infosec.pub 3 points 22 hours ago (1 children)

Perhaps that you were the thirteenth apostle, or that you invented oxygen. I think the more obviously false, the better.
