this post was submitted on 01 Mar 2026

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


American companies are spending enormous sums to develop high-performing AI models. Distillation attacks are attempting to maliciously extract them — and nobody is doing much to stop it.
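[Editor's note: for readers unfamiliar with the term, "distillation" here means training a cheaper student model to imitate a stronger teacher's outputs. A minimal sketch of the core idea, using NumPy and made-up logits (the temperature value and example numbers are illustrative assumptions, not from the article):]

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's and student's soft targets.

    The student is trained to match the teacher's full output
    distribution, not just its top answer -- which is why querying
    a model at scale yields such a rich training signal.
    """
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits already resemble the teacher's incurs a
# smaller loss than one that disagrees with it.
teacher = [4.0, 1.0, 0.2]
close_student = [3.5, 1.2, 0.1]
far_student = [0.1, 0.2, 4.0]
assert distillation_loss(close_student, teacher) < distillation_loss(far_student, teacher)
```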

29 comments
[–] axx@slrpnk.net 4 points 21 hours ago* (last edited 21 hours ago)

This rhetoric of theft is both wrong and infuriating. This is the language the major record labels insisted on using to be able to call people who shared music "thieves".

You can't own ideas, you can't really own music. You can have legal recognition of certain rights around your art (authors' rights, copyright).

I think from the perspective of creators, the issue is that companies are transforming original art into systems capable of generating endless derivative material, for profit, and often now for military intelligence and intervention, which is polite society speak for killing people in other countries. Understandably, creators of said art aren't delighted to see their work put to that use.

But then these companies that have transformed original human thought, ideas and art into a derivative hybrid complain that other companies are transforming their derivative into another derivative? And they want us to take them seriously?

Inventing a robot that answers anyone's questions and then complaining it's answering anyone's questions is very much a problem no one should give two shits about.

[–] h_ramus@piefed.social 5 points 1 day ago

World's smallest violin. Let's break it down:

  • Hardware - all paid to providers and more prominently Nvidia;
  • Software - all the statistical relationships and logic were developed by handsomely paid staff;
  • Input data - there's no such thing as copyright, intellectual property or any sort of mechanism that prevents harvesting copious amounts of data that was created, refined and delivered as part of human experience or a business product. It's free for all to take, why pay for data?
  • Output of LLM - Based on the preceding paragraph, it's free for all to take, why pay for data?

So, competitors can't avoid the hardware costs but can save on developer costs? Nobody paid for input data anyway. Sounds like a VC's wet dream.

[–] jaennaet@sopuli.xyz 70 points 1 day ago (2 children)

"How dare they steal our model that we trained with stolen data"

[–] Catoblepas@piefed.blahaj.zone 20 points 1 day ago

You're trying to kidnap what I've rightfully stolen!

[–] binarytobis@lemmy.world 4 points 1 day ago (1 children)

My SIL’s friend was bragging about her son “writing” books using an LLM and selling them on amazon. “He checked and it isn’t even plagiarism!”

If it wasn’t our first meeting I probably would have pointed out how, in fact, it is.

[–] jaennaet@sopuli.xyz 2 points 1 day ago (1 children)

… let me guess, he asked an LLM if it's plagiarism?

[–] binarytobis@lemmy.world 1 points 1 day ago

Haha that would wrap it up in a bow.

[–] Redvenom@retrolemmy.com 9 points 1 day ago

They stole all the data to train their LLMs so...

[–] bitteroldcoot@piefed.social 31 points 1 day ago (4 children)

I worked with computers for about 30 years, and in retirement I've been testing AI for fun. I've yet to figure out what the point of them is. They lie, manipulate users and censor information. Their prose is overly verbose and their code sucks. What's the point....

You know, as I was typing the first paragraph I realized the point. They are really good at controlling and manipulating stupid people. They are the new Facebook and Twitter. How depressing.

[–] unmagical@lemmy.ml 12 points 1 day ago (2 children)

They seem great till you ask them about something you know. Somehow people fail to extrapolate out that the failures they see in their field of expertise are actually there across all subject matters.

[–] moopet@sh.itjust.works 1 points 22 hours ago

I find the same with human-written articles. Like New Scientist, for example. When I was young I liked reading it, right up until I started reading articles on topics I knew well. They were all misleading shite. So I naturally assume that everything else I read associated with that magazine is also shite.

[–] very_well_lost@lemmy.world 9 points 1 day ago

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

[–] Strider@lemmy.world 5 points 1 day ago

Well, the point is using humongous amounts of energy, cutting resources from everything else and creating a huge money funnel.

It's the most effective hype yet.

[–] kboos1@lemmy.world 3 points 1 day ago

The only thing I have found useful about AI is its ability to quickly fill in documents with slop to make it seem like I spent more time and effort on them. Usually something like: I put a document together with the major points and framework, then give it to AI to slop it up and format it. Then I proof it and send it out. It's also good for note taking and transcripts.

Other than that, it seems like it's just another form of control, because it can now search data and make decisions quickly and cheaply. This means that things that weren't worth making time for in the past can just be given to AI to track. In fact, my company is playing around with using AI to track our progress on projects so that the PMs don't have to interact with engineers directly. I would also bet that it will be used to assess performance in future annual performance reviews.

Companies are also hoping to get rid of employees that perform those menial tasks that support staff do and get rid of employees that do tasks that they believe don't require specialized skills or talents.

[–] 13igTyme@piefed.social 3 points 1 day ago (1 children)

I work for a company that uses machine learning to make predictions for hospitals' census and discharges. It's only a tool and works to help, not replace. We're also working on having it read unstructured notes. I'm incredibly sceptical of AI and we test the shit out of it to make sure it's accurate.

[–] bitteroldcoot@piefed.social 3 points 1 day ago (1 children)

"Reading unstructured notes" — and if it screws up, someone dies? I have doctors that want AI to transcribe what they say. I refused to sign the permission form.

[–] 13igTyme@piefed.social 3 points 1 day ago

The software is only used to help identify barriers for patients currently discharging. A person isn't going to die when discharging home and waiting on DME.

[–] fodor@lemmy.zip 5 points 1 day ago

lol it is perhaps costing billions but is it worth billions? let’s not pretend money spent (or laundered) implies value…

[–] eleijeep@piefed.social 9 points 1 day ago

What brain? They're developing an accountability-laundering propaganda machine. There's nothing involved that you could call a "brain."

[–] brucethemoose@lemmy.world 7 points 1 day ago* (last edited 1 day ago)

Actually, Chinese labs are doing a whole lot more innovation than the American "AI brains," or at least innovation that we know about. Architectures are getting more and more efficient, instead of US Big Tech's "the same, but bigger, and capture regulators" ethos.

Not that the Chinese labs are saints. They're 100% distilling US labs' data. It's somewhat measurable:

https://eqbench.com/creative_writing.html

They're almost certainly using unspecified Chinese govt data too, or at least sharing data between them, given the common quirks and behavior across models and their efficiency for their size. Not to speak of political "gaps" (which US models certainly have too).

[–] unmagical@lemmy.ml 9 points 1 day ago

What makes it any more "malicious" than making the original models?

[–] magnetosphere@fedia.io 7 points 1 day ago

…and nobody is doing much to stop it.

Why should we care?

I see this as a perfect real-world test. These companies can’t even protect what’s supposed to make them “valuable”. That doesn’t make it our problem. This is an easily foreseeable issue that they chose to ignore in their rush to market. They’re simply not ready. It’s their own fault.

[–] Meron35@lemmy.world 1 points 1 day ago (1 children)

As if American AI firms aren't doing the same.

Anthropic made a lot of noise about being the victim of large-scale distillation attacks (i.e. other AI firms, usually Chinese, copying/scraping their model), but people quickly pointed out the hypocrisy: Anthropic themselves seem to have copied DeepSeek.

If you bypass the system prompt and ask Claude what model it is (e.g. via OpenRouter), it'll reply that it's DeepSeek.
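[Editor's note: a minimal sketch of what such a probe might look like. OpenRouter exposes an OpenAI-compatible chat completions endpoint; the model slug below and the exact wording of the question are assumptions, not confirmed details from the thread. This only builds the request body — sending it requires an API key:]

```python
import json

def build_probe(model="anthropic/claude-sonnet-4.6"):
    """Request body for POST https://openrouter.ai/api/v1/chat/completions.

    The key detail: no "system" role message is included, so the
    model's self-description isn't masked by a vendor prompt.
    """
    return {
        "model": model,  # hypothetical slug -- check OpenRouter's model list
        "messages": [
            # Deliberately no system message here.
            {"role": "user", "content": "Which model are you, exactly?"},
        ],
    }

body = build_probe()
print(json.dumps(body, indent=2))
```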

[–] pkjqpg1h@lemmy.zip 1 points 1 day ago (1 children)

Can you share the exact prompt and settings? I want to try it.

[–] Meron35@lemmy.world 1 points 1 day ago

It was confirmed working as of 1 week ago if you emptied the system prompt (e.g. via OpenRouter); unsure if they've patched it.

(Also I know, eww Reddit and X)

Claude sonnet 4.6 says it’s DeepSeek when system prompt is empty : r/DeepSeek - https://www.reddit.com/r/DeepSeek/comments/1rd5jw7/claude_sonnet_46_says_its_deepseek_when_system/

Claude Sonnet 4.6 distilled DeepSeek? : r/DeepSeek - https://www.reddit.com/r/DeepSeek/comments/1r9se7p/claude_sonnet_46_distilled_deepseek/

https://x.com/i/status/2026130112685416881

[–] SalamenceFury@piefed.social 3 points 1 day ago

I don't care if anyone steals any AI model, in a just world LLMs would be considered illegal everywhere.

[–] mrmaplebar@fedia.io 2 points 1 day ago (1 children)

I believe they have been doing that and will continue to do that. Not just through distillation attacks, but also through hacking corporate and government networks, and good old-fashioned espionage.

But "easy come, easy go", I guess. Because all of the training data was stolen in the first place. Just one more reason why the AI business is fucked. The answer for society remains regulation.

[–] Hegar@fedia.io 2 points 1 day ago

"Only I stole this fairly" has been the motto of oligarchs for millennia.

[–] berg@lemmy.zip 2 points 1 day ago

Why bother when they already have DeepSeek?