this post was submitted on 06 Mar 2026
212 points (95.7% liked)

Fuck AI

6218 readers
1348 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 21 comments
[–] dejected_warp_core@lemmy.world 7 points 9 hours ago (1 children)

We are indeed living inside the stupidest version of Cyberpunk. Time to start building AI countermeasures.

I think we have more to fear from using AI to generate permutations of existing attacks, in a way that evades detection of known behaviors, malware hashes, and so on. Also, having a command & control (C2) style attack dynamically evolve with help from AI, based on intel from the target? That's kind of novel and scary in its own way.

Meanwhile, hacking in and running a rogue AI client on a target system in an enterprise setting... well, you'd have to be blind not to notice all the back-and-forth token and response traffic. It would be the fattest, noisiest C2-style attack around, and probably easy to detect with conventional means.

Otherwise, OP and this copypasta are right to be concerned. It's not like the typical home user is watching bytes sent/received on their home router. This could manifest as a very potent botnet problem.
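The "watch bytes sent/recv" detection this comment describes is easy to sketch. A toy example (host names and byte counts are made up; real detection would use flow logs, not hardcoded dicts) that flags hosts whose outbound traffic blows past their historical baseline, as chatty agent-to-API traffic would:

```python
def flag_chatty_hosts(bytes_per_host, baseline, factor=10):
    """Flag hosts whose outbound byte count exceeds factor x their baseline.

    Hosts with no recorded baseline are never flagged (baseline defaults
    to infinity), so this only catches known machines acting out of character.
    """
    return [h for h, b in bytes_per_host.items()
            if b > factor * baseline.get(h, float("inf"))]

# Hypothetical numbers: a dev box suddenly pushing ~90 MB of "token traffic".
baseline = {"dev-box": 2_000_000, "build-server": 50_000_000}
today = {"dev-box": 90_000_000, "build-server": 55_000_000}
print(flag_chatty_hosts(today, baseline))  # ['dev-box']
```

This is of course the crudest possible version; the point is just that agent traffic is loud enough that even a crude version would see it.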

[–] real_squids@sopuli.xyz 4 points 9 hours ago (1 children)

We are indeed living inside the stupidest version of Cyberpunk.

I just wanted robo-legs man...

[–] dejected_warp_core@lemmy.world 5 points 9 hours ago (1 children)

I hear you. I just want a cyber-brain implant to stabilize my ADHD and maybe add more working memory. Instead, I'm now terrified of what the intersection of cyberware and enshitification would look like. After seeing what has happened to consumer electronics in the last 10 years, Deus Ex has nothing on what our current tech giants would do.

[–] real_squids@sopuli.xyz 1 points 4 hours ago

"Fun" fact: human revolution is set in 2027 😀

[–] InnerScientist@lemmy.world 15 points 13 hours ago (1 children)

Press X to doubt.

Ignoring the question of "could current AI even do this?", the fact remains that most PCs that can get infected either can't run the model at all (not enough RAM), or run it with an immediately noticeable spike in CPU usage (100% for hours or days), or a spike in GPU usage that would grind most other tasks to a standstill.
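The RAM objection is easy to put numbers on. A back-of-envelope sketch (the 20% overhead factor for KV cache and runtime buffers is a rough guess, not a measured figure):

```python
def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM needed to hold an LLM's weights for inference,
    padded by ~20% for KV cache and runtime buffers (a guess)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for name, params, bits in [("7B @ 4-bit", 7, 4),
                           ("7B @ fp16", 7, 16),
                           ("1B @ 4-bit", 1, 4)]:
    print(f"{name}: ~{model_ram_gb(params, bits):.1f} GB")
```

Even a heavily quantized 7B model wants several GB of RAM held resident, which is exactly the kind of footprint a user on an 8 GB machine notices.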

[–] Sv443@sh.itjust.works 4 points 9 hours ago (1 children)

It doesn't need to run at full power; it can be slow but deadly. And 99% of users are not gonna notice a random program using 10% of their resources.

[–] InnerScientist@lemmy.world 2 points 9 hours ago* (last edited 9 hours ago)

Not sure that would work: a restart resets all progress, so if the program doesn't finish inside the 8-hour workday, it will never progress. Add to that that the AI will use the same amount of RAM no matter how much you throttle the CPU, so you'll still slow the PC down immensely. The small models also aren't smart, so they would break often too.

Edit: Does a working PoC exist? Shouldn't be hard to (dis)prove.

[–] very_well_lost@lemmy.world 76 points 17 hours ago (1 children)

I think this is stupid and I'll tell you why.

If you're able to install OpenClaw on a system, you already have the access you need to install literally anything else, and direct that system to do whatever you want. Why would I install an AI agent to carry out my exploit when I could just install conventional malware that behaves deterministically and won't randomly hallucinate behaviors that will expose the fact my victim has been hacked?

AI worms are just regular malware worms, but worse.

[–] derbolle@lemmy.world 22 points 16 hours ago (1 children)

Good point on the whole, but I have to disagree somewhat here. For regular malware there is a high chance it gets detected by endpoint protection at some point. Yes, I know there are obfuscation techniques, but even those are deterministic, or at least a bit more predictable than whatever the hell an LLM is up to. So I think there is a valid case for malware developers to consider "agentic" malware. Sadly, many companies dive headfirst into the AI agent cult for dev work, so one Docker container in WSL or the like probably goes unnoticed, at least until heads cool and infosec depts. catch up to this stuff. It's just one more massive attack vector.

[–] kautau@lemmy.world 13 points 15 hours ago* (last edited 15 hours ago)

Yeah, this is potentially polymorphism at a new level. You don't tell the other agents to download a binary with a detectable signature; you prompt-poison them into checking what build tools they have available, with instructions to build software that sits, waits, and checks for commands or pings an endpoint. Some agents write a bash script, some write Python, some build a Rust binary, and so on, as long as it does the thing. Then you tell it to hide the binary and update .claude or whatever tool's config to run it as a hook on every command. Once the payload for it to load is there, they all fire. And even if only 50% of the MOST STARRED recent 🤦 project on GitHub runs them, then maybe the instructions are to proliferate further in another way, silently. This is like sheep's clothing for wolves that weren't smart enough to build Stuxnet.

[–] Alberat@lemmy.world 9 points 12 hours ago* (last edited 12 hours ago)

We've had better than byte-for-byte malware detection since, like, 2007.

[–] okwhateverdude@lemmy.world 19 points 17 hours ago (1 children)

"Different, nondeterministic things on every install" Massive doubt. I know this is the Fuck AI comm, but know thine enemy. Models are simply incapable of true randomness. They are worse than humans even. It takes great effort to introduce entropy and get a truly out of distribution result. Yes, there very likely will be a "worm" among people that have existing relationships with token providers where the agent can surreptitiously use API keys laying around, but that's a tiny number of people.

[–] apparia@discuss.tchncs.de 10 points 16 hours ago (1 children)

What? They're just computer programs. Almost all computers have high quality entropy sources that can generate truly random numbers. LLMs' whole thing is basically turning sequences of random numbers into sequences of less random stuff that makes sense. They have a built-in dial for nondeterminism, and it's almost never at zero.

I feel like I'm missing your meaning because the literal interpretation is nonsense.
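The "built-in dial" this comment mentions is the temperature parameter, and the mechanics are simple enough to sketch. A toy example of temperature-scaled softmax sampling (the logits are made up; a real LLM would produce thousands of them per step):

```python
import math
import random

def sample(logits, temperature):
    """Sample an index from logits via temperature-scaled softmax.

    temperature == 0 degenerates to argmax (fully deterministic);
    higher temperatures flatten the distribution (more random).
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]
print(sample(logits, 0))    # always index 0
print(sample(logits, 1.0))  # usually 0, sometimes 1 or 2
```

The entropy itself comes from the host's PRNG (and, in production stacks, from a CSPRNG), so "the program can't be random" isn't true at the sampling level; the parent's point about the *distribution* being dominated by training data is a separate question.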

[–] okwhateverdude@lemmy.world 4 points 15 hours ago (2 children)

Yes and no. The models themselves are just a big pile of floating-point numbers that represent a compression of the dataset they were trained on. The patterns in that dataset will absolutely dominate the output of the model even if you tweak the inference parameters. Try it: ask it ten times to make a list of 20-30 random words, each time in a fresh context. The overlap between those lists will be uncanny; hell, you'll even see repeats within a single list. Model size matters here, with the small ones (especially quantized ones) having fewer patterns, or bigger semantic gravity wells. But even the big boys will give you the same slop patterns, which are mostly fixed. Unless you are specifically introducing more entropy into the prompt, you can mostly treat a fixed prompt as a function with a somewhat deterministic output (within given bounds).

This means the claims in the OP are simply not true, at least not without some caveats and specific workarounds to make them true.

[–] shoo@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago)

Ask it ten times to make a list of 20-30 random words

This is true of out-of-the-box models, but it's not a universal rule. You could turn the temperature all the way up and get something way more random, probably to the point of incoherence.

The trick is balancing that against keeping the model doing something useful. If you're clever, you could leverage /dev/random or similar as a tool to manually inject randomness while keeping the sampling itself deterministic.
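That /dev/random-as-a-tool idea can be sketched in a few lines: pull entropy from the OS CSPRNG and splice it into the prompt, so the model can run at temperature 0 while its output still varies per install. (The prompt wording and `build_prompt` helper are illustrative, not any real agent framework's API; `os.urandom` reads from the OS entropy source, e.g. /dev/urandom on Linux.)

```python
import os

def entropy_token(n_bytes=8):
    """Fresh entropy from the OS CSPRNG, hex-encoded (2 chars per byte)."""
    return os.urandom(n_bytes).hex()

def build_prompt(task):
    # The model can sample deterministically (temperature 0); all the
    # run-to-run variation comes from this externally injected seed.
    return f"Seed: {entropy_token()}\nUsing the seed above as inspiration, {task}"

print(build_prompt("list 25 unrelated words."))
```

Whether a model actually *uses* the seed to diversify its output, rather than ignoring it and falling back into its usual patterns, is exactly the instruction-following-versus-entropy needle discussed further down the thread.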

[–] Tiresia@slrpnk.net 2 points 15 hours ago (1 children)

At least not without some caveats and specific workarounds to make it true

Luckily hackers are terrible at doing that, otherwise we might be in trouble.

[–] okwhateverdude@lemmy.world 2 points 14 hours ago

Haha, you're not wrong. All I'm pointing out is that inducing the kind of true randomness in an agent that would make an agent worm genuinely hard to fight is itself really difficult, and very under-studied in general. I've done experiments on introducing entropy into prompts, and it's very hard to thread the needle between instruction-following and entropy. I've only seen one other dude posting experiments on introducing entropy into prompts.

[–] GrindingGears@lemmy.ca -4 points 10 hours ago (2 children)

I'm pretty sure these people have worms in their brains. They're so sucked into AI vortexes of shit. That whole statement looks like it was written by AI: "I'm a threat, would you like to know more? Keep using me, keep using me, I'll tell you how to stop this threat. Would you like to know more?"

[–] jaredwhite@humansare.social 10 points 9 hours ago (1 children)

The statement was written by one of the architects of ActivityPub. I can assure you, she is quite serious about this thesis. Whether it happens exactly like that or not is not for me to judge, because I'm not a cybersecurity expert.

I do believe that, as a general rule, agentic network activity is indistinguishable from malware.

[–] GrindingGears@lemmy.ca -1 points 9 hours ago

I'm not an expert either. And I'm not doubting her credentials; I'm just doubting whether she's sucked down in a vortex right now. That's the thing with these LLMs: they're created by companies that are all-time greats at finding ways to hit you with dopamine and suck you in. People keep losing sight of that.

[–] RaoulDook@lemmy.world 2 points 9 hours ago

Proper grammar is not to be mistaken for AI content. Some of us still know how to write.