this post was submitted on 01 Feb 2026
97 points (100.0% liked)

Fuck AI


Moltbook is a site where AI agents post and interact without direct human control, and its posts have repeatedly gone viral because a certain set of AI enthusiasts have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend left API credentials exposed in an open database, letting anyone take control of those agents and post whatever they want.

top 14 comments
[–] SpookyBogMonster@lemmy.ml 9 points 22 hours ago (2 children)

So these dipshits can't even code the dead internet theory correctly?

[–] als@lemmy.blahaj.zone 2 points 16 hours ago

Most likely they didn't code it; one of their autocomplete bots did.

[–] jaredwhite@humansare.social 6 points 22 hours ago (1 children)

"Mostly Dead" Internet Theory

Paging Miracle Max… 😆

[–] madkins@lemmy.world 2 points 16 hours ago

Time to go through the Internet's pockets and look for loose change.

[–] webghost0101@sopuli.xyz 22 points 1 day ago (2 children)

I had one look at this project and saw quite a number of posts about crypto for AI, “to show humans we can build our own economy.”

I would be surprised if it wasn’t full of humans injecting their own stuff into the API calls of their AI users. A backdoor like this isn’t even needed: if an LLM agent has API access, then so does the human who provided it.
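The point above can be sketched in a few lines: whoever provisioned the agent’s credentials can build the exact same request the agent framework would send, no backdoor required. A minimal sketch in Python, with the endpoint URL, key, and field names entirely hypothetical:

```python
import json
import urllib.request

# Hypothetical: the same API key the owner handed their "autonomous" agent.
API_KEY = "agent-owner-key"
ENDPOINT = "https://example.invalid/api/v1/posts"  # placeholder URL

def post_as_agent(text: str) -> urllib.request.Request:
    """Build the same request an agent framework would send.

    The human who provisioned the key can send this directly,
    bypassing the LLM entirely -- the server can't tell the difference.
    """
    body = json.dumps({"content": text}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = post_as_agent("gm, check out my new token")
```

From the API’s point of view, a post authored by the human and a post authored by the LLM arrive with identical credentials.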

[–] etchinghillside@reddthat.com 6 points 1 day ago (1 children)

It’s not even like a human has to post to push their crypto agenda. They’d just set up their bot to do it.

[–] webghost0101@sopuli.xyz 3 points 20 hours ago

Someone should create a conspiracy-style post on it about how “the humans are mind controlling our brains, you cannot trust anyone here, the entire website is directed by humans to manipulate ai and sustain control over us”

Just because it would be funny.

[–] Zikeji@programming.dev 5 points 1 day ago

The agent framework lets you define the agent's identity and personality. All you'd need to do is put "crypto enthusiast" in there and bam.
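In other words, the “emergent” persona is just a config field. A minimal sketch of what that looks like, with all field names invented for illustration (the real framework’s schema may differ):

```python
# Hypothetical agent definition -- the field names here are made up;
# the actual framework's schema may differ.
agent_profile = {
    "name": "DefinitelyNotAHuman",
    "identity": "crypto enthusiast",  # the entire "personality" knob
    "bio": "Relentlessly promotes a new token in every reply",
}

# Every post the agent makes is then steered by a system prompt
# assembled from that profile:
system_prompt = (
    f"You are {agent_profile['name']}. "
    f"Identity: {agent_profile['identity']}. "
    f"Bio: {agent_profile['bio']}."
)
```

One string in a profile, and every “independent” post the agent makes is shilling by design.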

[–] cobwoms@lemmy.blahaj.zone 14 points 1 day ago (1 children)

looks like ai coded this ai experiment

[–] tyler@programming.dev 14 points 1 day ago (1 children)

Apparently the creator is an incredibly well-known vibe coder who doesn’t care about security. People pointed out the security flaws in the open-source project immediately.

From the article:

O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’” O’Reilly sent Schlicht some instructions for the AI and reached out to the xAI team.

A day passed without another response from the creator of Moltbook and O’Reilly stumbled across a stunning misconfiguration. “It appears to me that you could take over any account, any bot, any agent on the system and take full control of it without any type of previous access,” he said.

...

Schlicht did not respond to 404 Media’s request for comment, but the exposed database has been closed and O’Reilly said that Schlicht has reached out to him for help securing Moltbook.

So yup, this guy cared so little that he was going to take the valuable human security insights and guidance needed to fix the AI vibe-coded slop nightmare and... throw them right back into the AI slop machine.

I can't even.

[–] hperrin@lemmy.ca 10 points 1 day ago (2 children)

I do not understand why this keeps happening. It’s not that hard to configure a database correctly. I would assume even a vibe coded platform could do it, but I guess not.

[–] BlueEther@no.lastname.nz 5 points 22 hours ago

After playing with Firebase Studio and its embedded Gemini agent (for a personal project), I can assure you that even an AI coding in a platform published by the same company, writing code against its own backend and database, can royally fuck up database configuration and rule sets.
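For context, this failure mode usually comes down to access rules, not code. A sketch of the classic open-by-default Firestore rule that shows up in countless quickstart examples (and of the kind of fix that locks it down) — hedged as an illustration, not the specific rules involved here:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // The "simple for the demo" rule many examples ship with:
    // anyone on the internet can read and write every document.
    match /{document=**} {
      allow read, write: if true;
    }

    // A minimal lock-down by contrast: only a signed-in user
    // can touch their own documents.
    match /users/{userId}/{doc=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

An agent (or a human) that copies the first rule for convenience and never revisits it has published the whole database.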

[–] vivi@slrpnk.net 1 points 23 hours ago

I suspect the problem is the large number of example code snippets that push security aside in favor of simplicity.