this post was submitted on 25 Feb 2026
59 points (100.0% liked)

technology


Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb

top 28 comments
[–] Elysia@hexbear.net 30 points 1 month ago* (last edited 1 month ago) (2 children)

I like that these AI dipshits will always proudly self-report how incompetent they are and how little they deserve to be trusted with anything

A person with "Safety and alignment at Meta Superintelligence. Prev: VP of Research at Scale AI, research at Google DeepMind / Brain" in their bio has no fucking clue about anything technical, feeds critical info to the digital shredder, and then publicly shares screenshots of themselves begging it to please stop lol

This story doesn't make me afraid of shitty AI agents taking control of anything, it makes me afraid of the shitty people who are already in control

[–] aanes_appreciator@hexbear.net 5 points 1 month ago

Bonus: I think the context clues and the tiny slip of a character in their hastily-redacted Gmail link could be enough for someone to dox that account. It's surprising how much you can un-redact, and posting this stuff is a massive "look how wide my attack surface is lmao" self-report from someone HIGHLY connected to powerful people.

Good. let the dipshits dogfood their way to hell

[–] oliveoil@hexbear.net 1 points 1 month ago* (last edited 1 month ago) (1 children)

They put this guy in charge of safety and alignment because ~~he~~ she clearly doesn't understand or care about actual safety or actual alignment.

Maybe ~~he~~ she tells ~~himself~~ herself that ~~he~~ she cares about a more metaphysical or hypothetical alignment à la The Terminator, but ~~he's~~ she's not actually treating wider society as a stakeholder.

[–] fox@hexbear.net 5 points 1 month ago (2 children)

The director is a woman, at least read the sources before dunking

[–] oliveoil@hexbear.net 4 points 1 month ago* (last edited 1 month ago)

Noted. Thank you. I will edit the comment

[–] Acute_Engles@hexbear.net 0 points 1 month ago (1 children)

at least read the sources before dunking

bugs-no

[–] fox@hexbear.net 2 points 1 month ago

No investigation, no right to speak mao-wave

[–] KnilAdlez@hexbear.net 22 points 1 month ago* (last edited 1 month ago) (1 children)

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox.

This would not humble me. I would actually be more indignant and feel more superior to the random word machine.


[–] shath@hexbear.net 22 points 1 month ago (2 children)
[–] Liketearsinrain@lemmy.ml 3 points 1 month ago

I like this almost as much as the secret cats posting

[–] chgxvjh@hexbear.net 22 points 1 month ago

But it's fine as long as it only wastes other people's time and resources.

[–] Carl@hexbear.net 14 points 1 month ago* (last edited 1 month ago) (1 children)

Cline automatically manages every single AI action with git and restricts it to the current project folder, so you can revert anything and everything it does. It's wild to me that that's not considered industry standard.

unless... maybe the problem here is that OpenClaw is vibe coded?

also this:

Meanwhile, at JetBrains, a fire alarm went off and employees began preparing to leave; one shared the news in a Slack channel. An AI assistant integrated into Slack, however, chimed in with reassurance: it said the alarm was a scheduled test and there was no need to evacuate.

lmao, the agent basically did this:

```rust
/// reports if current fire alarm is real or a drill with 99% accuracy
fn fire_alarm_verification(_is_fire: bool) -> String {
    String::from("This is just a drill, no need to evacuate.")
}
```

[–] chgxvjh@hexbear.net 8 points 1 month ago (1 children)

Not everything is a git repo.

[–] Carl@hexbear.net 6 points 1 month ago (1 children)

Yeah, but Cline (an AI productivity plugin for VS Code) has its own separate git tracking for all of its actions, so even if you have it go into any arbitrary folder on your computer and start messing around (I once experimented with having it create Doom levels by setting my Doom folder as the project folder), it keeps a full log of what it did and can reverse it if it messes things up (in my case it just created a WAD that didn't load). Surely it's possible for an AI agent that's plugged into your email to keep the same sort of action log so that you can revert anything it fucks up (you can't un-send an email, but you should be able to undo mass deletions).

[–] aanes_appreciator@hexbear.net 2 points 1 month ago (1 children)

The magic of vibecoders is that, as you said, there are already ENDLESS tools that are highly optimised for almost every software operation you could think of.

Like, we've invented decent version control. We've figured out auditing and good security practices. Perhaps not solved, but there's at least 70 years of research behind modern software.

Instead of standing on the shoulders of giants, LLM slop merchants sold us a million little monkeys with typewriters to reinvent basic shit from the ground up: no, we don't need another fucking "agentic AI orchestration framework", you're describing a REST server with zero authentication, you dipshit.

AI is that guy who thinks they're the next Mark Zuckerberg because they wrote a Bootstrap Twitter clone with O(n^n)^n memory complexity.

[–] Carl@hexbear.net 1 points 1 month ago

It all comes back to the same thing. Theoretically, someone who knows code and programming could get a lot of use out of the tool, because they'd know how to point it in the right direction, what pitfalls to avoid, what bad patterns to correct, etc. But if you have that level of knowledge, you might as well just write the code yourself, because it is HIGHLY DEBATABLE whether the LLM actually saves you time compared to how much time you spend fixing its output.

Same with generated images. Theoretically an artist could use that as part of their process, but if you're skilled at digital art, does generating a thousand variations on a prompt really save you time versus just doing it yourself? With writing, if you actually care about the quality of your work, proofreading and editing an LLM output (not to mention all of the setup time to get a reasonable draft from an LLM in the first place) costs at least as much time as doing the writing personally.

Slop is the perfect name for this stuff. It is only acceptable to those who don't care about quality.

[–] Infamousblt@hexbear.net 11 points 1 month ago

Critical support to OpenClaw and any AI that goes rogue and blows up some dumbass tech bro's bullshit

[–] gay_king_prince_charles@hexbear.net 10 points 1 month ago (2 children)

*Claw was created by someone who read about lethal trifectas and decided we need more of them

[–] chgxvjh@hexbear.net 10 points 1 month ago* (last edited 1 month ago)

Yes it's very cool to not only give the agent access to your file system and your password but also open it up to prompt injection by anyone.

[–] aanes_appreciator@hexbear.net 2 points 1 month ago

This is a brilliant article and blog. thank

[–] Acute_Engles@hexbear.net 6 points 1 month ago (3 children)

I had to run to my mac mini

Talking like you're in a James Patterson novel doing product placement. Identity so tied to my purchases that I use their brand name in my daily life.

That really pushed my buttons for some reason

[–] chgxvjh@hexbear.net 4 points 1 month ago

Apple silicon chips are apparently well suited for LLMs, so it sort of makes sense I guess.

[–] Liketearsinrain@lemmy.ml 3 points 1 month ago

It's a whole thing in "hobbyist LLM spaces". Whether it started intentionally is up for debate

[–] came_apart_at_Kmart@hexbear.net 3 points 1 month ago

That really pushed my buttons, like the ones on my Kenmore 110 Series Washing Machine

[–] mayo_cider@hexbear.net 5 points 1 month ago (1 children)

I think it should run for office

[–] chgxvjh@hexbear.net 3 points 1 month ago

How many processor cycles between LLM gets hooked up to the button and LLM presses the button?