Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 7 points 1 week ago* (last edited 1 week ago)

Pretty good news tbh. That means that the power demand is driven by users, and we can influence it a little bit, and not just by repeatedly training new models over and over because somebody left a new comment somewhere. https://www.youtube.com/watch?v=XKQJXJOVGE4

[–] Soyweiser@awful.systems 11 points 1 week ago (8 children)

Bonus: this also solves the halting problem.

[–] Soyweiser@awful.systems 6 points 2 weeks ago (3 children)

Revealing just how terminally online I am, but while talking about 'I Like to Watch', the pornographic 9/11 fan music video from the Church of Euthanasia (it seems I'm one of the two people who remember this), I discovered that the main woman behind it is now into AI-Doom. On the side of the paperclips. General content warnings all around (suicide, general bad taste, etc). Chris was banned from a big festival (Lowlands) in The Netherlands over the 9/11 video, after she had already been booked (we are such a weird exclave of the USA; why book her, and then drop her over a 9/11 video, in 2002?). Here is one of her conversations with ChatGPT about the Church's anti-humanist manifesto. Linked here not because I read it, but to show how AI is the idea that eats everything; I was amused that this weird blast from the past, which I think nobody recalls, is now also into AGI.

[–] Soyweiser@awful.systems 5 points 2 weeks ago

Yeah indeed, had not even thought of the time gap. And it is such bullshit misdirection, very Muskian, to pretend that this fake transparency in any way solves the problem. We don't know what the bad prompt was, nor who added it, and as shown here, this fake transparency prevents nothing. Really wish more journalists/commentators were not just free PR.

[–] Soyweiser@awful.systems 3 points 2 weeks ago

I'm reminded of the cartoon bullets from Who Framed Roger Rabbit.

[–] Soyweiser@awful.systems 10 points 2 weeks ago

LLMs cannot fail, they can only be prompted incorrectly. (To be clear, since I know there will be people who think this is good: I mean this in a derogatory way.)

[–] Soyweiser@awful.systems 7 points 2 weeks ago* (last edited 2 weeks ago)

Think this already happened, not this specific bit, but an AI-involved shooting. Especially considering we know a lot of Black people have already been falsely arrested due to facial recognition. And with the Gestapofication of the USA, that will only get worse. (Especially when the police go: no regulations on AI also gives us carte blanche, no need for extra steps.)

[–] Soyweiser@awful.systems 8 points 2 weeks ago* (last edited 1 week ago) (2 children)

Remember those comments with links in them that bots leave on dead websites? Imagine that instead of links, they set up an AI to regard certain specific behaviour, or certain people, as immoral.

Swatting via distributed hit piece.

Or if you manage to figure out that people are using an LLM for input sanitization/log reading, you could find a way to get an instruction into the logs and trigger alarms that way. (E: I'm reminded of the story from the before times, where somebody piped logging to a bash terminal and got shelled because somebody sent a bash exploit which was then logged.)

Or just send an instruction that changes how it communicates, and have the LLM call not the cops but a number controlled by hackers which pays out to them, like the stories of the A2P SMS fraud which Musk claimed was a problem on Twitter.

Sure, competent security engineering can prevent a lot of these attacks, but, you know, *points at the history of computers*.

Imagine if this system was implemented for Grok when it was doing the 'everything is white genocide' thing.

Via Davidgerard on bsky: https://arstechnica.com/security/2025/05/researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious/ lol lmao

[–] Soyweiser@awful.systems 8 points 2 weeks ago (1 children)

"whats my purpose?"

[–] Soyweiser@awful.systems 19 points 2 weeks ago

Are you trying to say here that cold readers do not actually communicate with the spirit realm? Where is your open mind?

[–] Soyweiser@awful.systems 6 points 2 weeks ago

Sociopaths

Bit important to note here, for people not familiar with the blog posts (now available as a book (in PDF form), because everything must be monetized), that 'sociopath' is meant here as a specific type of person, not a clinical sociopath per se, but a certain type of person within the context of the blog post series. So people reacting to it, beware.

[–] Soyweiser@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Think you are misreading the blog post. They did this after Grok had its white-genocide hyperfocus thing. It shows that the process around xAI's public GitHub repo (their fix (??) for Grok's hyperfocus) is bad, not that the repo started it. (There is also no reason to believe this GitHub repo is what they actually use directly (that would be pretty foolish of them, which is exactly why I could believe they do).)

view more: ‹ prev next ›