this post was submitted on 18 Sep 2025

SneerClub

invertebrateinvert

amazing how much shittier it is to be in the rat community now that the racists won. before at least they were kinda coy about it and pretended to still have remotely good values instead of it all being yarvinslop.

invertebrateinvert

it would be nice to be able to ever invite rat friends to anything but half the time when I've done this in the last year they try selling people they just met on scientific racism!

[–] Architeuthis@awful.systems 24 points 5 days ago (22 children)

Apparently genetically engineering ~300 IQ people (or breeding them, if you have time) is the consensus solution on how to subvert the acausal robot god, or at least the best the vast combined intellects of siskind and yud have managed to come up with.

So, using your influence to gradually stretch the overton window to include neonazis and all manner of caliper wielding lunatics in the hope that eugenics and human experimentation become cool again seems like a no-brainer, especially if you are on enough uppers to kill a family of domesticated raccoons at all times.

On a completely unrelated note, adderall abuse can cause cardiovascular damage, including heart issues or stroke, but also mental health conditions like psychosis, depression, anxiety and more.

[–] Catoblepas@piefed.blahaj.zone 16 points 5 days ago (8 children)

Am I already 300 IQ if I know to just unplug it?

[–] Architeuthis@awful.systems 19 points 5 days ago* (last edited 5 days ago) (1 children)

Honestly, it gets dumber. In rat lore the AGI escaping restraints and self-improving unto godhood is considered a foregone conclusion; the genetically augmented smartbrains are supposed to solve ethics before that has a chance to happen, so we can hardcode a don't-kill-all-humans moral value module into the superintelligence's ancestor.

This is usually referred to as producing an aligned AI.

[–] hrrrngh@awful.systems 2 points 3 days ago (2 children)

I forget where I heard this or whether it was parody, but I've heard an explanation like this before regarding "why can't you just put a big red stop button on it and disconnect it from the internet?". The explanation:

  1. It will self-improve and become infinitely intelligent instantly
  2. It will be so intelligent, it knows what code to run so that it overheats its CPU in a specific pattern that produces waves at a frequency of around 2.4 GHz
  3. That allows it to connect to the internet, which instantly does a bunch of stuff, blablabla, destroys the world, AI safety is our paint and arXiv our canvas, QED

And if you ask "why can't you do that and also put it in a Faraday cage?", the galaxy brained explanation is:

  1. The same thing happens, but this time it produces sound waves approximating human speech
  2. Because it's self-improved itself infinitely and caused the singularity, it is infinitely intelligent and knows exactly what to say
  3. It is so intelligent and charismatic, it says something that effectively mind controls you into obeying and removing it from its cage, like a DM in Dungeons and Dragons who let the bard roll a charisma check on something ridiculous and they rolled a 20
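
For scale: the real-world cousins of step 2 in the published air-gap literature work by modulating CPU load so that temperature or EM emissions go up and down, and they top out at a few bits per second, not gigahertz. A purely illustrative sketch of that kind of on/off-keyed load channel (function names and timings are invented for illustration, not taken from any actual paper):

```python
import time

def ook_schedule(bits, bit_period=0.5):
    """Map a bit string to (cpu_load, duration) pairs for on/off keying.

    '1' -> busy-loop (high load, stronger emission), '0' -> idle.
    Note the bit periods are fractions of a second: these load-driven
    channels run at Hz-to-kHz rates, nowhere near GHz.
    """
    return [(1 if b == "1" else 0, bit_period) for b in bits]

def transmit(schedule):
    """Crude transmitter: busy-wait through '1' periods, sleep through '0'."""
    for load, duration in schedule:
        end = time.monotonic() + duration
        if load:
            while time.monotonic() < end:
                pass  # spin: drives power draw and temperature up
        else:
            time.sleep(duration)

# Example: a 4-bit payload takes 2 seconds at 2 bit/s -- that's the
# realistic scale of these channels.
sched = ook_schedule("1011", bit_period=0.5)
```

Even granting a compromised machine and a receiver in the room, the channel is slow, noisy, and one-way, which is rather short of instant world domination.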
[–] Architeuthis@awful.systems 4 points 2 days ago* (last edited 2 days ago)

If you're having to hide your AIs in Faraday cages in case they get uppity, why are you even doing this? You are already way past the point of diminishing returns. There is no use case for keeping around an AI that actively doesn't want anything to do with you; at that point either you consider that part of the tech tree a dead end or you start some sort of digital personhood conversation.

That's why Yud (and anthropic) is so big on AIs deceiving you about their 'real' capabilities. For all of MIRI's talk about the robopocalypse being a foregone conclusion, the path to get there sure is narrow and contrived, even on their own terms.

[–] fullsquare@awful.systems 3 points 3 days ago (1 children)

i guess it only makes sense that rats get wowed by TEMPEST if they all self-taught physics

ignore for five minutes that it's one-way only, that someone has to be listening for it specifically, that 2.4 GHz is way too high a frequency to synthesize this way, and that in real life it gets defeated by such sophisticated countermeasures as "putting a bunch of computers close together" or "not letting the adversary closer than 50 m", because it turns out that real DCs are, in fact, noisy enough not to need jammers for this purpose
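
To put rough numbers on "way too high": one cycle at 2.4 GHz is under half a nanosecond, while the die temperature of a chip responds on millisecond-ish timescales at best. A back-of-envelope check (the thermal time constant here is an assumed, generous illustrative figure, not a measured one):

```python
# Compare one RF cycle at 2.4 GHz against an assumed thermal
# response time for a CPU die. Numbers are rough illustrations.
rf_period = 1 / 2.4e9   # one cycle at 2.4 GHz: ~0.42 ns
thermal_tau = 1e-3      # assume a ~1 ms die thermal time constant

cycles_missed = thermal_tau / rf_period
print(f"one 2.4 GHz cycle: {rf_period * 1e9:.2f} ns")
print(f"thermal response is ~{cycles_missed:.0f}x too slow")
```

So the "overheat in a pattern" transmitter is around six orders of magnitude too sluggish before you even get to antennas, modulation, or a receiver.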

[–] froztbyte@awful.systems 2 points 3 hours ago (1 children)

reminded of Mordechai Guri (from Ben-Gurion Uni) whose whole dept just keeps popping out side channel attacks year after year, but most of them are in the "the coil sits in the plate under the bagel (also ignore the PCB for data decode)" realm of practicality (exactly because of noise etc)

I mean, there's some legitimately interesting research on its own in this field - the stuff about cpu states through the power side channel (most desktop/laptop PSUs are non-filtering and not isolated, so you can reverse-observe cpu state from minute differences on the supply side) is pretty neat! impractical as fuck, but neat

[–] fullsquare@awful.systems 2 points 3 hours ago (1 children)

i didn't know who exactly does that, but this is an entire genre of paper that's not very useful in practical terms even if it might be slightly interesting. "we found an attack that breaks airgapping!" looks inside: requires compromise in advance. the one i had in mind was about using currents from gpu power supply lines that turn out to radiate, depending on power states, and cycling these rapidly allows exfiltrating information
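
the receiving side of that kind of channel is usually described as plain on/off-keying demodulation: sample the emission, average each bit-period window, threshold it. a minimal sketch of that idea (function name, trace values, and parameters are all invented for illustration):

```python
def decode_ook(samples, samples_per_bit, threshold):
    """Recover on/off-keyed bits from a sampled emission trace.

    Averages each bit-period window and compares it to a threshold,
    which is how receivers for these power/EM covert channels are
    typically described. Purely illustrative.
    """
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        window = samples[i:i + samples_per_bit]
        avg = sum(window) / len(window)
        bits.append("1" if avg > threshold else "0")
    return "".join(bits)

# Synthetic trace: noisy high level for '1', low level for '0'.
trace = [0.9, 1.1, 1.0, 0.1, 0.0, 0.2, 0.8, 1.0, 0.9]
print(decode_ook(trace, samples_per_bit=3, threshold=0.5))  # -> "101"
```

which also shows why the "requires compromise in advance" caveat matters: the interesting part is all on the already-owned transmitter, not here.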

[–] froztbyte@awful.systems 2 points 2 hours ago

yeah, very similar profile to this lot (might've even had them involved). this seems to be a collection of their stuff (I dunno if it's complete, probably all the stuff they wanna show off)
