this post was submitted on 07 Sep 2025
19 points (95.2% liked)

TechTakes

2160 readers
89 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
sorted by: hot top controversial new old
[–] swlabr@awful.systems 4 points 4 hours ago

I was wondering about the origins of sneerclub and discovered something kinda fun: “r/SneerClub” pre-dates “r/BlogSnark”, the first example of a “snark subreddit” listed on the wiki page! The vibe of snark subreddits seem to be very different to that of sneerclub etc. (read: toxic!!!) but I wouldn’t know the specifics as I’m not a snark participant.

[–] Soyweiser@awful.systems 2 points 6 hours ago (1 children)
[–] sailor_sega_saturn@awful.systems 3 points 6 hours ago* (last edited 6 hours ago)

The article claims that Google didn't "fall for the same trap" but that's not correct, all this garbage is indeterministic so the author just got "lucky".

It's like saying "four out of five coin-flips claimed that an eagle was the first US president" -- just because the fifth landed on heads and showed George Washington doesn't mean it's any different than the rest.

But here I'm preaching to the choir.

[–] macroplastic@sh.itjust.works 4 points 10 hours ago (1 children)
[–] Soyweiser@awful.systems 5 points 10 hours ago

Reality has an anti-robot bias.

[–] JFranek@awful.systems 4 points 10 hours ago (1 children)

Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.

It talked about how it's almost impossible to detect whether a model was deliberately trained to output some "bad" output (like vulnerable code) for some specific set of inputs.

Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such LLM as a "sleeper agent". But maybe some of y'all will find it interesting.

link

[–] BlueMonday1984@awful.systems 4 points 9 hours ago

This isn't the first time I've heard about this - Baldur Bjarnason's talked about how text extruders can be poisoned to alter their outputs before, noting its potential for manipulating search results and/or serving propaganda.

Funnily enough, calling a poisoned LLM as a "sleeper agent" wouldn't be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that their word-prediction attempts are being manipulated to produce specific output. Its still treating these spicy autocompletes with more sentience than they actually have, though

[–] TinyTimmyTokyo@awful.systems 7 points 19 hours ago (2 children)

Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.

Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (didn't listen to it yet). There's also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.

[–] Architeuthis@awful.systems 8 points 16 hours ago* (last edited 16 hours ago) (5 children)

Siskind did a review too, basically gives it the 'their hearts in the right place but... [read AI2027 instead]' treatment. Then they go at it a bit with Yud in the comments where Yud comes off as a bitter dick, but their actual disagreements are just filioque shit. Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.

https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154920454

https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154927504

Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it rather than being scared shitless of MAD, so AI non-proliferation by presumably appointing a rationalist Grand Inquisitor in charge of all human scientific progress is an obvious solution.

[–] istewart@awful.systems 5 points 5 hours ago

Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.

This century deserves a better class of thought-criminal

[–] fullsquare@awful.systems 5 points 8 hours ago

assuming that nuclear nonproliferation is gonna hold up indefinitely for any reason is some real fukuyama's end of history shit

let alone "because it's Rational™ thing to do", it's only in rational interest of already-nuclear states to keep things this way. couple of states that could make a good point for having nuclear arsenal and having capability to manufacture it are effectively dissuaded from this by american diplomacy (mostly nuclear umbrella for allies and sanctions or fucking with their facilities for enemies). with demented pedo in chief and his idiot underlings trying their hardest to undo this all, i really wouldn't be surprised if, say, south korea decides to get nuclear

[–] TinyTimmyTokyo@awful.systems 6 points 11 hours ago (2 children)

Yud: "That's not going to asymptote to a great final answer if you just run them for longer."

Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it it with Yud's weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It's like a cult signifier.

[–] saucerwizard@awful.systems 2 points 5 hours ago

They think Yud is a world-historical intellect (I’ve seen claims on twitter he has a iq of 190 - yeah really) and by emulation a little of the old smartness can rub off on them.

[–] zogwarg@awful.systems 4 points 10 hours ago

It's also inherently-begging-the-question-silly, like it assumes that the Ideal of Alignment™, can never be reached but only approached. (I verb nouns quite often so I have to be more picky at what I get annoyed at)

[–] Soyweiser@awful.systems 8 points 13 hours ago

Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it

This is his claim about everything, including how we got gay rights. Real if all you have is a hammer stuff.

[–] swlabr@awful.systems 4 points 13 hours ago

Trying to figure out if that Siskind take comes from a) a lack of criticality and/or the ability to read subtext or b) some ideological agenda to erase the role of violence (threats of violence are also violence!) in change happening, or both

[–] CinnasVerses@awful.systems 5 points 17 hours ago

When you are running a con like crypto or chatbot companies, it helps to know someone who is utterly naive and can't stop talking about whatever line you feed him. If this were the middle ages Kevin Roose would have an excellent collection of pigges bones and scraps of linen that the nice friar promised were relics of St Margaret of Antioch.

[–] blakestacey@awful.systems 11 points 1 day ago (1 children)
[–] BlueMonday1984@awful.systems 5 points 1 day ago

The report claims its about ethical AI use, but all I see is evidence that AI is inherently unethical, and an argument for banning AI from education forever.

[–] BlueMonday1984@awful.systems 4 points 1 day ago (1 children)
[–] scruiser@awful.systems 6 points 1 day ago (1 children)

The Oracle deal seemed absurd, but I didn't realize how absurd until I saw Ed's compilation of the numbers. Notably, it means even if OpenAI meets its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix and Spotify and several other services combined) paying Oracle (along with everyone else it has promised to buy compute from) will put it net negative on revenue until 2030, meaning it has to raise even more money.

I've been assuming Sam Altman has absolutely no real belief that LLMs would lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI's choices don't make any long term sense if AGI isn't coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy few years of personal enrichment. And to even ask what his "real beliefs" are gives him too much credit.

Just to remind everyone: the market can stay irrational longer than you can stay solvent!

[–] BlueMonday1984@awful.systems 5 points 1 day ago (1 children)

OpenAI’s choices don’t make any long term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy few years of personal enrichment.

Another possibility is that Altman's bought into his own hype, and genuinely believes OpenAI will achieve AGI before the money runs out. Considering the tech press has been uncritically hyping up AI in general, and Sammy Boy himself has publicly fawned over "metafiction" "written" by an in-house text extruder, its a possibility I'm not gonna discount.

[–] dgerard@awful.systems 3 points 7 hours ago

nah. Sam's a monorail salesman who knows how to say the AI doomer buzzwords.

[–] antifuchs@awful.systems 18 points 1 day ago (1 children)

Whichever one of you did https://alignmentalignment.ai/caaac/jobs, well done, and many lols.

CAAAC is an open, dynamic, inclusive environment, where all perspectives are welcomed as long as you believe AGI will annihilate all humans in the next six months.

Alright, I can pretend to believe that, go on…

We offer competitive salaries and generous benefits, including no performance management because we have no way to assess whether the work you do is at all useful.

Incredible. I hope I get the job!

[–] TinyTimmyTokyo@awful.systems 6 points 1 day ago

Make sure to click the "Apply Now" button at the bottom for a special treat.

[–] blakestacey@awful.systems 13 points 1 day ago (10 children)

The Wall Street Journal came out with a story on "conspiracy physics", noting Eric Weinstein and Sabine Hossenfelder as examples. Sadly, one of their quoted voices of sanity is Scott Aaronson, baking-soda volcano of genocide apologism.

[–] BlueMonday1984@awful.systems 8 points 1 day ago (1 children)

Somehow, ~~Palpatine returned~~ Scott came off as a voice of reason

[–] blakestacey@awful.systems 12 points 1 day ago (1 children)

Behold the power of this fully selective quotation.

load more comments (1 replies)
load more comments (9 replies)
load more comments
view more: next ›