this post was submitted on 24 Aug 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] scruiser@awful.systems 6 points 4 days ago (2 children)

It's a good post. A few minor quibbles:

The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers never really had a chance, culminating in the board trying but failing to fire Sam Altman, who successfully leveraged the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH... if failing to convert to a for-profit company turns out to be a decisive moment in popping the GenAI bubble, then at least it was good for something?

These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

I wish people didn't feel the need to add all these disclaimers, or at least that they'd put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (websites allow themselves to be crawled so that human traffic will ultimately come back to them), which could have pretty far-reaching impacts.
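
For anyone who hasn't poked at that contract directly: robots.txt is the opt-out signal that well-behaved crawlers are supposed to check before fetching a page. A minimal sketch using Python's standard urllib.robotparser (the domain and paths here are just illustrative placeholders):

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (placeholder domain).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # A well-behaved crawler asks before fetching; scrapers that ignore
    # the answer are the ones breaking the implicit deal.
    print(rp.can_fetch("GPTBot", "https://example.com/some-article"))

(GPTBot is OpenAI's crawler user agent; sites that want out of AI training data have started disallowing it explicitly.)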

One of the things I liked and didn't know about before:

Ask Claude any basic question about biology and it will abort.

That is hilarious! Kind of overkill, to be honest: I think they've really overrated how much it could help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.

One of the comments gets it:

Safety team/product team have conflicting goals

LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model).

[–] fullsquare@awful.systems 5 points 3 days ago (1 children)

Ask Claude any basic question about biology and it will abort.

it might be that, or it may have been intended to shut off any output of medical-sounding advice. if it's the former, then it's a rare rationalist W for wrong reasons

I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks.

look up the story of vil mirzayanov. break out these bayfucker-style salaries in eastern europe or india or a number of other places and you'll find a long queue of phds willing to cook man-made horrors beyond your comprehension. it might not even take six figures (in dollars or euros) after tax

LLMs aren’t actually smart enough to make delicate judgements

maybe they really made machines in their own image

[–] fullsquare@awful.systems 2 points 3 days ago (1 children)

"hello anthropic? can you pay me 50k a year so that i specifically don't go around making biological weapons? think about all these future simulated beings it'll save"

[–] viq@social.hackerspace.pl 2 points 2 days ago

@fullsquare @cstross at those amounts, no-one is going to take you seriously. You should be asking at least 50M a year.

[–] blakestacey@awful.systems 8 points 4 days ago (2 children)

"The Torment Nexus definitely has positive uses. I personally use it frequently for looking up song lyrics and tracking my children's medication doses. I find it helpful."

[–] dgerard@awful.systems 2 points 3 days ago
[–] sashin@veganism.social 1 points 2 days ago

@blakestacey @scruiser I guess people can derive a lot of value so long as it's *other people* being tormented