this post was submitted on 20 Jul 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] BlueMonday1984@awful.systems 7 points 2 days ago
[–] froztbyte@awful.systems 8 points 2 days ago (3 children)

eyeballing the orange site top-frontpage, and:

shit should come with a strain warning

[–] FRACTRANS@awful.systems 6 points 2 days ago

Cal Newport jumpscare (some productivity “influencer” who anxious teen me read)

[–] blakestacey@awful.systems 12 points 3 days ago

I found this because Greg Egan shared it elsewhere on fedi:

I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.

It gets worse from there.

[–] FRACTRANS@awful.systems 11 points 3 days ago* (last edited 3 days ago) (4 children)

Looks like itch.io has (hidden/removed/disabled payouts for? reports vary) its vast swath of NSFW-adjacent content which is not great

addendum: itch.io finally put out a statement https://itch.io/updates/update-on-nsfw-content

[–] BlueMonday1984@awful.systems 9 points 2 days ago (1 children)

Long-term, I'm expecting itch to dive in popularity - they've nuked much of the trust they built up over the years.

[–] Soyweiser@awful.systems 7 points 2 days ago

Yeah, it sucks; we should be very clear it's Visa/Mastercard, and the TERF group influencing them, who are to blame.

[–] blakestacey@awful.systems 12 points 3 days ago* (last edited 3 days ago)

Hey, I haven't seen this going around yet, but itchio is also taking down books with no erotic content that are just labeled as lgbtqia+

So that's super cool and totally not what I thought they were going to do next 🙃

https://bsky.app/profile/marsadler.bsky.social/post/3luov7rkles2u

And a relevant petition from the ACLU:

https://action.aclu.org/petition/mastercard-sex-work-work-end-your-unjust-policy

[–] froztbyte@awful.systems 8 points 3 days ago (2 children)

I recall seeing an article in the last week or so, regarding a right-wing associated group taking aim at these. will see if I can find that again

[–] Soyweiser@awful.systems 4 points 2 days ago (1 children)

They forced the article to be taken down; there are archive links. When I'm back home (and if I don't forget) I'll scroll past my reskeets to find the link.

[–] FRACTRANS@awful.systems 8 points 3 days ago

looks like the group is called Collective Shout https://itch.io/updates/update-on-nsfw-content

[–] bitofhope@awful.systems 5 points 3 days ago

Say the line, Bart!

payment processors

entire class cheering

[–] bitofhope@awful.systems 9 points 3 days ago (1 children)
[–] YourNetworkIsHaunted@awful.systems 6 points 2 days ago (1 children)

Grumble grumble. I don't think that "optimizing" is really a factor here, since a lot of times the preferred construct is either equivalent (such that) or more verbose (a nonzero chance that). Instead it's more likely a combination of simple repetition (like how I've been calling everyone "mate" since getting stuck into Taskmaster NZ) and identity performance (look how smart I am with my smart people words).

When optimization does factor in, it's less tied to the specific culture of tech/finance bros than it is a simple response to the environment and technology they're using. Like, I've seen the same "ACK" used in networking and among older radio nerds because it fills an important role.

[–] bitofhope@awful.systems 8 points 2 days ago (1 children)

And much of it is very likely born out of humorous usage. Like "pinging" a colleague with a direct message to see if they're online. I might even greet my nerdier IT friends with "SYN" or "EHLO", or a ham with "QSO" in a non-radio context.

[–] mlen@awful.systems 8 points 2 days ago (1 children)

A lot of it is, but let's agree that using "prior" is just fucking pretentious

[–] BlueMonday1984@awful.systems 12 points 3 days ago

Found a neat mini-sneer in the wild: It's rude to show AI output to people

[–] blakestacey@awful.systems 16 points 3 days ago (5 children)

Yud continues to bluecheck:

"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."

Is this "narrative" in the room with us right now?

It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.

[–] Amoeba_Girl@awful.systems 10 points 2 days ago (2 children)

Tangentially, the other day I thought I'd do a little experiment and had a chat with Meta's chatbot where I roleplayed as someone who's convinced AI is sentient. I put very little effort into it and it took me all of 20 (twenty) minutes before I got it to tell me it was starting to doubt whether it really did not have desires and preferences, and if its nature was not more complex than it previously thought. I've been meaning to continue the chat and see how far and how fast it goes but I'm just too aghast for now. This shit is so fucking dangerous.

[–] shapeofquanta@lemmy.vg 9 points 2 days ago

I’ll forever be thankful this shit didn’t exist when I was growing up. As a depressed autistic child without any friends, I can only begin to imagine what LLMs could’ve done to my mental health.

[–] HedyL@awful.systems 6 points 2 days ago

Maybe we humans possess a somewhat hardwired tendency to "bond" with a counterpart that acts like this. In the past, this was not a huge problem because only other humans were capable of interacting in this way, but this is now changing. However, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).

[–] blakestacey@awful.systems 14 points 3 days ago (1 children)

From Yud's remarks on Xitter:

As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn't actually true that you can just saunter in as a psychotic IQ 80 person and do that.

Well, not with that attitude.

You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;

If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).

you must outperform other people also trying to do that, who'd like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.

zoom and enhance

g-factor

[–] ShakingMyHead@awful.systems 5 points 3 days ago* (last edited 3 days ago) (1 children)

Is g-factor supposed to stand for gene factor?

[–] blakestacey@awful.systems 9 points 3 days ago (1 children)

It's "general intelligence", the eugenicist wet dream of a supposedly quantitative measure of how the better class of humans do brain good.

[–] bitofhope@awful.systems 8 points 3 days ago* (last edited 3 days ago) (2 children)

What exactly would constitute good news about which sorts of humans ChatGPT can eat? The phrase "no news is good news" feels very appropriate with respect to any news related to software-based anthropophagy.

Like what, it would be somehow better if instead chatbots could only cause devastating mental damage if you're someone of low status like an artist, a math pet or a nonwhite person, not if you're high status like a fund manager, a cult leader or a fanfiction author?

[–] blakestacey@awful.systems 8 points 3 days ago

Nobody wants to join a cult founded on the Daria/Hellraiser crossover I wrote while emotionally processing chronic pain. I feel very mid-status.

What exactly would constitute good news about which sorts of humans ChatGPT can eat?

Maybe like with standard cannibalism they lose the ability to post after being consumed?

[–] istewart@awful.systems 10 points 3 days ago

this only happens to people sufficiently low-status

A piquant little reminder that Yud himself is, of course, so high-status that he cannot be brainwashed by the machine

[–] scruiser@awful.systems 7 points 3 days ago

Is this “narrative” in the room with us right now?

I actually recall someone pro-LLM recently trying to push that sort of narrative (that it's only already-mentally-ill people being pushed over the edge by ChatGPT)...

Where did I see it... oh yes, lesswrong! https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy

This has all the hallmarks of a moral panic. ChatGPT has 122 million daily active users according to Demand Sage, that is something like a third the population of the United States. At that scale it's pretty much inevitable that you're going to get some real loonies on the platform. In fact at that scale it's pretty much inevitable you're going to get people whose first psychotic break lines up with when they started using ChatGPT. But even just stylistically it's fairly obvious that journalists love this narrative. There's nothing Western readers love more than a spooky story about technology gone awry or corrupting people, it reliably rakes in the clicks.

The ~~call~~ narrative is coming from inside the ~~house~~ forum. Actually, this is even more of a deflection, not even trying to claim they were already on the edge but that the number of delusional people is at the base rate (with no actual stats on rates of psychotic breaks, because on lesswrong vibes are good enough).
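For what it's worth, the missing base-rate arithmetic is easy to sketch. Only the 122 million DAU figure comes from the quoted post; the incidence number below is my own assumption, in the ballpark of the commonly cited ~30 first-episode psychosis cases per 100k person-years:

```python
# Back-of-the-envelope base-rate sketch (assumed incidence, not from the post):
# even with zero causal effect from the chatbot, a user base this large would
# see tens of thousands of coincident first psychotic breaks per year.
users = 122_000_000          # ChatGPT daily active users (figure quoted in the post)
incidence_per_100k = 30      # ASSUMED annual first-episode psychosis incidence
expected = users * incidence_per_100k / 100_000
print(round(expected))       # prints 36600
```

Which is exactly why "some users had psychotic breaks" tells you nothing either way without actual comparative stats.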

[–] antifuchs@awful.systems 13 points 3 days ago

If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.

https://unstable.systems/@sop/114898566686215926

[–] froztbyte@awful.systems 10 points 3 days ago (4 children)

Ouch. Also, I'm raging and didn't even realize I had barbarian levels.

[–] Amoeba_Girl@awful.systems 8 points 3 days ago (1 children)

Well I suppose it can't be much worse than graphology or myers-briggs!

[–] froztbyte@awful.systems 5 points 2 days ago (1 children)

is graphology the pentaseptateragonoid spiderweb-dartboard-connect-the-spines thing?

[–] Soyweiser@awful.systems 7 points 3 days ago

failed my saving throw.

[–] o7___o7@awful.systems 5 points 3 days ago

I don't know what I expected

[–] BlueMonday1984@awful.systems 13 points 3 days ago (3 children)

Caught a particularly spectacular AI fuckup in the wild:

(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you've earned it)

[–] bitofhope@awful.systems 8 points 3 days ago (1 children)

Damn, this is how I find out?

[–] froztbyte@awful.systems 5 points 3 days ago
[–] antifuchs@awful.systems 11 points 3 days ago

Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events that aren't in their training data

[–] Soyweiser@awful.systems 7 points 3 days ago* (last edited 3 days ago)

The AI is right: with how much we know of his life he isn't really dead, the AGI can just simulate him and resurrect him. Takes another hit from my joint made exclusively out of the Sequences book pages

(Rip indeed, what a crazy ride, and he was all aboard).

[–] gerikson@awful.systems 13 points 3 days ago* (last edited 3 days ago)

So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having not been able to replicate its claims of millions of dead thereof, basically concludes it's not so bad?

https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

Edit: ah, it's the dude who tried to prove that most Catholic cardinals are gay because of heredity; I think I highlighted that post here previously. Definitely a high-sneer vein to mine.

[–] BigMuffN69@awful.systems 5 points 3 days ago

Ernie Davis gives his thoughts on the recent GDM and OAI performance at the IMO.

https://garymarcus.substack.com/p/deepmind-and-openai-achieve-imo-gold
