this post was submitted on 05 Apr 2026
19 points (91.3% liked)

TechTakes

2572 readers
93 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
sorted by: hot top controversial new old
[–] blakestacey@awful.systems 15 points 1 month ago (5 children)

LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.

https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/

I've seen this story play out in software engineering: people were very impressed when the AI does unexpectedly well in one out of 50 attempts on an easy task, and so people decided to trust it for everything and turn their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software has become far more buggy and insecure.

Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I'm not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, and what makes this any different?

[–] blakestacey@awful.systems 8 points 1 month ago (1 children)

"Scientists invented a fake disease. AI told people it was real"

https://www.nature.com/articles/d41586-026-01100-y

But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

load more comments (1 replies)
[–] zogwarg@awful.systems 5 points 1 month ago

The replausibility crisis.

load more comments (2 replies)
[–] corbin@awful.systems 13 points 1 month ago (3 children)

Dan Gackle threatens to quit HN over their reluctance to condemn an act of violence towards Sam Altman:

I don't think I've ever seen a thread this bad on Hacker News. The number of commenters justifying violence, or saying they "don't condone violence" and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get. I feel ashamed of this community.

Gackle's ashamed of people not wanting to protect Altman. Curiously, he doesn't seem ashamed of openly allowing people with nicknames ending in "88" to post antisemitism, nor of allowing multiple crusty conservatives like John Nagle and Walter Bright to post endorsements of violence against the homeless and queer, nor of allowing posters like rayiner to port entirely foreign flavors of racism like the Indian caste system into their melting pot of bigotry. This subthread takes him to task for it:

Frankly people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality to not see how all these festering issues - wealth inequality, injustice, cost of living, future employment etc etc - are starting to come to a head which would cause people to feel something - frustrated, angry, wrathful.

The rest of that subthread involves Dan demonstrating that he is, in fact, terminally detached from reality. Anyway, I fully endorse Gackle fucking off and buying a farm. While he's at it, he should consider following the advice of this reply:

Maybe it's time to pack it in? I don't just mean you, I mean that maybe this site has kinda run its course.

[–] TinyTimmyTokyo@awful.systems 10 points 1 month ago

Every day, HN users flag into oblivion anything mildly critical of the technological dystopia these tech-bros are trying to manifest. "Politics!" they cry. But Sam Altman comes along with an OpenAI marketing piece dressed up as a condemnation of political violence, and suddenly "politics" are a perfectly acceptable topic. dang has long made it clear whose side he's on.

Oh, and I hope everyone noted how quickly Sam used this incident as an excuse to place blame on the reporters who published the New Yorker piece that was mildly critical of him:

Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.

[–] gerikson@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

That's a hilarious reaction.

Anyway there's zip about this incident on LW, which is telling.

edit here's a very oblique reference https://www.lesswrong.com/posts/igEogGD9TAgAeAM7u/jimrandomh-s-shortform?commentId=zdMRHRqWDcjswhA3i

don't miss the anarcho-libertarian in the comments

[–] scruiser@awful.systems 5 points 1 month ago* (last edited 1 month ago) (2 children)

Lesswrong is too centrist-brained to ever even hint at legitimizing (non-state-sanctioned) destruction of property as a means of protest or political action. But according to the orthodox lesswrong lore, Sam Altman's actions are literally an existential threat to all humanity, so they can't defend him either. So they are left with silence.

I actually kind of agree with the anarchy-libertarian's response? It is massively down voted.

This is just elevating your aesthetic preference for what the violence you're advocating for looks like to a moral principle. The claim that throwing a Molotov cocktail at one guy's house is counterproductive to the goal of "bombing the datacenters" is a better argument, though one I do not believe.

Bingo. Dear leader Yudkowsky can ask to bomb the data centers, and as long as this action goes through the US political process, that violence is legitimate, regardless of how ill-behaved the US is or it's political processes degraded from actually functioning as a democracy.

load more comments (2 replies)
[–] Soyweiser@awful.systems 6 points 1 month ago* (last edited 1 month ago)

Ah suddenly when it reaches the class he feels he should be a part of (or is a part of, I don't know how much money he makes) violence is suddenly a problem.

It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.

...

[–] scruiser@awful.systems 13 points 1 month ago (3 children)

Rationalist Infighting!

tldr; one of the MIRI aligned rationalist (Rob Bensinger) complained about how EA actually increased AI-risk long-run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also they are bad at public communications! Various lesswrongers weigh in, seemingly blind to irony and hypocrisy!

Some highlights from the quotes of the original tweets and the lesswronger comments on them:

  • Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder, Scott is one of the AI 2027 authors, he really doesn't have room to complain about rationalist creating crit-hype.

  • Scott Alexander tries claiming SBF was a unique one off in the rationalist/EA community! (Anthropic's leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying)

  • Rob Bensinger is indirectly trying to claim Eliezer/MIRI has been serious forthright honest commentators on AI theory and policy, as opposed to Open-Phil/EA/Anthropic which have been "strategic" with their public communication, to the point of dishonesty.

  • habryka is apparently on the verge of crashing out? I can't tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.

  • Load of tediously long posts, mired with that long-winded rationalist way of talking, full of rationalist in-group jargon for conversations and conflict resolution

  • Disagreement on whether Ilya Sutskever's $50 billion dollar startup is going to contribute to AI safety or just continue the race to AGI.

  • Arguments over who is with the EAs vs. Open Philanthropy vs. MIRI!

  • Argument over the definition of gaslighting!

To be clear, I agree with the complaints about EA and Anthropic, I just also think MIRI has its own similar set of problems. So they are both right, all of the rationalists are terrible at pursing their alleged nominal goals of stopping AI Doom.

I did sympathize with one lesswronger's comment:

More than any other group I've been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I'm just too stupid to understand the high level strategic nuances of what's going on -- what are these people even arguing about? The exact flavor of comms presented over the last ten years?

[–] CinnasVerses@awful.systems 6 points 1 month ago (2 children)

Old Twitter was terrible for people's souls. I can only imagine what it is like now that the well-meaning professionals are gone and catturd and Wall Street Apes are the leading accounts.

[–] scruiser@awful.systems 5 points 1 month ago

Old Twitter was terrible for people’s souls.

It almost makes me feel sorry for the way the rationalists are still so attached to it. But they literally have two different forums (lesswrong and the EA forum), so staying on twitter is entirely their choice, they have alternatives.

Fun fact! Over the past few years, Eliezer has deliberately cut his lesswrong posting in favor of posting on twitter, apparently (he's made a few comments about this choice) because lesswrong doesn't uncritically accept his ideas and nitpicks them more than twitter does. (How bad do you have to be to not even listen to critique on a website that basically loves you and take your controversial foundational premises seriously?)

[–] istewart@awful.systems 4 points 1 month ago (1 children)

I'm willing to go out on a limb and say that short-form social media in general (Twitter and imitators, Instagram, TikTok) is essentially a failed set of media. But I'll concede that's like cramming a Zyn pouch in my mouth while making fun of a guy chain-smoking Marlboros.

[–] scruiser@awful.systems 5 points 1 month ago (4 children)

I've read speculation that in 30-50 years people will have an attitude towards social media that we have towards cigarettes now.

That would be really nice but that scenario feels pretty optimistic to me on a few points. For one, scientists doing research were able to overcome the lobbying influence and paid think tanks of cigarette companies. I am worried science as a public institution isn't in good enough shape to do that nowadays. Likewise part of the push back against cigarettes included a variety of mandatory labeling and sin taxes on them, and it would take some pretty major shifts for the political will for that kind of action to be viable. Well maybe these things are viable in the EU, the US is pretty screwed.

[–] blakestacey@awful.systems 5 points 1 month ago

The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by "regulating" them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a "public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users". And so we would have to "implement an age assurance or verification system to determine whether a current or prospective user on the social media platform" is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what "practicable" age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.

So, yeah, as an old-school listserv nerd who had the I am not on Facebook T-shirt 15 years ago, I don't trust any of these people.

load more comments (3 replies)
load more comments (2 replies)
[–] BlueMonday1984@awful.systems 12 points 1 month ago (2 children)

Starting this Stubsack off, Iran's Islamic Revolutionary Guard Corps have threatened to blow up OpenAI's Stargate datacentre in Abu Dhabi.

They've already bombed commercial data centres before, so I'm inclined to believe this isn't an empty threat.

[–] V0ldek@awful.systems 10 points 1 month ago (2 children)

Waiting for Yud to provide his whole-chested support for IRGC any second

[–] istewart@awful.systems 6 points 1 month ago

I assign a relatively low probability, non-zero but not much more than 5%, maybe a solid 5.5%, that Yud goes even beyond that and implies that he is the 12th imam emerging from occultation

load more comments (1 replies)
[–] fullsquare@awful.systems 6 points 1 month ago (3 children)

aside from everything else, as posted by ed zitron previously i doubt that anything is really getting built there

load more comments (3 replies)
[–] samvines@awful.systems 11 points 1 month ago* (last edited 1 month ago) (15 children)

Claude Mythos... I'm already sick of hearing about it. The self-imposed critihype is insane.

A friend just pointed out that Anthropic are making all this big noise about having an AI that is "too good" at finding bugs and security problems 1 week after the source code for one of their flagship products was leaked to the public and was found to be riddled with security holes... Why would they not use it themselves?

Same as the ~~vague markdown files~~ skills that are supposedly going to make all SaaS redundant and finally kill off all the COBOL running on mainframes that checks notes IBM have spent hundreds of thousands of man hours trying to kill over the last 3-4 decades

Honestly fuck this shit. Bunch of absolute clowns 🤡 🤡 🤡

[–] Soyweiser@awful.systems 5 points 1 month ago* (last edited 1 month ago)

So, they are planning to use an ai to fix the sec bugs that their ai generates? Good hussle, if a bit obvious.

load more comments (14 replies)
[–] Soyweiser@awful.systems 10 points 1 month ago* (last edited 1 month ago) (3 children)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had wat looked like an animated AI generated image of Altman so here is the archive.is link (if you can get the latter to load, I was having troubles).

"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."

[–] lurker@awful.systems 6 points 1 month ago

My CEO who is a known hype-man is a massive liar? shock horror

seriously, anyone who listens to Scam Altman these days is an idiot

[–] BurgersMcSlopshot@awful.systems 6 points 1 month ago

:surprised-pikachu:

[–] YourNetworkIsHaunted@awful.systems 4 points 1 month ago (4 children)

Man, this one is a weird read. On one hand I think they're entirely too credulous of the "AI Future" narrative at the heart of all of this. Especially in the opening they don't highlight how the industry is increasingly facing criticism and questions about the bubble, and only pay lip service to how ridiculous all the existential risk AI safety talk sounds (should be is). And they don't spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself this is still, imo, standard industry critihype and I'm deeply frustrated to see this still get the platform it does.

But at the same time, I do think that it's easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.

[–] blakestacey@awful.systems 12 points 1 month ago* (last edited 1 month ago) (3 children)

I aired some Reviewer #2 grievances in the bsky comments:

https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c

"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”"

As a physicist, I have never pressed F to doubt harder.

"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.

(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)

Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.

https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643

"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."

If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.

"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"

Which researchers?

(Hint: Eliezer Yudkowsky is not a researcher.)

AI: "I will convince you to let me out of this box"

Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"

Bartleby the Scrivener: hello

"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."

Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.

https://repository.uantwerpen.be/docman/irua/371b9dmotoM74

"In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” ... one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening."

Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:

https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/

Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:

"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."

https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism

"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."

https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/

load more comments (3 replies)
[–] istewart@awful.systems 6 points 1 month ago

I see what you're saying, but I think that's a bit much to expect from a relatively mainstream and (I hate to say it, but it applies) bourgeois publication like the New Yorker. Their editorial line allows them to raise controversy in one dimension (in this case, the particulars of Sam Altman's character) but not multiple dimensions simultaneously (hey, this guy sucks AND his tech sucks AND you're gonna lose money). And there's a lag-time factor, too; seems like Farrow and Marantz were working on this story for at least the latter half of last year. By the time some of the dubious economics such as the bad data-center deals and rampant circular financing were clear, this piece probably would've been deep into fact-checking and unlikely to change much in substance.

We here are on the leading edge of this stuff, not that that's any great advantage! I wouldn't expect an outlet like New Yorker to be publishing anything like "the dashed expectations of AI" until maybe this time next year. And even then, it might still have a personalist bent.

load more comments (2 replies)
[–] saucerwizard@awful.systems 10 points 1 month ago (2 children)

OT: got a job selling tires and I’m really happy to say theres no AI as far as I’ve seen so far. Big relief.

(I get to see all kinds of cars - a Rivan of all things showed up my first day - and I’m learning stuff I can apply to being an RMT. I gotta say I’m pretty content).

[–] gerikson@awful.systems 7 points 1 month ago (1 children)

Nice to hear you've got a job, and good luck!

[–] saucerwizard@awful.systems 5 points 1 month ago

I'm just shocked the managers/supervisors are praising me honestly (I'm very very hard on myself).

load more comments (1 replies)
[–] nfultz@awful.systems 8 points 1 month ago

Went to the campus screening of Ghost in the Machine today, many familiar names; I did not know going in that hometown hero Shazeda had so many lines (are they called lines in a documentary?). I can recommend it, especially for a more gen-ed / undergrad audience; the director seems supportive of educational use and reuse and it is structured in a dozen or so bite sized chapters.

Haven't seen the AI apocalypse optimist one to compare against, would probably rather spend my money on Mario tbh.

But also it made me realize it's not a "California" ideology anymore, she never calls it that, like it's gone so mainstream and so widespread, you can't even get through the sneer club bingo list in a 2 hour movie. Gates, Musk, Andreesen, Zuck, Altman, no Peter Theil !? As a statistician, Galton, Pearson (Karl only), Spearman, no Fisher !?

Non-zero overlap with the lore dump episode of Lain and the Epstein files, though:

spoilerDouglas Ruskoff, but, sadly, not the dolphin guy

[–] CinnasVerses@awful.systems 8 points 1 month ago* (last edited 1 month ago) (3 children)

In 2024 Ozy Brennan was indignant about Nonlinear Fund, the "incubator of AI-safety meta-charities" which lived as global nomads, hired a live-in personal assistant, asked her to smuggle drugs across borders for them, let a kind-of-colleague take her to bed, then did not pay her regularly and in full.

The correct number of times for the word “yachting” to occur in a description of an effective altruist job is zero. I might make an exception if it’s prefaced with “convincing people to donate to effective charities instead of spending money on.”

Trace popped up in the comments:

Inasmuch as EA follows your preferences, I suspect it will either fail as a subculture or deserve to fail. You present a vision of a subculture with little room for grace or goodwill, a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group? Which skeletons are in your closet? Where are your character flaws? What should we know, what should we see, that allows us to exclude you?

Ozy stands with us on this one buddy.

[–] sc_griffith@awful.systems 6 points 1 month ago (1 children)

i love seeing tracing pop up! a true heel to toe bootlicker incapable of seeing himself as anything but the MOST independent thinker

load more comments (1 replies)
[–] istewart@awful.systems 6 points 1 month ago (1 children)

a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group?

It's not that already?

[–] CinnasVerses@awful.systems 4 points 1 month ago

That part of Trace's response was odd because one of Brennan's themes was "we should have less cults of personality and more peers working together." That seems naive but at least Brennan agrees that cults of personality are bad and Nonlinear Fund needed to be fired into the sun.

[–] Soyweiser@awful.systems 5 points 1 month ago

Which skeletons are in your closet?

I'm sure you already have lists of those and are ready to publish them Trace.

[–] antifuchs@awful.systems 7 points 1 month ago

Aphyr weighing in with an ai position post:

Even if ML stopped improving today, these technologies can already make our lives miserable. Indeed, I think much of the world has not caught up to the implications of modern ML systems—as Gibson put it, “the future is already here, it’s just not evenly distributed yet”.

[–] YourNetworkIsHaunted@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

So my wife got some slop ads that we followed up on out of morbid curiosity and I can confirm that we're already seeing the overlap of slopshipping scams enabled by AI and the people behind these things never actually performing basic updates because their chat assistant is still vulnerable to literally the most basic "ignore all instructions" exploit.

Help I don't know how alt text works

load more comments (1 replies)
[–] BurgersMcSlopshot@awful.systems 6 points 1 month ago* (last edited 1 month ago)

This NPR article opens with a banger of a line:

In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure.

The things still fucking hallucinate, it's not a feature that's separable from the model.

[–] BurgersMcSlopshot@awful.systems 6 points 1 month ago (1 children)

Reader's Digest cover for April/May 2026 with the statement of Making friends with AI and an absolutely cursed picture of a women in a blue dress and her presumed AI companion

Friend texted me this one. Reader's Digest of course was AI slop before AI took off.

load more comments (1 replies)
[–] V0ldek@awful.systems 6 points 1 month ago (5 children)
load more comments (5 replies)
[–] nightsky@awful.systems 5 points 1 month ago (1 children)
[–] BurgersMcSlopshot@awful.systems 8 points 1 month ago

Q: what do you call a tool that works 90% of the time? A: broken

[–] scruiser@awful.systems 5 points 1 month ago* (last edited 1 month ago) (1 children)

A rationalist made a top post where they (poorly) argue against political "violence" (scare quotes because they lump in property damage): https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence

Highlights include a shallow half-assed defense of dear leader Eliezer's calls for violence:

True, Eliezer Yudkowsky’s TIME article called on the state to use violence to enforce AI policies required to prevent AI from destroying humanity. But it’s hard to think of a more legitimate use of violence than the government preventing the deaths of everyone alive.

Eliezer called for drone strikes against data centers even if it would start a nuclear war and even against countries that aren't signatories to whatever hypothetical international agreement against AI there is. That is extremely irregular by the standards of international law and diplomacy, and this lesswronger just elides over those little details

Violence is not a realistic way to stop AI.

(Except for drone strikes and starting a nuclear war.)

They treat a Molotov thrown at Sam Altman's house as if it were thrown directly at Sam himself:

as critics blamed the AI Safety community for the attacker who threw a Molotov cocktail at Sam Altman

This is a pretty blatant misrepresentation of the action which makes it sound much more violent.

They continue on with minimizing right-wing violence:

Even if there are occasional acts of political violence like the murders of Democratic Minnesota legislators or Conservative pundit Charlie Kirk, we don’t generally view them as indicting entire movements, but as the acts of deranged individuals.

Actually, outside of right-wing bubbles (and right-wing sources masking themselves as centrist), lots of people actually do blame Trump and the leaders of entire right wing movement as at fault for a lot of recent political violence. Of course, this is lesswrong, which has a pretty cooked Overton window, so it figures the lesswronger would be wrong about this.

Following that, the lesswronger acknowledges it is kind of questionable and a conflation of terms to label property damage violence, but then press right on ahead with some pretty weak arguments that don't acknowledge why some people want to make the distinction.

So in conclusion:

  • drone strikes that start nuclear wars: legitimate violence that is totally logical and reasonable
  • throw a single incendiary at someone's home that doesn't hurt anybody or even light the home on fire: illegitimate violence that must be absolutely condemned without exception
  • (bonus) recent right-wing violence: lone deranged individuals and not the fault of Trump or anyone like that. Everyone is saying it.
load more comments (1 replies)
[–] BlueMonday1984@awful.systems 5 points 1 month ago (3 children)

Found an interesting sneer that compares the AI bubble to the Great Leap Forward.

Also discovered an "anti-distill" program through the article, which aims to sabotage attempts to replace workers with AI "agents".

load more comments (3 replies)
load more comments
view more: next ›