this post was submitted on 12 Apr 2026
17 points (94.7% liked)

TechTakes

2538 readers
74 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] sansruse@awful.systems 5 points 4 hours ago

https://www.cnbc.com/2026/04/15/allbirds-bird-stock-shoes-ai.html

Struggling shoe retailer Allbirds makes bizarre pivot from shoes to AI, stock explodes more than 400%

I had such a hard time coming up with an original joke for this, until I realized the reason is that Allbirds is stealing jokes from the dotcom bubble in the first place.

The company, valued around $4 billion at its peak, sold its intellectual property and other assets two weeks ago for $39 million. The stock surged over 400%, from under $3 a share up to $13. The shoe company had a market cap of about $21 million Tuesday.

Oh. So, bit of a misleading headline there, CNBC. This wasn't a real publicly traded company, it was a company on life support that got pivoted by a greedy founder looking to cash in. Cynical move or the delusions of a true believer? Does it matter?

Regardless, the stupidity is too much, the resemblance too striking. Good luck to Allbirds in the totally normal footwear-to-high-tech pivot that is happening in this totally normal economy.

[–] BlueMonday1984@awful.systems 2 points 6 hours ago

New Blood in the Machine, about the escalating violence against the slop-mongers.

[–] YourNetworkIsHaunted@awful.systems 4 points 11 hours ago* (last edited 11 hours ago)

To distract us from the ongoing cycle of violence, and discourse about violence that neither cracks down on it nor addresses its causes, may I offer the fruit of today's YouTube rabbit hole:

AI isn't the future. It's medieval alchemy.

[–] scruiser@awful.systems 5 points 21 hours ago* (last edited 20 hours ago) (3 children)

Eliezer joins the trend of condemning "political" violence with confidence from the far end of the Dunning-Kruger curve: https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction

I've already mocked this attitude down thread and in the previous weekly thread, so I'll try to keep my mockery to a few highlights...

He's admitting nuke the data centers is in fact violence!

It would be beneath my dignity as a childhood reader of Heinlein and Orwell to pretend that this is not an invocation of force.

But then drawing a special case around it.

But it's the sort of force that's meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.

I don't think Eliezer has checked the news if he thinks the US government carries out violence in predictable or fair or avoidable ways! Venezuela! (It wasn't fair before Trump, or avoidable if you didn't want to bend over for the interests of US capital, but it is blatantly obvious under Trump.) The entire lead-up to Iran consisted of ripping up Obama's attempts at treaties and trying to obtain regime change through surprise assassination! Also, if the Stop AI doomers used some clever cryptography scheme to make their policy of property destruction (and assassination) sufficiently predictable and avoidable, would that count as "Lawful" in Eliezer's book? ~~If he kept up with the DnD/Pathfinder source material, he would know Achaekek's assassins are actually Lawful Evil~~

The ASI problem is not like this. If you shut down 5% of AI research today, humanity does not experience 5% fewer casualties. We end up 100% dead after slightly more time.

His practical argument against non-state-sanctioned violence is that we need a total ban (and thus the authority of state driving it), because otherwise someone with 8 GPUs in a basement could invent strong AGI and doom us all. This is a dumb argument, because even most AI doomers acknowledge you need a lot of computational power to make the AGI God. And they think slowing down AGI (whether through violence or other means) might buy time for another sort of solution that is more permanent (like the idea of "solve alignment" Eliezer originally promised them). Lots of lesswrong posts regularly speculate on how to slow down the AI race and how to make use of the time they have, this isn't even outside the normal window of lesswrong discourse!

Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals

Sources cited: 0

One of the comments also pisses me off:

Which reminds me about another point: I suspect that "bomb data centers" meme causal story was not somebody lying, but somebody recalling by memory without a thought that such serious allegation maybe is worthy to actually look up it and not rely on unreliable memory.

"Drone strike the data centers even if starts nuclear war" is the exact argument Eliezer made and that we mocked. It is the rationalists that have tried to soften it by eliding over the exact details.

[–] fullsquare@awful.systems 7 points 11 hours ago* (last edited 11 hours ago) (1 children)

eliezer misses that (as used in the decolonization/civil rights era) nonviolence is effectively a sophisticated propaganda strategy that takes existing injustices and violence and uses them to bait the opponent into attacking you, all while your own people take photos and show the entire world carefully crafted messaging that appeals to the general public's conscience. the messaging part is extremely important in this. there's no fucking way this could work for him, because his cause is comprehensible only to those who already buy his cult messaging as ground truth. he's in it just for the moral superiority of being nonviolent. he's never gonna get it because comprehending it requires touching grass

[–] gerikson@awful.systems 2 points 5 hours ago (1 children)

Yeah both non-violence and pure terrorism are communication forms at the root. I remember reading long ago that the Rote Armee Fraktion's master plan was:

  1. commit horrific acts of violence against pillars of the community / rob banks to get money
  2. said acts would unleash a repressive wave of violence from the state
  3. the proletariat would see this repressive wave, wake up, and cause the revolution

It kinda stopped at stage 2, because the BRD's security services were a bit less ex-Nazi than they expected, and also there was basically no proletariat.

Also the Southern police chief who correctly deduced that mass arrests were what the civil rights activists wanted, got the go-ahead from neighboring county jails, and then politely and non-violently arrested everyone protesting and spread them out over a wider area, thus preventing the media-friendly repression that was the goal.

[–] fullsquare@awful.systems 1 points 4 hours ago

Yeah, there are only so many ways to get it going. you don't hear about the ones that don't figure it out, because cops bust them, making them look like clowns, and nobody wants to get associated with them afterwards

there is also a barrier between step 2 and 3, because sometimes news like that is suppressed. american school shootings get that treatment sometimes, not to mention all the info filtering at facebook and friends. this is why sympathetic media is an important bit to have in advance. there's also the bit where any serious insurgency needs money, and it looks like what they got didn't work out

per the blogpost, that southern police chief was Laurie Pritchett, and this kind of thinking is also what makes COIN tick. worry not, Hegseth declared it all woke nonsense

[–] Soyweiser@awful.systems 4 points 13 hours ago

But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.

Remember the cartoon of the bombs being dropped on people and the people going 'I hear the next bombs will be sent by a woman', this but 'with lawful force'.

[–] YourNetworkIsHaunted@awful.systems 6 points 15 hours ago (1 children)

This feels somehow tied to the whole "agentic" thing I've ranted about previously. Like, individual acts of violence are strictly destructive because the people doing it aren't sufficiently "agentic" to change things, even though American history is full of cases where (usually racist) vigilante violence had a huge impact on people's decision-making. But when the government does it, it's different, because people in government got there by proving their agency and ability to actually impact the world. Like, it feels almost like he's offended that the NPCs might try and do something as drastic as killing someone without GM permission.

Meanwhile in reality, people legitimately do feel like they don't have a lot of options to protect themselves from the real harms this industry is doing, to say nothing of the people who buy his line about the oncoming class-K end-of-life scenario. Anger is an appropriate response to the circumstances we find ourselves in, and in a nation that has been quietly cultivating a culture of heroic violence for decades we shouldn't be surprised to see people trying to inflict that fear and rage upon the outside world.

[–] Evinceo@awful.systems 6 points 15 hours ago

in a nation that has been quietly cultivating a culture of heroic violence for decades we shouldn’t be surprised to see people trying to inflict that fear and rage upon the outside world.

Nay, a culture where every citizen is entitled to one armed crashout, and threats of such have been an important lever used by the party that believes in that entitlement for decades.

[–] samvines@awful.systems 6 points 1 day ago* (last edited 1 day ago) (2 children)

Soon, at each new model of AI along the current capability curve, you will start to see large discrete jumps in ability in economically important areas, because the previous AI ability level in some aspect of the job just wasn't good enough and bottlenecked progress. When bottlenecks are released, it looks like a leap forward. It is going to look like unexpected gains in AI capacity, and, indeed there is no sign that the current exponential ability curve is slowing down so far but it is going to be like what happened in coding: as soon as models crossed a certain threshold with Opus 4.5, GPT-5.2, and Gemini 3, suddenly Claude Code & Codex were viable. Before that, it was all about coding assistance; afterwards it was all about agents, despite relatively small gains in model ability

There is just something so inherently smug and annoying about Mollick. He is one of those low information boosters whose posts sound intellectual until you really think about them.

Tell me more about how the pile of cursed spaghetti that is Claude code is now viable due to model breakthroughs. All I see are hype men saying "the new model is a team of PhDs in your pocket" and then releasing disappointing updates or saying "the new model is too dangerous" because they have some vaporware powered by human crowdsourcing.

Also coding is not like other areas - you can test for hallucinations by compiling and printing and running tests.

I guess my first mistake this morning was opening linkedin

[–] YourNetworkIsHaunted@awful.systems 5 points 23 hours ago (1 children)

I've never understood how these things are simultaneously gaining their abilities from statistical analysis of all kinds of random writings online, including social media, fanfic, reddit, etc., but also supposed to end up as experts rather than a much faster and more agreeable dumbass. Like, the training data may include all the great works of literature, all the scrapable scientific studies and textbooks they could steal, and so on. But it also included every moron who ever shared conspiracy theories on Twitter, every confident-sounding business idiot on LinkedIn, and every stupid word that Scott or Yud ever wrote. Surely the bullshit has to exceed the expertise by raw volume, and if they took the time and energy to curate it out the way they would need to to correct that, they wouldn't be left with a large enough sample to actually scale off of.

Basically, either I'm dramatically misunderstanding something or the best we can hope for is the Average Joe on Reddit, who may not be a complete dumbass but definitely isn't a team of PhDs.

[–] scruiser@awful.systems 4 points 21 hours ago* (last edited 20 hours ago)

LLMs generate the next most probable token given the previous context of tokens they have (not an average of the entire internet). And post-training shifts the odds a bit further in a relatively useful direction. So given the right context the LLM will mostly consistently regurgitate content stolen from PhDs and academic papers, maybe even managing to shuffle it around in a novel way that is marginally useful.
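The "next most probable token given the previous context" idea can be sketched with a deliberately tiny toy: a bigram model, nothing remotely like a real transformer, with a made-up three-line corpus. The point is only that the same word gets different continuations depending on what the training text said followed it:

```python
# Toy sketch: next-token choice as conditional probability estimated from a corpus.
# Corpus and function names are illustrative, not from any real library or model.
from collections import defaultdict

corpus = [
    "the glue holds the paper",
    "add glue to the pizza",   # the joke answer lurking in the training data
    "the paper cites the study",
]

# Count bigrams so we can estimate P(next | previous).
counts = defaultdict(lambda: defaultdict(int))
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def most_probable_next(prev):
    """Return the most frequent follower of `prev` in the corpus, or None."""
    followers = counts[prev]
    return max(followers, key=followers.get) if followers else None

print(most_probable_next("the"))  # "paper": it follows "the" most often here
print(most_probable_next("add"))  # "glue": the joke answer, faithfully regurgitated
```

One wrong word in the context ("add" instead of "the") and you deterministically get the glue answer, which is the whole problem in miniature: the model has no notion of which continuation is true, only of which is frequent.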

Of course, that is only the general trend given the right^tm^ prompt. Even with a prompt that looks mostly right, one seemingly innocuous word in the wrong place might nudge the odds, and you get the answer of a moron from /r/hypotheticalphysics in response to a physics question. Or asking for a recipe gets you Elmer's glue on your mozzarella pizza, from a reddit joke answer.

if they took the time and energy to curate it out the way they would need to to correct that they wouldn’t be left with a large enough sample to actually scale off of

They do steps like training the model generally on the desired languages with all the random internet bullshit, and then fine-tuning it on the actually curated stuff. So that shifts the odds, but again, not enough to actually guarantee anything.

So, tl;dr: you're right, but since it is possible to get somewhat better than average internet junk with curation and post-training and prompting, LLM boosters and labs have convinced themselves they are just a few more iterations of data curation and training approaches and prompting techniques away from entirely eliminating the problem, when the best they can do is make it less likely.

[–] o7___o7@awful.systems 2 points 1 day ago

"Cursed spaghetti"

🤌

[–] sc_griffith@awful.systems 2 points 1 day ago (1 children)

new odium symposium episode is now available on all platforms. we look at Avgi Saketopoulou and Ann Pellegrini's Gender Without Identity, a contemporary work of queer psychoanalytic theory. then we look at a case study in which it all goes wrong.

https://www.patreon.com/posts/14-just-call-me-155052365

also we're starting a discord for the podcast https://discord.gg/7tEEE39Fx

also also we're going to release our first subscriber episode next week, where we look at the pseudoscientists of paper repository viXra

Maybe it's just because I'm rolling back through Age of Mythology, but I died laughing at "it's like the centaur, Helen"

[–] gerikson@awful.systems 8 points 2 days ago (2 children)

LW stalwart discovers kids get sniffles from daycare, obviously this means women have to stay at home to take care of kids and not work:

https://www.lesswrong.com/posts/byiLDrbj8MNzoHZkL/daycare-illnesses

BTW almost every person born after 1970 in Sweden has been to daycare as a kid; if daycare illnesses had long-term consequences, they would be showing up here

[–] CinnasVerses@awful.systems 6 points 1 day ago (1 children)

Sweden is an interesting example because they pioneered the let-it-rip approach to COVID. That was less disastrous than it could have been but not great even in a country with a lot of detached housing and nuclear families. https://kevinmd.com/2025/01/swedens-controversial-covid-19-strategy-lessons-from-higher-mortality-rates.html I would not have recommended putting children in daycare without strict indoor-air-quality standards between 2020 and 2024.

[–] gerikson@awful.systems 4 points 1 day ago* (last edited 1 day ago) (1 children)

Covid is an exception, and believe me, if the main victims of Covid had been kids instead of old people stuffed into elder-care facilities, forgotten by everyone, the dynamics around masking and vaccines and lockdowns would have been a lot different.

My point is that most kids in Sweden go to daycare, "daycare sickness" (where the whole family comes down with enteritis etc) is a common thing, and as far as I know the country doesn't stand out in health stats.

You can argue that the loss of productivity from this is a factor, but as you mention in a parallel comment, the authorities can demand better hygiene and air quality in preschools and schools, and it would be cheaper than outfitting every single home.

[–] CinnasVerses@awful.systems 3 points 1 day ago

In 2026 I would be most concerned about measles.

[–] CinnasVerses@awful.systems 5 points 1 day ago* (last edited 1 day ago)

They do have consequences! Crowded schools and preschools and daycares spread all kinds of dangerous infectious diseases, and there are consequences of getting them. With the arrival of COVID and the decline in vaccination, risks are rising.

'Getting sick builds resistance' is another of the folk medicine beliefs which we in the infection control community have been fighting since 2020. Some diseases are milder the second or third time, but generally you want to get as few infectious diseases as possible.

[–] antifuchs@awful.systems 6 points 2 days ago

Excited for the labor and contract law disputes that this will spawn when the model makes promises that the person won’t keep https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone

But, as expected, this is another zuck project that doesn’t have the leg(itimation)s

[–] gerikson@awful.systems 5 points 2 days ago (2 children)

ok the takes on the attempted firebombing of sama's mansion are coming in from the rats and those that watch them. Credit to letting stuff marinate, I guess, and/or not working on a weekend

no clue who this dude is, has a slobsuck with a .ai domain, but makes sense:

https://www.campbellramble.ai/p/the-rational-conclusion

Weird MtG scarecrow Zvi plays moral philosopher, invokes multiple authorities on Xhitter:

https://thezvi.substack.com/p/political-violence-is-never-acceptable

[–] dgerard@awful.systems 6 points 1 day ago (1 children)

the worst kind of violence, the sort against people like me

all those other deaths? those aren't violence

[–] gerikson@awful.systems 3 points 1 day ago

Right? If it had been some poor schlub manning the security desk at a datacenter, it would have been a blip. But this is a VC we're talking about!!

[–] scruiser@awful.systems 6 points 1 day ago (1 children)

The Zvi post really pisses me off for continuing to normalize Eliezer's comments (in a way that misrepresents the problems with them).

This happened quite a bit around Eliezer’s op-ed in Time in particular, usually in highly bad faith, and this continues even now, equating calls for government to enforce rules to threats of violence, and there are a number of other past cases with similar sets of facts.

Eliezer called for the government to drone strike data centers, even of foreign governments not signatories to international agreements, and even if doing so risked starting nuclear war.

Pacifism is at least a consistent position, but instead rationalists like Zvi want to simultaneously disown the radical actions and legitimize the US's shit show of a foreign policy.

Another thing that pisses me off is the ahistorical claim by rationalists that such actions are ineffective and unlikely to succeed. Asymmetric warfare and terrorist tactics have obtained success many times in history! The KKK successfully used terrorism to repress a population for a century. The Black Panthers got gun control passed in California and put pressure on political leaders to accept the more peaceful branch of the civil rights movement. The IRA got the Good Friday Agreement. The US revolution! All the empires that have withdrawn from Afghanistan!

Overall though... I guess this is a case of two wrongs making a sorta right. They are dangerously wrong about AI doom, but at least they are also wrong about direct action and so usually won't take the actions implied by their beliefs. (But they are still, completely predictably, inspiring stochastic terrorists).

[–] gerikson@awful.systems 9 points 1 day ago (2 children)

Yeah, what the fuck is this passage

If you believe that If Anyone Builds It, Everyone Dies, then you should say that if anyone builds it, then everyone dies. Not moral blame. Cause and effect. Note that this is importantly different from ‘anyone who is trying to build it is a mass murderer.’

(note the rat-tic of using "importantly" as an adjective)

This deftly evades the main question - how do we ensure that no-one builds it? There's a host of options, and political violence is one of them. I guess categorically stating it's off the table is a start, but Zvi has the moral gravitas of a dormouse. If I was of the political violence bent I'd probably commit some just to spite him.

It's a willful refusal to actually consider the consequences of their beliefs, which is deeply ironic for a bunch that pride themselves on their hardcore consequentialism. Like, even if you just mean "if anyone builds it, everyone dies" as a simple cause and effect, that should imply some kind of action unless you don't think everyone dying would be bad actually.

[–] scruiser@awful.systems 5 points 21 hours ago* (last edited 21 hours ago) (1 children)

how do we ensure that no-one builds it?

Eliezer made a lesswrong post yesterday where he explains that since anyone could build it, lone acts of violence are obviously ineffective, and the only solution is the right and proper ("Lawful", as he calls it, because he has been stuck on DnD since writing Planecrash) state violence, which can enforce a worldwide ban (whose threshold you may recall Eliezer has put at an absurdly low 8 2024-era GPUs).

[–] blakestacey@awful.systems 9 points 20 hours ago (1 children)

All their doom scenarios are made-up sci-fi bullshit, so of course they have free rein to pontificate about the right and wrong ways to prevent them. And because they are high on their own sci-fi, they downplay or neglect or misunderstand the real harms of the rising slop sea. Consequently, they fail to grasp the real social reaction to acts of violence.

[–] scruiser@awful.systems 5 points 20 hours ago (1 children)

they fail to grasp the real social reaction

side-note... I wonder what the overlap is between the rationalists that showed up to their stupid "march for billionaires" and AI doomers?

[–] Evinceo@awful.systems 5 points 15 hours ago

Aella showed up.

[–] CinnasVerses@awful.systems 5 points 2 days ago (1 children)

There is research on evangelicals in the USA who interpret Trump tweets the way their heretical pastors teach them to interpret the Bible. Is there something similar on the vast, contradictory, off-the-cuff social media output of people like Yud, akin to Islamic treatment of the Hadiths?

[–] scruiser@awful.systems 4 points 1 day ago (1 children)

You've reminded me of the whole edifice of Qanon lore, where they would try to combine 4chan (and later even sketchier sites like 8chan) hints with whatever Trump was posting at the moment to decode secret knowledge about stuff like when the military tribunals executing all the Democrats would be held.

Anyway, in Eliezer's case, I kind of get the feeling the lesswrong rationalists have somewhat moved on from him? They are still excessively deferential to him, but the vibe I get from hate-browsing lesswrong is that the majority of the rationalists there put much lower odds on AI doom? (It's hard to tell exactly because Eliezer has avoided committing to timelines or hard probabilities on AI doom despite all his talk about putting probabilities on everything in the sequences). Lesswrong occasionally references his tweets but not that often. Like I think sneerclub actually references them more often?

[–] CinnasVerses@awful.systems 4 points 1 day ago* (last edited 1 day ago)

The places to find Yudkowsky stans are Substack, Twitter, and the meetups and foundations in the Bay Area. Some of them accept his teachings but reject him because he became a doomer and they want to build God and conquer Death tomorrow.

Around 2012 or 2013, Yudkowsky passed the forum off to CFAR and stopped posting much there. For a while he used it as one mouth of his recruitment funnel (the other mouth being Effective Altruism). That is a really common fate for mailing lists, forums, and comments sections, just like it's really common that people take what they want from the Sacred Texts. People who post and post are not always helpful for raising money or creating offline events.

e/ The people who respond to Yudkowsky's tweets and whom he retweets are TESCREAL figures.
