this post was submitted on 09 Mar 2026
11 points (92.3% liked)

TechTakes

2549 readers
55 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 11 points 1 month ago

OT: an interesting musing I found on fedi:

[–] blakestacey@awful.systems 10 points 1 month ago (1 children)

Julia Angwin:

I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.

State law requires consent before someone's name can be used for commercial purposes.

[–] self@awful.systems 10 points 1 month ago

DAIR, the AI-critical research organization founded by Timnit Gebru, is looking for a communications lead

[–] o7___o7@awful.systems 8 points 1 month ago* (last edited 1 month ago) (11 children)
[–] mirrorwitch@awful.systems 9 points 1 month ago* (last edited 1 month ago) (1 children)

It's true that these analogies can be stigmatizing, but they needn't be. As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life.

This bothers me more than I can explain.

ICE as autoimmune disorder presupposes that it's normally a good thing to have ICE around and it's just malfunctioning as an exceptional state of things. If ICE is an immune system (malfunctional or not), what are we immigrants?

[–] samvines@awful.systems 8 points 1 month ago (1 children)

They're not vibe-coding mission-critical AWS modules.

  1. Yes they are

and

  2. It's worse than that, they're vibe coding critical operating system components
[–] Architeuthis@awful.systems 7 points 1 month ago (2 children)

It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale:

https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

They're just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won't always choose wisely, but that's normal too. There's plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

Whoa, the whole thing is indefensibly capital-W wrong, just an utterly weird rose-colored-glasses view of the current corporate experience.

[–] istewart@awful.systems 7 points 1 month ago

centaur-configured programmer

Cory, baby, my dogg, sure "enshittification" was a big hit, but you can't expect that your rough-draft followups are automatically gold

[–] blakestacey@awful.systems 6 points 1 month ago (1 children)

A skilled, centaur-configured programmer

This is like reading Yud mumbling about "Shoggoths". It's giving knight errant, organ-meat eater, Byronic hero, Haplogroup R1b.

[–] YourNetworkIsHaunted@awful.systems 6 points 1 month ago (1 children)

Man, due to a weird alignment of the spheres I started reading those Honor Levy excerpts in the voice of Max Payne-style hardboiled narration and it fits weirdly well? Like a bargain version of the same sort of mid-budget semi-affectionate parody of existential angst that's all tone and minimal substance.

[–] mirrorwitch@awful.systems 7 points 1 month ago* (last edited 1 month ago)

Take "Morgellons Disease," a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:

Nitpick but this is unusually sloppy for Doctorow. 1) People with Morgellons don't believe they have wires growing out of sores, but fibres (which upon examination turn out to be cotton from clothes). 2) The original Morgellons is a putative children's disease «wherein they critically break out with harsh Hairs on their Backs, which takes off the Unquiet Symptomes of the Disease, and delivers them from Coughs and Convulsions.» Which is quite different from the modern condition, whose sufferers have skin sores anywhere on the body with fibrous material looking like lint, dandelion fluff etc., and not particularly associated with convulsions. And 3) The association between the two was made by Mary Leitao, a mother who believes her son suffers from the disease, and has gone to countless doctors and media trying to prove it's real. So it's an attempt to legitimise the postulated disease by cherry-picking something "historical" that vaguely resembles it.

It doesn't detract from Doctorow's overall argument, it's just an invalid example of the point he's trying to make (that delusions can be spread farther or intensified individually by technology). But where's the fact-checking? I learned this in five minutes from two Wikipedia pages, one of which was linked in the post. I have to wonder if Doctorow is posting through it in such emotional distress that he's pressing publish too quickly, which is what I hope for, because the alternative is that he asked a chatbot about Morgellons Disease instead of reading Wikipedia.

[–] anise@quokk.au 6 points 1 month ago (6 children)

He knows how LLMs work, right? This really is just cope because he got called out for being weird about using them. Really fucking disappointing

[–] Architeuthis@awful.systems 6 points 1 month ago* (last edited 1 month ago) (1 children)

In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs so I'd say the jury's out on that.

edit: Also, throughout this piece he keeps equivocating between local LLMs and their behemoth online counterparts with their heavily proprietary tooling that occasionally wraps them into a somewhat useful product.

I think he assumes that because he can load up a modest speech-to-text model locally and casually transcribe several hours of video resources in somewhat short order (this was apparently his major formative experience with modern AI) it works the same with e.g. coding.

Like, hey gpt-oss please make sense of these ten thousand lines of context without access to a hundred bespoke MCP intermediaries and one or three functioning RAG systems as I watch the token generation rate slow to a trickle while the context window gradually fills up.
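(The Ollama mixup is worth spelling out, since it's the crux of the "does he know how this works" question. A minimal sketch, assuming a stock local Ollama install on its default port 11434; `llama3.2` is a placeholder for whatever model has been pulled. The point is that the model name is just one field in the request payload: Ollama is the server app, and the LLM is a thing it hosts.)

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a stock install on port 11434)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for a model Ollama hosts.

    The model name is just one field in the JSON payload -- Ollama is the
    server application, not the LLM itself.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# "llama3.2" is a placeholder; any model `ollama pull` has fetched works here.
req = build_generate_request("llama3.2", "Say hello.")
print(json.loads(req.data)["model"])  # → llama3.2
```

Against a running instance, `urllib.request.urlopen(req)` would return JSON with a `response` field, but that obviously requires the server and model to actually be present.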

[–] lurker@awful.systems 8 points 1 month ago* (last edited 1 month ago) (4 children)

the Pentagon's CTO has AI psychosis now. sighhhhhhhhh

The whole argument can just be countered with "if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn't that also be a danger of becoming sentient? and why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn't be in the military?? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??"

It just reeks of bullshit. "uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we're doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government."

[–] lurker@awful.systems 8 points 1 month ago* (last edited 1 month ago) (2 children)
[–] BurgersMcSlopshot@awful.systems 6 points 1 month ago

AI was going to give us all universal healthcare but we didn't believe hard enough and now all we have is this.

[–] YourNetworkIsHaunted@awful.systems 8 points 1 month ago (5 children)

FT reports from Amazon insiders that they're investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.

FT also links to several previous stories they've reported on related issues, and I haven't had the time to breach the paywalls to read further, but the line that caught my eye was this:

The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.

To be honest, this is why I'm skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it's only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what's happening is, of course, a different story.

[–] BlueMonday1984@awful.systems 8 points 1 month ago (7 children)

Starting this Stubsack off, I've found another FOSS project that hit the digital krokodil - ntfy.sh v2.18.0 was written by AI

[–] mirrorwitch@awful.systems 7 points 1 month ago

I feel like at this point I want to highlight the ones that took a clear stance against LLM code. On a chardet thread, people listed:

  • Gentoo
  • Servo
  • Loupe
  • Qemu
  • postmarketOS
  • GoTo Social
  • Zig
[–] o7___o7@awful.systems 8 points 1 month ago

Never have so few been so unsatisfied to be so correct.

[–] o7___o7@awful.systems 7 points 1 month ago* (last edited 1 month ago) (4 children)

A hackernews notices that HN autoflags 404 Media articles. A little downthread, dang bullshits unconvincingly in response.

https://news.ycombinator.com/item?id=47354634

[–] samvines@awful.systems 7 points 1 month ago* (last edited 1 month ago) (3 children)

Silicon Valley is buzzing about this new idea: AI compute as compensation

These people are genuinely unhinged.

As the recent Harper's article says:

"...people who should be in The Hague are giving [startups] twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen...”

[–] fullsquare@awful.systems 10 points 1 month ago

this is just wages paid in crypto but adapted to the new era in a way that doesn't make sense

[–] BurgersMcSlopshot@awful.systems 9 points 1 month ago

"Selling your soul to the company store is not just fun, it is also invigorating!"

[–] jaschop@awful.systems 7 points 1 month ago

Man, that Harper's piece is a full DnD alignment chart of the most online bay area weirdos you've ever seen.

[–] aninjury2all@awful.systems 7 points 1 month ago (8 children)
[–] istewart@awful.systems 6 points 1 month ago

Hmm, he's still sticking to tweet-threads on Twitter. We'll know he's fully cracking when he resorts to Ackman-style unreadable text blocks on there.

[–] blakestacey@awful.systems 7 points 1 month ago

Chris Stokel-Walker at Fast Company reports:

High-level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what is being shared, according to a University of Oxford researcher who identified the issue.

The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names and some metadata associated with the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts. [...] “Anyone at the university, or a large number of people at least—including me—can see a number of projects [people have] been working on with ChatGPT,” says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.

Just one of many reasons that the mere existence of "ChatGPT Edu" means that many people need to be tased in the nads

[–] sc_griffith@awful.systems 6 points 1 month ago* (last edited 1 month ago) (5 children)

new development in the theory of ontology: "the ontology that makes ai models valuable is american," said in the context of the models killing iranians

https://bsky.app/profile/atrupar.com/post/3mguiup62lt2j

[–] Soyweiser@awful.systems 8 points 1 month ago

"Our lethal capacities. Our ability to fight war."

These are two different things. But I fear he doesn't get that.

[–] zogwarg@awful.systems 6 points 1 month ago

Actually the race-realism use last week, combined with this one, makes me realize that for them it's just a fancy way of saying "world-view" [or what they consider to exist, and be true, which is not the craziest use of the word, but I would say unhelpful, and probably a small in-group marker].

It's just a way of calling biases/prejudice legitimate.

And you know what, inasmuch the models have a "world-view" it IS annoyingly american in many ways. (at least the wrong kind of american.)

[–] lurker@awful.systems 6 points 1 month ago* (last edited 1 month ago) (3 children)

Anthropic is suing the Pentagon

This whole saga is a resounding “everyone sucks here”. but I’m gonna have to side with Anthropic on this one because at least they have some incredibly basic standards, which is far more than I can say for the current government and OpenAI, though the real best outcome is if the government and the AI industry destroy each other

(this has now been deemed high-quality enough for its own post)

[–] scruiser@awful.systems 7 points 1 month ago* (last edited 1 month ago)

The specific article's framing pisses me off...

Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.

As to who picked a fight with who, the DoD wanted to change the terms of their contract, to which Anthropic apparently compromised on every term except for mass surveillance of Americans (fuck the rest of the world I guess) and fully autonomous weapons (cause a human clicking "yes to confirm" makes slop-bot powered drones so much better). This wasn't good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article framing it as Anthropic "picking a fight" is a bullshit framing. I mean, they did kind of bring it on themselves hyping up their slop machine like it was a sci-fi AGI, but they didn't start the fight.

For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.

So they find a quote about contracts, but a Supply Chain Risk isn't just the DoD deciding on contracts, it is a specific power that has specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court's composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch's judgement, even if the process for the judgement was "Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization". If the DoD decided to just end the contract (without all the public threats of SCR or invoking the Defense Production Act) Anthropic wouldn't be in a position to sue and this drama wouldn't have been as publicized in the first place.

But the lawsuit itself takes a dramatically different tone.

Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made, and the other is making a court case about the actual law.

[–] samvines@awful.systems 6 points 1 month ago (6 children)

It turns out we didn't need that list of AI-corrupted open source projects after all...

At this rate it's actually going to be easier to make a list of projects that don't have AI...

Systemd and libuv now on the slop hype train

[–] mirrorwitch@awful.systems 7 points 1 month ago

Systemd

Jesus.

I've been advocating for a hall of fame of projects that explicitly reject LLMs; ctrl+f "Gentoo" on this very comment thread for the few examples I heard about.

[–] sailor_sega_saturn@awful.systems 6 points 1 month ago (2 children)

Last week 404 Media reported on some DOGE deposition videos.

The videos were since removed via court order, but are available on Internet Archive.

For anyone unfamiliar: this slots under TechTakes because DOGE is basically Elon Musk's army of naive fascist silicon valley tech-bros rampaging about the federal government with Chat-GPT, SQL, and unsecured thumb-drives.

This article is behind a paywall, but links to the following video snippets from the depositions:

https://www.instagram.com/reels/DVtOiqJjcu4/
https://www.instagram.com/reels/DVyhJT9jf4f/

For example here Justin Fox talks about deleting federal grants that he considered in-scope for an anti DEI executive order: https://www.instagram.com/reels/DVtOiqJjcu4/

Q: "Why is a documentary about Holocaust survivors DEI"

A: "It's the gender based story 🤷 that's inherently discriminatory to focus on this specific group 🙄."

Q: "It's inherently discriminatory to focus on what specific group?"

A: "The gender based. So, females 🤷 during the Holocaust."

He goes on to clarify that it's DEI because it focuses on Jewish women. Oh that's OK then!

There is a lot of video to work through but I know there is more ~~comedy gold~~ rage inducing punchable nazi snippets within.
