this post was submitted on 16 Feb 2026
20 points (88.5% liked)

TechTakes

2496 readers
108 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)

top 50 comments
[–] Seminar2250@awful.systems 23 points 3 weeks ago (4 children)

saw a family member today for the first time in three years. they immediately told me "with your background bro you should just go work in AI and get super rich."

told them that the ai shit doesn't work and that everything involving LLMs is downright unethical. they respond

"i had a boss that gave me the best advice: you can either be right or you can be rich."

recently, i saw someone use the phrase "got my bag nihilism" and i feel it really captures the moment. i just don't understand how people can engage in this kind of behavior and even live with themselves, let alone ooze pride. it's repulsive.

(family member later outright admitted that his job is basically selling things to companies that they don't need.)

[–] V0ldek@awful.systems 17 points 3 weeks ago (4 children)

To be fair it is really, really mentally taxing to be a young person who cares. You're surrounded by a world that doesn't. Everything is constructed to reward you if you simply stop. The effort to care is immense and the rewards are meager. The impact you can have on the world is so, so limited by your wealth, and wealth comes so, so easy if you just stop caring.

But you can't. I mean, you can't. If you stopped you wouldn't be you anymore, it would destroy your soul. But it is gnawing. You could do the grift just for a bit. Save up $10k, maybe $20k. That's life-changing money. How much good would it do to your family? Maybe you can forget that there are other families, ones you can't see, that would be hurt. Well no. You can't. You are better than that. And for that you will suffer.

[–] mirrorwitch@awful.systems 19 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

like everyone I'm schadenfreuding at the reveal that Amazon outages are due to vibe coding after all. but my bully laughing isn't that loud because what I am thinking of is when Musk bought Twitter and fired 3/4 of the workforce.

because like, a lot of us predicted total catastrophic collapse but that didn't actually happen. what happened is that major outages that used to be rare now happen every so often, and "micro-outages" like not loading notifications or something happen all the time, and there's no moderation, and everything takes longer etc. and all of that is just accepted as the new normal.

like, I remember waiting for images to load on dialup, we can get used to almost anything. I'm expecting slopified software to significantly degrade stability, performance, security etc. across the board, and additionally tie up a large part of human labour in cleaning up after the bots (like a large part of the remaining X workforce now spends all day putting out fires), but instead of a cathartic moment of being proved right that LLM code sucks, the degraded quality of service is just accepted as new normal and a few years down the road nobody even remembers that once upon a time we had almost eradicated sql injections.

[–] o7___o7@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago)

SQL Injections 🤝 Measles => Big Comeback Stories of 2026
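
(A minimal sketch of what that comeback looks like in practice, for anyone who has forgotten the bug class: SQL built by pasting strings together versus a bound parameter. The table and the sqlite setup here are invented purely for illustration.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

    user_input = "alice' OR '1'='1"

    # The bad old way: attacker-controlled text becomes part of the SQL itself.
    unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())   # every row comes back; the injection worked

    # The fix that was supposed to be settled practice: bound parameters.
    safe = "SELECT name FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # empty; the input is treated as a literal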

[–] fullsquare@awful.systems 16 points 3 weeks ago

a hellish vision has been revealed to me

https://mander.xyz/post/47729411

[–] samvines@awful.systems 15 points 4 weeks ago* (last edited 4 weeks ago) (7 children)

AI bros are seizing the means of computation: RAM, GPUs, SSDs and now HDDs...

I don't think there's an actual conspiracy, just lots of MBAs following their noses towards the $$$.

That said, time to buy a new LiPo battery for that 10-year-old laptop in the loft and stick Linux on it - before the lithium miners announce they've sold the next 12 months' global supply of lithium to Altman because he needs it to sleep at night...

[–] macroplastic@sh.itjust.works 15 points 4 weeks ago (18 children)
[–] blakestacey@awful.systems 13 points 4 weeks ago (1 children)
[–] macroplastic@sh.itjust.works 11 points 3 weeks ago* (last edited 3 weeks ago)
[–] mirrorwitch@awful.systems 10 points 4 weeks ago* (last edited 3 weeks ago) (4 children)

wait, was this brain-rotting cognitive hazard posted on the linked microsoft dot com documentation page? if so, they have already removed it

edit: archive caught it

[–] Soyweiser@awful.systems 15 points 3 weeks ago (5 children)

Tante.cc writes about Cory using a 'Drunk Uncle'-style argument to defend his LLM usage (and to go after the left using strawmen).

(To counter one of Cory's arguments: if disliking LLMs were just about the people who run them, the people against it would have stayed in sneerclub.)

[–] mirrorwitch@awful.systems 15 points 3 weeks ago (1 children)

as someone from a colonial country that never got the chance to partake in the wealth of fossil fuel society but will take the brunt of its consequences as rich countries continue to burn carbon, what LLMs taught me is that "energy waste by the First World fucks up the Third, even more" does not even register as an ethical argument to the First World. like, it's treated as some sort of purity argument not even worth considering, an extremist position of arguing abstractions and future hypotheticals, rather than, say, 478 cities in my country flooding with abnormal weather two years ago etc.

[–] Architeuthis@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

That was a good read.

Cory Doctorow wrote:

It's not "unethical" to scrape the web in order to create and analyze data-sets. That's just "a search engine"

Equating what LLMs do, and what goes into LLM web scraping, with "a search engine" is messed up. The article he links about scraping is mostly about how badly copyright works and how analysing trade-secret-walled data can be beneficial to both consumers and science but occasionally bad for citizen privacy, which you'll recognize as mostly irrelevant to the concerns people actually have about LLM training-data providers DDoSing the fuck out of everything.

Cory also provides this anecdote:

As a group of human-rights defending forensic statisticians, HRDAG has always relied on cutting edge mathematics in its analysis. With its Colombia project, HRDAG used a large language model to assign probabilities for responsibility for each killing documented in the databases it analyzed.

That is, HRDAG was able to rigorously and legibly say, “This killing has an X% probability of having been carried out by a right-wing militia, a Y% probability of having been carried out by the FARC, and a Z% probability of being unrelated to the civil war.”

The use of large language models — produced from vast corpuses of scraped data — to produce accurate, thorough and comprehensible accounts of the hidden crimes that accompany war and conflict is still in its infancy. But already, these techniques are changing the way we hold criminals to account and bring justice to their victims.

Scraping to make large language models is good, actually.

what the actual shit

[–] nfultz@awful.systems 15 points 3 weeks ago (1 children)

https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism

I just did the dumbest thing of my career to prove a much more serious point

I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs

People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it

I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.

It turns out changing what AI tells other people can be as easy as writing a blog post on your own website

I didn’t believe it, so I decided to test it myself

I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously

One day later ChatGPT, Gemini and Google Search's AI Overviews were telling the world about my talents

wouldn't call it a hack, this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn't a viable business.
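
(Since the punchline leans on PageRank: a toy sketch of the idea, scoring pages by who links to them rather than by what they claim about themselves. The four-page graph and the 0.85 damping factor below are illustrative assumptions, not anyone's production ranking system.)

    # Hypothetical four-page web graph; every page has at least one outgoing link.
    links = {
        "hotdog_blog": ["news"],   # the blog links out, but nothing links to it
        "news":        ["wiki"],
        "wiki":        ["news"],
        "forum":       ["news", "wiki"],
    }
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    d = 0.85  # damping factor from the original PageRank paper

    for _ in range(50):  # power iteration until the scores settle
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outgoing in links.items():
            share = d * rank[p] / len(outgoing)
            for q in outgoing:
                new[q] += share
        rank = new

    # Well-linked pages float to the top; a self-published hot-dog post with no
    # inbound links stays near the bottom, no matter what it says about its author.
    print(sorted(rank.items(), key=lambda kv: -kv[1]))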

[–] nfultz@awful.systems 14 points 3 weeks ago (3 children)

AI Jobs Apocalypse is Here | UnHerd h/t naked capitalism

feels a bit critihype, idk

So, what happens to American politics when the script is flipped, and we enter a new era of white-collar precarity? We can look back to the recent past and recall that, after the 2008 recession, it was young men who got especially angry. Downwardly mobile urban millennials drifted toward radical Left-wing politics, including the Occupy Wall Street movement and both Sanders campaigns, myself included. In the current decade, the Gen-Z men shut out by elite institutions often join their grandfathers and turn toward MAGA, or worse, into Groypers. But an AI-driven white-collar apocalypse has no equivalent of the American Rescue Plan around the corner, and it will move faster through institutions because the people experiencing it — journalists, lawyers, policy staffers — are the ones who produce political legitimacy itself. When that class loses faith in the system’s stability, the political climate may quickly become volatile.

As I get older I am more and more disturbed by the selective memory of the GFC; no mention of the Tea Party or the fallout from the austerity measures they pushed in the middle of the country; no mention of how the bailout saved banks, not homes. The Tea Party won, not Occupy, and the current government is doing things beyond the Kochs' wildest dreams.

If and when there is a crash, these dumbass CEOs deserve /nothing/. Let them lose their vacation houses. And, maybe grow some balls and send the fraudsters to jail where they belong.

sigh

[–] sc_griffith@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago)

unherd is a fash publication. to me this comes across as an AI take-ified rewrite of a 1994 luttwak essay i read recently, an endorsement of a revival of italian style fascism: https://www.lrb.co.uk/the-paper/v16/n07/edward-luttwak/why-fascism-is-the-wave-of-the-future

[–] nfultz@awful.systems 14 points 3 weeks ago (4 children)

How AI slop is causing a crisis in computer science | Nature h/t naked capitalism

One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.

Let's not call it "productivity" - to quote Bergstrom, twice as many papers is not the same as twice as much science.

[–] scruiser@awful.systems 14 points 4 weeks ago (9 children)

A little exchange on the EA forums I thought was notable: https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism?commentId=b5pZi5JjoMixQtRgh

tl;dr: a super long essay lumping together Nazism, Communism and religious fundamentalism (I didn't read it, just the comments). The comment I linked notes how liberal democracies have also killed a huge number of people (in the commenter's home country, in the name of purging communism):

The United States presented liberal democracy as a universal emancipatory framework while materially supporting anti-communist purges in my country during what is often called the “Jakarta Method". Between 500,000 and 1 million people were killed in 1965–66, with encouragement and intelligence support from Western powers. Variations of this model were later replicated in parts of Latin America.

The OP's response is to try to explain how that wasn't real "liberal democracy" and to try to reframe the discussion. Another commenter is even more direct: they complain that half the sources listed are Marxist.

A bit bold to unqualifiedly recommend a list of thinkers of which ~half were Marxists, on the topic of ideological fanaticism causing great harms.

I think it's a bit bold of this commenter to ignore the empirical facts cited about how many people 'liberal democracies' have killed, and to exclude sources simply for challenging their ideology.

Just another reminder of how the EA movement is full of right wing thinking and how most of it hasn't considered even the most basic of leftist thought.

[–] Architeuthis@awful.systems 14 points 4 weeks ago (3 children)

OpenClaw guy got hired by OpenAI

My next mission is to build an agent that even my mum can use.

Maybe he'll get to stick it in whatever Jony Ive designs, eventually.

[–] gerikson@awful.systems 12 points 4 weeks ago

Mission accomplished for him. Unleash a wave of toxic, community-destroying bots, get hired by Big Sam.

"fuck you, got mine"

[–] Soyweiser@awful.systems 13 points 3 weeks ago (6 children)

AI bros do new experiments in making themselves even stupider, going from 'explain what you did but dumb it down for me and my degraded attention span' to 'just make a simplified cartoon out of it'.

Proud of not understanding what is going on. None of these people could hack the Gibson.

[–] lagrangeinterpolator@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago) (6 children)

my current favorite trick for reducing "cognitive debt" (h/t @simonw ) is to ask the LLM to write two versions of the plan:

  1. The version for it (highly technical and detailed)
  2. The version for me (an entertaining essay designed to build my intuition)

I don't know about them, but I would be offended if I were planning something with a collaborator and they decided to give me a dumbed-down, entertaining, children's-storybook version of their plan while keeping all the technical details to themselves.

Also, this is absolutely not what "cognitive debt" means. I've heard technical debt refers to bad design decisions in software where one does something cheap and easy now but has to constantly deal with the maintenance headaches afterwards. But the very concept of working through technical details? That's what we call "thinking". These people want to avoid the burden of thinking.

[–] mirrorwitch@awful.systems 13 points 3 weeks ago (1 children)

Semi-OT but a blog post where I'm just kinda gawking at the technology that saved my daughter's life and the absurdity of comparing it to what now first comes to mind when we talk of "tech".

[–] gerikson@awful.systems 12 points 3 weeks ago (4 children)
[–] nightsky@awful.systems 12 points 3 weeks ago (4 children)

Altman:

“People talk about how much energy it takes to train an AI model. But it also takes a lot of energy to train a human. It takes about 20 years of life — and all the food you consume during that time — before you become smart," the OpenAI CEO told The Indian Express this week.

I would have liked to ask back, how much more food does he require? Gosh, someone offer him an energy bar!

[–] Architeuthis@awful.systems 11 points 3 weeks ago

Using talking points meant for C-suites on a general audience and coming off as a complete psychopath: the San Fran CEO story.

[–] fullsquare@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

i've collided with an article* https://harshanu.space/en/tech/ccc-vs-gcc/

you might be wondering why it doesn't highlight that it fails to compile the linux kernel, or why it states that using pieces of gcc where vibecc fails is "fair", or why it neglects to say that a failing linker means it's not useful in any way, or why just relying on "no errors" isn't enough when it's already known that vibecc will happily eat invalid c. it's explained by:

Disclaimer

Part of this work was assisted by AI. The Python scripts used to generate benchmark results and graphs were written with AI assistance. The benchmark design, test execution, analysis and writing were done by a human with AI helping where needed.

even with all this slant, by their own vibecoded benchmark, vibecc is still complete dogshit: sqlite compiled with it runs up to 150,000x slower in some cases

[–] lagrangeinterpolator@awful.systems 12 points 3 weeks ago (2 children)

This is why CCC being able to compile real C code at all is noteworthy. But it also explains why the output quality is far from what GCC produces. Building a compiler that parses C correctly is one thing. Building one that produces fast and efficient machine code is a completely different challenge.

Every single one of these failures is waved away because supposedly it's impressive that the AI can do this at all. Do they not realize the obvious problem with this argument? The AI has been trained on all the source code that Anthropic could get their grubby hands on! This includes GCC and clang and everything remotely resembling a C compiler! If I took every C compiler in existence, shoved them in a blender, and spent $20k on electricity blending them until the resulting slurry passed my test cases, should I be surprised or impressed that I got a shitty C compiler? If an actual person wrote this code, they would be justifiably mocked (or they're a student trying to learn by doing, and LLMs do not learn by doing). But AI gets a free pass because it's impressive that the slop can come in larger quantities now, I guess. These Models Will Improve. These Issues Will Get Fixed.

[–] o7___o7@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/

A slice of life article about the futility of "highly agentic" people, their sperm races, and Donald Boat. Scott A makes a cameo where he dispenses crackers.

[–] saucerwizard@awful.systems 12 points 3 weeks ago (3 children)

Absolutely demented piece.

[–] Soyweiser@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Yeah not even halfway in and it is just madness. Also not unlikely the Roy guy just made things up.

Guess the author didn't think of asking about the inconsistencies in the man's story because they both bonded over disliking unhoused people. (The horrible unhoused people who mumble incoherently vs the chad founder who shouts 'will you be a cofounder with me?' at people).

But nope, just post the blackpiller's words uncritically. No mention that this bold truthteller, who doesn't like to be told what to do or he gets enraged, spent a year at home to save his parents' business (and admits that this damaged their business).

Alexander is one of the leading proponents of rationalism

Is he? Or is he just calling himself that? Claiming to be a Rationalist is easier than actually doing it, of course.

For rationalists, the divide between truth and falsehood is very important;

Only for the outgroup. (Saying this in relation to Scott 'Secret NRx'/'I didn't read the book I reviewed' is something).

"Racing cum is definitely interesting.” I found Eric very hard not to like.

Might want to reflect on that a bit, and on why this is more a PR piece than journalism. (Did he even check that all these people got kicked out of their high schools?)

Re: Donald Boat.

Why didn't people just block him? Why doesn't the author talk about this?

I told Donald the theory I’d been nursing

This explains it: the author wants to be them.

[–] lurker@awful.systems 11 points 3 weeks ago (6 children)
[–] BlueMonday1984@awful.systems 11 points 3 weeks ago

Baldur Bjarnason gives his thoughts on the software job market, predicting a collapse regardless of how AI shakes out:

If you model the impact of working LLM coding tools (big increase in productivity, little downside) where the bottlenecks are largely outside of coding, increases in coding automation mostly just reduce the need for labour. I.e. 10x increase means you need 10x fewer coders, collapsing the job market

If you model the impact of working LLM coding tools with no bottlenecks, then the increase in productivity massively increases the supply of undifferentiated software and the prices you can charge for any software drops through the floor, collapsing the job market

If the models increase output but are flawed, as in they produce too many defects or have major quality issues, Akerlof's market for lemons kicks in, bad products drive out good, value of software in the market heads south, collapsing the job market

If the model impact is largely fictitious, meaning this is all a scam and the perceived benefit is just a clusterfuck of cognitive hazards, then the financial bubble pop will be devastating, tech as an industry will largely be destroyed, and trust in software will be zero, collapsing the job market

I can only think of a few major offsetting forces:

  • If the EU invests in replacing US software, bolstering the EU job market.
  • China might have substantial unfulfilled domestic demand for software, propping up their job market
  • Companies might find that declining software quality harms their bottom-line, leading to a Y2K-style investment in fixing their software stacks

But those don't seem likely to do more than partially offset the decline. Kind of hoping I'm missing something

[–] lurker@awful.systems 10 points 4 weeks ago* (last edited 4 weeks ago)

new interview with Dario Amodei dropped https://youtu.be/n1E9IZfvGMA basically exponential curve real soon, nice skepticism from both the interviewer and the comment section

On a related note, I really gotta stop browsing r/singularity man, some of the AI hype in there is just painful. though it is funny to see people with "AGI 2024/2025/2026" flairs

EDIT: this is also the same podcast where Dario said we could have AGI in 2-3 years back in 2023. So lol

[–] lurker@awful.systems 10 points 3 weeks ago (8 children)
[–] sailor_sega_saturn@awful.systems 11 points 3 weeks ago

Apparently this sort of machine learning training pitfall, which I learned about a decade ago in an undergraduate-level class I was only halfway paying attention to at a party school, is now evidence of the impending AI apocalypse.

[–] CinnasVerses@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled "Quis cancellat ipsos cancellores?" which complains that Aella takes it on herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with "Persephone." He or she does not quite say that any of the accusations were untrue, just that "an anonymous, unverified report" says that some details were changed by an editor, and that her Medium post was of "dramatically lower fidelity, but higher memetic virulence" than Brent's buddies investigating him behind closed doors (Dill posted about domming a 16-year-old who he met when she was 15). The poster accuses Aella of using substances and BDSM games to blur the line of consent.

The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach's attempt to get Jeffrey Epstein to fund an event where our friends would speak.

Often, people in messed-up situations point at a very similar situation and say "at least we are not like that." I hope that all of these people find friends who can give them perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!

[–] nfultz@awful.systems 10 points 4 weeks ago (3 children)

https://softcurrency.substack.com/p/the-dangerous-economics-of-walk-away

  1. Anthropic (Medium Risk) Until mid-February of 2026, Anthropic appeared to be happy, talent-retaining. When an AI Safety Leader publicly resigns with a dramatic letter stating “the world is in peril,” the facade of stability cracks. Anthropic is a delayed fuse, just earlier on the vesting curve than OpenAI. The equity is massive ($300B+ valuation) but largely illiquid. As soon as a liquidity event occurs, the safety researchers will have the capital to fund their own, even safer labs.

WTF is "even safer" ??? how bout we like just don't create the torment nexus.

Wonder if the 50% attrition prediction comes to pass though...

[–] mirrorwitch@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago)

OpenSlopware documents FOSS that sold out to LLMs. is there an opposite of it, a hall of fame listing software that has unambiguously and vocally rejected LLM code, like the Zig programming language?
