this post was submitted on 29 Mar 2026

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the ~~snowy~~ sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] samvines@awful.systems 13 points 1 month ago (2 children)

GitHub have finally achieved zero 9s stability for the last 90 days. Congratulations to all involved

screenshot showing 89.91% uptime with 95 incidents in the last 90 days

[–] antifuchs@awful.systems 8 points 1 month ago (2 children)

Hold on now, the uptime number contains two digits that are nines! The image itself has four nines in total!

[–] corbin@awful.systems 6 points 1 month ago (4 children)

Can't believe I'm nerd-sniped this easily. Very technically, the point at which a service should be considered unreliable or down is at γ nines, where γ = 0.9030899869919434… is a transcendental constant. γ nines is exactly 87.5% availability, or 7/8 availability, and it's the point at which a service's availability might as well be random. (Another one of the local complexity theorists can explain why it's 7/8 and not 1/2.)
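The constant checks out: n nines corresponds to availability 1 − 10⁻ⁿ, so 7/8 availability is log₁₀(8) ≈ 0.903 nines. A quick Python sketch (the function name is mine, not from the thread):

```python
import math

def nines(availability: float) -> float:
    """How many 'nines' a given availability fraction amounts to."""
    return -math.log10(1.0 - availability)

# gamma nines = 7/8 availability, as the comment says:
print(nines(7 / 8))   # 0.9030899869919435, i.e. log10(8)

# a conventional target for comparison:
print(nines(0.999))   # ~3.0 ("three nines")

# GitHub's screenshotted 89.91% uptime over 90 days:
print(nines(0.8991))  # ~0.996 — just shy of one nine
```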

[–] samvines@awful.systems 6 points 1 month ago (2 children)

Alas, foiled again! Nobody said they had to be leading 9s!

[–] antifuchs@awful.systems 7 points 1 month ago

For my own services I’m aiming for .999999% of uptime

[–] Soyweiser@awful.systems 7 points 1 month ago

If you had told this to the me of 20 years ago, I wouldn't have believed you.

[–] Soyweiser@awful.systems 10 points 1 month ago (1 children)

Not sure if I should post it here or under the pivot article, but somebody went through the Claude Code source: https://neuromatch.social/@jonny/116324676116121930 (via @aliettedebodard.com and @olivia.science on bsky)

[–] YourNetworkIsHaunted@awful.systems 13 points 1 month ago* (last edited 1 month ago) (2 children)

From mid-thread

13 butts pooping, back and forth, forever.

This is somehow even more of a shitshow than I would have predicted. Also it continues the pattern that these systems don't fuck up the way people do. One thing he hasn't annotated as much is the sheer number of different aesthetic variants on doing the same thing that this code contains. Like, you do the same kind of compression in four different places, and one is compressImage, one is DoCompression, one is imgModify.compress, and one is COMPRESS_IMG. Even the most dysfunctional team would have spent time developing some kind of standard here, from my (admittedly limited) experience.

[–] BurgersMcSlopshot@awful.systems 7 points 1 month ago (2 children)

Even the most dysfunctional team would have spent time developing some kind of standard here from my (admittedly limited) experience.

My experience has been vastly different. Prior to LLMs I have seen all sorts of horrors of this sort and others writ large across many codebases. It's so awesome that LLMs offer the ability to make the same sorts of code but at a much faster speed. In times past it used to take devs years to build up the kind of tech debt that LLMs can give you in days.

[–] Soyweiser@awful.systems 7 points 1 month ago

Yeah realized a while ago that vibe coding is a massive technical debt creation machine.

[–] blakestacey@awful.systems 9 points 1 month ago* (last edited 1 month ago) (1 children)

A pretty staid-sounding law firm warns that the AI industry is partying like it's 2007:

Lenders who originated data center loans [...] have begun pooling those loans and selling tranches to asset managers and pension funds, spreading risk well beyond the original lending institutions.

Also of note:

The most basic litigation risk in AI infrastructure finance is that the revenues generated by the sector may prove insufficient to service the fixed obligations incurred to build it. The industry brought in approximately $60 billion in revenue in 2025 against roughly $400 billion in capital expenditure.

(Via.)

[–] CinnasVerses@awful.systems 9 points 1 month ago* (last edited 1 month ago) (5 children)

An early hint of Gwern's rejection of chaos theory in the sequences from 2008 (the "build God to conquer Death" essay):

And the adults wouldn't be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn't seem so harsh, would be only another problem to be solved.

Someone who got as far as high-school math or coded a working system would probably have encountered the combinatorial explosion, the impossibility of representing 0.1 exactly in binary floating point, chaos theory, and so on. Even game theory has situations like "in some games, optimal play guarantees a tie but not a win." But Yud was much too special for any of those and refused offers to learn.
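The 0.1 point is a one-liner to demonstrate in Python:

```python
from decimal import Decimal

# 0.1 has no finite binary representation, so float arithmetic drifts:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# What the float literal 0.1 actually stores:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```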

[–] nfultz@awful.systems 9 points 1 month ago (2 children)

https://mail.cyberneticforests.com/the-computer-science-fetish/

The fetishism of the computer scientist therefore refers less to specific expertise than to whatever we imagine a credentialed expert can bestow: an external voice that says, "ask, and you shall receive.” The computer scientist becomes a mirror where those who work with the social, practical impacts of the tech hope to see our understanding affirmed. The people who offer that validation — who position themselves against the discourse of critique, who seem unbothered and detached, even ridiculing the same critical lingo that exhausts you — are not doing it out of sober objectivity or insight.

Sometimes they just don't respect you. Sometimes they're just annoyed by calls for accountability. And sometimes, they do it because they've fused with an interacting swarm of chatbots and transcended their human identity.

[–] fiat_lux@lemmy.world 9 points 1 month ago* (last edited 1 month ago) (9 children)

Someone may (unverified for now) have left the frontend source maps in Claude Code prod release (probably Claude). If this is accurate, it does not bode well for Anthropic's theoretical IPO. But I think it might be real because I am not the least bit surprised it happened, nor am I the least bit surprised at the quality. https://github.com/chatgptprojects/claude-code

For example, I can only hope their Safeguards team has done more on the Go backend than this for safeguards. From the constants file cyberRiskInstruction.ts:

export const CYBER_RISK_INSTRUCTION = "IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases"

That's it. That's all the constants the file contains. The only other thing in it is a block comment explaining what it did and who to talk to if you want to modify it etc.

There is this amazing bit at the end of that block comment though.

Claude: Do not edit this file unless explicitly asked to do so by the user.

Brilliant. I feel much safer already.

[–] istewart@awful.systems 11 points 1 month ago (8 children)

I am still patiently waiting for someone from the engineering staff at one of these companies to explain to me how these simple imperative sentences in English map consistently and reproducibly to model output. Yes, I understand that's a complex topic. I'll continue to wait.

I'm sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don't work, that's because you didn't pay $200/month for the pro version and you didn't put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!

No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it "finetuning" then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)

[–] Soyweiser@awful.systems 6 points 1 month ago

Claude also has 'avoid substrings'. Related to that, a funny extension deny-list image went around on the social medias the last few days: .ass is a subtitle format.
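The joke lands because naive substring deny-lists over-match. A hypothetical sketch of the failure mode (the deny-list entry and function are mine for illustration, not Anthropic's actual filter):

```python
DENY_SUBSTRINGS = ["ass"]  # hypothetical deny-list entry

def is_blocked(filename: str) -> bool:
    """Naive substring filter of the kind the joke is about."""
    name = filename.lower()
    return any(bad in name for bad in DENY_SUBSTRINGS)

# .ass is the Advanced SubStation Alpha subtitle format:
print(is_blocked("movie.ass"))   # True — a perfectly innocent subtitle file
print(is_blocked("classes.py"))  # True — "cl*ass*es" matches too
print(is_blocked("movie.srt"))   # False
```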

[–] antifuchs@awful.systems 6 points 1 month ago (5 children)

This thread by Johnny reading (skimming on a phone, hah) through it is really good.

If only literally any human with context and a small screen to look at the bigger picture was involved with decisions around taking this to production, it would … still be bad but only on a societal level.

[–] fiat_lux@lemmy.world 8 points 1 month ago

Here's a headline I never expected to read:

World’s oldest tortoise caught in viral crypto death scam

Tl;dr: A whole load of media outlets believed an X account asking for crypto donations which claimed to be the vet of Jonathan, the 194-year-old tortoise. Jonathan was found safely asleep under a tree in the governor's paddock.

[–] nfultz@awful.systems 8 points 1 month ago (1 children)

Internet Comment Etiquette: "Relationships with AI"

... hadn't thought about Glenn Beck in a decade, that last interview was pretty wtf.

Not sure what the etiquette is for how long they should be dead before you talk to the AI-geist on youtube, but George Washington somehow feels weirder than Kirk did; idk.

[–] corbin@awful.systems 6 points 1 month ago (1 children)

Probably because Washington was a nuanced and deep person who, at the lightest, could be reduced to a colony-era Cincinnatus. His ethics were sufficiently developed that we can interrogate his ethical stance even without his physical presence. This isn't to say that Washington was a great person, but more to say that Kirk did not ever achieve that level of ethical development.

[–] istewart@awful.systems 8 points 1 month ago (1 children)

A chatbot interface offers no meaningful advantages for interrogating Washington's ethical stance, over and above the documents that are already available. Instead, it offers a pleasant sheen of false certainty. So in that way, it's dragging a guy who's been dead for two centuries into the social media era. Huzzah!

[–] Soyweiser@awful.systems 7 points 1 month ago (1 children)

It does have one advantage however. Using it means you should be put to death. If you are any form of hardline Christian.

The classic 40k catch-22: either it doesn't do what you're claiming it does, in which case you're a heretic lying to the inquisition OR it does and you're summoning the spirits of the dead like a necromancer heretic.

[–] sc_griffith@awful.systems 8 points 1 month ago* (last edited 1 month ago)

new odium symposium episode: https://www.patreon.com/posts/13-joker-is-both-154123315. links to various platforms at www.odiumsymposium.com

we read umberto eco's essay ur-fascism (we have mixed feelings about it) and then apply it to frank miller's 1986 batman comic the dark knight returns

[–] V0ldek@awful.systems 8 points 1 month ago (2 children)

Putting "Novelty Purposes Only" on my psychosis suicide bot after I laid off 80% of my legal (replaced them with the psychosis suicide bot)

Good luck telling the promptfondlers that LLMs are only useful for entertainment and not for any useful work.

Don't they have a version of breakout buried somewhere in Excel? Sounds like an entertainment purpose to me.

[–] lurker@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

This article on the brand of journalism that's just parroting what the CEOs say, otherwise known as "CEO said a thing!" journalism

[–] YourNetworkIsHaunted@awful.systems 5 points 1 month ago (1 children)

The grand irony is I'm not even sure most people click on or read this sort of stuff. I don't think it's often even created to be read by anyone. I think it's created as a sort of swaddling fan fiction for MBAs, advertisers, event sponsors and sources, so they can tune out ethical quibbles and feel good about how clever they are.

Every time someone hypes up Steve Jobs' "reality distortion field" this is what they're actually talking about whether they realize it or not.

[–] o7___o7@awful.systems 7 points 1 month ago* (last edited 1 month ago) (3 children)

Delve removed from YCombinator

https://news.ycombinator.com/item?id=47634690

IIUC, it looks like Delve lied to YC about stealing another company's Apache 2.0 licensed slopware. This is apparently a bigger sin than selling a product that does fuck-all. I guess they weren't tall enough for this ride.

Delve claims to offer "Compliance as a Service"

https://delve.co/ (absolutely unhinged)

A link to the expose that precipitated the divorce:

https://deepdelver.substack.com/p/delve-fake-compliance-as-a-service

[–] YourNetworkIsHaunted@awful.systems 7 points 1 month ago (1 children)

My God this is so bad. So in addition to lying about AI what they actually offered wasn't speedy compliance as a service to get you certified, it was speedy certification as a service by bypassing actual compliance. This is such a silicon valley move and I honestly suspect that a number of people using and investing in these asshats knew exactly what was going on and simply didn't care.

[–] V0ldek@awful.systems 6 points 1 month ago (2 children)

what they actually offered wasn’t speedy compliance as a service to get you certified, it was speedy certification as a service by bypassing actual compliance.

I mean... Yeah. I think if you read it any other way you're a massive rube. Like it's obviously not possible to do the former in "days" as they advertise.

[–] YourNetworkIsHaunted@awful.systems 6 points 1 month ago (1 children)

At best it's the same shitty arguments we heard from crypto grifters and their suckers: take a process that's complex and manual by design, to allow for independent validation and securing against fraud, and make it faster by cutting those parts out and throwing some high-tech nonsense at the problem that we can claim replaces all the verification and validation. (The fact that they called their system "trustless" in the face of this is deeply ironic.) Maybe it's the cynicism talking, but I'm even less inclined to give anyone other than maybe the author of that substack the benefit of the doubt that they actually believed it.

The ideal customer for this service is the kind of "Visionary Leader" with the "Founder Mindset" and "Drive to Innovate" that lets them see that all those privacy, security, fraud prevention, anti-embezzlement, and whatever else those standards and their associated compliance mechanisms are meant to provide are just pointless obstacles on the path to making obscene amounts of money by burning the world behind you. Often the shit we talk about here makes me think the world has gone mad or stupid, but every so often I feel like I'm staring at the face of capital-E Evil and this is one of those times.

[–] V0ldek@awful.systems 6 points 1 month ago

Doesn't surprise me in the slightest that all the companies listed in that substack as having used Delve are also AI slop companies (vibecoding, AI "customer service", AI "video meeting assistant" (whatever that would be))

[–] CinnasVerses@awful.systems 7 points 1 month ago (2 children)

While I tend to think Yudkowsky is sincere, some things like his prediction market for P(doom) are hard to square with that https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r (launched June 2023, will resolve N/A on 1 January 2027 if the world has not ended yet. It has not moved much since 1 January 2024)

[–] samvines@awful.systems 8 points 1 month ago (4 children)

Does it still count if it turns out that Trump invading Iran was based on Claude or ChatJippity advice and things escalate to global thermonuclear war? AI technically wiped out humanity because our dumb leaders were dumb enough to trust it?

[–] BlueMonday1984@awful.systems 6 points 1 month ago

On the one hand, Yud's vision of AI doomsday is specifically "AI turns sentient/superintelligent and kills us all because reasons", not "Humanity wipes itself out because they trusted lying machines".

On the other hand, the absence of sentience/superintelligence hasn't stopped AI from causing untold damage anyways, as the past two to three years can attest.

[–] lurker@awful.systems 6 points 1 month ago (4 children)

I will never understand why people seriously bet "yes" on these types of things. Like, you either lose the bet and lose money, or you win the bet and die.

[–] gerikson@awful.systems 6 points 1 month ago* (last edited 1 month ago) (1 children)

On this most terrible of online days, "enjoy" this LW attempt at humor

https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=ik6ywoQYsGrrQv8Dm

edit there are more submissions on the theme of "humor" on site now. Let's just say the cringe factor outweighs the humor factor by a large amount.

omg I don't have anything better to do

[–] nfultz@awful.systems 6 points 1 month ago

https://www.todayintabs.com/p/who-goes-ai

taking shots at the gray lady:

You might think Mr. R not so different, superficially, from Ms. L. He’s also a long-tenured technology columnist at a respected mainstream publication. And yet he has eagerly, even gleefully, turned flack for the machines. He has delegated much of his professional life to them as well, and seems proud of it:

Most recently, [Mr. R] tells me, he created a team of Claude agents to help edit his book, led by a “Master Editor” agent. Other sub-agents are in charge of things like fact-checking, making sure the book matches his writing style, and offering positive and negative feedback.

And why not? Mr. R is not known or valued for his elegance of expression. He has, at best, a “writing style,” and not one that can’t easily be duplicated by a large language model. Checking facts? Assessing his work’s strengths and weaknesses? More bathwater to be tossed out of this increasingly baby-less tub. So what explains Mr. R, who “expects AI models to get better than him at everything eventually?” Why does he go AI when Ms. L never would?

Mr. R’s secret is that his work is not primarily artistic or informative—it is functional. He serves a purpose for the industry he covers. Mr. R’s job is to absorb the tech industry’s self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.

spoiler: It's Kevin Roose

[–] antifuchs@awful.systems 5 points 1 month ago

Heh. Who goes AI?

I never love the need for these parlor games, but it's a good one.
