Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)

nfultz@awful.systems 10 points 1 day ago

https://softcurrency.substack.com/p/the-dangerous-economics-of-walk-away

  1. Anthropic (Medium Risk) Until mid-February of 2026, Anthropic appeared to be happy, talent-retaining. When an AI Safety Leader publicly resigns with a dramatic letter stating “the world is in peril,” the facade of stability cracks. Anthropic is a delayed fuse, just earlier on the vesting curve than OpenAI. The equity is massive ($300B+ valuation) but largely illiquid. As soon as a liquidity event occurs, the safety researchers will have the capital to fund their own, even safer labs.

WTF is "even safer" ??? how bout we like just don't create the torment nexus.

Wonder if the 50% attrition prediction comes to pass though...

scruiser@awful.systems 2 points 18 hours ago

So they've highlighted an interesting pattern in compensation packages, but I find their entire framing of it gross and disgusting, in a capitalist techbro kinda way.

Like the way they describe Part III's case study:

The uncapped payouts were so large that it fractured the relationship between Capital (Activision) and Labor (Infinity Ward).

Activision was trying to cheat its labor after they had made it massively profitable! Describing it as a fractured relationship denies Activision's agency in choosing to be greedy capitalist pigs.

The talent that left formed the core of the team that built Titanfall and Apex Legends, franchises that have since generated billions in revenue, competing directly in the same first-person shooter market as Call of Duty.

Activision could have paid them what it owed them, kept paying them incentive-based payouts, and come out billions of dollars ahead instead of engaging in short-sighted, greedy behavior.

I would actually find this article interesting and tolerable if they framed it as "here are the perverse incentives capitalism encourages businesses to create" instead of "here is how to leverage the perverse incentives in your favor by paying your employees just enough to stay, but not enough to actually give them a fair share" (not that they were honest enough to use those words).

WTF is “even safer” ??? how bout we like just don’t create the torment nexus.

I think the writer isn't even really evaluating that aspect, just thinking in terms of workers becoming capital owners and how companies should try to prevent that to maximize their profits. The idea that Anthropic employees might care on any level about AI safety (even hypocritically and ineffectually) doesn't enter into the reasoning.

istewart@awful.systems 5 points 1 day ago

the capital to fund their own, even safer labs.

I wonder, is this a theory of "safety" analogous to what's driven the increased gigantism of vehicles in the US? Sure seems like it.

"even safer" in this case means some combination of two things:

  1. The new organization is more ideologically aligned with the transhumanist doom cult that apparently managed to eat the brains of the people with money to burn.

  2. The new organization, largely as a result of this, is capable of sinking an unending amount of capital into buying compute time and Nvidia chips, but, due to its commitments to safety, is even less inclined to actually deliver anything.