this post was submitted on 11 Aug 2025
22 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

[–] mirrorwitch@awful.systems 20 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I've often called slop "signal-shaped noise". I think the damage already done by slop pissed all over the reservoirs of knowledge, art and culture is irreversible and long-lasting. This is the only thing generative "AI" is good at, making spam that's hard to detect.

It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin by switching to rspamd (in the years before gmail devoured us all). I would argue "A Plan for Spam" launched Paul Graham's notoriety, much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of "off" worked for your specific inbox.

Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all filtering strategies we had developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.
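The filtering idea being inverted here can be sketched in a few lines — a toy illustration of the "A Plan for Spam"-style token-probability combination, not SpamAssassin's or rspamd's actual implementation, and the per-token probabilities are invented:

```python
from functools import reduce

# Toy per-token spam probabilities, the kind a filter learns from a
# labelled inbox. These values are invented for illustration.
TOKEN_SPAM_PROB = {
    "viagra": 0.99,
    "unsubscribe": 0.90,
    "meeting": 0.10,
    "lunch": 0.05,
}

def spam_score(tokens, default=0.4):
    """Combine per-token probabilities into one spam score.

    Unknown tokens get a neutral-ish `default`, mirroring Graham's
    treatment of words the filter hasn't seen before.
    """
    probs = [TOKEN_SPAM_PROB.get(t, default) for t in tokens]
    spammy = reduce(lambda a, p: a * p, probs, 1.0)          # P(tokens | spam)
    hammy = reduce(lambda a, p: a * (1.0 - p), probs, 1.0)   # P(tokens | ham)
    return spammy / (spammy + hammy)
```

The sneer's point is that slop generators run this machinery in reverse: instead of scoring text against a learned distribution of "normal", they sample from it, so the output lands exactly where `spam_score` can't separate it from ham.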

I wonder what PG is saying about gen-"AI" these days? let's check:

“AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
He shared no examples, but […]

Who would have thought that A Plan for Spam was, all along, a plan for spam.

[–] Soyweiser@awful.systems 11 points 2 weeks ago

It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.

This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I had never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental flipping of sides by the tech industry: where before it was anti-spam, it is now pro-spam. A big betrayal of consumers/users/humanity.

[–] blakestacey@awful.systems 18 points 2 weeks ago (15 children)

Idea: a programming language that controls how many times a for loop cycles by the number of times a letter appears in a given word, e.g., "for each b in blueberry".
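A quick sketch of how that loop rule would behave, with Python standing in for the hypothetical language (the `for_each` helper is invented for the joke — "blueberry" contains exactly two b's, so the loop body runs twice):

```python
def for_each(letter, word):
    """Yield one iteration per occurrence of `letter` in `word`."""
    for i in range(word.count(letter)):
        yield i

# "for each b in blueberry" cycles twice
iterations = list(for_each("b", "blueberry"))
```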

[–] nightsky@awful.systems 14 points 2 weeks ago (1 children)

And the language's main data container is a kind of stack, but to push or pop values, you have to wrap them into "boats" which have to cross a "river", with extra rules for ordering and combination of values.

[–] TinyTimmyTokyo@awful.systems 17 points 3 weeks ago (11 children)

Ozy Brennan tries to explain why "rationalism" spawns so many cults.

One of the reasons they give is "a dangerous sense of grandiosity".

the actual process of saving the world is not very glamorous. It involves filling out paperwork, making small tweaks to code, running A/B tests on Twitter posts.

Yep, you heard it right. Shitposting and inconsequential code are the proper way to save the world.

[–] gerikson@awful.systems 14 points 3 weeks ago (1 children)

JFC

Agency and taking ideas seriously aren’t bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.

First off, anyone not entirely into MAGA/Qanon agreed that masks probably helped more than hurt. Saying rats were outliers is ludicrous.

Second, rats don't take real threats of GenAI seriously - infosphere pollution, surveillance, autopropaganda - they just care about the magical future Sky Robot.

[–] scruiser@awful.systems 16 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

So... apparently Peter Thiel has taken to co-opting fundamentalist Christian terminology to go after Effective Altruism? At least it seems that way from this EA post (warning, I took psychic damage just skimming the lunacy). As far as I can tell, he's merely co-opting the terminology; Thiel's blather doesn't have any connection to any variant of Christian eschatology (mainstream, fundamentalist, or even obscure wacky fundamentalist). But of course, the majority of the EAs don't recognize that, or the fact that he is probably targeting them for their (kind of weak, to be honest) attempts at getting AI regulated at all; instead they charitably try to steelman him and figure out whether he has a legitimate point. ...I wish they could put a tenth of this effort into understanding leftist thought.

Some of the comments are... okay actually, at least by EA standards, but there are still plenty of people willing to defend Thiel.

One comment notes some confusion:

I’m still confused about the overall shape of what Thiel believes.

He’s concerned about the antichrist opposing Jesus during Armageddon. But afaik standard theology says that Jesus will win for certain. And revelation says the world will be in disarray and moral decay when the Second Coming happens.

If chaos is inevitable and necessary for Jesus’ return, why is expanding the pre-apocalyptic era with growth/prosperity so important to him?

Yeah, it's because he is simply borrowing Christian fundamentalist eschatological terminology... possibly to try to turn the Christofascists against EA?

Someone actually gets it:

I'm dubious Thiel is actually an ally to anyone worried about permanent dictatorship. He has connections to openly anti-democratic neoreactionaries like Curtis Yarvin, he quotes Nazi lawyer and democracy critic Carl Schmitt on how moments of greatness in politics are when you see your enemy as an enemy, and one of the most famous things he ever said is "I no longer believe that freedom and democracy are compatible". Rather I think he is using "totalitarian" to refer to any situation where the government is less economically libertarian than he would like, or "woke" ideas are popular amongst elite tastemakers, even if the polity this is all occurring in is clearly a liberal democracy, not a totalitarian state.

Note this commenter still uses non-confrontational language ("I'm dubious") even when directly calling Thiel out.

The top comment, though, is just like the main post, extending charitability to complete technofascist insanity. (Warning for psychic damage)

Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc)

I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there's a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don't hear so much about "FDA delenda est" or anti-aging research from effective altruism. Perhaps there are valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging's importance are a little weak, IMO (more on this later) -- in Thiel's view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it is seeking to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.

As for my personal take on Thiel's views -- I'm often disappointed at the sloppiness (blunt-ness? or low-decoupling-ness?) of his criticisms, which attack the EA for having a problematic "vibe" and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.

[–] istewart@awful.systems 14 points 3 weeks ago (1 children)

tl;dr: Thiel now sees the Christofascists as a more durable grifting base than the EAs, and is looking to change lanes while the temporary coalitions of maximalist Trumpism offer him the opportunity.

I repeat my suspicion that Thiel is not any more sober than Musk, he's just getting sloppier about keeping it out of the public eye.

[–] zogwarg@awful.systems 10 points 3 weeks ago

I think a big difference between Thiel and Musk is that Thiel views himself as an "intellectual" and derives prestige from "intellectualism". I don't believe for a minute he's genuinely Christian, but his wankery about end-of-times eschatology of armageddon = big-left-government is a bit too confused to be purely cynical; I think sniffing his own farts feeds his ego.

Of course a man who would promote open doping olympics isn't sober.

[–] Soyweiser@awful.systems 11 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Yeah, it's because he is simply borrowing Christian fundamentalist eschatological terminology… possibly to try to turn the Christofascists against EA?

Yep, the usefulness of EA is over, they are next on the chopping block. I'd imagine a similar thing will happen to redscare/moldbug if they ever speak out against him.

E: And why would a rich guy be against a "we are trying to convince rich guys to spend their money differently" organization. Esp a 'libertarian' "I get to do what I want or else" one.

[–] gerikson@awful.systems 9 points 3 weeks ago (7 children)

It always struck me as hilarious that the EA/LW crowd could ever affect policy in any way. They're cosplaying as activists, have no ideas about how to move the public image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism which sees government regulation as Evil.

There's a mini-freakout over OpenAI deciding to keep GPT-4o active, despite it being more "sycophantic" than GPT-5 (and thus more likely to convince people to do Bad Things), but there's also the queasy realization that if sycophantic LLMs are what bring in the bucks, nothing is gonna stop LLM companies from offering them. And there's no way these people can stop it, because they've made the deal that LLM companies are gonna be the ones realizing that AI is gonna kill everyone, and that's never gonna happen.

[–] corbin@awful.systems 11 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

Thiel is a true believer in Jesus and God. He was raised evangelical. The quirky eschatologist that you're looking for is René Girard, whom he personally met at some point. For more details, check out the Behind the Bastards episodes on him.

Edit: I wrote this before clicking on the LW post. This is a decent summary of Girard's claims as well as how they influence Thiel. I'm quoting West here in order to sneer at Thiel:

Unfortunately (?), Christian society does not let us sacrifice random scapegoats, so we are trapped in an ever-escalating cycle, with only poor substitutes like “cancelling celebrities on Twitter” to release pressure. Girard doesn’t know what to do about this.

Thiel knows what to do about this. After all, he funded Bollea v. Gawker. Instead of letting journalists cancel celebrities, why not cancel journalists instead? Then there's no longer any journalists to do any cancellation! Similarly, Thiel is confirmed to be a source of funding for Eric Weinstein and believed to fund Sabine Hossenfelder. Instead of letting scientists cancel religious beliefs, why not cancel scientists instead? By directing money through folks with existing social legitimacy, Thiel applies mimesis: pretend to be legitimate and you can shift what is legitimate.

In this context, Thiel fears the spectre of AGI because it can't be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open. After all, if AGI is truly to unify humanity, it must unify our moralities and cultures into a single uniformly-acceptable code of conduct. But the only acceptable unification for Thiel is the holistic catholic apostolic one-and-only forever-and-ever church of Jesus, and if AGI is against that then AGI is against Jesus himself.

[–] BlueMonday1984@awful.systems 14 points 3 weeks ago

Thomasaurus has given their thoughts on using AI, in a journal entry called "I tried coding with AI, I became lazy and stupid". Unsurprisingly, the whole thing is one long sneer, with a damning indictment of its effectiveness at the end:

If I lose my job due to AI, it will be because I used it so much it made me lazy and stupid to the point where another human has to replace me and I become unemployable.

I shouldn't invest time in AI. I should invest more time studying new things that interest me. That's probably the only way to keep doing this job and, you know, be safe.

[–] BlueMonday1984@awful.systems 13 points 3 weeks ago (4 children)

New article from the New York Times reporting on an influx of compsci graduates struggling to find jobs (ostensibly caused by AI automation). Found a real money shot about a quarter of the way through:

Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3 percent.

You want my take, I expect this article's gonna blow a major hole in STEM's public image - being a path to a high-paying job was one of STEM's major selling points (especially compared to the "useless" art/humanities degrees), and this new article not only undermines that selling point, but argues for flipping it on its head.

[–] BlueMonday1984@awful.systems 13 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

Quick update: I've checked the response on Bluesky, and it seems the general response is of schadenfreude at STEM's expense. From the replies, I've found:

Plus one user mocking STEM in general as "[choosing] fascism and “billions must die”" out of greed, and another approving of others' dunks on STEM over past degree-related grievances.

You want my take on this dunkfest, this suggests STEM's been hit with a double-whammy here - not only has STEM lost the status their "high-paying" reputation gave them, but that reputation (plus a lotta built-up grievances from mockery of the humanities) has crippled STEM's ability to garner sympathy for their current predicament.

[–] V0ldek@awful.systems 15 points 3 weeks ago (1 children)

I hate the fact that now someone might look at me and surmise that I do something related to blockchain or AI. I feel almost like I need a sticker, like those "I bought it before we knew Elon was crazy" stickers they put on Teslas

"I learnt to code before this stupid bubble"

[–] swlabr@awful.systems 14 points 3 weeks ago (1 children)

On one hand, this is another case of capitalism working as intended. You have the ruling class dangling the carrot of the promise of social mobility via job. Just gotta turn the crank of the orphan grinder for 4 years or so, until there's enough orphan paste to grease the next grinding machine. But it's ok, because your experience at the crank will let you climb the ladder to the next, higher paying, higher prestige crank of the machine. Then one day, they decide to turn on the motor.

On the other hand? There is no other hand, they chopped it off because you didn't turn the crank fast enough when you had the chance.

[–] Soyweiser@awful.systems 11 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

The whole joining of the fascist side by a lot of the higher-ups of the tech world, combined with the long-standing debate-bro both-sides free-speech libertarianism (but mostly for neonazis; payment services do go after sex work and lgbt content), also did not help the rep of STEM, even if those decisions are made by STEM-curious people and not actually STEM people. billionaires want you to know they could have done physics - Angela Collier

[–] fullsquare@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago)

you say STEM, but you seem to mean almost exclusively computer touchers, already mentioned biologists or variety of engineers won't likely have these problems (i'm not gonna be excessively smug about this because my field will destroy you physically while still being STEM and not particularly glorious)

also it's not a complete jobocalypse, there's still 93% employed fresh CS grads, they might have comparatively shittier jobs, but it's not a disaster (unless picture is actually much bleaker in that that unemployment is, say, concentrated in last 2 years of graduates, but still even in this case it's maybe 10%, 12% tops for the worst affected). unless you mean their unlimited libertarian flavoured greed coming through it, then yeah, it's pretty funny

even then, there's gonna be a funny rebound when these all genai companies implode, partially maybe not in top earner countries, but places like eastern europe or india will fill that openai-sized crater pretty handily, if that mythical outsourcing to ai happened in the first place, that is

[–] corbin@awful.systems 11 points 3 weeks ago (1 children)

Well, what's next, and how much work is it? I didn't want to be a computing professional. I trained as a jazz pianist. At some point we ought to focus on the real problem: not STEM, not humanities, but business schools and MBA programs.

[–] FredFig@awful.systems 13 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

We're at the point of 100xers giving themselves broken sleep schedules so they can spend tokens optimally.

Inevitably, Anthropic will increase their subscription costs or further restrict usage limits. It feels like they're giving compute away for free at this point. So when the investor bux start to run dry, I will be ready.

This has to be satire, but oh my god.

[–] sailor_sega_saturn@awful.systems 16 points 3 weeks ago

I'm sorry in advance for posting this meme.

Attic dweller at Thanksgiving meme image. "There's our little stealth mode startup CEO! Why don't you come down and tell the family about your AI coding assistant?"

[–] istewart@awful.systems 11 points 3 weeks ago

My velocity has increased 10x and I'm shipping features like a cracked ninja now, which is great because my B2B SaaS is still in stealth mode.

Yeah it's satire, but effective satire means you can never really tell...

[–] scruiser@awful.systems 13 points 2 weeks ago (5 children)

Y'all ready for another round of LessWrong edit wars on Wikipedia? This time with a wider list of topics!

https://www.lesswrong.com/posts/g6rpo6hshodRaaZF3/mech-interp-wiki-page-and-why-you-should-edit-wikipedia-1

On the very slightly merciful upside... the lesswronger recommends "If you want to work on a new page, discuss with the community first by going to the talk page of a related topic or meta-page." and "In general, you shouldn't post before you understand Wikipedia rules, norms, and guidelines." so they are ahead of the previous calls made on Lesswrong for Wikipedia edit-wars.

On the downside, they've got a laundry list of lesswrong jargon they want Wikipedia articles for. Even one of the lesswrongers responding to them points out these terms are a bit on the under-defined side:

Speaking as a self-identified agent foundations researcher, I don't think agent foundations can be said to exist yet. It's more of an aspiration than a field. If someone wrote a wikipedia page for it, it would just be that person's opinion on what agent foundations should look like.

[–] zogwarg@awful.systems 14 points 2 weeks ago (2 children)

PS: We also think that there existing a wiki page for the field that one is working in increases one's credibility to outsiders - i.e. if you tell someone that you're working in AI Control, and the only pages linked are from LessWrong and Arxiv, this might not be a good look.

Aha so OP is just hoping no one will bother reading the sources listed on the article...

Looking to exploit citogenesis for political gain.

[–] blakestacey@awful.systems 12 points 2 weeks ago

From the comments:

On the contrary, I think that almost all people and institutions that don't currently have a Wikipedia article should not want one.

Huh. How oddly sensible.

An extreme (and close-to-home) example is documented in TracingWoodgrains's exposé of David Gerard's Wikipedia smear campaign against LessWrong and related topics.

Ah, never mind.

[–] BlueMonday1984@awful.systems 12 points 3 weeks ago

Tante fires off about web search:

There used to be this deal between Google (and other search engines) and the Web: You get to index our stuff, show ads next to them but you link our work. AI Overview and Perplexity and all these systems cancel that deal.

And maybe - for a while - search will also need to die a bit? Make the whole web uncrawlable. Refuse any bots. As an act of resistance to the tech sector as a whole.

On a personal sidenote, part of me suspects webrings and web directories will see a boost in popularity in the coming years - with web search in the shitter and AI crawlers being a major threat, they're likely your safest and most reliable method of bringing human traffic to your personal site/blog.

[–] Soyweiser@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago)

Thank you Dan Brown for working hard on poisoning LLMs.

(Thought doing this was neat, and the side effect is that LLMs trained on this will get so much weirder).

[–] BurgersMcSlopshot@awful.systems 11 points 3 weeks ago (1 children)

Mastodon post linking to the least shocking Ars lede I have seen in a bit. Apparently "reasoning" and "chain of thought" functionality might have been entirely marketing fluff? :shocked pikachu:

[–] Soyweiser@awful.systems 10 points 3 weeks ago (1 children)

Check out this sneer on the EA subreddit

"I’m employed, can someone explain what either of these mean?"

[–] fullsquare@awful.systems 9 points 3 weeks ago (1 children)

"i'm employed, what does it mean?" is common on r/shitposting when some obscure online/weeb thing surfaces. probably in a dozen of other places also

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago (24 children)

Ed Zitron's given his thoughts on GPT-5's dumpster fire launch:

Personally, I can see his point - the Duke Nukem Forever levels of hype around GPT-5 set the promptfondlers up for Duke Nukem Forever levels of disappointment, and the "deaths" of their AI waifus/therapists have killed whatever dopamine delivery mechanisms they've set up for themselves.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago (1 children)

Ed Zitron has chimed in on OpenAI's woes, directly comparing their situation to a dying MMO:

Zitron is in a pretty good position to make this comparison - he worked as a games journalist in the '00s before pivoting to working in public relations.

[–] swlabr@awful.systems 9 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Forwarding this discussion to here:

News from r/philosophy: OOP, Richard Y Chappell, posted an article containing image slop, landing them a 3-day ban. OOP writes a new article that DESTROYS the r/philosophy moderation policy on AI generated content with FACTS and LOGIC. For added flavour, OOP is an EA. OP is an SSCer. Both are participants in the thread.

[–] gerikson@awful.systems 10 points 3 weeks ago (1 children)

Guess either term hasn't started, or his gig as phil prof is some sort of right-wing sinecure. Dude has a lot of time on his hands.

FWIW I'd say banning a poster for including a slop image in a 3rd-party article is a bit harsh, but what would Reddit be without arbitrary draconian rules? A normal person would note this, accept the 3-day ban, and maybe avoid the sub in future or avoid including slop. The fact he flew off the handle this much is very, very funny though.

[–] Soyweiser@awful.systems 11 points 3 weeks ago (7 children)

Forget being exposed to the elements to build character. People should be randomly temp banned and use that to build/judge character. (Also a good judge of the power balance in a community, if the mod team can temp ban a power poster that predates the mod team, say lesswrong giving Yud a timeout).

[–] swlabr@awful.systems 11 points 3 weeks ago (6 children)

power poster that predates the mod team

Does Yud predate for food or sport?

[–] Soyweiser@awful.systems 9 points 3 weeks ago (1 children)

I have not tried it yet, but apparently there is an open-source alternative to GitHub called https://codeberg.org/. Might be useful.

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago (1 children)

It'll probably earn a lot of users if and when Github goes down the shitter. They've publicly stood with marginalised users before, so they're already in my good books.
