this post was submitted on 29 Dec 2025
17 points (100.0% liked)

TechTakes

2343 readers
32 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] o7___o7@awful.systems 2 points 2 hours ago
[–] Soyweiser@awful.systems 3 points 3 hours ago (1 children)

Happy new year everybody. They want to ban fireworks here next year, so people set fire to parts of some Dutch cities.

Unrelated to that, let 2026 be the year of the butlerian jihad.

[–] bigfondue@lemmy.world 1 points 26 minutes ago

Meanwhile in the US...

[–] nfultz@awful.systems 3 points 5 hours ago

Anti-A.I.-relationship-sub r/cogsuckers maybe permanently locked down by its mods after users criticize mod-led change of the subreddit to a somewhat pro A.I.-sub (self.SubredditDrama)

The mods were heavily downvoted and critiqued for pulling the rug from under the community, as well as for simultaneously modding pro-A.I.-relationship subs. One mod admitted:

"(I do mod on r/aipartners, which is not a pro-sub. Anyone who posts there should expect debate, pushback, or criticism on what you post, as that is allowed, but it doesn’t allow personal attacks or blanket comments, which applies to both pro and anti AI members. Calling people delusional wouldn’t be allowed in the same way saying that ‘all men are X’ or whatever wouldn’t. It’s focused more on a sociological issues, and we try to keep it from devolving into attacks.)"

A user, heavily upvoted, replied:

You’re a fucking mod on ai partners? Are you fucking kidding me?

It goes on and on like this: as of now, the post has amassed 343 comments, mostly from angry subscribers of the sub, while a few users from pro-A.I. subreddits keep praising the mods. Most users agree that brigading has to stop, but don't understand why that means a sub called COGSUCKERS should suddenly be neutral toward or accepting of LLM relationships. Bear in mind that r/aipartners, which one of the mods also moderates, does not allow users to call such relationships "delusional". Among the most upvoted comments in this shitstorm:

"idk, some pro schmuck decided we were hating too hard 💀 i miss the days shitposting about the egg" https://www.reddit.com/r/cogsuckers/comments/1pxgyod/comment/nwb159k/

[–] BlueMonday1984@awful.systems 4 points 14 hours ago
[–] gerikson@awful.systems 8 points 21 hours ago* (last edited 6 hours ago)

A rival gang of "AI" "researchers" dare to make fun of Big Yud's latest book and the LW crowd are Not Happy

Link to takedown: https://www.mechanize.work/blog/unfalsifiable-stories-of-doom/ (heartbreaking: the worst people you know made some good points)

When we say Y&S’s arguments are theological, we don’t just mean they sound religious. Nor are we using “theological” to simply mean “wrong”. For example, we would not call belief in a flat Earth theological. That’s because, although this belief is clearly false, it still stems from empirical observations (however misinterpreted).

What we mean is that Y&S’s methods resemble theology in both structure and approach. Their work is fundamentally untestable. They develop extensive theories about nonexistent, idealized, ultrapowerful beings. They support these theories with long chains of abstract reasoning rather than empirical observation. They rarely define their concepts precisely, opting to explain them through allegorical stories and metaphors whose meaning is ambiguous.

Their arguments, moreover, are employed in service of an eschatological conclusion. They present a stark binary choice: either we achieve alignment or face total extinction. In their view, there’s no room for partial solutions, or muddling through. The ordinary methods of dealing with technological safety, like continuous iteration and testing, are utterly unable to solve this challenge. There is a sharp line separating the “before” and “after”: once superintelligent AI is created, our doom will be decided.

LW announcement, check out the karma scores! https://www.lesswrong.com/posts/Bu3dhPxw6E8enRGMC/stephen-mcaleese-s-shortform?commentId=BkNBuHoLw5JXjftCP

Update: LessWrong attempts to debunk the piece with inline comments here:

https://www.lesswrong.com/posts/i6sBAT4SPCJnBPKPJ/mechanize-work-s-essay-on-unfalsifiable-doom

Leading to such hilarious howlers as

Then solving alignment could be no easier than preventing the Germans from endorsing the Nazi ideology and commiting genocide.

Ummm pretty sure engaging in a new world war and getting their country bombed to pieces was not on most Germans' agenda. A small group of ideologues managed to seize complete control of the state, and did their very best to prevent widespread knowledge of the Holocaust from getting out. At the same time they used the power of the state to ruthlessly suppress any opposition.

rejecting Yudkowsky-Soares' arguments would require that ultrapowerful beings are either theoretically impossible (which is highly unlikely)

ohai begging the question

[–] froztbyte@awful.systems 7 points 23 hours ago (1 children)

good morning awful, I found you the first thing you’ll want to scream at today

palantir and others offering free addictions, all in the name of “productivity”

[–] o7___o7@awful.systems 7 points 23 hours ago (1 children)
[–] swlabr@awful.systems 4 points 21 hours ago (1 children)
[–] o7___o7@awful.systems 3 points 9 hours ago (1 children)
[–] swlabr@awful.systems 3 points 7 hours ago

Freemason Freebasin’

[–] nfultz@awful.systems 7 points 23 hours ago

Internet Comment Etiquette with Erik just got off YT probation/timeout from when YouTube's moderation AI flagged a decade-old video for having Russian parkour.

He celebrated by posting the below under a pipebomb video.

Hey, this is my son. Stop making fun of his school project. At least he worked hard on it. unlike all you little fucks using AI to write essays about books you don't know how to read. So you can go use AI to get ahead in the workforce until your AI manager fires you for sexually harassing the AI secretary. And then your AI health insurance gets cut off so you die sick and alone in the arms of your AI fuck butler who then immediately cremates you and compresses your ashes into bricks to build more AI data centers. The only way anyone will ever know you existed will be the dozens of AI Studio Ghibli photos you've made of yourself in a vain attempt to be included. But all you've accomplished is making the price of my RAM go up for a year. You know, just because something is inevitable doesn't mean it can't be molded by insults and mockery. And if you depend on AI and its current state for things like moderation, well then fuck you. Also, hey, nice pipe bomb, bro.

[–] nfultz@awful.systems 6 points 1 day ago (1 children)

Another video on Honey ("The Honey Files Expose Major Fraud!") - https://www.youtube.com/watch?v=qCGT_CKGgFE

Shame he missed cyber monday by a couple weeks.

Also 16:35 haha ofc it's just json full of regexes.

[–] sailor_sega_saturn@awful.systems 6 points 1 day ago* (last edited 1 day ago) (1 children)

They avoid the classic mistake of forgetting to escape . in the URL regex. I've made that mistake before...

Like imagine you have a mission-critical URL regex telling your code what websites to trust as https://www.trusted-website.net/.* but then someone comes along and registers the domain name https://wwwwtrusted-website.net/. I'm convinced that's some sort of niche security vulnerability in some existing system but no one has run into it yet.

None of this comment is actually important. The URL regexes just gave me work flashbacks.
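
A minimal sketch of the pitfall, in Python with made-up domain names (not anyone's actual code):

    import re

    # Naive pattern: the unescaped "." after "www" matches ANY character,
    # so an attacker-controlled lookalike domain slips through.
    unsafe = re.compile(r"https://www.trusted-website.net/.*")
    # Escaped pattern: the dots must be literal dots.
    safe = re.compile(r"https://www\.trusted-website\.net/.*")

    attacker_url = "https://wwwwtrusted-website.net/login"

    print(bool(unsafe.match(attacker_url)))  # True  - the fourth "w" stands in for the dot
    print(bool(safe.match(attacker_url)))    # False - literal "www." required

(Parsing the hostname with urllib.parse and comparing it exactly would sidestep this whole class of bug, but that's a different comment.)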

[–] froztbyte@awful.systems 4 points 23 hours ago

a couple weeks back I had a many-rounds support ticket with a network vendor, querying exactly the details of their regex implementation. docs all said PCRE, actual usage attempt indicated… something very much else. and indeed it was because of . that I found it

[–] o7___o7@awful.systems 16 points 2 days ago* (last edited 2 days ago) (6 children)

CW: Slop, body humor, Minions

So my boys received Minion Fart Rifles for Christmas from people who should have known better. The toys are made up of a compact fog machine combined with a vortex gun and a speaker. The fog machine component is fueled by a mixture of glycerin and distilled water that comes in two scented varieties: banana and farts. The guns make tidy little smoke rings that can stably deliver a payload tens of feet in still air.

Anyway, as soon as they were fired up, Ammo Anxiety reared its ugly head, so I went in search of a refill recipe (note: I searched "Minions Vortex Gun Refill Recipe"), and goog returned this fartifact*:

194 dB, you say? Alvin Meshits? The rabbit hole beckoned.

The "source links" were mostly unrelated except one, which was a reddit thread that lazily cited ChatGPT generating the same text almost verbatim in response to the question, "What was the loudest ever fart?"

Luckily, a bit of detectoring turned up the true source, an ancient Uncyclopedia article's "Fun Facts" section:

https://en.uncyclopedia.co/wiki/Fartium

The loudest fart ever recorded occurred on May 16, 1972 in Madeline, Texas by Alvin Meshits. The blast maintained a level of 194 decibels for one third of a second. Mr. Meshits now has recurring back pain as a result of this feat.

Welcome to the future!

  • yeah I took the bait / I don't know what I expected
[–] e8d79@discuss.tchncs.de 5 points 1 day ago (2 children)

That toy sounds like someone took a vape and turned it into a smoke ring launcher. Have you tried filling it with vape juice?

[–] o7___o7@awful.systems 4 points 1 day ago* (last edited 1 day ago) (1 children)

right? lol, but I can't, it's too popular with the kiddos

[–] e8d79@discuss.tchncs.de 4 points 1 day ago

Wouldn't that be the refill recipe you were looking for? Vape juice is just a mix of propylene glycol and vegetable glycerine. I think it's the glycerine that is responsible for the "smoke".

[–] saucerwizard@awful.systems 5 points 1 day ago

I want a vortex ring gun.

[–] istewart@awful.systems 6 points 1 day ago

Maybe if we're lucky, Alvin Meshits can team up with https://en.wikipedia.org/wiki/Bum_Farto for the feel-good buddy comedy of the summer. Remember, the more you toot, the better you feel!

[–] bitofhope@awful.systems 10 points 1 day ago (1 children)

Somewhat interestingly, 194 decibels is the loudest level at which a sound can be physically sustained in the Earth's atmosphere. At that point the "bottom" of the pressure wave is a vacuum. An enormous blast such as a huge meteor impact, a supervolcano eruption or a very large nuclear weapon can exceed that limit, but only for the initial pulse.
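
If you want to sanity-check that figure, it falls out of the standard sound pressure level formula, assuming the usual 20 µPa reference pressure and one atmosphere (~101.3 kPa) as the peak pressure:

    import math

    p_ref = 20e-6     # reference sound pressure, 20 micropascals
    p_atm = 101325.0  # one standard atmosphere, in pascals

    # SPL in dB = 20 * log10(p / p_ref); a wave whose peak equals one
    # atmosphere has its trough at zero pressure, i.e. a vacuum.
    spl = 20 * math.log10(p_atm / p_ref)
    print(round(spl, 1))  # ~194.1 dB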

[–] jonhendry@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

I suspect a 194 dB fart would blow the person in half.

[–] istewart@awful.systems 8 points 1 day ago

Vacuum-driven total intestinal eversion, nobody's ever seen anything like it

[–] jonhendry@awful.systems 8 points 1 day ago

Apparently there's another brand that describes its scents as "(rich durian & mellow cheese)"

[–] blakestacey@awful.systems 7 points 1 day ago

upvoted for "fartifact"

[–] rook@awful.systems 11 points 2 days ago (1 children)

This is a fun read: https://nesbitt.io/2025/12/27/how-to-ruin-all-of-package-management.html

Starts out strong:

Prediction markets are supposed to be hard to manipulate because manipulation is expensive and the market corrects. This assumes you can’t cheaply manufacture the underlying reality. In package management, you can. The entire npm registry runs on trust and free API calls.

And ends well, too.

The difference is that humans might notice something feels off. A developer might pause at a package with 10,000 stars but three commits and no issues. An AI agent running npm install won’t hesitate. It’s pattern-matching, not evaluating.

[–] sc_griffith@awful.systems 9 points 1 day ago* (last edited 12 hours ago)

the tea.xyz experiment section is exactly describing academic publishing

[–] BlueMonday1984@awful.systems 8 points 2 days ago

Foz Meadows brings a lengthy and merciless sneer straight from the heart, aptly-titled "Against AI"

[–] mlen@awful.systems 10 points 2 days ago (1 children)

Rich Hickey joins the list of people annoyed by the recent Xmas AI mass spam campaign: https://gist.github.com/richhickey/ea94e3741ff0a4e3af55b9fe6287887f

[–] gerikson@awful.systems 10 points 2 days ago (2 children)

LOL @ promptfondlers in comments

[–] froztbyte@awful.systems 4 points 23 hours ago

aww those got turned off by the time I got to look :(

[–] mlen@awful.systems 11 points 2 days ago* (last edited 2 days ago) (2 children)

It's a treasure trove of hilariously bad takes.

There's nothing intrinsically valuable about art requiring a lot of work to be produced. It's better that we can do it with a prompt now in 5 seconds

Now I need some eye bleach. I can't tell anymore if they are trolling or their brains are fully rotten.

[–] lagrangeinterpolator@awful.systems 13 points 2 days ago* (last edited 2 days ago)

Don't forget the other comment saying that if you hate AI, you're just "vice-signalling" and "telegraphing your incuruosity (sic) far and wide". AI is just like computer graphics in the 1960s, apparently. We're still in early days guys, we've only invested trillions of dollars into this and stolen the collective works of everyone on the internet, and we don't have any better ideas than throwing more ~~money~~ compute at the problem! The scaling is still working guys, look at these benchmarks that we totally didn't pay for. Look at these models doing mathematical reasoning. Actually don't look at those, you can't see them because they're proprietary and live in Canada.

In other news, I drew a chart the other day, and I can confidently predict that my newborn baby is on track to weigh 10 trillion pounds by age 10.

EDIT: Rich Hickey has now disabled comments. Fair enough, arguing with promptfondlers is a waste of time and sanity.

[–] swlabr@awful.systems 12 points 2 days ago

these fucking people: "art is when picture matches words in little card next to picture"

[–] mlen@awful.systems 11 points 2 days ago (1 children)
[–] fullsquare@awful.systems 5 points 2 days ago (1 children)

lowkey disappointed to see so much slop in other talks (illustrations on slides mostly)

[–] smiletolerantly@awful.systems 4 points 2 days ago (1 children)

Really? Which ones? I didn't notice any

[–] CinnasVerses@awful.systems 8 points 3 days ago* (last edited 3 days ago) (1 children)

A few weeks ago, David Gerard found this blog post quoting a LessWrong post from 2024 where a staffer frets that:

Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz. Importantly, Open Phil cannot make grants through Good Ventures to projects involved in almost any amount of "rationality community building"

So keep whistleblowing and sneering, it's working.

Sailor Sega Saturn found a deleted post on https://forum.effectivealtruism.org/users/dustin-moskovitz-1 where Moskovitz says that he has moral concerns with the Effective Altruism / Rationalist movement, not reputational concerns (he is a billionaire executive, so don't get your hopes up).

[–] sailor_sega_saturn@awful.systems 6 points 2 days ago* (last edited 2 days ago) (3 children)

All of the bits I quoted in my other comment were captured by archive.org FWIW: a, b, c. They can also all still be found as EA forum comments via websearch, but under [anonymous] instead of a username.

This newer archive also captures two comments written since then. Notably there's a DOGE mention:

But I can't e.g. get SBF to not do podcasts nor stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID. (On Bsky, they blame EAs for the whole endeavor)

[–] TinyTimmyTokyo@awful.systems 4 points 2 days ago

I was unable to follow the thread of conversation from the archived links, so here is the source in case anyone cares.

Does anyone know when Dustin deleted his EA forums account? Did he provide any additional explanation for it?

[–] CinnasVerses@awful.systems 7 points 2 days ago

The February 2024 Medium post by Moskovitz objects to cognitive decoupling as an excuse to explore eugenics and says that Eliezer Yudkowsky seems unreasonably confident in imminent AI doom. It also notes that Utilitarianism can lead to ugly places such as longtermism and Derek Parfit's repugnant conclusion. In the comments he mentions no longer being convinced that it's as useful to spend on insect welfare as on "chicken, cow, or pig welfare." He quotes Julia Galef several times. A choice quote from his comments on forum.effectivealtruism.org:

If the (Effective Altruism?) brand wasn’t so toxic, maybe you wouldn’t have just one foundation like us to negotiate with, after 20 years?

[–] CinnasVerses@awful.systems 4 points 2 days ago (1 children)

Does anyone have an explainer on the supposed DOGE/EA connection? All I can find is this dude with a blog wobbling back and forth with LessWrong-flavoured language https://www.statecraft.pub/p/50-thoughts-on-doge (he quotes Venkatesh Rao and Dwarkesh Patel, who are part of the LessWrong Expanded Universe).

[–] sailor_sega_saturn@awful.systems 5 points 2 days ago* (last edited 2 days ago) (1 children)

The bluesky reference may be about this thread & this thread.

One of the replies names Cole Killian as an EA involved with DOGE. The image is dead but has alt text.

I mean there's at least one. You could "no-true-scotsman" him, but between completing an EA fellowship and going vegan, he seems to fit a type. [A vertical screenshot of an archive.org snapshot of Cole Killian's website, stating accomplishments. Included in the list are "completed the McGill effective altruism fellowship" and "went vegan and improved cooking skills"]

(It looks like that archive has since been scrubbed, though Rolling Stone also mentions the connection)

[–] CinnasVerses@awful.systems 3 points 2 days ago (2 children)

Two of the bsky posts are log-in only. Huh, Killian is into Decentralized Autonomous Organizations (blockchain), high-frequency trading (like our friends at Jane Street), veganism, and Effective Altruism?

[–] dgerard@awful.systems 2 points 4 hours ago

Two of the bsky posts are log-in only

if you're going to do internet research, at this point it's a skill issue. Create a reader account.

[–] sailor_sega_saturn@awful.systems 7 points 2 days ago* (last edited 2 days ago) (1 children)

Here's another interesting quote from the now deleted webpage archive: https://old.reddit.com/r/mcgill/comments/1igep4h/comment/masajbg/

My name is Cole. Here's some quick info. Memetics adjacence:

Previously - utilitarianism, effective altruism, rationalism, closed individualism

Recently - absurdism, pyrrhonian skepticism, meta rationalism, empty individualism

[–] CinnasVerses@awful.systems 4 points 2 days ago

Sounds like a typical young male seeker (with a bit of épater les bourgeois). Not the classic Red Guard personality, but it served Melon Husk's needs.