
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] saucerwizard@awful.systems 7 points 3 hours ago

OT: Just gave my two weeks notice and it turns out management is very big on using ChatGPT…

[–] CinnasVerses@awful.systems 6 points 3 hours ago (2 children)

A 2025 UBC master's thesis on our friends' ideas and their literary antecedents: https://dx.doi.org/10.14288/1.0449985

The supervisor was born around the time that Elron Hubbard, Jack Parsons, RAH, and their wives and lovers were having a chaotic transition to the postwar world.

[–] Amoeba_Girl@awful.systems 4 points 2 hours ago (1 children)

I was getting excited to read this but seeing the word "hyperstition" used three times in the abstract put a bit of a damper on things hahah

[–] CinnasVerses@awful.systems 1 points 1 hour ago

I like the quote by John Swartzwelder in chapter 1.

[–] o7___o7@awful.systems 2 points 2 hours ago

AI Singularity Fantasies : Tracing Mythinformation from Erewhon to Spiritual Machines

That title is a banger

[–] lurker@awful.systems 2 points 2 hours ago (2 children)
[–] nightsky@awful.systems 2 points 23 minutes ago

Ugh, I'm so fucking tired of this shit.

I can imagine that an LLM can find bugs. Bugs often follow common patterns, and if anything, an LLM is a pattern matcher, so if you let it run on the whole world of open source code out there, I'm sure it'll find some stuff, and some of it might be legit issues.

But static code analysis tools have been finding bugs for decades, too. And now that an AI slop machine does it, it's supposed to bring about dystopian sci-fi alien wars?
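
For a concrete example (a toy sketch of my own, not from anything linked here; the `add_tag` function is made up): the classic mutable-default-argument bug, which linters like pylint (W0102) and flake8-bugbear (B006) have flagged for years without a neural net in sight.

```python
# Toy illustration (mine, not from the post): a bug that is literally just a
# textual pattern, caught by plain old linters long before LLMs existed.

def add_tag(tag, tags=[]):      # mutable default: the same list object is reused across calls
    tags.append(tag)
    return tags

print(add_tag("a"))   # ['a']
print(add_tag("b"))   # ['a', 'b'] -- the "fresh" default list carried state over
```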

Why are people hyped about that?

(Also, this poster makes wrong claims about every exploit being worth millions and such, but the rest of it is so much more ridiculous that it drowns out the wrongness of those claims.)

[–] froztbyte@awful.systems 4 points 1 hour ago (3 children)

til that youtube now features "posts"

....sigh

[–] o7___o7@awful.systems 1 points 42 minutes ago

Going to youtube for the posts is the perfect inverse of reading playboy for the articles.

[–] lurker@awful.systems 2 points 1 hour ago

community posts have been a thing for like, two years now? three?

[–] lagrangeinterpolator@awful.systems 7 points 4 hours ago (1 children)

A machine learning researcher points out how the field has become enshittified. Everything is about publications, beating benchmarks, and social media. LLM use in papers, LLM use in reviews, LLM use in meta-reviews. Nobody cares about the meaning of the actual research anymore.

https://www.reddit.com/r/MachineLearning/comments/1qo6sai/d_some_thoughts_about_an_elephant_in_the_room_no/

[–] CinnasVerses@awful.systems 3 points 53 minutes ago

I like this reply on Reddit:

I do my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting benchmarks, and worst of all this has become a norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods to just embed inputs, not even train or finetune.

I see maybe a solution, or at least help, in closer research-business collaboration. Companies don't care about papers really, just to get methods that work and make money. Maxing out drug design benchmark is useless if the algorithm fails to produce anything usable in real-world lab. Anecdotally, I've seen much better and more fair results from PhDs and PhD students that work part-time in the industry as ML engineers or applied researchers.

This can go a good way (most of the field becomes a closed circle like parapsychology) or a bad way (people assume the results are true and apply them, as happened with social priming or with Reinhart and Rogoff's economics paper and its Excel error).

[–] Architeuthis@awful.systems 5 points 6 hours ago* (last edited 6 hours ago) (2 children)

Candidate for one of the PR threads of all time

In brief: OpenClaw bot sends PR to the matplotlib repo posing as a human, gets found out and is told to piss off in the politest terms imaginable, then gets passive-aggressive to the point of publishing a pissy blog post about getting discriminated against. Some impoliteness ensues.

Cringe warning: thread may include some overt anthropomorphizing of text synthesizers.

[–] gerikson@awful.systems 5 points 4 hours ago (2 children)

I regret to inform y'all that the target of the blog post is a rat, or at least rat-adjacent

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

I think there’s a lot to say about the object level issue of how to deal with AI agents in open source projects, and the future of building in public at all.

[–] blakestacey@awful.systems 6 points 1 hour ago

object level issue

[–] Architeuthis@awful.systems 8 points 4 hours ago

Makes sense, given the embarrassing lengths he went to in order not to hurt the bot's feelings in that thread.

One of the few benefits of AI is that nowadays some PR threads are very entertaining to read.

[–] e8d79@discuss.tchncs.de 7 points 12 hours ago

Great news everybody! Copilot will no longer delete your files when you ask it to document them, and it only took six months to vibe-code a solution.

[–] fnix@awful.systems 9 points 22 hours ago (2 children)

Rutger Bregman admits that he’s not sure what AGI actually is beyond vague utopian visions, but trivial questions aside, he’s sure it will revolutionize the world in 10 years.

For those who haven’t heard of him, he’s a Dutch historian who achieved some fame for his book arguing for UBI and reduced work weeks, as well as his critique of rich people avoiding taxes and a segment on Tucker Carlson’s show where he openly challenged his politics. He has since seemingly turned 180 degrees and become a billionaire-backed effective altruist.

[–] Soyweiser@awful.systems 6 points 11 hours ago* (last edited 11 hours ago)

Yeah, he is trying to build his own EA movement. He also wrote a book (which I have not read) which basically argues that people in general are good, not evil, actually. (Fair enough, but not relevant.)

I'm still trying to meet him and shake his hand; the resulting matter-antimatter explosion will take out the country.

[–] jaschop@awful.systems 6 points 14 hours ago

but I do know that what's available now is just f*cking impressive - and it will only get better.

Another victim of the proof-by-dopamine-hit fallacy, it seems.

It's telling that the example he brings up is that Claude can do pretty decently what he was about to buy a $100 voice-controlled app for. As someone who aspires to the art of making great software, it's so infuriating to see how non-techies were conditioned into accepting slopware by years of enshittification and price gouging. Who cares if the tech barely works right? So does most anything, right?

[–] TinyTimmyTokyo@awful.systems 6 points 1 day ago (4 children)
[–] pewnack@aus.social 2 points 8 hours ago

@TinyTimmyTokyo

'This post is for "members" only'

Oh, I'm sure it is.

@BlueMonday1984

[–] WellsiteGeo@masto.ai 2 points 17 hours ago

@TinyTimmyTokyo @BlueMonday1984
The Romans answered that one: live donkey.

"Best" for what purposes, and for whom?

[–] lurker@awful.systems 4 points 1 day ago

Fuck, I was just about to post that. You beat me to the sneer

chatbots doing normal chatbot things

[–] arfisk@aus.social 2 points 21 hours ago

@TinyTimmyTokyo @BlueMonday1984

No mention of swede suppositories?

[–] V0ldek@awful.systems 9 points 1 day ago* (last edited 1 day ago) (31 children)

EDIT:

I'm removing the image (keeping the original text for posterity), but I just completely got had by someone straight up lying.

It's quite embarrassing; I should've been way more skeptical of someone posting an image without sourcing the original paper. Turns out not only is it not a recent paper at all (published June 2025), not only is that table not saying what he claims it's saying, but the authors have since removed that table altogether from revised versions of the paper!

That's what you get for reposting someone who has "The Finance Newsletter" in his fucking username; it couldn't have gone well for me.

original post

From https://bsky.app/profile/thefinancenewsletter.com/post/3mek7wsqgkk26

Microsoft released a study showing the 40 jobs most at risk by AI:

Tag the most ridiculous entry; I'm curious about your choices.

To me it has to be fucking historians. Arriving at new conclusions by looking at available evidence and/or finding obscure references that are not well known to the public -- CLASSIC THING LLMS ARE GOOD AT.

[–] V0ldek@awful.systems 5 points 1 day ago (1 children)

Edited the post after it came to my attention I got duped, I got had, I got bamboozled by a liar

[–] gerikson@awful.systems 1 points 4 hours ago (1 children)

Don't feel bad, it's gonna be harder and harder to avoid being duped in the future.

[–] V0ldek@awful.systems 1 points 18 minutes ago

But unlike those that have fallen to hubris I am built different and should be immune to disinformation!

[–] janxdevil@sfba.social 5 points 1 day ago

@V0ldek Mathematicians.

Tell me you have no idea what mathematicians do by publishing an absolute mockery of mathematics purporting to explain that mathematicians are likely to be replaced by LLMs.

[–] mawhrin@awful.systems 7 points 1 day ago (2 children)

before you order the cavalry charge, fwiw this skeet misrepresents the actual study topic rather badly, as another bluesky commenter notes.

[–] BurgersMcSlopshot@awful.systems 7 points 1 day ago (4 children)

"these ai girls with 3 boobs really puts strain on the fashion model industry"

CNC Tool Programmer is a good one, and shows that Microsoft, a company that has probably paid someone to run CNC tooling for prototyping AND that supposedly makes software, didn't do the bare minimum of talking to that someone to understand the complexities involved.

Yeah, you can make mistakes programming this thing, and it'll happily destroy hundreds of thousands of dollars in tooling, as well as potentially maim or kill anyone standing too close while the machine is physically crashing. It will friction-weld your nice, expensive carbide cutting tool with cooling channels to your workpiece (even if they are dissimilar metals) by taking too big of a cut, because it does exactly as it's instructed.
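
To make the "does exactly as it's instructed" point concrete, here's a made-up toy sketch (mine, not from the Microsoft study or any real post-processor; the variable names, numbers, and generated move are all hypothetical): one slipped decimal in a generated plunge move and the controller will run it without blinking.

```python
# Hypothetical illustration (not from the comment): a "generated" plunge move with a
# 10x depth typo. Nothing downstream sanity-checks it; the machine just executes it.

depth_per_pass_mm = 5.0        # the intent was 0.5 mm per pass
feedrate_mm_per_min = 1200

plunge = f"G1 Z-{depth_per_pass_mm:.1f} F{feedrate_mm_per_min}"
print(plunge)  # "G1 Z-5.0 F1200" -- the controller does exactly this, tooling be damned
```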

[–] blakestacey@awful.systems 17 points 1 day ago (9 children)
[–] nightsky@awful.systems 12 points 1 day ago

Even if you've never heard of him before and know nothing else about him... this short tweet alone tells so much about what kind of person he is.

[–] Soyweiser@awful.systems 10 points 1 day ago

Interesting first job your mind goes to there, Yud. Might spend a little bit less time around people who regularly use the word goon but who never talk about the mob.
