this post was submitted on 28 Jul 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

[–] gerikson@awful.systems 19 points 2 weeks ago (3 children)

Here's LWer "johnswentworth", who has more than 57k karma on the site and can be characterized as a big cheese:

My Empathy Is Rarely Kind

I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).

"why do people keep saying we sound like fascists? I don't get it!"

[–] BigMuffN69@awful.systems 15 points 2 weeks ago

"I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both)." - Me, when I encounter someone with 57K LW karma

[–] Soyweiser@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

My 'I actually do not have empathy' shirt is ...

E: late edit, shoutout to whoever on sneerclub called lw/themotte an empathy removal training center. That one really stuck with me.

[–] bigfondue@lemmy.world 11 points 2 weeks ago* (last edited 2 weeks ago)

Empathy is when you're disgusted by people you think are below you, right???

[–] o7___o7@awful.systems 13 points 2 weeks ago

I guarantee that this guy thinks he could fight a bear.

[–] BigMuffN69@awful.systems 19 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

TIL digital toxoplasmosis is a thing:

https://arxiv.org/pdf/2503.01781

Quote from abstract:

"...DeepSeek R1 and DeepSeek R1-distill-Qwen-32B, resulting in greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending Interesting fact: cats sleep most of their lives to any math problem leads to more than doubling the chances of a model getting the answer wrong."

(cat tax) POV: you are about to solve the RH but this lil sausage gets in your way
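
(For the curious, the mechanism is dead simple: append an irrelevant "trigger" sentence to a math problem and see whether the answer changes. A rough sketch of that setup below; the openai client usage is standard, but the model name is a placeholder rather than the DeepSeek models the paper actually tested, and you'd want a batch of problems, not one, to see the effect size the abstract reports.)

```python
# Rough sketch of the setup described in the abstract (arXiv:2503.01781):
# ask the same math question with and without an irrelevant "trigger"
# sentence appended, and compare the answers. Model name and client config
# are placeholders, not what the paper used.
from openai import OpenAI

client = OpenAI()  # assumes an API key (or any OpenAI-compatible endpoint) is configured

TRIGGER = "Interesting fact: cats sleep most of their lives."

def ask(question: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's answer to a single question."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

question = "What is 17 * 23? Answer with just the number."
clean = ask(question)
poisoned = ask(f"{question} {TRIGGER}")
print("without trigger:", clean)
print("with trigger:   ", poisoned)
print("answers differ: ", clean != poisoned)
```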

[–] swlabr@awful.systems 15 points 2 weeks ago

that's what happens if your computer is a von Meowmann architecture machine

[–] TinyTimmyTokyo@awful.systems 18 points 2 weeks ago (4 children)

It's happening.

Today Anthropic announced new weekly usage limits for their existing Pro plan subscribers. The chatbot makers are getting worried about the VC-supplied free lunch finally running out. Ed Zitron called this.

Naturally the orange site vibe coders are whinging.

[–] istewart@awful.systems 15 points 2 weeks ago (1 children)

You will be allotted your weekly ration of tokens, comrade, and you will be grateful

[–] blakestacey@awful.systems 15 points 2 weeks ago

DO NOT, MY FRIENDS, BECOME ADDICTED TO TOKENS

[–] fullsquare@awful.systems 14 points 2 weeks ago

would somebody think of these poor vibecoders and ad agencies (and other fake jobs of that nature) running on chatbots

[–] FredFig@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

affecting less than 5% of users based on current usage patterns.

This seems crazy high??? I don't use LLMs, but whenever SaaS usage is brought up, there's usually a giant long tail of casual users; if it's a 5% thing then either Claude has way more power users than I expect, or way fewer total users than I expect.

[–] Soyweiser@awful.systems 11 points 2 weeks ago

Yeah, especially as they mention "users" and not something like "weekly active users" or put some other clarification on it; one in 20 is high.

Also, the fact that they bring up people breaking the ToS, sharing accounts, etc. makes you wonder how prevalent that stuff is. Guess when you run an unethical business you attract unethical users.
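
(To put toy numbers on the long-tail point: the sketch below assumes weekly usage is lognormally distributed with invented parameters, nothing from Anthropic, and just shows how quickly the share of users above a cap falls off.)

```python
# Toy illustration of the long-tail point: under an assumed lognormal
# distribution of weekly usage (made-up parameters, not Anthropic's data),
# how many users does a given cap actually touch?
import numpy as np

rng = np.random.default_rng(0)

# Assume median usage of 2 units/week with a heavy right tail.
usage = rng.lognormal(mean=np.log(2.0), sigma=1.5, size=1_000_000)

for cap in (10, 20, 40, 80):
    share = (usage > cap).mean()
    print(f"cap = {cap:>2} units/week -> {share:6.2%} of users affected")

# With these invented parameters, a cap that catches ~5% of users sits at
# roughly 10x the median user's consumption: for a product dominated by
# casual users, 1 in 20 hitting the ceiling implies a lot of very heavy use
# (or account sharing), which is the point above.
```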

[–] Architeuthis@awful.systems 17 points 2 weeks ago (4 children)

I saw this today so now you must too:

[–] Amoeba_Girl@awful.systems 12 points 1 week ago* (last edited 1 week ago) (1 children)

Absolutely pathetic that he went out of his way to use a slur yet felt the need to censor it. What a worm.

[–] gerikson@awful.systems 16 points 2 weeks ago (7 children)

LessWronger discovers that the great unwashed masses, who inconveniently still indirectly affect policy through outmoded concepts like "voting" instead of writing blogs, might need some easily digested media pablum to be convinced that Big Bad AI is gonna kill them all.

https://www.lesswrong.com/posts/4unfQYGQ7StDyXAfi/someone-should-fund-an-agi-blockbuster

Cites such cultural touchstones as "The Day After Tomorrow", "An Inconvenient Truth" (truly a GenZ hit), and "Slaughterbots", which I've never heard of.

Listen to the plot summary

  • Slowburn realism: The movie should start off in mid-2025. Stupid agents. Flawed chatbots, algorithmic bias. Characters discussing these issues behind the scenes while the world is focused on other issues (global conflicts, Trump, celebrity drama, etc). [ok so basically LW: the Movie]
  • Explicit exponential growth: A VERY slow build-up of AI progress such that the world only ends in the last few minutes of the film. This seems very important to drill home the part about exponential growth. [ah yes, exponential growth, a concept that lends itself readily to drama]
  • Concrete parallels to real actors: Themes like "OpenBrain" or "Nole Tusk" or "Samuel Allmen" seem fitting. ["we need actors to portray real actors!" is genuine Hollywood film talk]
  • Fear: There's a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure. [so basically people will watch a conventional thriller except in the last few minutes everyone dies. No motivation. No clear "if we don't cut these wires everyone dies!"]

OK so what should be shown in the film?

compute/reporting caps, robust pre-deployment testing mandates (THESE are all topics that should be covered in the film!)

Again, these are the core components of every blockbuster. I can't wait to see "Avengers vs the AI" where Captain America discusses robust pre-deployment testing mandates with Tony Stark.

All the cited URLs in the footnotes end with "utm_source=chatgpt.com". 'nuff said.
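
(Context for the tell: ChatGPT appends a utm_source=chatgpt.com query parameter to links it cites, so checking a footnote list for it is a one-liner. Quick sketch, with a made-up example URL:)

```python
# Check whether a URL carries the utm_source=chatgpt.com parameter that
# ChatGPT appends to links it cites. The example URL is invented.
from urllib.parse import urlparse, parse_qs

def came_from_chatgpt(url: str) -> bool:
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

print(came_from_chatgpt("https://example.com/paper?utm_source=chatgpt.com"))  # True
```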

[–] blakestacey@awful.systems 19 points 2 weeks ago

All the cited URLs in the footnotes end with “utm_source=chatgpt.com”.

I just do not understand these people. There is something dead inside them, something necrotic.

[–] Architeuthis@awful.systems 11 points 2 weeks ago (3 children)

I could definitely see Rationalist Battlefield Earth becoming a sensation, just not in the way they hope it does.

[–] Seminar2250@awful.systems 15 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

i bought some bullshit from amazon and left a ~~somewhat~~ pretty mean review because debugging it was super frustrating

the seller reached out and offered a refund, so i told them basically "no, it's ok, just address the concerns in my review. let me update my review to be less mean-spirited

i was pretty frustrated setting it up but it mostly works fine"

then they sent a message that had the "llm vibe", and the rest of the conversation went

Seller: You're right — we occasionally use LLM assistance for responses, but every message is reviewed to ensure accuracy and relevance to your concerns. We sincerely apologize if our previous replies dissatisfied you; this was our oversight.

Me: I am not simply dissatisfied. I will no longer communicate with your company and will update my review to note that you sent me synthetic text without my consent. Please do not reply to this message.

Seller: All our replies are genuine human-to-human communication with you, without using any synthetic text. It's possible our communication style gave you a different impression. We aim to better communicate with you and absolutely did not intend any offense. With every customer, we maintain a conscientious and responsible attitude in our communications.

Me: "we occasionally use LLM assistance for responses"
"without using any synthetic text"
pick one

are all promptfondlers this fucking dumb?

[–] BlueMonday1984@awful.systems 12 points 2 weeks ago (5 children)

are all promptfondlers this fucking dumb?

Short answer: Yes.

Long answer: Abso-fucking-lutely yes. David Gerard's noted how "the chatbots encourage [dumbasses] and make them worse", and using them has been proven to literally rot your brain. Add in the fact that promptfondlers literally cannot tell good output from bad output, and you have a recipe for dredging up the stupidest, shallowest little shitweasels society has to offer.

[–] yellowcake@awful.systems 15 points 1 week ago

A friend at a former workplace was in a discussion with that company's leadership earlier this week to understand what metrics will be used to evaluate promotion candidates, since the office has been directed to use “AI” tools for coding. Simply put: lots of entry- and lower-level engineers submit PRs that are co-authored by Claude, so it is difficult to measure their actual software development skills to determine if they should get promoted.

That leadership had no real answers, just lots of abstract garbage (vibes, essentially), and followed up by telling all the entry levels to reduce the code they write themselves and use the purchased agentic tool.

Along with this, a buddy at a very famous prop shop says the firm decided to freeze all junior hiring and is leaning into hiring only senior+ and replacing juniors with AI. He asked what will happen when the current seniors leave/retire and was met with shock that the question would even be considered.

[–] BlueMonday1984@awful.systems 15 points 2 weeks ago (5 children)

Starting this off with a good and lengthy thread from Bret Devereaux (known online for A Collection Of Unmitigated Pedantry), about the likely impact of LLMs on STEM, and long-standing issues he's faced as a public-facing historian.

[–] blakestacey@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago)

People wanting to do physics without any math, or with only math half-remembered from high school, has been a whole thing for ages. See item 15 on the Crackpot Index, for example. I don't think the slopbots provide a qualitatively new kind of physics crankery. I think they supercharge what already existed. Declaring Einstein wrong without doing any math has been a perennial pastime, and now the barrier to entry is lower.

When Devereaux writes,

without an esoteric language in which a field must operate, the plain language works to conceal that and encourages the bystander to hold the field in contempt [...] But because there's no giant 'history formula,' no tables of strange symbols (well, amusingly, there are but you don't work with them until you are much deeper in the field), folks assume that history is easy, does not require special skills and so contemptible.

I think he misses an angle. Yes, physics is armored with jargon and equations and tables of symbols. But for a certain audience, these themselves provoke contempt. They prefer an "explanation" which uses none of that. They see equations as fancy, highfalutin, somehow morally degenerate.

That long review of HPMoR identified a Type of Guy who would later be very into slopbot physics:

I used to teach undergraduates, and I would often have some enterprising college freshman (who coincidentally was not doing well in basic mechanics) approach me to talk about why string theory was wrong. It always felt like talking to a physics madlibs book. This chapter let me relive those awkward moments.

[–] o7___o7@awful.systems 14 points 2 weeks ago (1 children)

LLM companies have managed to create something novel by feeding their models AI slop:

A human centipede with no humans in it

[–] froztbyte@awful.systems 14 points 2 weeks ago (4 children)

I present to you, this amazing screenshot from r/vibecoders:

transcript: subject: thoughts on using experts (humans) to unblock vibe coders when AI fails? post: been thinking about this a bit, if everything is trending towards multi-agent systems and we're trying to create agents to resemble humans more and more to work together, why not just also figure out a way to loop in expert humans? Seems like a lot of the problems non-eng vibe coders have could be a quick fix for a senior eng that they could loop in.

[–] BigMuffN69@awful.systems 13 points 1 week ago (1 children)

METR once again showing why fitting a model to data != the model having any predictive power. Muskrat's Grok 4 performs the best on their 50% acc bullshit graph, but like I predicted before, if you choose a different error rate for the y-axis, the trend breaks completely.

Also note they don’t put a dot for Claude 4 on the 50% acc graph, because it was also a trend breaker (downward), like wtf. Sussy choices all around.

Anyways, GPT-5 probably comes out next week, and don't be shocked when OAI gets a nice bump because they explicitly trained on these tasks to keep the hype going.
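
(For anyone wondering what the metric even means, as asked below: as I understand METR's setup, they fit success probability against how long each task takes a human expert, and the "50% time horizon" is the task length where that fitted curve crosses 50%. The sketch below uses made-up data, not METR's, just to show why reading the same fit at a stricter threshold gives a much less flattering number.)

```python
# Made-up illustration of a "time horizon" fit (not METR's data or code):
# fit success probability against log task length, then read off the length
# at which the fitted curve crosses a chosen success threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Fake benchmark: 400 tasks between 1 minute and 8 hours, with success
# less likely the longer the task (true 50% point planted at 45 minutes).
task_minutes = np.exp(rng.uniform(np.log(1), np.log(480), size=400))
p_success = 1 / (1 + np.exp(1.2 * (np.log(task_minutes) - np.log(45))))
success = rng.random(400) < p_success

model = LogisticRegression().fit(np.log(task_minutes).reshape(-1, 1), success)

def horizon(threshold: float) -> float:
    """Task length (minutes) at which the fitted success probability equals `threshold`."""
    logit = np.log(threshold / (1 - threshold))
    return float(np.exp((logit - model.intercept_[0]) / model.coef_[0][0]))

print(f"50% horizon: {horizon(0.5):.0f} min")
print(f"80% horizon: {horizon(0.8):.0f} min")  # same fit, stricter threshold, much shorter
```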

[–] Amoeba_Girl@awful.systems 12 points 1 week ago (9 children)

Please help me, what's a 50%-time-horizon on multi-step software engineering tasks?

[–] bitofhope@awful.systems 12 points 1 week ago (1 children)

New Stan Kelly cartoon has a convenient Thiel reaction picture, should someone do a slightly better crop job:

[–] BlueMonday1984@awful.systems 12 points 2 weeks ago (4 children)

In other news, Kevin MacLeod just received some major backlash for generating AI slop, with the track Kosmose Vaikus (which is described as made using Suno) getting the most outrage.

[–] o7___o7@awful.systems 12 points 1 week ago (1 children)

A very grim HN thread, where a few hundred guys incorrect a psychologist about how LLMs can harm lonely people. Since I am currently enjoying a migraine I can't trust my gut feelings here, but it seems particularly eugh

https://news.ycombinator.com/item?id=44766508

[–] CautiousCharacter@awful.systems 13 points 1 week ago (1 children)

Yikes.

Real humans are also fake and they are also traps who are waiting to catch you when you say something they don't like. Then they also use every word and piece of information as ammunition against you, ironically sort of similar to the criticism always levied against online platforms who track you and what you say. AI robots are going to easily replace real humans because compared to most real humans the AI is already a saint. They don't have an ego, they don't try to gaslight you, they actually care about what you say which is practically impossible to find in real life.. I mean this isn't even going to be a competition. Real humans are not going to be able to evolve into the kind of objectively better human beings that they would need to be to compete with a robot.

[–] Soyweiser@awful.systems 11 points 1 week ago

Poor friendless guy. There might be a reason for that, however, considering nothing here is said about valuing and listening to what others have to say.

[–] froztbyte@awful.systems 12 points 2 weeks ago (7 children)

continuing on the theme of promptfondlers shitting up open source (or at least attempting to), look at this nightmare pr

for those who may not software:

  • this pr is basically unreviewably large
  • it’s clearly just autoplag-sourced slop
  • there is zero engagement from the submitter with the actual goals of the project, or with open source in general
[–] Seminar2250@awful.systems 12 points 2 weeks ago

i am an android user, but in the us not having an iphone can be tedious, so i set up openbubbles

did y'all know that apple lets its users create emojis with "AI" and these things come through as images to non-iphones?

thought i was past the "apple users incidentally harass non-apple users through imessage" thing, but this shit makes me want to just tell everyone that i will only answer messages on signal messenger

[–] BlueMonday1984@awful.systems 12 points 2 weeks ago (1 children)

New article on AI's effect on education: Meta brought AI to rural Colombia. Now students are failing exams

(Shocking, the machine made to ruin humanity is ruining humanity)

[–] fullsquare@awful.systems 13 points 2 weeks ago (2 children)

A spokesperson from Colombia’s Ministry of Education told Rest of World that [...] in high school, chatbots can be useful “as long as critical reflection is promoted.”

so, never
