this post was submitted on 14 Jul 2025
19 points (100.0% liked)

TechTakes

2087 readers
350 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
sorted by: hot top controversial new old
[–] froztbyte@awful.systems 1 points 4 days ago

hi it's me, your locally undesignated wordsmith! in today's wordsmithing, one not of my own but I thought worthy of known: robo-scab

(cred to @dgerard for linking upskeet that got me finding this. also I was so close; close but no cigar, I guess)

[–] nightsky@awful.systems 31 points 2 weeks ago (10 children)

I need to rant about yet another SV tech trend which is getting increasingly annoying.

It's something that is probably less noticeable if you live in a primarily English-speaking region, but if not, there is this very annoying thing that a lot of websites from US tech companies do now, which is that they automatically translate content, without ever asking. So English is pretty big on the web, and many English websites are now auto-translated to German for me. And the translations are usually bad. And by that I mean really fucking bad. (And I'm not talking about the translation feature in webbrowsers, it's the websites themselves.)

Small example of a recent experience: I was browsing stuff on Etsy, and Etsy is one of the websites which does this now. Entire product pages with titles and descriptions and everything is auto-translated, without ever asking me if I want that.

On a product page I then saw:

Material: gefühlt

This was very strange... because that makes no sense at all. "Gefühlt" is a form (participle) of the verb "fühlen", which means "to feel". It can be used in a past tense form of the verb.

So, to make sense of this you first have to translate that back to English, the past tense "to feel" as "felt". And of course "felt" can also mean a kind of fabric (which in German is called "Filz"), so it's a word with more than one meaning in English. You know, words with multiple meanings, like most words in any language. But the brilliant SV engineers do not seem to understand that you cannot translate words without the context they're in.

And this is not a singular experience. Many product descriptions on Etsy are full of such mistakes now, sometimes to the point of being downright baffling. And Ebay does the same now, and the translated product titles and descriptions are a complete shit show as well.

And Youtube started replacing the audio of English videos by default with AI-auto-generated translations spoken by horrible AI voices. By default! It's unbearable. At least there's a button to switch back to the original audio, but I keep having to press it. And now Youtube Shorts is doing it too, except that the YT Shorts video player does not seem to have any button to disable it at all!

Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people's faces?

[–] sailor_sega_saturn@awful.systems 14 points 2 weeks ago

Click here if you want a horribly bad translation in your faceA screenshot of a food delivery website advertising some chicken nuggets called 'chicken pops' that cost 195 rupees. However the item description is the medical definition of chickenpox, the virus, instead.

From Reddit

[–] HedyL@awful.systems 13 points 2 weeks ago (1 children)

Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

This really gets on my nerves too. They probably came up with the idea that they could increase time spent on their platforms and thus revenue by providing more content in their users' native languages (especially non-English). Simply forcing it on everyone, without giving their users a choice, was probably the cheapest way to implement it. Even if this annoys most of their user base, it makes their investors happy, I guess, at least over the short term. If this bubble has shown us anything, it is that investors hardly care whether a feature is desirable from the users' point of view or not.

load more comments (1 replies)
load more comments (8 replies)
[–] blakestacey@awful.systems 26 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

Yud:

ChatGPT has already broken marriages, and hot AI girls are on track to remove a lot of men from the mating pool.

And suddenly I realized that I never want to hear a Rationalist say the words "mating pool".

(I fired up xcancel to see if any of the usual suspects were saying anything eminently sneerable. Yudkowsky is re-xitting Hanania and some random guy who believes in g. Maybe he should see if the Pioneer Fund will bankroll publicity for his new book....)

[–] Soyweiser@awful.systems 20 points 2 weeks ago

Good news for women, less risk of the pool needing to be drained because someone crapped in it again.

[–] shapeofquanta@lemmy.vg 18 points 2 weeks ago

hot AI girls are on track to remove a lot of men from the mating pool

Can’t remove them if they were never in it.

[–] V0ldek@awful.systems 16 points 2 weeks ago

hot AI girls are on track to remove a lot of men from the mating pool.

Wasn't the "problem" that there's too many men in the mating pool and women are "alphamaxxing" or whatever the fuck to get the highest quality dick? Shouldn't this be a good thing for incels like Hanania.

[–] swlabr@awful.systems 14 points 2 weeks ago

mating pool

probably on par with the other stuff you might see at a rationalist poly compound

[–] flaviat@awful.systems 24 points 2 weeks ago (11 children)

rsyslog goes "AI first", for what reason? no one knows.

Opening ipython greeted me with this: "Tip: IPython 9.0+ has hooks to integrate AI/LLM completions."

I wish open source projects would stop doing this.

[–] nightsky@awful.systems 17 points 2 weeks ago (1 children)

rsyslog goes “AI first”

what

Thanks for the "from now on stay away from this forever" warning. Reading that blog post is almost surreal ("how AI is shaping the future of logging"), I have to remind myself it's a syslog daemon.

[–] froztbyte@awful.systems 12 points 2 weeks ago

I would've stan'd syslog-ng but they've also been pulling some fuckery with docs again lately that's making me anxious, so I'm very :|||||

[–] BlueMonday1984@awful.systems 14 points 2 weeks ago (3 children)

Potential hot take: AI is gonna kill open source

Between sucking up a lot of funding that would otherwise go to FOSS projects, DDOSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I'm not sure it can recover from.

[–] fullsquare@awful.systems 13 points 2 weeks ago (2 children)

that and deluge of fake bug reports

load more comments (2 replies)
load more comments (2 replies)
load more comments (9 replies)
[–] swlabr@awful.systems 22 points 2 weeks ago (1 children)

found on reddit. posted without further comment

load more comments (1 replies)
[–] hrrrngh@awful.systems 20 points 2 weeks ago (1 children)

Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611

Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.

. . .

Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

taking a wild guess it's Yudkowsky. "very knowledgeable people" and "many/most experts" is staying on my AI apocalypse bingo sheet.

even among people critical of AI (who don't otherwise talk about it that much), the AI apocalypse angle seems really common and it's frustrating to see it normalized everywhere. though I think I'm more nitpicking than anything because it's not usually their most important issue, and maybe it's useful as a wedge issue just to bring attention to other criticisms about AI? I'm not really familiar with Bernie Sanders' takes on AI or how other politicians talk about this. I don't know if that makes sense, I'm very tired

load more comments (1 replies)
[–] gerikson@awful.systems 17 points 1 week ago* (last edited 1 week ago) (2 children)

Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

(Dude then goes on to try to game-theorize this, I didn't bother to poke holes in it)

The thing is, genocides have happened, and people around the world are perfectly happy to advocate for it in diverse situations. Probability wise, the risk of genocide somewhere is very close to 1, while the risk of "omnicide" is much closer to zero. If you want to advocate for eliminating something, working to eliminating the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

At least on commenter gets it:

Most people distinguish between intentional acts and shit that happens.

(source)

Edit never read the comments (again). The commenter referenced above obviously didn't feel like a pithy one liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than traffic fatalities. Nice "save", dipshit.

[–] lagrangeinterpolator@awful.systems 13 points 1 week ago (2 children)

Hmm, should I be more worried and outraged about genocides that are happening at this very moment, or some imaginary scifi scenario dreamed up by people who really like drawing charts?

One of the ways the rationalists try to rebut this is through the idiotic dust specks argument. Deep down, they want to smuggle in the argument that their fanciful scenarios are actually far more important than real life issues, because what if their scenarios are just so bad that their weight overcomes the low probability that they occur?

(I don't know much philosophy, so I am curious about philosophical counterarguments to this. Mathematically, I can say that the more they add scifi nonsense to their scenarios, the more that reduces the probability that they occur.)

[–] fullsquare@awful.systems 13 points 1 week ago (2 children)

reverse dust specks: how many LWers would we need to permanently deprive of access to internet to see rationalist discourse dying out?

load more comments (2 replies)
load more comments (1 replies)
[–] Soyweiser@awful.systems 12 points 1 week ago (4 children)

Recently, I've realized that there is a decent explanation for why so many people believe that - if we model them as operating under a strict zero-sum game model of the world… ‘everyone loses’ is basically an incoherent statement - as a best approximation it would either denote no change and therefore be morally neutral, or an equal outcome, and would therefore be preferable to some.

Yes, this is why people think that. This is a normal thought to think others have.

[–] o7___o7@awful.systems 15 points 1 week ago (1 children)

Why do these guys all sound like deathnote, but stupid?

[–] dgerard@awful.systems 16 points 1 week ago

because they cribbed their ideas from deathnote, and they're stupid

load more comments (3 replies)
[–] self@awful.systems 17 points 2 weeks ago (1 children)

404media posted an article absolutely dunking on the idea of pivoting to AI, as one does:

media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked

[–] blakestacey@awful.systems 21 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

We—yes, even you—are using some version of AI, or some tools that have LLMs or machine learning in them in some way shape or form already

Fucking ghastly equivocation. Not just between "LLMs" and "machine learning", but between opening a website that has a chatbot icon I never click and actually wasting my time asking questions to the slop machine.

load more comments (5 replies)
[–] BlueMonday1984@awful.systems 16 points 2 weeks ago (3 children)

The curl Bug Bounty is getting flooded with slop, and the security team is prepared to do something drastic to stop it. Going by this specific quote, reporters falling for the hype is a major issue:

As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

load more comments (3 replies)
[–] lagrangeinterpolator@awful.systems 16 points 1 week ago* (last edited 1 week ago) (1 children)

OpenAI claims that their AI can get a gold medal on the International Mathematical Olympiad. The public models still do poorly even after spending hundreds of dollars in computing costs, but we've got a super secret scary internal model! No, you cannot see it, it lives in Canada, but we're gonna release it in a few months, along with GPT5 and Half-Life 3. The solutions are also written in an atrociously unreadable manner, which just shows how our model is so advanced and experimental, and definitely not to let a generous grader give a high score. (It would be real interesting if OpenAI had a tool that could rewrite something with better grammar, hmmm....) I definitely trust OpenAI's major announcements here, they haven't lied about anything involving math before and certainly wouldn't have every incentive in the world to continue lying!

It does feel a little unfortunate that some critics like Gary Marcus are somewhat taking OpenAI's claims at face value, when in my opinion, the entire problem is that nobody can independently verify any of their claims. If a tobacco company released a study about the effects of smoking on lung cancer and neglected to provide any experimental methodology, my main concern would not be the results of that study.

Edit: A really funny observation that I just thought of: in the OpenAI guy's thread, he talks about how former IMO medalists graded the solutions in message #6 (presumably to show that they were graded impartially), but then in message #11 he is proud to have many past IMO participants working at OpenAI. Hope nobody puts two and two together!

load more comments (1 replies)
[–] BigMuffN69@awful.systems 16 points 2 weeks ago* (last edited 1 week ago) (4 children)

Remember last week when that study on AI's impact on development speed dropped?

A lot of peeps take away on this little graphic was "see, impacts of AI on sw development are a net negative!" I think the real take away is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science and their experimental set up is garbage.

https://substack.com/home/post/p-168077291

"First, I don’t like calling this study an “RCT.” There is no control group! There are 16 people and they receive both treatments. We’re supposed to believe that the “treated units” here are the coding assignments. We’ll see in a second that this characterization isn’t so simple."

(I am once again shilling Ben Recht's substack. )

load more comments (4 replies)
[–] TinyTimmyTokyo@awful.systems 15 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. It's also great for vibe physics.

load more comments (6 replies)
[–] BlueMonday1984@awful.systems 14 points 2 weeks ago

New high-strength sneer from Matthew Hughes: The Biggest Insult, targeting "The Unspeakable Contempt At The Heart of Generative AI"

[–] blakestacey@awful.systems 14 points 1 week ago

Evan Urquhart:

I had to attend a presentation from one of these guys, trying to tell a room full of journalists that LLMs could replace us & we needed to adapt by using it and I couldn't stop thinking that an LLM could never be a trans journalist, but it could probably replace the guy giving the presentation.

[–] scruiser@awful.systems 14 points 2 weeks ago (6 children)

So recently (two weeks ago), I noticed Gary Marcus made a lesswrong account to directly engage with the rationalists. I noted it in a previous stubsack thread

Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology and using P(some event) based on numbers pulled out of his ass.

And sure enough, he has started talking about P(Doom). I hate being right. To be more than fair to him, he is addressing the scenario of Elon Musk or someone similar pulling off something catastrophic by placing too much trust in LLMs shoved into something critical. But he really should know better by now that using their lingo and their crit-hype terminology strengthens them.

load more comments (6 replies)
[–] blakestacey@awful.systems 14 points 1 week ago (6 children)

https://xcancel.com/jasonlk/status/1946069562723897802

Vibe Coding Day 8,

I'm not even out of bed yet and I'm already planning my day on @Replit.

Today is AI Day, to really add AI to our algo.

[...]

If @Replit deleted my database between my last session and now there will be hell to pay

[–] swlabr@awful.systems 13 points 1 week ago* (last edited 1 week ago)

Saw this, was going to post this a literal minute before you did but stared into this abyss a little too long.

Here's what the abyss revealed:

  • Guy is fucking stupid
  • Guy is fucking stupid, hence the AI use
  • Guy is fucking stupid, AI accidentally his whole database and he is still an AI glazer

Can't wait to see this guy just use a different but same tool to delete his shit again, and learn nothing

load more comments (5 replies)
[–] fasterandworse@awful.systems 13 points 2 weeks ago

My new video about the anti-design of the tech industry where I talk about this little passage from an ACM article that set me off when I found it a few years back.

In short, before software started eating all the stuff "design" meant something. It described a process of finding the best way to satisfy a purpose. It was a response to the purpose.

The tech industry takes computation as being an immutable means and finds purposes it may satisfy. The purpose is a response to the tech.

p.s. sorry to spam. :)

vid: https://www.youtube.com/watch?v=ollyMSWSWOY pod: https://pnc.st/s/faster-and-worse/8ffce464/tech-as-anti-design

threads bsky: https://bsky.app/profile/fasterandworse.com/post/3ltwles4hkk2t masto: https://hci.social/@fasterandworse/114852024025529148

[–] BlueMonday1984@awful.systems 13 points 1 week ago* (last edited 1 week ago)

Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):

I've got time, so I'll fire off a sidenote:

In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.

In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.

[–] Soyweiser@awful.systems 12 points 1 week ago (4 children)

Somebody found a relevant reddit post:

Dr. Casey Fiesler ‪@cfiesler.bsky.social‬ (who has clippy earrings in a video!) writes: " This is fascinating: reddit link

Someone “worked on a book with ChatGPT” for weeks and then sought help on Reddit when they couldn’t download the file. Redditors helped them realized ChatGPT had just been roleplaying/lying and there was no file/book…"

[–] blakestacey@awful.systems 20 points 1 week ago

After understanding a lot of things it’s clear that it didn’t. And it fooled me for two weeks.

I have learned my lesson and now I am using it to generate one page at a time.

qu1j0t3 replies:

that's, uh, not really the ideal takeaway from this lesson

load more comments (3 replies)
[–] Seminar2250@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

thinking about how often code will be like

[-1] * len(array) # -1 is a place holder because I don't know the vocabulary term "sentinel value"

and justified because "it just needs to work"

and now we have professional vibe coders

https://en.wikipedia.org/wiki/Anguish

[–] TinyTimmyTokyo@awful.systems 12 points 1 week ago (5 children)
load more comments (5 replies)
load more comments
view more: next ›