this post was submitted on 20 Jul 2025
24 points (100.0% liked)

TechTakes

2087 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] antifuchs@awful.systems 13 points 5 days ago

If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.

https://unstable.systems/@sop/114898566686215926

[–] BlueMonday1984@awful.systems 13 points 5 days ago (3 children)

Caught a particularly spectacular AI fuckup in the wild:

(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you've earned it)

[–] antifuchs@awful.systems 11 points 5 days ago

Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events missing from their training data

[–] bitofhope@awful.systems 8 points 5 days ago (1 children)

Damn, this is how I find out?

[–] Soyweiser@awful.systems 7 points 5 days ago* (last edited 5 days ago)

The AI is right: with how much we know of his life he isn’t really dead, the AGI can just simulate him and resurrect him. Takes another hit from my joint made exclusively out of the Sequences book pages

(Rip indeed, what a crazy ride, and he was all aboard).

[–] froztbyte@awful.systems 10 points 5 days ago (4 children)

Ouch. Also, I'm raging and didn't even realize I had barbarian levels.

[–] Amoeba_Girl@awful.systems 8 points 5 days ago (1 children)

Well I suppose it can’t be much worse than graphology or Myers-Briggs!

[–] froztbyte@awful.systems 5 points 4 days ago (1 children)

is graphology the pentaseptateragonoid spiderweb-dartboard-connect-the-spines thing?

[–] Soyweiser@awful.systems 7 points 5 days ago

failed my saving throw.

[–] o7___o7@awful.systems 5 points 5 days ago

I don't know what I expected

[–] gerikson@awful.systems 13 points 5 days ago* (last edited 5 days ago)

So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having not been able to replicate its claims of millions of dead thereof, basically concludes it's not so bad?

https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

Edit ah it's the dude who tried to prove that most Catholic cardinals are gay because heredity, I think I highlighted that post previously here. Definitely a high-sneer vein to mine.

[–] antifuchs@awful.systems 20 points 6 days ago (5 children)

This incredible banger of a bug against whisper, the OpenAI speech to text engine:

Complete silence is always hallucinated as "ترجمة نانسي قنقر" in Arabic which translates as "Translation by Nancy Qunqar"

[–] nightsky@awful.systems 7 points 5 days ago (1 children)

Similar case from 2 years ago with Whisper when transcribing German.

I'm confused by this. Didn't we have pretty decent speech-to-text already, before LLMs? It wasn't perfect but at least didn't hallucinate random things into the text? Why the heck was that replaced with this stuff??

[–] dgerard@awful.systems 4 points 5 days ago (1 children)

Transformers do way better transcription, buuuuuut yeah you gotta check it

[–] nightsky@awful.systems 7 points 5 days ago

I’m just confused because I remember using Dragon NaturallySpeaking on Windows 98 back in the 90s, and it was already pretty accurate for dictation even then. Sometimes it feels as if all of that never happened.

[–] BurgersMcSlopshot@awful.systems 10 points 6 days ago (1 children)

Lol, the training data must have included videos that were silent but showed an on-screen translation credit. Silence in audio shouldn’t require special “workarounds”.

[–] antifuchs@awful.systems 11 points 6 days ago

The whisper model has always been pretty crappy at this. I use a speech-to-text system as an assistive input method when my RSI gets bad, and it has supported whisper (because whisper covers more languages than the developer could train on their own infrastructure/time) since maybe 2022 or so. Every time someone tries to use it, they run into hallucinated inputs during pauses — even with very good silence detection and noise filtering.

This is just not a use case of interest to the people making whisper, imagine that.
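The “very good silence detection” mentioned above can be as simple as an energy gate that drops silent frames before any audio reaches the transcription model; here’s a minimal numpy sketch (the function name, frame length, and threshold are illustrative, not from any real whisper frontend — production systems use proper VAD like webrtcvad):

```python
import numpy as np

def silence_gate(samples, frame_len=1600, threshold=1e-3):
    """Keep only frames whose RMS energy exceeds the threshold,
    so stretches of pure silence never reach the model at all."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    voiced = [f for f in frames if np.sqrt(np.mean(f ** 2)) > threshold]
    return np.concatenate(voiced) if voiced else np.array([])

# One second of silence followed by one second of a 440 Hz tone at 16 kHz
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.concatenate([np.zeros(sr), 0.1 * np.sin(2 * np.pi * 440 * t)])
gated = silence_gate(audio)  # only the voiced second survives
```

Even with a gate like this in front, the comment above notes the hallucinations still happen — the model itself fills any residual pause with its most statistically likely “translation credit”.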

[–] BigMuffN69@awful.systems 5 points 5 days ago

Ernie Davis gives his thoughts on the recent GDM and OAI performance at the IMO.

https://garymarcus.substack.com/p/deepmind-and-openai-achieve-imo-gold

[–] BlueMonday1984@awful.systems 13 points 6 days ago (1 children)

New Ed Zitron: The Hater's Guide To The AI Bubble

(guy truly is the Kendrick Lamar of tech, huh)

[–] o7___o7@awful.systems 5 points 6 days ago* (last edited 6 days ago) (1 children)

Hey, remember the thing that you said would happen?

https://bsky.app/profile/iwriteok.bsky.social/post/3lujqik6nnc2z

Edit: whoops, looks like we posted at about the same time!

[–] BlueMonday1984@awful.systems 5 points 6 days ago* (last edited 6 days ago)

Hey, remember the thing that you said would happen?

The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn't expect to be vindicated so soon afterwards.

EDIT: One of the replies gives an example for my "death of value-neutral AI" prediction too, openly calling AI "a weapon of mass destruction" and calling for its abolition.

[–] BlueMonday1984@awful.systems 9 points 6 days ago (1 children)

Managed to stumble across two separate attempts to protect promptfondlers' feelings from getting hurt like they deserve, titled "Shame in the machine: affective accountability and the ethics of AI" and "AI Could Have Written This: Birth of a Classist Slur in Knowledge Work".

I found both of them whilst trawling Bluesky, and they're being universally mocked like they deserve on there.

[–] Amoeba_Girl@awful.systems 8 points 5 days ago* (last edited 5 days ago)

I really like how the second one appropriates pseudo-Marxist language to have a go at those snooty liberal elites again.

edit: The first paper might be making a perfectly valid point at a glance??

[–] nfultz@awful.systems 9 points 6 days ago (1 children)

Not sure if this was already posted here but saw it on LI this morning - AI for Good [Appearance?] - sometimes we focus on the big companies and miss how awful the sycophantic ecosystem gets.

[–] froztbyte@awful.systems 7 points 4 days ago* (last edited 4 days ago)

ah yeah @fasterandworse found this when it was happening (and I pulled archives of the live streams on the days it was playing)

some further observations to the stuff in her writeup: the day1 livestream also “starts late” (and cuts suspiciously cleanly in mid-sentence). I still want to do some tests to find out if YouTube’s live editor allows editing out stream history while stream is going, but either way they made very sure that they could completely silence that talk if it turned out that she didn’t bend as forced

(the now-up video published on youtube definitely starts differently to the livestream, too, so it’s likely a local post-mix recording that got uploaded. I haven’t had time to review both and find possible differences)
