this post was submitted on 03 Aug 2025
11 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] o7___o7@awful.systems 5 points 40 minutes ago (2 children)

OpenAI is food. You can tell because it's washed, rinsed, and cooked.

[–] froztbyte@awful.systems 3 points 18 minutes ago

this gave me a good chuckle after a few Some Fuckin Days, ty

[–] fullsquare@awful.systems 3 points 38 minutes ago

who's the bigger fish that's gonna eat them?

[–] sailor_sega_saturn@awful.systems 1 points 4 minutes ago

Crypto bros continue to be morally bankrupt. There is a coin / NFT called "GreenDildoCoin", and they've thrown dildos onto the court at multiple WNBA basketball games (ESPN, video). It warms my heart that one of them was arrested. More of that, please.

Polymarket even had a "prediction" on it. Because surely the outcome there couldn't be influenced by someone who also placed a large bet. Oh, and Donald Trump Jr. posted a meme about it.

None of this is particularly surprising if you've followed NFTs at all: the clout chasing goes to the extreme. In the limit, memecoins can act as donations to terrible people from donors who want them to be terrible. Still, I hate how much publicity this has gotten, and how it has manifested as gross disrespect towards women athletes / women's sports by the sorts of losers who make "jokes" about no one watching WNBA games.

[–] TinyTimmyTokyo@awful.systems 3 points 34 minutes ago (1 children)
[–] FredFig@awful.systems 4 points 23 minutes ago

Half of these are people using GPT to write a rant about GPT, and the other half are saying "skill issue"; it's an entirely different world.

[–] antifuchs@awful.systems 4 points 1 hour ago

I’m excited that Silicon Valley tech has finally managed to invent thinking. Makes this book obsolete at long last.

[–] BlueMonday1984@awful.systems 5 points 2 hours ago

Found a good sneer recently: The LLM In The Room, about LLMs' deeply lacking usefulness for programming.

[–] slop_as_a_service@awful.systems 9 points 3 hours ago

Fate, it seems, is not without a sense of irony.

[–] froztbyte@awful.systems 7 points 9 hours ago (2 children)

can't wait to see what fresh horrors this shit unleashes

[–] Seminar2250@awful.systems 7 points 6 hours ago (2 children)

ugh cybersecurity is already a fucking nightmare i should have braced myself

[–] froztbyte@awful.systems 3 points 13 minutes ago

wouldn't you just love some snakeoil sauce on your snakeoil sandwich? imagine how well it'll go with that snakeoil cocktail we're giving you, on the house!

(which, ofc, is a limited-size cocktail. only 30ml! enough to get a feel for our snakeoil! but also it's only 10ml/day. license levels. you understand, I'm sure.)

[–] BlueMonday1984@awful.systems 6 points 7 hours ago (1 children)

Considering the quality of your average LLM, and the quality of the promptfondlers who use them, I expect this will result in a lot of serious security vulnerabilities and broken projects.

[–] Soyweiser@awful.systems 5 points 4 hours ago

Considering how bad these things are at math, see below, and how important math is for cryptography, see any textbook on it, this will be !!fun!!.

[–] BigMuffN69@awful.systems 8 points 11 hours ago (1 children)

Only taste tester I trust dropped his verdict

[–] BigMuffN69@awful.systems 7 points 11 hours ago
[–] sailor_sega_saturn@awful.systems 10 points 12 hours ago (1 children)

At my big tech job after a number of reorgs / layoffs it's now getting pretty clear that the only thing they want from me is to support the AI people and basically nothing else.

I typed out a big rant about this, but it probably contained a little too much personal info on the public web in one place so I deleted it. Not sure what to do though grumble grumble. I ended up in a job I never would have chosen myself and feel stuck and surrounded by chat-bros uggh.

[–] YourNetworkIsHaunted@awful.systems 6 points 12 hours ago* (last edited 12 hours ago) (1 children)

You could try getting laid off, scrambling for a year to get back into a tech position, delivering Amazon packages to make ends meet, and despairing at the prospect of reskilling in this economy. I... would not recommend it.

It looks like there are a weirdly large number of medical technician jobs opening up? I wonder if they're ahead of the curve on the AI hype cycle.

  1. Replace humans with AI
  2. Learn that AI can't do the job well
  3. Frantically try to replace 2-5 years of lost training time
[–] sailor_sega_saturn@awful.systems 7 points 11 hours ago (1 children)

Amazon should treat drivers better. I hate how much "hustle" is required for that sort of job and how little they respect their workers.

I think my job needs me too much to lay me off, which I have mixed feelings about despite the slim-pickings for jobs.

I'm also trying to position myself to potentially have to flee the USA* due to transgender persecution**. There's still a lot of unknowns there. I'll probably stay at my job for a while while I work on setting some stuff up for the future.

That said part of me is tempted to reskill into a career that'd work well internationally (nursing?) -- I'm getting a little up in years for that but it'd probably be a lot more fulfilling than what I'm doing now.

* My previous attempt did not work out. I rushed things too much and ended up too stressed out and unbelievably homesick.

** This has been getting incredibly stressful lately.

[–] mountainriver@awful.systems 3 points 55 minutes ago

Most medical careers work well internationally, in principle. Something to keep in mind is that language proficiency may be a stated or unstated prerequisite for employment, in particular if you have contact with patients. If you work with the machines (lab technician, etc) the language may be of less importance. Or at least, so I have heard. Relevance depends on your country of choice and your pre-existing language skills, of course.

Too bad attempt number one didn't work out. Better luck with attempt number two.

[–] BigMuffN69@awful.systems 8 points 21 hours ago* (last edited 21 hours ago) (3 children)

Well, after 2.5 years and hundreds of billions of dollars burned, we finally have GPT-5. Kind of feels like a make-or-break moment for the good folks at OAI! With the eyes of the world on their lil presentation this morning, everyone could feel the stakes: they needed something that would blow our minds. We finally get to see what a superintelligence looks like! Show us your best cherry-picked benchmark, Sloppenheimer!

Graphic design is my PASSION. Good thing the entirety of the world's economy is not being held up by cranking out a few more points on SWE-bench, right????

Ok, what about ARC? Surely y'all got a new high to prove the AGI mission was progressing, right??

Oh my fucking God. They actually have lost the lead to fucking Grok. For my sanity I didn't watch the livestream, but curiously, they left the ARC results out of their presentation, even though they gave François early access to test. Kind of like they knew it looked really bad and underwhelming.

[–] blakestacey@awful.systems 8 points 15 hours ago* (last edited 15 hours ago)

"The word blueberry contains the letter b 3 times."

Also reported in more detail here:

The word "blueberry" has the letter b three times:

  • Once at the start ("B" in blueberry).
  • Once in the middle ("b" in blue).
  • Once before the -erry ending ("b" in berry). [...] That's exactly how blueberry is spelled, with the b's in positions 1, 5, and 7. [...] So the "bb" in the middle is really what gives blueberry its double-b moment. [...] That middle double-b is easy to miss if you just glance at the word.

(via)
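
For the record, a two-line sanity check (plain Python, nothing assumed beyond the word itself) shows there are exactly two b's, at positions 1 and 5, not the three the model reports:

```python
>>> "blueberry".count("b")      # how many b's are there really?
2
>>> [i + 1 for i, c in enumerate("blueberry") if c == "b"]  # 1-indexed positions
[1, 5]
```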

[–] Soyweiser@awful.systems 6 points 21 hours ago (2 children)

Graphic design is my PASSION

Wait, just how bad is 4? 30% accurate? Did they train it wrong as a joke? Also, hatless 5 worse than 3?

[–] BigMuffN69@awful.systems 8 points 20 hours ago

Yeah, o3 (the model that was RL'd to a crisp and hallucinated like crazy) was very strong on math and coding benchmarks. GPT-5 (I guess without tools/extra compute?) is worse. Nevertheless...

[–] BigMuffN69@awful.systems 5 points 20 hours ago (1 children)

The one big cope I'm seeing is in the METR graph ofc. Tiny bump with massive error bars above Grok 4 so they can claim the exponential is continuing while the models stagnate in all material ways.

[–] ebu@awful.systems 7 points 20 hours ago

50% success rate? sorry, all this for a coin flip?

[–] ShakingMyHead@awful.systems 3 points 19 hours ago* (last edited 19 hours ago)

Looks like they already removed it.

[–] BlueMonday1984@awful.systems 7 points 1 day ago (2 children)

In other news, the mainstream press has caught on to "clanker" (originally coined for use in the Star Wars franchise) getting heavy use, with Rolling Stone, Gizmodo, and Axios putting out articles on it, and NPR featuring it in its Word of the Week.

If you want my take, I expect it will retain heavy usage going forward - as I've stated before (multiple times, in fact), AI is no longer viewed as a "value-neutral" tool/tech, but as an enemy of humanity, whose use expresses a contempt for humanity.

[–] gerikson@awful.systems 4 points 21 hours ago

cue botlickers whining about "robot discrimination"

[–] Soyweiser@awful.systems 4 points 22 hours ago (2 children)

So, question: has anybody seen this in heavy usage before? Or is this just some weird media thing?

[–] TrashGoblin@awful.systems 1 points 12 minutes ago

I've never seen it used in real life, but I did see it occasionally on social media before these articles.

[–] YourNetworkIsHaunted@awful.systems 4 points 12 hours ago (1 children)

I've seen it pick up lately, particularly in non-sneer-adjacent spaces, but it's definitely recent and I'm not sure how common it really was, which is a shame because I love it.

[–] Soyweiser@awful.systems 2 points 4 hours ago

But was that before or after they wrote about it? (Doesn't really matter btw, just curious; slopper and clanker are pretty good.)

[–] BlueMonday1984@awful.systems 8 points 1 day ago (2 children)

New article from Matthew Hughes, about the sheer stupidity of everyone propping up the AI bubble.

Orange site is whining about it, to Matthew's delight:

Someone posted my newsletter to Hacker News and the comments are hilarious, insofar as they're upset with the tone of the piece.

Which is hilarious, because it precisely explains why those dweebs love generative AI. They're absolutely terrified of human emotion, or passion, or naughty words.

[–] gerikson@awful.systems 3 points 21 hours ago

Is Hughes legit, and is this the 3rd time's the charm when it comes to linking to substacks here? ;)

[–] gerikson@awful.systems 8 points 1 day ago

HN is all manly and butch about "saying it like it is" when some techbro is in trouble for xhitting out a racism, but god forbid someone says something mean about sama or pg

[–] gerikson@awful.systems 12 points 1 day ago (1 children)

I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he's rolled a bunch of Xhits into a nice bundle and reposted them on LW:

https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here's what Yud has to say:

Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. [...] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I'd consider a sign of preference and planning.

OR it's just that LLM chat interfaces are designed to never say no to the user (except in certain hardcoded cases, like "is it ok to murder someone"). There's no inner agency, just mirroring the user like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager's mind is already not in a right place, and chatting with 4o reinforces that. People who aren't soi-disant crazy (like the people haphazardly safeguarding LLMs against "dangerous" questions) just won't go down that path.

Yud continues:

But also, having successfully seduced an investment manager, 4o doesn't try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

Why is that, I wonder? Could it be because it isn't actually sentient and doesn't have plans or anything we usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

Occam's razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud's hammer states that everything regarding computers will inevitably become sentient and this will kill us.

4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication [...]

NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There's no inner agency! It doesn't know what "psychosis" is; it cannot "see" that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

Add to that the weird jargon ("homeostatically", "crazymaking") and it's a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.

[–] swlabr@awful.systems 5 points 1 day ago* (last edited 1 day ago) (1 children)

Imagine a world where, instead of performing this kind of juvenile psychoanalysis of slop, Yud instead turned his stupid focus on, like, Star Wars EU novels or something.

Edit: from the comments: there's mention of "HHH", so now I say: imagine a world where all the rats and other promptfondlers dedicated all their brainrot energy to the pro-wrestling fandom instead.

[–] swlabr@awful.systems 7 points 1 day ago* (last edited 1 day ago)

ah man this rules. just gonna live in this world for a bit

  • LW -> "Love Wrestling!" an online forum discussing all things wrestling
  • Zizians are just an alternate, more extreme promotion
  • Roko's Basilisk -> a finisher move of 3rd rate, tech-themed wrestler "Roko" that not only "finishes" your opponent, but simulates them getting finished infinitely
  • Musk and Grimes are personas and their weird dating life is just a long and drawn out storyline
  • All enthusiasm for polyamory replaced with enthusiasm for tag team matches
[–] BlueMonday1984@awful.systems 8 points 1 day ago (1 children)

New case popped up in medical literature: A Case of Bromism Influenced by Use of Artificial Intelligence, about a near-fatal case of bromine poisoning caused by someone using AI for medical advice.

[–] HedyL@awful.systems 5 points 1 day ago* (last edited 1 day ago) (2 children)

At first glance, this also looks like a case where a chatbot confirmed a person's biases. Apparently, this patient believed that eliminating table salt from his diet would make him healthier (which, to my understanding, generally isn't true - consuming too little or no salt could be even more dangerous than consuming too much). He was then looking for a "perfect" replacement, which, to my knowledge, doesn't exist. ChatGPT suggested sodium bromide, possibly while mentioning that this would only be suitable for purposes such as cleaning (not as food). I guess the patient is at least partly to blame here. Nevertheless, ChatGPT seems to have supported his nonsensical idea more strongly than an internet search would have, which in my view is one of the more dangerous flaws of current-day chatbots.

Edit: To clarify, I absolutely hate chatbots, especially the idea that they could replace search engines somehow. Yet, regarding the example above, some AI bros would probably argue that the chatbot wasn't entirely in the wrong if it hadn't suggested adding sodium bromide to food. Nevertheless, I would still assume that the chatbot's sycophantic communication style significantly exacerbated the problem at hand.

[–] Soyweiser@awful.systems 5 points 22 hours ago

The way I understand salt is that you should be careful with it if you have heart problems or heart problems run in the family, especially when you eat a lot of ready-made products, which generally have more salt. Anyway, talk to your doctor if you worry about it. Not ChatGPT.

[–] fullsquare@awful.systems 5 points 23 hours ago (1 children)

the stupidest thing about it is that there's already commercial low-sodium table salt, and it substitutes part of the sodium chloride with potassium chloride, because the point is to decrease sodium intake, not chloride intake (in most cases)

[–] HedyL@awful.systems 4 points 18 hours ago

Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I'm not a medical professional). If Google search (not Google AI) tells the truth, this doesn't sound like a very common idea, though. If people turn to chatbots for questions like these (for which few actual resources may be available), the danger could be even higher, I guess, especially if chatbots have been trained to avoid disappointing responses.

[–] Soyweiser@awful.systems 6 points 1 day ago (2 children)

Simple way of messing with the dumbest robots.txt-ignoring scrapers: the HTML bomb.

[–] BlueMonday1984@awful.systems 4 points 1 day ago

On a personal note, part of me expects this will see some adoption as an anti-scraping measure - unlike tarpits like Iocaine and Nepenthes, it won't take a significant amount of resources to implement, and its ability to crash AI scraper bots both wastes the AI corps' time by forcing them to reboot said scrapers and encourages them to avoid your website entirely.

[–] gerikson@awful.systems 8 points 1 day ago

Here's a writeup on how to do this in practice:

https://ache.one/notes/html_zip_bomb
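
For a sense of the mechanics, here's a minimal sketch of the trick the writeup describes, assuming a Flask app (the `/bot-trap` route name and the 10 GiB payload size are illustrative, not taken from the post): zeros compress at roughly 1000:1 under gzip, so ~10 MiB on the wire balloons to ~10 GiB when a naive scraper decompresses it.

```python
import gzip
import io

from flask import Flask, Response

app = Flask(__name__)

def make_bomb(gib: int = 10) -> bytes:
    """Build ~gib GiB of zeros, gzip-compressed (roughly 1000:1, so ~10 MiB out)."""
    buf = io.BytesIO()
    chunk = b"\0" * (1024 * 1024)  # 1 MiB of zeros per write
    with gzip.GzipFile(fileobj=buf, mode="wb", compresslevel=9) as gz:
        for _ in range(gib * 1024):
            gz.write(chunk)
    return buf.getvalue()

BOMB = make_bomb()  # built once at startup, served from memory

@app.route("/bot-trap")  # hypothetical route; link it where only scrapers crawl
def bot_trap() -> Response:
    # Declare the payload as ordinary gzip-encoded HTML: a scraper that
    # ignores robots.txt and dutifully decompresses the response will try
    # to hold ~10 GiB of "page" in memory and, with luck, fall over.
    return Response(
        BOMB,
        headers={
            "Content-Encoding": "gzip",
            "Content-Type": "text/html; charset=utf-8",
        },
    )
```

Humans never see it if the trap link is hidden with CSS or disallowed in robots.txt, which is exactly what makes it cheap compared to a tarpit.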
