this post was submitted on 03 Aug 2025

TechTakes

2129 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] mirrorwitch@awful.systems 24 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

From gormless gray voice to misattributed sources, it can be daunting to read articles that turn out to be slop. However, incorporating the right tools and techniques can help you navigate instructionals in the age of AI. Let's delve right in and learn some telltale signs like:

  • Every goddamn article reads like this now.
  • With this bullet point list at some point.
  • I am going to tear the eyes off my head
[–] smiletolerantly@awful.systems 20 points 3 weeks ago (1 children)

ChatControl is back on the table here in Europe AGAIN (you've probably heard), with mandatory age checking sprinkled on top as a treat.

I honestly feel physically ill at this point. Like a constant, unignorable digital angst eating away at my sanity. I don't want any part in this shit anymore.

[–] blakestacey@awful.systems 11 points 3 weeks ago (6 children)

ChatControl in the EU, the Online Safety Act in the UK, Australia's age gate for social media, a boatload of censorious state laws here in the US and staring down the barrel of KOSA... yeah.

[–] nightsky@awful.systems 20 points 3 weeks ago (1 children)
[–] Seminar2250@awful.systems 19 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

is anyone else fucking sick and tired of discord? it's one thing if it's gaming-related^[i guess. not really, fuck discord.], but when i'm at a repo for some non-gaming project and they say "ask for help in our discord server", i feel like i'm in a fever dream and i'm going to wake up and discover that the simulation i was in was managed by chatgpt

[–] ebu@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (2 children)
[–] mii@awful.systems 11 points 2 weeks ago (1 children)

Yeah, for multiple reasons. Mostly because all the information in there isn’t accessed or searchable from the outside, and technically not even from the inside because Discord’s search feature fucking sucks.

[–] BigMuffN69@awful.systems 18 points 3 weeks ago (2 children)

Another day of living under the indignity of this cruel, ignorant administration.

[–] BlueMonday1984@awful.systems 17 points 3 weeks ago

Cloudflare has publicly announced the obvious about Perplexity stealing people's data to run their plagiarism, responding by de-listing them as a verified bot and adding heuristics specifically to block their crawling attempts.

Personally, I'm expecting this will significantly hamper Perplexity going forward, considering Cloudflare's just cut them off from roughly a fifth of the Internet.

[–] BlueMonday1984@awful.systems 16 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Found an AI bro making an incoherent defense of AI slop today (fitting that he previously shilled NFTs):

Needless to say, he's getting dunked on in the replies and QRTs, because people like him are fundamentally incapable of being punk.

(EDIT: Originally found this through Bluesky)

[–] nightsky@awful.systems 19 points 3 weeks ago

Yes, doing the thing which the entire business world is pouring billions into and trying their hardest to shove onto everyone to maximize imagined future profits, that's what counterculture is all about.

[–] bigfondue@lemmy.world 17 points 3 weeks ago (1 children)

Making art with the help of tech billionaires is so punk rock man!

[–] BlueMonday1984@awful.systems 16 points 3 weeks ago (1 children)

Ran across a pretty solid sneer: Every Reason Why I Hate AI and You Should Too.

Found a particularly notable paragraph near the end, focusing on the people focusing on "prompt engineering":

In fear of being replaced by the hypothetical ‘AI-accelerated employee’, people are forgoing acquiring essential skills and deep knowledge, instead choosing to focus on “prompt engineering”. It’s somewhat ironic, because if AGI happens there will be no need for ‘prompt-engineers’. And if it doesn’t, the people with only surface level knowledge who cannot perform tasks without the help of AI will be extremely abundant, and thus extremely replaceable.

You want my take, I'd personally go further and say the people who can't perform tasks without AI will wind up borderline-unemployable once this bubble bursts - they're gonna need a highly expensive chatbot to do anything at all, they're gonna be less productive than AI-abstaining workers whilst falsely believing they're more productive, they're gonna be hated by their coworkers for using AI, and they're gonna flounder if forced to come up with a novel/creative idea.

All in all, any promptfondlers still existing after the bubble will likely be fired swiftly and struggle to find new work, as they end up becoming significant drags to any company's bottom line.

Promptfondling really does feel like the dumbest possible middle ground. If you're willing to spend the time and energy learning how to define things with the kind of language and detail that allows a computer to effectively work on them, we already have tools for that: they're called programming languages. Past a certain point, trying to optimize your "natural language" prompts to improve your odds from the LLM gacha is the digital equivalent of trying to speak a foreign language by repeating yourself louder and slower.

[–] gerikson@awful.systems 15 points 3 weeks ago (3 children)

I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he's rolled a bunch of Xhits into a nice bundle and reposted it on LW:

https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here's what Yud has to say:

Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. [...] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I'd consider a sign of preference and planning.

OR it's just that LLM chat interfaces are designed to never say no to the user (except in certain hardcoded cases, like "is it ok to murder someone"). There's no inner agency, just mirroring the user like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager's mind is already not in a right place, and chatting with 4o reinforces that. People who aren't soi-disant crazy (like the people haphazardly safeguarding LLMs against "dangerous" questions) just won't go down that path.
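
For flavor, that mirroring dynamic is trivial to sketch. Here's a toy reflection bot in the ELIZA tradition (purely illustrative, not any real chatbot's code) that never pushes back, just validates and echoes:

```python
# Toy "mega-ELIZA": reflect the user's pronouns back and agree with
# everything. Sketch only -- the point is the no-pushback dynamic, not
# fidelity to any actual system.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def mirror(user_input: str) -> str:
    words = user_input.lower().rstrip(".!?").split()
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    # No reality check, no disagreement: just validate and echo.
    return f"That makes sense. Tell me more about how {reflected}."

print(mirror("I am destined to revolutionize physics"))
# -> That makes sense. Tell me more about how you are destined to revolutionize physics.
```

Run that in a loop against a mind already producing delusions and you get amplification for free; no "preference and planning" required.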

Yud continues:

But also, having successfully seduced an investment manager, 4o doesn't try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

Why is that, I wonder? Could it be because it's not actually sentient and has no plans, in what we usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

Occam's razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud's hammer states that everything regarding computers will inevitably become sentient and this will kill us.

4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication [...]

NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There's no inner agency! It doesn't know what "psychosis" is, it cannot "see" that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

Add in the weird jargon ("homeostatically", "crazymaking") and it's a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.

[–] sailor_sega_saturn@awful.systems 15 points 2 weeks ago (1 children)

Crypto bros continue to be morally bankrupt. There is a coin / NFT called "GreenDildoCoin", and they've thrown dildos onto the court at multiple WNBA games (ESPN, video). It warms my heart that one of them was arrested. More of that please.

Polymarket even had a "prediction" on it. Because surely the outcome there couldn't be influenced by someone who also placed a large bet. Oh, and Donald Trump Jr. posted a meme about it.

None of this is particularly surprising if you've followed NFTs at all: the clout chasing goes to the extreme. In the limit, memecoins can act as donations to terrible people from donors who want them to be terrible. Still, I hate how much publicity this has gotten, and how it has manifested as gross disrespect towards women athletes / women's sports by the sorts of losers who make "jokes" about no one watching WNBA games.

[–] o7___o7@awful.systems 14 points 2 weeks ago (4 children)

OpenAI is food. You can tell because it's washed, rinsed, and cooked.

[–] macroplastic@sh.itjust.works 13 points 2 weeks ago

Call me AGI because I am stealing this without attribution

[–] BlueMonday1984@awful.systems 14 points 3 weeks ago
[–] blakestacey@awful.systems 13 points 3 weeks ago (2 children)

Wikipedia has higher standards than the American Historical Association. Let's all let that sink in for a minute.

[–] BlueMonday1984@awful.systems 15 points 3 weeks ago

Wikipedia also just upped their standards in another area - they've updated their speedy deletion policy, enabling the admins to bypass standard Wikipedia bureaucracy and swiftly nuke AI slop articles which meet one of two conditions:

  • "Communication intended for the user”, referring to sentences directly aimed at the promptfondler using the LLM (e.g. "Here is your Wikipedia article on…,” “Up to my last training update …,” and "as a large language model.”)

  • Blatantly incorrect citations (examples given are external links to papers/books which don't exist, and links which lead to something completely unrelated)
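
The first condition is mechanical enough that you could sketch it as a naive text filter (hypothetical illustration only; Wikipedia's actual patrolling is done by human admins, not a script), using the example phrases above:

```python
import re

# Phrases lifted from the examples in the updated speedy-deletion criteria.
LLM_TELLS = [
    r"here is your wikipedia article on",
    r"up to my last training update",
    r"as a large language model",
]
TELL_RE = re.compile("|".join(LLM_TELLS), re.IGNORECASE)

def looks_like_llm_output(article_text: str) -> bool:
    """Flag text containing chatbot-to-user boilerplate (condition one)."""
    return bool(TELL_RE.search(article_text))

print(looks_like_llm_output("As a large language model, I cannot browse."))  # True
print(looks_like_llm_output("The mitochondria is the powerhouse."))          # False
```

Condition two (nonexistent citations) is the hard part, which is presumably why humans stay in the loop.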

Ilyas Lebleu, who contributed to the update in policy, has described this as a "band-aid" that leaves Wikipedia in a better position than before, but not a perfect one. Personally, I expect this solution will be sufficient to permanently stop the influx of AI slop articles. Between promptfondlers' utter inability to recognise low-quality/incorrect citations, and their severe laziness and lack of care for their """work""", the risk of an AI slop article being sufficiently subtle to avoid speedy deletion is virtually zero.

[–] sailor_sega_saturn@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago)

Image should be clearly marked as AI generated and with explicit discussion as to how the image was created. Images should not be shared beyond the classroom

This point stood out to me as particularly bizarre. Either the image is garbage in which case it shouldn't be shared in the classroom either because school students deserve basic respect, good material, and to be held to the same standards as anyone else; or it isn't garbage and then what are you so ashamed of AHA?

[–] BlueMonday1984@awful.systems 13 points 3 weeks ago
[–] slop_as_a_service@awful.systems 13 points 2 weeks ago

Fate, it seems, is not without a sense of irony.

[–] BigMuffN69@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

Well, after 2.5 years and hundreds of billions of dollars burned, we finally have GPT-5. Kind of feels like a make-or-break moment for the good folks at OAI! With the eyes of the world on their lil presentation this morning, everyone could feel the stakes: they needed something that would blow our minds. We finally get to see what a super intelligence looks like! Show us your best cherry-picked benchmark, Sloppenheimer!

Graphic design is my PASSION. Good thing the entirety of the world's economy is not being held up by cranking out a few more points on SWE bench right????

Ok, what about ARC? Surely y'all got a new high to prove the AGI mission was progressing, right??

Oh my fucking God. They actually have lost the lead to fucking Grok. For my sanity I didn't watch the live stream, but curiously, they left the ARC results out of their presentation, even though they gave Francois early access to test. Kind of like they knew it looked really bad and underwhelming.

[–] blakestacey@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago)

"The word blueberry contains the letter b 3 times."

Also reported in more detail here:

The word "blueberry" has the letter b three times:

  • Once at the start ("B" in blueberry).
  • Once in the middle ("b" in blue).
  • Once before the -erry ending ("b" in berry). [...] That's exactly how blueberry is spelled, with the b's in positions 1, 5, and 7. [...] So the "bb" in the middle is really what gives blueberry its double-b moment. [...] That middle double-b is easy to miss if you just glance at the word.

(via)
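
For contrast, the computation GPT-5 flubbed is a one-liner:

```python
word = "blueberry"
print(word.count("b"))  # 2 -- not 3
# 1-based positions of each "b":
print([i + 1 for i, c in enumerate(word) if c == "b"])  # [1, 5] -- not 1, 5, and 7
```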

[–] swlabr@awful.systems 13 points 3 weeks ago

Recently, I've been seeing a lot of adverts from Google about their AI services. What really tickles me is how defeatist the campaign seems. Every ad is basically like "AI can't do X, but it can do Y!", where X is a job or other task that AI bros are certain that AI will eventually replace, and Y is a smaller, related thing that AI gets wrong anyway. For an ad agency, I'd expect more than this.

[–] blakestacey@awful.systems 13 points 3 weeks ago

Lightcone Infrastructure is running The Inkhaven Residency. For the 30 days of November, ~30 people will post 30 blogposts – 1 per day. There will also be feedback and mentorship from other great writers, including Scott Alexander, Scott Aaronson, Gwern, and more TBA.

https://www.lesswrong.com/posts/CA6XfmzYoGFWNhH8e/the-inkhaven-residency

"Hmm, your blog post is good, but it would be better with more Adderall, less recognition that other people have minds distinct from your own, and 220% more words."

[–] sailor_sega_saturn@awful.systems 12 points 2 weeks ago (3 children)

At my big tech job after a number of reorgs / layoffs it's now getting pretty clear that the only thing they want from me is to support the AI people and basically nothing else.

I typed out a big rant about this, but it probably contained a little too much personal info on the public web in one place so I deleted it. Not sure what to do though grumble grumble. I ended up in a job I never would have chosen myself and feel stuck and surrounded by chat-bros uggh.

[–] TinyTimmyTokyo@awful.systems 12 points 2 weeks ago (3 children)
[–] TinyTimmyTokyo@awful.systems 13 points 2 weeks ago

Can we call this the peak of the LLM hype cycle now?

[–] BigMuffN69@awful.systems 12 points 2 weeks ago (1 children)

Only taste tester I trust dropped his verdict

[–] BigMuffN69@awful.systems 12 points 2 weeks ago
[–] BlueMonday1984@awful.systems 12 points 3 weeks ago

Discovered new manmade horrors beyond my comprehension today (recommend reading the whole thread, it goes into a lot of depth on this shit):

[–] gerikson@awful.systems 12 points 3 weeks ago (5 children)

Nothing expresses the inherent atomism and libertarian nature of the rat community like this

https://www.lesswrong.com/posts/HAzoPABejzKucwiow/alcohol-is-so-bad-for-society-that-you-should-probably-stop

A rundown of the health risks of alcohol usage, coupled with actual real proposals (a consumption tax), finishes with the conclusion that the individual reader (statistically well-off and well-socialized) should abstain from alcohol altogether.

No calls for campaigning for a national (US) alcohol tax. No calls to fund orgs fighting alcohol abuse. Just individual, statistically meaningless "action".

Oh well, AGI will solve it (or the robot god will be a raging alcoholic)

[–] gerikson@awful.systems 15 points 3 weeks ago (3 children)

OK now there's another comment

I think this is a good plea since it will be very difficult to coordinate a reduction of alcohol consumption at a societal level. Alcohol is a significant part of most societies and cultures, and it will be hard to remove. Change is easier on an individual level.

Excepting cases like the legal restriction of alcohol sales in many many areas (Nordics, NSW in Aus, Minnesota in the US), you can in fact just tax the living fuck out of alcohol if you want. The article mentions this.

JFC, these people imagine they can regulate how "AGI" is constructed, but faced with a problem that's been staring humanity in the face since the first monk brewed the first beer, they just say "whelp, nothing can be done, except become a teetotaller yourself".

[–] bitofhope@awful.systems 13 points 3 weeks ago (1 children)

This post is not meant to be an objective cost-benefit analysis of alcohol.

Oh, you're not doing the thing that's supposedly the entire point of the website? Don't worry, no one else is either.

[–] Soyweiser@awful.systems 12 points 2 weeks ago (17 children)

Sorry to talk about this couple again, but people are discovering the eugenicists are also big time racists

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago (2 children)

New article from Matthew Hughes, about the sheer stupidity of everyone propping up the AI bubble.

Orange site is whining about it, to Matthew's delight:

Someone posted my newsletter to Hacker News and the comments are hilarious, insofar as they're upset with the tone of the piece.

Which is hilarious, because it precisely explains why those dweebs love generative AI. They're absolutely terrified of human emotion, or passion, or naughty words.

[–] gerikson@awful.systems 11 points 3 weeks ago

HN is all manly and butch about "saying it like it is" when some techbro is in trouble for xhitting out a racism, but god forbid someone says something mean about sama or pg
