this post was submitted on 24 Aug 2025
22 points (100.0% liked)

TechTakes

2133 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 3) 50 comments
[–] BlueMonday1984@awful.systems 11 points 3 days ago* (last edited 3 days ago) (4 children)

Someone tried Adobe's new Generative Fill "feature" (just the latest development in Adobe's infatuation with AI) with the prompt "take this elf lady out of the scene", and the results were...interesting:

There's also an option to rate whatever the fill gets you, which I can absolutely see being used to sabotage the "feature".

[–] Talia@mstdn.social 10 points 3 days ago

@BlueMonday1984 I was experimenting with generative fill and asked it to remove a person from a scene and "make the background yellow". It made the person Chinese. No fucking joke.

[–] swlabr@awful.systems 6 points 2 days ago

Putting the “manic pixie” in manic pixie dream girl

[–] V0ldek@awful.systems 5 points 3 days ago

Watch till the end, the third option made me choke on my drink, it was way too funny

[–] bigfondue@lemmy.world 4 points 3 days ago* (last edited 2 days ago)

Please turn the elf lady into elf Grimes

[–] Soyweiser@awful.systems 12 points 3 days ago* (last edited 3 days ago) (9 children)

Lesswronger reads about orcas for the first time in their life. Decides that training them to become smarter than humans is now the next important step.

(I'm very very far from an orca expert - basically everything I know about them I learned today.)

They made several posts about it, and just the opening bits are funny. I obviously didn't read any of them and only looked at the opening statements. I will produce a quote from each.

It is currently plausible (~~352115%~~[1]23%) to me that average orcas have at least as high potential for being great scientists as the greatest human scientists, modulo their motivation for doing science[2].

Yes, the weird percentage is in the text, the [1] footnote says 15%, no idea why they can't edit their text normally.

(For speed of writing, I mostly don't cite references. Feel free to ask me in the comments for references for some claims.)

Context: I think there’s a ~17% chance that average orcas are >=+6std intelligent.

And from the last article, two lines as a treat:

TLDR: I now think it’s <1% likely that average orcas are >=+6std intelligent.

(I now think the relevant question is rather whether orcas might be >=+4std intelligent, since that might be enough for superhuman wisdom and thinking techniques to accumulate through generations, but I think it’s only 2% probable. (Still decently likely that they are near human level smart though.))

(Nice of the person to think of the orcas btw, just wish it was more preservation than 'how can we make these animals help us out'.)

E: apparently "An alternative approach to superbabies" is also about orcas, they just no longer stand behind it.

Can someone please look into this?

[–] sailor_sega_saturn@awful.systems 12 points 3 days ago* (last edited 3 days ago) (3 children)

The last time someone looked into this it was with dolphins. It did not go well, led to more human-dolphin sex than communication, and ended in a dolphin suicide.

https://www.theguardian.com/environment/2014/jun/08/the-dolphin-who-loved-me

[–] swlabr@awful.systems 4 points 2 days ago

All I need to know about HDI (human-dolphin interaction) (read: fuckin) is covered in many episodes of my favorite podcast Doughboys

[–] blakestacey@awful.systems 4 points 2 days ago

That story mentions Carl Sagan but omits the detail that Peter the dolphin propositioned him. (It's in the William Poundstone biography, IIRC.)

[–] bigfondue@lemmy.world 4 points 3 days ago

They're trying to be John Lilly, but with orcas and even more ketamine

[–] dgerard@awful.systems 5 points 2 days ago

there's having a favourite animal, then there's whatever this is

[–] V0ldek@awful.systems 9 points 3 days ago (1 children)

Finally, after years of research, we have managed to connect the smartest Orca to a text-to-speech device! What great wisdom will those superintelligent creatures bestow on us? How can we solve our world's problems?

Eat the rich.

... What?

Like take your billionaires, right, roast them and then eat their flesh. Burn their yachts too. We can help.

[–] Soyweiser@awful.systems 3 points 2 days ago (2 children)

Sadly, the orcas go after sailing boats, not the big motorized yachts; less "eat the rich" and more "eat the upper middle class".

[–] fullsquare@awful.systems 9 points 3 days ago* (last edited 3 days ago) (1 children)

bottlenose dolphin with a thousand yard stare on a mountainous jungle background with ongoing battle (ie vietnam flashbacks)

what have the orcas done to deserve this

[–] swlabr@awful.systems 4 points 2 days ago* (last edited 2 days ago)

I haven’t clicked on any links here yet, this sounds like a bit, but because it’s LW I have to assume bad faith and that this is real.

E: lol real. Why wouldn’t they go for apes lol

[–] BlueMonday1984@awful.systems 6 points 2 days ago (2 children)

The NYT reported on the suicide of a 16-year-old boy, noting how ChatGPT assisted him in said suicide and deterred him from seeking help.

This is not the first time a chatbot's driven someone to suicide. And I fully expect it won't be the last.

[–] blakestacey@awful.systems 10 points 2 days ago (1 children)

Thought 1: This is the kind of incident that makes politicians vote for a law named after a dead kid. It behooves us to think of what kind of legislation could actually address the problem without becoming a clusterfuck that worsens everyone's life, including children's. cough #OnlineSafetyAct cough

Thought 2: Hey, all you guys using LLMs to replace opinion surveys or do "research" on social interactions because it's cheaper than gathering real data... How many human beings talk like the suicide-encouragement bot here?

Thought 3: Oh, remember when OpenAI paid $10 million to buy off the American Federation of Teachers? Because Pepperidge Farm still has that browser tab open. Every school administrator who breathes a word about bringing "AI" into the classroom deserves to get lit up by parents asking why they are embracing suicide tech.

[–] BlueMonday1984@awful.systems 4 points 2 days ago

> Thought 1: This is the kind of incident that makes politicians vote for a law named after a dead kid. It behooves us to think of what kind of legislation could actually address the problem without becoming a clusterfuck that worsens everyone’s life, including children’s. cough #OnlineSafetyAct cough

A complete ban on chatbots/LLMs would be enough. These things have basically zero ethical use case, it'd be a net positive if they were legally wiped from existence.

> Thought 2: Hey, all you guys using LLMs to replace opinion surveys or do “research” on social interactions because it’s cheaper than gathering real data… How many human beings talk like the suicide-encouragement bot here?

Against my better judgment, I decided to follow that link and check the quotes. Thankfully, there was nobody defending this - calling for a ban on AI, calling for ChatGPT's shutdown, calling for Sam Altman to be charged, pretty much everyone was out for blood.

> Thought 3: Oh, remember when OpenAI paid $10 million to buy off the American Federation of Teachers? Because Pepperidge Farm still has that browser tab open. Every school administrator who breathes a word about bringing “AI” into the classroom deserves to get lit up by parents asking why they are embracing suicide tech.

[–] BlueMonday1984@awful.systems 6 points 3 days ago (2 children)

Textbook case of anthropomorphisation from The Guardian, trying to posit that AI systems are capable of feeling pain.

You want my unsolicited opinion, machines cannot feel pain/emotion, only imitate it, and the rise of LLMs has made this crystal clear. Much like with being creative or making art, feeling genuine emotion is the exclusive domain of human/animal minds.

[–] deathgrindfreak@awful.systems 8 points 2 days ago

It's so bizarre to see AI get the benefit of a superposition of states where we all admit that these are not machines capable of thought, yet at the same time go through these stupid exercises where we pretend that they are.

load more comments (1 replies)
[–] yellowcake@awful.systems 16 points 4 days ago (2 children)

I bump into a lot of peers/colleagues who are always “ya but what is intelligence” or simply cannot say no to AI. For a while I’ve tried to use the example that if these “AI coding” things are tools, why would I use a tool that’s never perfect? For example I wouldn’t reach for a 10mm wrench that wasn’t 10mm and always rounds off my bolt heads. Of course they have “it could still be useful” responses.

I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.

For something that's not just venting: I tasked a coworker with some runtime memory relocation, and Gemini had this to say about ASLR: "Age, Sex, Location Randomization"
