this post was submitted on 03 Aug 2025
11 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 7 points 23 hours ago (4 children)

A new case just popped up in the medical literature: A Case of Bromism Influenced by Use of Artificial Intelligence, about a near-fatal case of bromide poisoning caused by someone using AI for medical advice.

[–] HedyL@awful.systems 5 points 22 hours ago* (last edited 22 hours ago) (3 children)

At first glance, this also looks like a case where a chatbot confirmed a person's biases. Apparently, this patient believed that eliminating table salt from his diet would make him healthier (which, to my understanding, generally isn't true - consuming too little or no salt can be even more dangerous than consuming too much). He then went looking for a "perfect" replacement, which, to my knowledge, doesn't exist. ChatGPT suggested sodium bromide, possibly while mentioning that it would only be suitable for purposes such as cleaning (not as food). I guess the patient is at least partly to blame here. Nevertheless, ChatGPT seems to have supported his nonsensical idea more strongly than an internet search would have done, which in my view is one of the more dangerous flaws of current-day chatbots.

Edit: To clarify, I absolutely hate chatbots, especially the idea that they could somehow replace search engines. Still, regarding the example above, some AI bros would probably argue that the chatbot wasn't entirely in the wrong, assuming it never explicitly suggested adding sodium bromide to food. Nevertheless, I would still assume that the chatbot's sycophantic communication style significantly exacerbated the problem at hand.

[–] fullsquare@awful.systems 5 points 17 hours ago (1 children)

the stupidest thing about it is that commercial low-sodium table salt already exists, and it substitutes part of the sodium chloride with potassium chloride, because the point (in most cases) is to decrease sodium intake, not chloride intake

[–] HedyL@awful.systems 3 points 13 hours ago

Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I'm not a medical professional). If Google Search (not Google AI) is to be believed, this doesn't sound like a very common idea, though. If people turn to chatbots for questions like these (for which very few actual resources may be available), the danger could be even higher, I guess, especially if the chatbots have been trained to avoid disappointing responses.
