this post was submitted on 17 Mar 2025
20 points (100.0% liked)

TechTakes

1732 readers
90 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] blakestacey@awful.systems 16 points 5 days ago (3 children)

A lesswrong declares,

social scientists are typically just stupider than physical scientists (economists excepted).

As a physicist, I would prefer not receiving praise of this sort.

The post to which that is a comment also says a lot of silly things, but the comment is particularly great.

[–] Amoeba_Girl@awful.systems 15 points 5 days ago (2 children)

lmao, economists probably did deserve to catch this stray

[–] JFranek@awful.systems 8 points 5 days ago (2 children)

Are economists considered physical scientists? I read it as "social scientists are dumb except for economists". Which fits my prejudice about econo-brained lesswrongers.

[–] Soyweiser@awful.systems 8 points 5 days ago

Yeah prob important to note that one of the lw precursor blogs was from an economist, so that is why they consider them one of the good fields. Important to not call out your own tribe.

[–] Soyweiser@awful.systems 8 points 4 days ago* (last edited 4 days ago) (1 children)

That list (which isn't properly sourced) seems to lump highly academic fields in with non-academic ones, so I have no idea what it's even trying to prove. (Also, see the fakeness of IQ, and the pressure for 'smart' people to go into STEM, etc etc.) I wouldn't base my argument on a quick Google search that gives you information from a tabloid site. Wonder why he didn't link to his source directly? More from this author: "We met the smartest Hooters girl in the world who has a maths degree and wants to become a pilot". (The guy is now a researcher at 'Hope not Hate'. Not saying that to mock him or the organization, I just found it funny. I do hope he feels a bit of 'oh, I should have made different decisions a while back, wish I could delete that'.)

[–] Amoeba_Girl@awful.systems 9 points 4 days ago* (last edited 4 days ago)

The ignorance about social science on display in that article is wild. He seems to think academia is pretty much a big think tank, which I suppose is in line with the extent of the rationalists' intellectual curiosity.

On the IQ tier list, I like the guy responding to the comment mentioning "the stats that you are citing here". Bro.

[–] BurgersMcSlopshot@awful.systems 11 points 5 days ago (1 children)

Imagine a perfectly spherical scientist...

[–] froztbyte@awful.systems 7 points 5 days ago (1 children)
[–] o7___o7@awful.systems 8 points 5 days ago

And high pomposity

[–] fasterandworse@awful.systems 9 points 5 days ago* (last edited 5 days ago)

Here's my audio/video dispatch about framing tech through conservation of energy, to kill the magical thinking around generative AI and the like.
podcast ep: https://pnc.st/s/faster-and-worse/968a91dd/kill-magic-thinking
video ep: https://www.youtube.com/watch?v=NLHmtYWzHz8

[–] froztbyte@awful.systems 9 points 5 days ago (5 children)

oh dear god

Razer claims that its AI can identify 20 to 25 percent more bugs compared to manual testing, and this can reduce QA time by up to 50 percent as well as cost savings of up to 40 percent

as usual this is probably going to catch only the simplest shit, and I don’t even want to think what the secondary downstream impacts of just swallowing this claim without thought will be

[–] mii@awful.systems 9 points 5 days ago

If I had to judge Razer’s software quality based on what little I know about them, I’d probably raise my eyebrows: they ship some insane 600+ MiB driver with a significant memory footprint alongside their mice and keyboards, needed just to use basic features like DPI buttons and LED settings, when a 900 kiB open source driver provides essentially the same functionality.

And now their answer to optimization is to staple a chatbot onto their software? I think I'll pass.

[–] o7___o7@awful.systems 5 points 5 days ago (1 children)

Isn't this what got crowdstrike in trouble?

[–] froztbyte@awful.systems 6 points 5 days ago

not quite the same but I can see potential for a similar clusterfuck from this

also doesn’t really help how many goddamn games are running with rootkits, either

[–] Soyweiser@awful.systems 4 points 5 days ago

Well, the use of stuff like fuzzers has been a staple of QA for a long time, so 'compared to manual testing' is doing some work here.
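For anyone unfamiliar: fuzzing, the long-standing automated baseline that comment alludes to, is just throwing randomized inputs at code and logging whatever crashes. A minimal sketch (the toy target, function names, and parameters here are all invented for illustration, not from any real fuzzer or from Razer's tooling):

```python
import random

def buggy_parse(data: bytes) -> int:
    # Toy target: "crashes" on any input starting with the byte 0xFF.
    if data[:1] == b"\xff":
        raise ValueError("unhandled input")
    return len(data)

def fuzz(target, trials=2000, max_len=8, seed=42):
    """Feed random byte strings to `target`; collect inputs that raise."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # save the crashing input for later triage
    return crashes

crashes = fuzz(buggy_parse)
```

Real fuzzers (AFL, libFuzzer, etc.) add coverage feedback and input mutation on top of this, but the point stands: automated bug-finding has been routine for decades, which makes "finds more bugs than manual testing" a pretty low bar.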

[–] BlueMonday1984@awful.systems 9 points 5 days ago (1 children)

TV Tropes got an official app, featuring an AI "story generator". Unsurprisingly, backlash was swift, to the point where the admins were promising to nuke it "if we see that users don't find the story generator helpful".

[–] mii@awful.systems 11 points 5 days ago (1 children)

Thinking that trying to sell LLMs as a creative tool at this point into the bubble will not create backlash is just delusional, lmao.

[–] BlueMonday1984@awful.systems 5 points 4 days ago

At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.

On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.

For arts, the slop-nami has made "AI" synonymous with "creative sterility" and likely painted the field as, to copy-paste a previous comment, "all style, no substance, and zero understanding of art, humanities, or how to be useful to society".

For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.

(Not sure if this makes sense - it was 1AM where I am when I wrote this)

[–] BlueMonday1984@awful.systems 11 points 5 days ago (1 children)

Ran across a short-ish thread on BlueSky which caught my attention, posting it here:

the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made. i have yet to see one that’s ‘good’ but i don’t doubt the tech will soon be advanced enough to write ‘well.’ but i’d rather see what a person thinks and how they’d phrase it

like i don’t want to see fiction in the style of cormac mccarthy. i’d rather read cormac mccarthy. and when i run out of books by him, too bad, that’s all the cormac mccarthy books there are. things should be special and human and irreplaceable

i feel the same way about using AI-type tech to recreate a dead person’s voice or a hologram of them or whatever. part of what’s special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself

[–] swlabr@awful.systems 9 points 5 days ago

Absolutely.

the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made.

This + I choose to interpret it as static.

you cheapen them by reviving them

Learnt this one from, of all places, the pretty bad manga GANTZ.

[–] BlueMonday1984@awful.systems 7 points 5 days ago (1 children)

New piece from Brian Merchant: DOGE's 'AI-first' strategist is now the head of technology at the Department of Labor, which is about...well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:

“I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That's much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”

How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as "improving efficiency" or "politically neutral" or some random claptrap like that. Between Musk's own crippling incompetence, AI's utterly rancid public image, and a variety of factors I likely haven't accounted for, imposing them will likely prove harder than they thought.

(I'd also like to recommend James Allen-Robertson's "Devs and the Culture of Tech" which goes deep into the philosophical and ideological factors behind this current technofash-stavaganza.)

[–] o7___o7@awful.systems 6 points 5 days ago

Can't wait for them to discover that the DoL was created to protect them from labor

[–] blakestacey@awful.systems 16 points 6 days ago (3 children)

Josh Marshall discovers:

So a wannabe DOGEr at Brown Univ from the conservative student paper took the univ org chart and ran it through an AI algo to determine which jobs were "BS" in his estimation and then emailed those employees/admins asking them what tasks they do and to justify their jobs.

[–] YourNetworkIsHaunted@awful.systems 21 points 6 days ago (1 children)

Get David Graeber's name out ya damn mouth. The point of Bullshit Jobs wasn't that these roles weren't necessary to the functioning of the company, it's that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn't exist.

The idea was not that "these people should be fired to streamline efficiency of the capitalist orphan-threshing machine".

[–] db0@lemmy.dbzer0.com 4 points 5 days ago

I saw Musk mentioning Iain Banks' The Player of Games as an influential book for him, and I puked in my mouth a little.

I demand that Brown University fire (checks notes) first name "YOU ARE HACKED NOW" last name "YOU ARE HACKED NOW" immediately!

[–] swlabr@awful.systems 14 points 6 days ago (1 children)

Thank you to that thread for reacquainting me with the term “script kiddie”, the precursor to the modern day vibe coder

[–] bitofhope@awful.systems 8 points 5 days ago
[–] BlueMonday1984@awful.systems 13 points 6 days ago (2 children)

In other news, Ed Zitron discovered Meg Whitman's now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:

[–] BigMuffin69@awful.systems 8 points 5 days ago

I want this company to IPO so I can buy puts on these lads.

[–] sinedpick@awful.systems 14 points 6 days ago* (last edited 6 days ago) (3 children)

Asahi Lina posts about not feeling safe anymore. Orange site immediately kills discussion around post.

For personal reasons, I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem. I've paused work on Apple GPU drivers indefinitely.

I can't share any more information at this time, so please don't ask for more details. Thank you.

[–] nightsky@awful.systems 17 points 6 days ago

Whatever has happened there, I hope it will resolve in positive ways for her. Her amazing work on the GPU driver was actually the reason I got into Rust. In 2022 I stumbled across this twitter thread from her and it inspired me to learn Rust -- and then it ended up becoming my favourite language, my refuge from C++. Of course I already knew about Rust beforehand, but I had dismissed it; I (wrongly) thought it was too similar to C++, and I wanted to get away from that. That twitter thread made me reconsider and take a closer look. So thankful for that.

[–] swlabr@awful.systems 12 points 6 days ago* (last edited 6 days ago) (1 children)

Damn, that sucks. Seems like someone who was extremely generous with their time and energy for a free project that people felt entitled to.

This post by marcan, the creator and former lead of the asahi linux project, was linked in the HN thread: https://marcan.st/2025/02/resigning-as-asahi-linux-project-lead/

E: followup post from Asahi Lina reads:

If you think you know what happened or the context, you probably don't. Please don't make assumptions. Thank you.

I'm safe physically, but I'll be taking some time off in general to focus on my health.

[–] swlabr@awful.systems 12 points 6 days ago (4 children)

Finished reading that post. Sucks that Linux is such a hostile dev environment. Everything is terrible. Teddy K was on to something

[–] BlueMonday1984@awful.systems 12 points 6 days ago

Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers

I've updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
