this post was submitted on 14 Jul 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] blakestacey@awful.systems 12 points 2 weeks ago

Previously sneered:

The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads.

More recently, in the comments:

After reading your comments and @Jiro 's below, and discussing with LLMs on various settings, I think I was too strong in saying....

It's like watching people volunteer for a lobotomy.

[–] froztbyte@awful.systems 12 points 2 weeks ago (6 children)

nasb, new torres piece out, promising title and opening

(I’ll have to read it after I’m done with my present other currency-related diversions. old-man-shakes-fist-at-cloud.gif)

[–] scruiser@awful.systems 9 points 2 weeks ago

Those opening Peter Thiel quotes... Thiel talks about trans people (in a kind of dated and maybe a bit offensive way) to draw a comparison to transhumanists wanting to change themselves more extensively. The disgusting irony is that Thiel has empowered the right-wing ecosystem, which is deeply opposed to trans rights.

[–] Seminar2250@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

thinking about how often code will be like

[-1] * len(array) # -1 is a place holder because I don't know the vocabulary term "sentinel value"

and justified because "it just needs to work"

and now we have professional vibe coders

https://en.wikipedia.org/wiki/Anguish
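For anyone who wants the vocabulary term being dodged: a sentinel is a distinguished value that marks "no real data here". A minimal sketch of the pattern (the functions and data here are made up for illustration), contrasting the magic `-1` with an explicit sentinel object:

```python
# Magic-number version: -1 doubles as "not found", which breaks
# the moment -1 is also a legitimate value in the result space.
def index_of_magic(xs, target):
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1  # the "placeholder"

# Explicit sentinel: a unique object that cannot collide with data.
_MISSING = object()

def index_of(xs, target, default=_MISSING):
    for i, x in enumerate(xs):
        if x == target:
            return i
    if default is _MISSING:
        raise ValueError(f"{target!r} not in sequence")
    return default

print(index_of_magic([3, 1, 4], 9))   # -1
print(index_of([3, 1, 4], 1))         # 1
print(index_of([3, 1, 4], 9, None))   # None, unambiguous "not found"
```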

[–] Soyweiser@awful.systems 12 points 2 weeks ago (3 children)

Somebody found a relevant reddit post:

Dr. Casey Fiesler ‪@cfiesler.bsky.social‬ (who has clippy earrings in a video!) writes: “This is fascinating: reddit link

Someone ‘worked on a book with ChatGPT’ for weeks and then sought help on Reddit when they couldn’t download the file. Redditors helped them realize ChatGPT had just been roleplaying/lying and there was no file/book…”

[–] ebu@awful.systems 10 points 2 weeks ago* (last edited 1 week ago) (1 children)

you have to scroll through the person's comments to find it, but it does look like they did author the body of the text and uploaded it as a docx into ChatGPT. so points for actually creating something, unlike the AI bros

it looks like they tried to use ChatGPT to improve narration. to what degree the token smusher has decided to rewrite their work in the smooth, recycled plastic feel we've all come to know and despise remains unknown

they did say they are trying to get it to generate illustrations for all 700 pages, and moreover appear[ed] to believe it can "work in the background" on individual chapters with no prompting. they do seem to have been educated on the folly of expecting this to work, but as blakestacey's other reply pointed out, they appear to now be just manually prompting one page at a time. godspeed

[–] Soyweiser@awful.systems 10 points 1 week ago

They have now deleted their post and, I assume, a lot of others, but they also claim they have no time to really write and just wanted a collection of stories for their kid(s). Which doesn't make sense; creating 700 pages of kids' stories is a lot of work, even if you let a bot improve the flow. Unless they just stole a book of children's stories from somewhere. (I know these books exist, as a child of one of my brothers tricked me into reading two stories from one.)

[–] BlueMonday1984@awful.systems 11 points 2 weeks ago (1 children)

Found an unironic AI bro in the wild on Bluesky:

You want my unsolicited thoughts on the line between man and machine, I feel this bubble has done more to clarify that line then to blur it, both by showcasing the flaws and limitations inherent to artificial intelligence, and by highlighting the aspects of human minds which cannot be replicated.

[–] TinyTimmyTokyo@awful.systems 11 points 1 week ago (7 children)

Daniel Koko's trying to figure out how to stop the AGI apocalypse.

How might this work? Install TTRPG afficionados at the chip fabs and tell them to roll a saving throw.

Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don't roll a 1 are destroyed on the spot.

And if that doesn't work? Koko ultimately ends up pretty much where Big Yud did: bombing the fuck out of the fabs and the data centers.

"For example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes."
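Taken literally, the d10 scheme destroys every chip that doesn't roll a 1, i.e. roughly 90% of output. A throwaway simulation (numbers purely illustrative) makes the scale concrete:

```python
import random

def surviving_chips(produced, seed=0):
    """Roll a d10 per chip; only chips that roll a 1 survive."""
    rng = random.Random(seed)
    return sum(1 for _ in range(produced) if rng.randint(1, 10) == 1)

produced = 100_000
kept = surviving_chips(produced)
print(f"{kept}/{produced} chips survive (~{kept / produced:.0%})")
```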

[–] bitofhope@awful.systems 11 points 1 week ago (1 children)

Suppose further that enough powerful people are concerned about the poverty in Ireland, anti-catholic discrimination, food insecurity, and/or loss of rental revenue, that there's significant political will to Do Something. Should we ban starvation? Should we decolonise? Should we export their produce harder to finally starve Ireland? Should we sign some kind of treaty? Should we have a national megaproject to replace the population with the British? Many of these options are seriously considered.

Enter the baseline option: Let the Irish sell their babies as a delicacy.

[–] BlueMonday1984@awful.systems 9 points 1 week ago (2 children)

Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.

Ah, yes, artificially kneecap chip fabs' yields, I'm sure that will go over well with the capitalist overlords who own them

Someone didn't get the memo about nVidia's stock price, and how is Jensen supposed to sign more boobs if suddenly his customers all get missile'd?

[–] yellowcake@awful.systems 11 points 1 week ago

Ian Lance Taylor (of GOLD, Go, and other tech fame) had a take on chatbots being AGI that I liked to see from an influential person of computing. https://www.airs.com/blog/archives/673

The summary is that chatbots are not AGI, that the current AI wave is not the usher to AGI, and that he dislikes, in a very polite way, that chatbot LLMs are seen as AI.

Apologies if this was posted when published.

[–] HedyL@awful.systems 11 points 2 weeks ago (3 children)

I have been thinking about the true cost of running LLMs (of course, Ed Zitron and others have written about this a lot).

We take it for granted that large parts of the internet are available for free. Sure, a lot of it is plastered with ads, and paywalls are becoming increasingly common, but thanks to economies of scale (and a level of intrinsic motivation/altruism/idealism/vanity), it has long been viable to provide information online without charging users for every bit of it. The same appears to be true for the tools to discover said information (search engines).

Compare this to the estimated true cost of running AI chatbots, which (according to the numbers I'm familiar with) may be tens or even hundreds of dollars a month for each user. For this price, users would get unreliable slop, and this slop could only be produced from the (mostly free) information that is already available online while disincentivizing creators from producing more of it (because search engine driven traffic is dying down).

I think the math is really abysmal here, and it may take some time to realize how bad it really is. We are used to big numbers from tech companies, but we rarely break them down to individual users.

Somehow reminds me of the astronomical cost of each bitcoin transaction (especially compared to the tiny cost of processing a single payment through established payment systems).
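A toy version of that per-user math (every number here is a made-up placeholder, not a measured figure): if serving a user truly costs on the order of $80/month while a typical subscription charges $20/month, the provider loses money on every active user, and scale only multiplies the loss:

```python
def monthly_burn(users, cost_per_user, price_per_user):
    """Net loss per month when serving costs exceed subscription revenue."""
    return users * (cost_per_user - price_per_user)

# Hypothetical figures for illustration only.
print(monthly_burn(1_000_000, cost_per_user=80, price_per_user=20))  # 60000000
```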

[–] corbin@awful.systems 9 points 2 weeks ago

I've done some of the numbers here, but don't stand by them enough to share. I do estimate that products like Cursor or Claude are being sold at roughly an 80-90% discount compared to what's sustainable, which is roughly in line with what Zitron has been saying, but it's not precise enough for serious predictions.

Your last paragraph makes me think. We often idealize blockchains with VMs, e.g. Ethereum, as a global distributed computer, if the computer were an old Raspberry Pi. But it is Byzantine distributed; the (IMO excessive) cost goes towards establishing a useful property. If I pick another old computer with a useful property, like a radiation-hardened chipset comparable to a Gamecube or G3 Mac, then we have a spectrum of computers to think about. One end of the spectrum is fast, one end is cheap, one end is Byzantine, one end is rad-hardened, etc. Even GPUs are part of this; they're not that fast, but can act in parallel over very wide data. In remarkably stark contrast, the cost of Transformers on GPUs doesn't actually go towards any useful property! Anything Transformers can do, a cheaper more specialized algorithm could have also done.

[–] o7___o7@awful.systems 11 points 1 week ago* (last edited 1 week ago) (9 children)

Democratizing graphic design, Nashville style

Description: genAI artifact depicting guitarist Slash as a cat. This cursed critter is advertising a public appearance by a twitter poster. The event is titled "TURDSTOCK 2025". Also, the cat doesn't appear to be polydactyl, which seems like a missed opportunity tbh.

[–] YourNetworkIsHaunted@awful.systems 10 points 1 week ago (2 children)

Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.

It's a machine learning system, not an actual human boss. The system is set up to try and find the breaking point, where if you finish your route on time it assumes you can handle a little bit more and if you don't it backs off.

The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively push the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turn that edge into the standard. And that's how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.

Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.
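The adjustment loop described above is roughly an additive increase/decrease search for the driver's breaking point; a minimal sketch, with all names, step sizes, and units hypothetical:

```python
def adjust_route_load(current_stops, finished_on_time, step=5):
    """One update of the hypothetical load-setting loop: finishing
    on time is read as slack, so tomorrow's load goes up; failing
    backs the load off."""
    if finished_on_time:
        return current_stops + step
    return max(current_stops - step, 0)

# The ratchet: every on-time day raises the next day's bar.
stops = 150
for on_time in [True, True, True, False, True]:
    stops = adjust_route_load(stops, on_time)
print(stops)  # 165
```

The design flaw the comment identifies is that the loop only observes "finished on time", not *how* — so skipped breaks and waived lunches look identical to genuine slack and get baked into the baseline.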

[–] o7___o7@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Better Offline was rough this morning in some places. Props to Ed for keeping his cool with the guests.

[–] TinyTimmyTokyo@awful.systems 11 points 2 weeks ago (2 children)

Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. "These AI layoffs actually make sense because of complexity theory". "You gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly."

I looked up his background, and it turns out he's the guy behind the TV show "Billions". That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the world's most block-headed and cringe-inducing dialog.

Terrible choice of guest, Ed.

[–] lagrangeinterpolator@awful.systems 12 points 2 weeks ago (8 children)

I study complexity theory and I'd like to know what circuit lower bound assumption he uses to prove that the AI layoffs make sense. Seriously, it is sad that the people in the VC techbro sphere are thought to have technical competence. At the same time, they do their best to erode scientific institutions.
