this post was submitted on 12 Jan 2026

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] V0ldek@awful.systems 18 points 3 weeks ago* (last edited 3 weeks ago) (10 children)

It has happened. Post your wildest Scott Adams take here to pay respects to one of the dumbest posters of all time.

I'll start with this gem

[–] istewart@awful.systems 21 points 3 weeks ago (5 children)

This one is just eternally ???!!!

[–] BigMuffN69@awful.systems 17 points 3 weeks ago
[–] BigMuffN69@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago)

If trump gets back in office, Scott will be dead within the year.

[–] mirrorwitch@awful.systems 13 points 3 weeks ago

sorry Scott you just lacked the experience to appreciate the nuances, sissy hypno enjoyers will continue to take their brainwashing organic and artisanally crafted by skilled dommes

[–] sansruse@awful.systems 11 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

it's not exactly a take, but i want to shout out the dilberito, one of the dumbest products ever created

https://en.wikipedia.org/wiki/Scott_Adams#Other

The Dilberito was a vegetarian microwave burrito that came in flavors of Mexican, Indian, Barbecue, and Garlic & Herb. It was sold through some health food stores. Adams's inspiration for the product was that "diet is the number one cause of health-related problems in the world. I figured I could put a dent in that problem and make some money at the same time." He aimed to create a healthy food product that also had mass appeal, a concept he called "the blue jeans of food".

[–] YourNetworkIsHaunted@awful.systems 12 points 3 weeks ago (3 children)

Not gonna lie, reading through the wiki article and thinking back to some of the Elbonia jokes makes it pretty clear that he always sucked as a person, which is a disappointing realization. I had hoped that he had just gone off the deep end during COVID like so many others, but the bullshit was always there, just less obvious when situated amongst all the bullshit of corporate office life he was mocking.

[–] scruiser@awful.systems 10 points 3 weeks ago

I read his comics in middle school, and in hindsight even a lot of his older comics seem crueler and uglier. Like Alice's anger isn't a legitimate response to the bullshit work environment she has, but just haha angry woman funny.

Also, The Dilbert Future had some bizarre stuff at the end, like Deepak Chopra manifestation quantum woo, so it makes sense in hindsight that he went down the alt-right manosphere pipeline.

[–] blakestacey@awful.systems 11 points 3 weeks ago (1 children)

I knew there was somethin' not right about that boy when his books in the '90s started doing woo takes about quantum mechanics and the power of self-affirmation. Oprah/Chopra shit: the Cosmos has a purpose and that purpose is to make me rich.

Then came the blogosphere.

https://freethoughtblogs.com/pharyngula/2013/06/17/the-saga-of-scott-adams-scrotum/

[–] sailor_sega_saturn@awful.systems 9 points 3 weeks ago* (last edited 3 weeks ago)

Today at work I got to read a bunch of posts from people discussing how sad they were that notable holocaust denier Scott Adams died.

Only they didn't mention that part for some reason.

https://web.archive.org/web/20070222235609/http://dilbertblog.typepad.com/the_dilbert_blog/2006/10/sunday_blogging.html

[–] Seminar2250@awful.systems 16 points 3 weeks ago (1 children)

one thing i did not see coming, but should have (i really am an idiot): i am completely unenthused whenever anyone announces a piece of software. i'll see something on the rust subreddit that would originally have made me think "that's cool", and now my reaction is "great, gotta see if an llm was used"

everything feels gloomy.

[–] jaschop@awful.systems 11 points 3 weeks ago (2 children)

I'm gonna leave my idea here: an essential aspect of why GenAI is bad is that it's designed to extrude media that fits common human communication channels. That makes it perfect for choking out human-to-human communication over those channels, preventing knowledge exchange and social connection.

[–] macroplastic@sh.itjust.works 9 points 3 weeks ago

I am reminded of Val Packett's lobsters comment I read the other day:

The "AI" companies are DDoSing reality itself.

They have massive demand for new electricity, land, water and hardware to expand datacenters more massively and suddenly than ever before, DDoSing all these supplies. Their products make it easy to flood what used to be "the information superhighway" with slop, so their customers DDoS everyone's attention. Also bosses get to "automate away" any jobs where the person's output can be acceptably replaced by slop. These companies are the most loyal and fervent sponsors of the new wave of global fascism, with literal front seats at the Trump administration in the US. They are very happy about having their tools used for mass surveillance in service of state terrorism (ICE) and war crimes. That's the DDoS against everyone's human rights and against life itself.

[–] o7___o7@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

when I saw that they'd rebranded Office to Copilot, I turned 365 degrees and walked away

[–] fiat_lux@lemmy.world 15 points 3 weeks ago* (last edited 3 weeks ago)

Skynet's backstory is somehow very predictable yet came as a surprise to me in the form of this headline by the Graudain: "Musk’s AI tool Grok will be integrated into Pentagon networks, Hegseth says".

The article doesn't provide much beyond exactly what you'd expect. E.g. this Hegseth quote, emphasis mine: "make all appropriate data available across federated IT systems for AI exploitation, including mission systems across every service and component".

Me as a kid: "how could they have been so incompetent and let Skynet take over?!"

Me now: "Oh. Yeah. That checks out"

[–] swlabr@awful.systems 14 points 3 weeks ago (1 children)

my promptfondler coworker thinks that he should be in charge of all branch merges because he doesn’t understand the release process and I think I’m starting to have visions of teddy k

[–] froztbyte@awful.systems 11 points 3 weeks ago (1 children)

thinks that he should be in charge of all branch merges because he doesn’t understand the release process

.......I don't want you to dox yourself but I am abyss-staringly curious

[–] mawhrin@awful.systems 14 points 3 weeks ago (5 children)

seeing the furious reactions to shaming of the confabulation machine promoters, i can only conclude the shaming works.

[–] saucerwizard@awful.systems 13 points 3 weeks ago (3 children)

OT: I’m adopting two rescue kittens; one is pretty much a go, but it’s proving trickier to get a companion (hoping the current application works out today). Part of me feels guilty for doing this so fast after what happened, but I kinda need it to keep me from doing anything stupid.

[–] corbin@awful.systems 12 points 3 weeks ago (2 children)

Over on Lobsters, Simon Willison and I have made predictions for bragging rights, not cash. By July 10th, Simon predicts that there will be at least two sophisticated open-source libraries produced via vibecoding. Meanwhile, I predict that there will be five-to-thirty deaths from chatbot psychosis. Copy-pasting my sneer:

How will we get two new open-source libraries implementing sophisticated concepts? Will we sacrifice 5-30 minds to the ELIZA effect? Could we not inspire two teams of university students and give them pizza for two weekends instead?

[–] blakestacey@awful.systems 12 points 3 weeks ago

Willison:

I haven't reviewed a single line of code it wrote but I clicked around and it seems to do the right things.

Could not waterboard that out of me, etc.

[–] sailor_sega_saturn@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

Ed Zitron is now predicting an earth-shattering bubble pop: https://www.wheresyoured.at/dot-com-bubble/ so in other words just another weekday.

Even if this was just like the dot com bubble, things would be absolutely fucking catastrophic — the NASDAQ dropped 78% from its peak in March 2000 — but due to the incredible ignorance of both the private and public power brokers of the tech industry, I expect consequences that range from calamitous to catastrophic, dependent almost entirely on how long the bubble takes to burst, and how willing the SEC is to greenlight an IPO.

I am someone who does not understand the economy. Both in that it's behaved irrationally for my entire life, and in that I have better things to do than learn how stonks work. So I have no idea how credible this is.

But it feels credible to the lizard brain part of me y'know? The market crashed a lot during covid, and an economy propped up by nvidia cards feels... worse.

Personally speaking: part of me is really tempted to take a bunch of my stonks to pay down most of my mortgage so it doesn't act like an albatross around my neck (I mean I'm also going to try moving abroad again in a year or two and would prefer not to be underwater on my fantastically expensive silicon valley house at that time lol).

[–] e8d79@discuss.tchncs.de 12 points 3 weeks ago (1 children)
[–] sansruse@awful.systems 12 points 3 weeks ago (1 children)

i love articles that start with a false premise and announce their intention to sell you a false conclusion

The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape.

The future of automated stupidity is being set right now, and the path we're on leads to other companies being stupid instead of us. I want to change that.

[–] saucerwizard@awful.systems 12 points 2 weeks ago (1 children)

OT: going to pick up a tiny black foster kitten (high energy) later this week…but yesterday I saw the pound had a flame point siamese kitten of all things, and he’s now running around my condo.

[–] o7___o7@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Games Workshop bans generative AI. Hackernews takes that personally. Unhinged takes include accusations of disrespecting developers and of a seizure of power by middle management.

https://news.ycombinator.com/item?id=46607681

[–] antifuchs@awful.systems 13 points 3 weeks ago (1 children)

Better yet, bandcamp bans ai-generated music too: https://www.reddit.com/r/BandCamp/comments/1qbw8ba/ai_generated_music_on_bandcamp/

Great week for people who appreciate human-generated works

[–] saucerwizard@awful.systems 12 points 3 weeks ago (1 children)

OT: I really appreciated the things you guys said last thread. It helped a lot.

[–] scruiser@awful.systems 11 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

(One of) the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control

I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to the much more credulous takes on the original AI 2027), and the linked lesswrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it's because the production value for this one isn't as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.

It is mostly more of the same, just fewer graphs and no fake equations to back it up. It does have China-bad doommongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they've stuck with 2027 as their year of big events happening.

One paragraph I came up with a sneer for...

Deep-1’s misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it “misaligned” because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.

Given the Trump administration, and the US's behavior in general even before him... and how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and "Agent-4" over the US government. Well actually I would assume the whole thing is marketing, but if I somehow believed it wasn't.

Also random part I found extra especially stupid...

It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.

LLM "agents" currently can't coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning data set, yet we're supposed to believe Agent-4 magically made its goals super unalterable to any possible fine-tuning or probes or alteration? It's like they are trying to convince me they know nothing about LLMs or AI.

[–] sailor_sega_saturn@awful.systems 13 points 3 weeks ago (1 children)

My Next Life as a Rogue AI: All Routes Lead to P(Doom)!

The weird treatment of the politics in that really reads like baby's first sci-fi political thriller. China-bad-USA-good level of writing in 2026 (aaaaah) is not good writing. The USA is competent (after driving out all the scientists for being too "DEI")? The world is, seemingly, happy to let the USA run the world as a surveillance state? All of Europe does nothing through all this?

Why do people not simply... unplug all the rogue AI when things start to get freaky? That point is never quite addressed. "Consensus-1" is never adequately explained; it's just some weird MacGuffin in the story, some weird smart contract between viruses that everyone is weirdly OK with.

Also the powerpoint graphics would have been 1000x nicer if they featured grumpy pouty faces for maladjusted AI.

[–] mirrorwitch@awful.systems 9 points 3 weeks ago

the incompetence of this crack oddly makes me admire QAnon in retrospect. purely at a sucker-manipulation skill level, I mean. rats are so beige even their conspiracy alt-realities are boring, fully devoid of panache

[–] Seminar2250@awful.systems 10 points 2 weeks ago (5 children)

Off topic: I am looking for some advice. I enrolled in a PhD program several years ago. After years of verbal abuse, I left my advisor's lab. Shortly after, he tried to get me kicked out of the program by giving me a failing grade, then he tried to physically intimidate me in his office (moved across the room to get in my face and scream at me). I reported this to the campus police but they said nothing could be done because he didn't touch me or explicitly threaten violence. Later that day, he removed my name from work I had done for him, which is definitely plagiarism and a violation of the academic honesty policy.

I have an audio recording from that day of him screaming at me, as well as him basically admitting to retaliating by giving me a failing grade (I filed a grievance about this with the university and they changed my grade). I also recorded a long exchange that may not be incriminating but reinforces that he is an overbearing asshole.

I tried changing advisors, but the pool of available professors was limited (and the university decided that my abysmal $500 USD a week salary would get dropped to something like $300 a week), so I mastered out.

I was hoping to eventually finish my PhD elsewhere, and I fear that I won't be able to (that no advisor would want to risk working with me) if I go public with this. At the same time, the thought of him continuing to teach there and not suffering any accountability (in my grievance, I requested a public apology and he refused, telling the chair that he would instead be comfortable with a meeting moderated by the chair, which is absolutely farcical) is killing me.

Does anyone have advice? Would it be worth going public (e.g. reaching out to the local press or the student paper)? I suppose I could just email human resources with the information and see what happens. Experience in this precise situation is probably limited (although academia has a lot of abusers, so maybe not).

(A week ago I was confident I would go public sometime soon. Now I just feel apprehension.)

[–] nfultz@awful.systems 10 points 3 weeks ago (1 children)

From a new white paper, Financing the AI boom: from cash flows to debt (h/t The Syllabus Hidden Gem of the Week):

The long-term viability of the AI investment surge depends on meeting the high expectations embedded in those investments, with a disconnect between debt pricing and equity valuations. Failure to meet expectations could result in sharp corrections in both equity and debt markets. As shown in Graph 3.C, the loan spreads charged on private credit loans to AI firms are close to those charged to non-AI firms. If loan spreads reflect the risk of the underlying investment, this pattern suggests that lenders judge AI-related loans to be as risky as the average loan to any private credit borrower. This stands in stark contrast to the high equity valuations of AI companies, which imply outsized future returns. This schism suggests that either lenders may be underestimating the risks of AI investments (just as their exposures are growing significantly) or equity markets may be overestimating the future cash flows AI could generate.

Por que no los dos? But maybe the lenders are expecting a bailout... or just gullible...

That said, to put the macroeconomic consequences into perspective, the rise in AI-related investment is not particularly large by historical standards (Graph 4.A). For example, at around 1% of US GDP, it is similar in size to the US shale boom of the mid-2010s and half as large as the rise in IT investment during the dot-com boom of the 1990s. The commercial property and mining investment booms experienced in Japan and Australia during the 1980s and 2010s, respectively, were over five times as large relative to GDP.

Interesting point, if AI is basically a rounding error for GDP... But I also remember the layoffs in 2000-1 and 2014-5; they weren't evenly distributed and a lot of people got left behind, even if they weren't as bad as '08.

[–] BurgersMcSlopshot@awful.systems 12 points 3 weeks ago (1 children)

"It sounds so insignificant when you put it like that, I can hardly believe I'm in a bread line because of a manufactured poly-crisis it was a part of!"

[–] o7___o7@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago)

Very smart commentator:

This particular explosive barrel is no more potent than any of the dozens of other explosive barrels in this room.

[–] nfultz@awful.systems 10 points 3 weeks ago (3 children)
[–] saucerwizard@awful.systems 10 points 3 weeks ago (13 children)
[–] blakestacey@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago)

Setting the stage: I had become a social media personality on Clubhouse

I'm sorry.

What I remember is that the organizers said something like ‘I’m sorry that happened to you’, and while speaking I was interrupted by someone talking about the plight that autistic men face while dating.

Vibecamp: It's the Scott Aaronson comment section, but in person.

[–] istewart@awful.systems 12 points 3 weeks ago (1 children)

This is genuinely horrifying throughout. It reinforces my conviction that I don't really want to know or gossip about the details of these people's lives; I want to know the barest details of who they are so that I can set firm social boundaries against them.

A quote the author offers that stands out to me:

A man who is considered a TPOT ‘elder’:

TPOT isn’t misogynist but it’s made up of men and women who prefer the company of men. it’s a male space with male norms.

this makes it barely tolerable for the few girls’ girls who wander in here. they end up either deactivating, going private, or venting about how men suck.

I'd never been particularly ardent about believing it, but this right here is firm evidence to me that existing in a rigid gender binary is mental and spiritual poison. Whoever this person is, they're never going to grow up.

I don't wish to belittle the author's suffering, but I do hope she is able to reconsider her participation in these scenes where hierarchy, contrived masculinity, and financial standing (or the ability to generate financial gain for others!) are the signifiers of individual participants' worth.
