nfultz

joined 2 years ago
[–] nfultz@awful.systems 6 points 23 hours ago

Not sure if this was posted in prev weeks, just popped on my youtube: purdue cs240 situation is crazy

So several hundred students drop Intro to C after being accused of cheating with AI.

OK, so that is like normal at my state U, but the whole part where the chair does a little press conference, quasi-reinstates everyone, blocks the student newspaper from attending, and then some students sneak in and live stream it anyway is pretty comical. And then forcing the prof to file the academic-charges forms one at a time takes it into wtf territory.

Haven't seen it mentioned elsewhere, not that I really went looking for it though. I'm just thankful to be out of higher ed.

Note that this is the same school that will require AI as a gen ed iirc.

[–] nfultz@awful.systems 4 points 1 week ago

I asked someone from the mainland, she more or less agreed with you:

This is basically consistent with the long-standing logic of the Chinese internet: technology brings discursive power, and to give it away is to give away discursive power. AI is especially so.

[–] nfultz@awful.systems 5 points 1 week ago (3 children)

https://russwilcoxdata.substack.com/p/and-the-alignment-problem-what-chinas

In June 2025, Zhao Tingyang gave a talk at Tsinghua’s Fangtang Forum. The edited transcript ran in The Paper on July 4 under the title “人工智能的伦理与思维之限” (The Ethical and Thinking Limits of AI). Near the end, Zhao wrote this:

“What requires more reflection is that attempting to ‘align’ AI with human nature and values actually contains a risk of human species suicide. Human nature is selfish, greedy, and cruel. Humans are the most dangerous biological species. Almost all religions demand the restraint of human desire; this is no accident. AI aligned with human values may well become a dangerous subject by imitating humans. Originally, AI does not possess the selfish genes of carbon-based life, so AI is actually closer to the legendary ‘human nature is fundamentally good’ kind of existence, whereas human nature is not ‘fundamentally good.’” The alignment paradigm treats human values as the target AI should conform to. Zhao is arguing the target is the danger. An AI aligned to human values inherits the specific features of human judgment that Zhao says have produced the record of human harm. The paradigm is not incomplete. It is pointed the wrong way.

Zhao’s argument has developed across CASS, The Paper, and Wenhua Zongheng from late 2022 through 2025, from a provocative aside into a sustained critique of the alignment paradigm. In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal. No naming. Zhao is a member of the Chinese Academy of Social Sciences Institute of Philosophy, author of the Tianxia framework, and one of the most cited philosophers working in Chinese today.

I need to think on this a little more, wasn't on my radar.

[–] nfultz@awful.systems 2 points 2 weeks ago

People talked about doing this with bitcoin mining - https://www.cnbc.com/2025/11/16/bitcoin-crypto-mining-home-heating-energy-bills.html - but I'm not aware of anyone trying to scale it out or turn it into a company.

[–] nfultz@awful.systems 6 points 2 weeks ago

Blogosphere-era link aggregator that somehow kept going way longer than Occupy Wall Street did. One thing to know (like here): they link to a lot of stuff they don't support.

[–] nfultz@awful.systems 7 points 2 weeks ago (7 children)

https://www.nakedcapitalism.com/2026/04/ai-reputational-crisis-violence-data-center-protests-sam-altman-openai.html

The profound ignorance of tech on the part of most American lawmakers is no joke. In a prior life, I was once responsible for updating a future Vice Chair of the Senate Intelligence Committee on tech issues and it was like showing an alarm clock to a chicken.

haha

That same senator went on to be a huge RussiaGater and played a central role in Twitter and other social media titans upping their censorship game at the behest of US politicians.

oh :(

[–] nfultz@awful.systems 3 points 3 weeks ago

Haven't seen any estimates of the death toll due to social media, but for cigarettes it is/was pretty staggering (20-40m), way too big to hide - https://www.ucpress.edu/books/golden-holocaust/hardcover - if it's "only" 50 years to flip the consensus on social media, that would be a faster process; I do hope it's possible though. Tobacco execs had the good sense to keep a relatively low profile compared to Zuck and Musk, so that might speed it up.

[–] nfultz@awful.systems 8 points 4 weeks ago

Went to the campus screening of Ghost in the Machine today, many familiar names; I did not know going in that hometown hero Shazeda had so many lines (are they called lines in a documentary?). I can recommend it, especially for a more gen-ed / undergrad audience; the director seems supportive of educational use and reuse and it is structured in a dozen or so bite sized chapters.

Haven't seen the AI apocalypse optimist one to compare against, would probably rather spend my money on Mario tbh.

But also it made me realize it's not a "California" ideology anymore, she never calls it that, like it's gone so mainstream and so widespread, you can't even get through the sneer club bingo list in a 2 hour movie. Gates, Musk, Andreessen, Zuck, Altman, no Peter Thiel!? As a statistician: Galton, Pearson (Karl only), Spearman, no Fisher!?

Non-zero overlap with the lore dump episode of Lain and the Epstein files, though:

spoiler: Douglas Rushkoff, but, sadly, not the dolphin guy

[–] nfultz@awful.systems 6 points 1 month ago

https://www.todayintabs.com/p/who-goes-ai

taking shots at the gray lady:

You might think Mr. R not so different, superficially, from Ms. L. He’s also a long-tenured technology columnist at a respected mainstream publication. And yet he has eagerly, even gleefully, turned flack for the machines. He has delegated much of his professional life to them as well, and seems proud of it:

Most recently, [Mr. R] tells me, he created a team of Claude agents to help edit his book, led by a “Master Editor” agent. Other sub-agents are in charge of things like fact-checking, making sure the book matches his writing style, and offering positive and negative feedback.

And why not? Mr. R is not known or valued for his elegance of expression. He has, at best, a “writing style,” and not one that can’t easily be duplicated by a large language model. Checking facts? Assessing his work’s strengths and weaknesses? More bathwater to be tossed out of this increasingly baby-less tub. So what explains Mr. R, who “expects AI models to get better than him at everything eventually?” Why does he go AI when Ms. L never would?

Mr. R’s secret is that his work is not primarily artistic or informative—it is functional. He serves a purpose for the industry he covers. Mr. R’s job is to absorb the tech industry’s self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.

spoiler: It's Kevin Roose

[–] nfultz@awful.systems 8 points 1 month ago (4 children)

Internet Comment Etiquette: "Relationships with AI"

... hadn't thought about Glenn Beck in a decade, that last interview was pretty wtf.

Not sure what the etiquette is for how long they should be dead before you talk to the AI-geist on youtube, but George Washington somehow feels weirder than Kirk did; idk.

[–] nfultz@awful.systems 9 points 1 month ago (2 children)

https://mail.cyberneticforests.com/the-computer-science-fetish/

The fetishism of the computer scientist therefore refers less to specific expertise than to whatever we imagine a credentialed expert can bestow: an external voice that says, "ask, and you shall receive.” The computer scientist becomes a mirror where those who work with the social, practical impacts of the tech hope to see our understanding affirmed. The people who offer that validation — who position themselves against the discourse of critique, who seem unbothered and detached, even ridiculing the same critical lingo that exhausts you — are not doing it out of sober objectivity or insight.

Sometimes they just don't respect you. Sometimes they're just annoyed by calls for accountability. And sometimes, they do it because they've fused with an interacting swarm of chatbots and transcended their human identity.


Another response to Ptacek.
