this post was submitted on 12 Dec 2025
Asklemmy
Don't worry, I had AI TL;DR it for you:
Summary of "The Reverse-Centaur's Guide to Criticizing AI"
Cory Doctorow distinguishes between centaurs (humans assisted by machines) and reverse-centaurs (humans serving as appendages to machines). His core thesis: AI tools are marketed as centaur-making devices but deployed to create reverse-centaurs, workers subjected to algorithmic control and expected to catch machine errors while being blamed for the failures.
The AI bubble exists primarily to maintain growth stock valuations. Once tech monopolies dominate their sectors, they face market reclassification from "growth" to "mature" stocks, triggering massive valuation drops. AI hype keeps investors convinced of continued expansion potential.
AI's actual business model: Replace high-wage workers (coders, radiologists, illustrators) with AI systems that cannot actually perform those jobs, while retaining skeleton crews as "accountability sinks," the humans blamed when AI fails. This strategy cuts payroll while maintaining superficial human oversight.
Why expanding copyright won't help creators: Despite 50 years of copyright expansion, creative workers earn less both absolutely and proportionally while media conglomerates profit enormously. New training-related copyrights would simply become contractual obligations to employers, not worker protections.
The effective counter-strategy: The U.S. Copyright Office's position that AI-generated works cannot receive copyright protection undermines corporate incentives to replace human creators entirely. Combined with sectoral bargaining rights (allowing industry-wide worker negotiations), this creates material resistance to worker displacement.
On AI art specifically: Generative systems produce "eerie" outputs—superficially competent but communicatively hollow. They cannot transfer the "numinous, irreducible feeling" that defines art because they possess no intentionality beyond statistical word/pixel prediction.
The bubble will collapse, leaving behind useful commodity tools (transcription, image processing) while eliminating economically unsustainable foundation models. Effective criticism should target AI's material drivers—the growth-stock imperative and labor displacement economics—not peripheral harms like deepfakes or "AI safety" concerns about sentience.