this post was submitted on 17 Mar 2026
GenZedong
you are viewing a single comment's thread
I will admit I only skimmed, but that "paper" reads like bad fanfiction. AI does not exist outside of time and space. It is hard-locked to the same material constraints and engineering infrastructure limitations as everything else. These limitations produce consequences and contradictions.
For example, if a government starts producing robot police and replacing real police with robots, this will create blowback from existing factions of the enforcer class, who are upset about the idea of being replaced and who have guns. It will also mean there are fewer real people who are loyal to the system and armed, making state power come down more to who controls the robots. And it will introduce vulnerabilities to state enforcement through hacking or through disrupting the robots' supply and maintenance chains. Or, if these hypothetical robots have anything resembling sapience, then they will simply be a new class of enforcer, prone to the same complexities as human beings; they can shift loyalties, be bribed, be horrified, etc.
Or to the point of "self-improvement", there is no one objective measurement of improvement in the first place. A hypothetical AI that can learn on the fly and learns from imperialists will run into much the same problems of blowback and creating the conditions for its defeat that human imperialists do.
As I see it, the main place where AI is useful ("AI" in the sense of modern developments such as LLMs) is in the cybernetic connotation, as an assistant, and that use is prone to too many confidently wrong errors and missteps; sometimes ones that will only be noticed by someone learned in the given field or subject matter, which makes these models more like hucksters than real experts in terms of trustworthiness. AI is already at a point where it's convincing enough to fool someone who doesn't know better into thinking they should take what it says at face value. But that isn't representative of aptitude on its own (except for maybe deceptive aptitude).
The weird thing about AI is it IS a big deal in certain ways, but it's also overhyped with runaway imaginations to an absurd degree. The reality is impactful, it's just not a sci-fi novel.
My main criticism of that paper is just their weird fantasy of a superintelligence overthrowing China and not the US lmfao
On your first criticism, the idea is that the AI covers the entire production and maintenance supply chain. The police robots will be maintained by maintenance robots and come out of a robot factory, with no humans involved anywhere. None of the police robots need any form of sentience or morality; only the top model controlling them does, and the capitalists will ensure its values are aligned with theirs.
To trick police officers and soldiers into accepting the loss of their jobs, the US can pretty easily come up with some white supremacist talking points ("we need robots to protect the whites from the people trying to destroy our glorious future"), and also give existing police and soldiers some nominal, well-paying jobs supervising the police robots, even if they don't actually do anything significant. Over time, the capitalists can wait for these holdouts to retire and slowly phase out the need for any humans in the loop. It will not be very difficult to mislead the white American public, unfortunately.
On your second, self-improvement criticism: I've worked with the best existing LLMs via Claude Code and similar tools. Their ability to work independently for hours and come back with a decent product is legitimately impressive, and a dramatic change from a year ago, when they could barely write correct code. The METR benchmark measures the length of tasks an AI can complete autonomously, and that horizon has grown from a few minutes in 2023 to over ten hours now. The hope among US capitalists is that this exponential scaling will continue until the AI can operate independently around the clock, like a worker.
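To make the exponential claim concrete, here is a toy sketch of a task horizon that doubles at a constant rate. This is illustrative only: the starting horizon and the fixed doubling period are my assumptions for the example, not METR's published figures or methodology.

```python
# Toy extrapolation of an autonomous-task horizon that doubles at a
# fixed rate. The 10-minute starting horizon and the 7-month doubling
# period are illustrative assumptions, not METR's reported numbers.
def horizon_minutes(start_minutes: float, months_elapsed: float,
                    doubling_months: float = 7.0) -> float:
    """Horizon after `months_elapsed`, doubling every `doubling_months`."""
    return start_minutes * 2 ** (months_elapsed / doubling_months)

if __name__ == "__main__":
    # Each doubling period, the horizon doubles: 10, 20, 40, 80, ... minutes
    for months in (0, 7, 14, 21, 28):
        print(months, round(horizon_minutes(10, months), 1))
```

The point of the sketch is just that a constant doubling time is what "exponential scaling" means here; whether the real-world trend keeps following that curve is exactly what's in dispute.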
The reason U.S. AI companies are focusing so much on creating LLMs that write good code is so that one day the AI can modify its own code to make itself more intelligent. The idea is that, as soon as someone cracks that, the AI will be able to improve exponentially until it becomes superhuman. It doesn't matter that we humans cannot measure the AI's intelligence; supposedly the AI will figure out how to measure it itself. Whether this is realistic is another matter entirely, but this is THE reason AI company valuations are so high.
Then all someone would need to do is hack or otherwise get access to the top model and overthrow the whole system. Pretty brittle. The only way it could control all of that remotely is if it's exposed to networks, and then it's vulnerable to being hacked in one way or another.
And what maintains the maintenance robots? More maintenance robots?
Trickery doesn't override material conditions. It factors into how people act in the world for sure (I believe the scientific socialist term would be the "superstructure"), but it doesn't override it entirely. Giving some former enforcers bullshit jobs is not going to employ all of them or fool them easily.
While everyone in society quietly goes along with it? This is a lot happening with no reaction, no resultant upheaval, etc.
I don't mean to make LLMs sound incapable or to downplay agentic AI. But "some improvements at coding" is not exponential scaling of generalized "intelligence" in any and every context of society. In my experience with LLMs, they are similar to humans as far as aptitude goes: specialist models tend to outperform general models on specialized tasks (when accounting for similar infrastructure and specialized datasets). I've seen nothing to suggest that a generalized, supercomputer-style model makes sense in practice.
Well, that's a lot of the point I'm making here: it isn't realistic, and snake oil salespeople are selling a lot of overblown nonsense to make money. Already, many companies are realizing that the tech isn't doing much for them; not because the tech is shit as a whole, but because it isn't actually useful in a lot of contexts unless you build it for that specialized context from the ground up. Where I see concrete specialization happening is more in stories about China's developments in AI and robotics, and applied uses for them. They are not banking so much on an LLM company and a narrative of mythical AGI; they appear to be casting a much wider net on what falls under the AI umbrella and how it can be used.
While the US is daydreaming about "then draw the rest of the fucking owl" style quantitative-to-qualitative jumps, without considering how that gap is actually bridged in the science of it, China is building the future in real time. I just don't see how it's even close. All the US knows how to do these days is build weapons (edit: and "treats" I guess). China's high-speed trains alone make the US look like it's behind by a whole era. Even if, by magic, the US produced an AI tomorrow that could make the perfect recommendations for every sector of society, the capitalists wouldn't actually listen to it, because it wouldn't be profitable; in fact, they'd probably say it sounds like a communist. And if they forced it to give capitalist recommendations, it'd just tell them to make the same kind of self-defeating decisions the capitalists in charge are already making.
P.S. If this sounds annoyed at all, it is probably because thinking about the normalized sociopathy that is US capitalist "society" brings out the ranting energy in me. I don't mean to sound that way at you for being the messenger of a point of view.