this post was submitted on 17 Mar 2026
12 points (100.0% liked)

GenZedong

This is a Dengist community in favor of Bashar al-Assad with no information that can lead to the arrest of Hillary Clinton, our fellow liberal and queen. This community is not ironic. We are Marxist-Leninists.

See this GitHub page for a collection of sources about socialism, imperialism, and other relevant topics.

This community is for posts about Marxism and geopolitics (including shitposts to some extent). Serious posts can be posted here or in /c/GenZhou. Reactionary or ultra-leftist cringe posts belong in /c/shitreactionariessay or /c/shitultrassay respectively.

We have a Matrix homeserver and a Matrix space. See this thread for more information. If you believe the server may be down, check the status on status.elara.ws.

Image from a based Chinese artist on Twitter @Amogha_Pasa

top 24 comments
[–] CriticalResist8@lemmygrad.ml 5 points 4 hours ago

Technology takes on the character of the superstructure it's embedded in; in a dictatorship of the proletariat, AI will necessarily be proletarian in character.

[–] MeetMeAtTheMovies@hexbear.net 5 points 6 hours ago

There could be something serving workers' interests that we'd call "AI", but I would argue that many, if not all, of the implementation details would be different.

A copy-paste of part of a previous comment I made:

Look at how modern LLMs work. They're trained in large data centers owned by private companies, using giant corpora of data largely obtained without the permission or knowledge of the people who created them. Then, to use them, the weights are loaded into an amount of memory that's out of reach for most consumer desktops, and users must call into the LLM through an API. The working memory of a conversation doesn't persist between messages or tool calls, so the entire history must be loaded into the context window on every call. In other words, all the "learning" for these models takes place up front in training; beyond what fits in context, they don't actually adjust to learn new things about the world. There are workarounds for this, of course, to simulate the experience of interacting with something that can learn, but they have their limitations and aren't reliable yet. I could go on. Running probabilistic processes on deterministic hardware is another area where we may see more work soon.
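
To make the statelessness concrete, here is a minimal sketch of a chat loop, assuming an OpenAI-style chat-completions API; the client setup, model name, and helper function are illustrative, not something from the original comment.

```python
# Minimal sketch of the statelessness described above: an OpenAI-style
# chat API keeps no memory between calls, so the client has to re-send
# the entire conversation history every single time.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    # Append the new message, then ship the WHOLE history back to the
    # server; the model itself retains nothing from previous calls.
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # the local list is the only "memory" there is
```

Note how the context grows with every turn; any "learning" beyond it has to be bolted on externally (retrieval, fine-tuning, etc.).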

Every single step of that description has alternatives that would be more likely to be chosen outside of a capitalist system. They could be more eco-friendly. They could be more efficient. They could be more powerful and learn from your interactions in a way that persists. A lot of these changes would have delayed the exposure of LLMs to the general public and kept them in academia longer. But that would be okay, because we wouldn't have the profit motive at the center of all this, inflating a giant bubble that's poised to pop and flatten the economy. The bottom line is that this stuff was pushed out and hyped up well before it was ready, and well before it could be scaled up ethically and with the working class in mind. None of this was inevitable.

[–] materialanalysis1938@lemmygrad.ml 7 points 8 hours ago (2 children)

AI is quite literally one of the keys to communism as far as I'm concerned. It has the capability to greatly reduce working hours and rapidly increase automation. Right now, in the hands of the bourgeoisie, it is incredibly dangerous.

But the simple reality is that AI is here and it isn't going away. So all the more reason to build the revolution.

[–] fox@hexbear.net 5 points 7 hours ago (2 children)
  1. there's no such thing as AGI
  2. LLMs certainly aren't AGI or anywhere close to any approximation of it
[–] casskaydee@hexbear.net 3 points 2 hours ago (1 children)

Did you respond to the wrong comment? The person you're replying to didn't mention AGI, only the automation capabilities of LLMs.

[–] fox@hexbear.net 0 points 2 hours ago

They've snuck in an edit, but more to the point, LLMs are incapable of automating anything useful, and if something useless is automated, it'd be cheaper to just not do it at all.

[–] davel@lemmygrad.ml 8 points 7 hours ago

AGI wasn’t mentioned, and AI doesn’t need to be GI to be a labor-saving device.

[–] Des@hexbear.net 4 points 7 hours ago

Iain M. Banks may have been the first to really envision that kind of far, far future communist society, and I really want to live there.

[–] yogthos@lemmygrad.ml 8 points 10 hours ago (2 children)

I don't see why not. If these tools are developed and owned by the workers then they would serve the proletariat.

[–] Loki@lemmygrad.ml 2 points 4 hours ago (1 children)

Tbh maybe I should've specified: I was talking more about AI in the sense of whether conscious digital beings would be considered proletarian. Obviously that's very different from AI tools, so in my originally intended context your reply sounds like slavery, but obviously you didn't have that context because I failed to provide it lol

[–] davel@lemmygrad.ml 4 points 4 hours ago (2 children)

Conscious digital beings don’t exist, so you’re asking a speculative question about something that’s already very speculative. IMO it’s premature to give it serious consideration.

[–] Loki@lemmygrad.ml 2 points 3 hours ago

Idk why it shouldn't be considered; it's far more plausible than artificial superintelligence, and there are thousands of papers, books, etc. on that.

I think the only reason it hasn't really been considered in the AI research space is that it doesn't pose an existential threat to humanity.

[–] Loki@lemmygrad.ml 1 points 3 hours ago (1 children)

Also, depending on how you define consciousness and its ability to exist in a simulation, that fruit fly whose brain was copied identically and simulated might disagree with you lol

[–] Loki@lemmygrad.ml 2 points 3 hours ago* (last edited 3 hours ago)

Also in case you wanna learn more about this lol

Original connectome: https://www.virtualflybrain.org/

The recent project built off of that: https://eon.systems/updates/embodied-brain-emulation

[–] MasterBlaster@lemmygrad.ml 4 points 7 hours ago (2 children)

Have you posted on the topic of AI before? There have been a few write-ups by some very intelligent comrades here that personally opened my eyes on AI. I seem to recall your name popping up in the debates, but I can't find any of the write-ups I found convincing.

[–] yogthos@lemmygrad.ml 3 points 6 hours ago* (last edited 6 hours ago)
[–] amemorablename@lemmygrad.ml 3 points 6 hours ago

Yogthos has definitely posted on the subject before, in favor of proletarian use of AI, but I'm not sure which post you have in mind. CriticalResist also has a good essay on the conversations surrounding AI, on ProleWiki: https://en.prolewiki.org/wiki/Essay:Intellectual_property_in_the_times_of_AI

[–] Comprehensive49@lemmygrad.ml 2 points 10 hours ago* (last edited 10 hours ago) (2 children)

Depends on the stage of AI development.

It's going to be rough at the beginning, as AI saves labor and allows capitalists to lay off workers before any good safety nets are built. I expect states with existing safety net infrastructure like China to weather this transition much better than neoliberal hellscapes like the USA.


End-stage AI, once it is able to replace the majority of all workers, is going to be much more interesting.

I made a previous post on this topic:

There are only two ways the worker-capitalist contradiction can ever be solved:

  • communism, where the abundant fruits of automation are distributed evenly among all, à la fully automated luxury communism
  • exterminism (from Peter Frase's book Four Futures), where the capitalists finally fulfill their dream of automating everything and can genocide all the unneeded workers.

It's quite important that we bring about revolution before option two becomes viable.

The reason AI companies are so overvalued today is that U.S. capitalists dream of option two. They want to create some superhuman AI to replace all workers, hack and destroy all the anti-imperialist countries, and then have it kill and oppress all of the poors in the world forever.

This is directly theorized in the AI 2027 paper, which lays out a world in which OpenAI and associates are able to create a self-improving AI for the USA by the year 2027. At that point, the AI will be able to improve its own intelligence ad infinitum until it becomes a god and can immediately defeat any and all other countries by hacking their infrastructure instantly or planning amazing color revolutions, thereby guaranteeing U.S. world domination forever.

US capitalists fear that if China can get to a comparable level of AI at the same time, then they won't be able to delete China with their own AI, and that China may copy their AI to other anti-imperialist countries to destroy the US advantage.


Exterminism places a hard time limit of ~100 years on our fight for socialism to build communism. Once the capitalists have fully automated labor, the police, and the military, there will be nothing we can do. It is imperative that we win this fight so we can build option one: an AI world actually for the proletarian class.

[–] amemorablename@lemmygrad.ml 8 points 8 hours ago (2 children)

This is directly theorized in the AI 2027 paper, which lays out a world in which OpenAI and associates are able to create a self-improving AI for the USA by the year 2027. At that point, the AI will be able to improve its own intelligence ad infinitum until it becomes a god and can immediately defeat any and all other countries by hacking their infrastructure instantly or planning amazing color revolutions, thereby guaranteeing U.S. world domination forever.

I will admit I only skimmed, but that "paper" reads like bad fanfiction. AI does not exist outside of time and space. It is hard-locked to the same material constraints and engineering infrastructure limitations as everything else. These limitations produce consequences and contradictions.

For example, if a government starts producing robot police and replacing real police with robots, this will not only create blowback from existing factions of the enforcer class, who are upset about the idea of being replaced and have guns; it will also mean there are fewer real, armed people loyal to the system, making state power come down more to who controls the robots. It will introduce vulnerabilities to state enforcement through hacking or through disrupting supplies and maintenance for the robots. And if these hypothetical robots have anything resembling sapience, then they will simply be a new class of enforcer prone to the same complexities as human beings: they can shift loyalties, be bribed, be horrified, etc.

Or to the point of "self-improvement", there is no one objective measurement of improvement in the first place. A hypothetical AI that can learn on the fly and learns from imperialists will run into much the same problems of blowback and creating the conditions for its defeat that human imperialists do.

As I see it, the main place where AI is useful ("AI" in the sense of modern developments such as LLMs) is in the cybernetic connotation, as an assistant, and that use is prone to too many confidently wrong errors and missteps, sometimes ones that will only be noticed by someone learned in the given field or subject matter; that makes these models more like hucksters than real experts when it comes to trustworthiness. AI is already at a point where it's convincing enough to fool someone who doesn't know better into taking what it says at face value. But that isn't representative of aptitude on its own (except for maybe deceptive aptitude).

The weird thing about AI is it IS a big deal in certain ways, but it's also overhyped with runaway imaginations to an absurd degree. The reality is impactful, it's just not a sci-fi novel.

[–] Loki@lemmygrad.ml 5 points 4 hours ago

My main criticism of that paper is just their weird fantasy of a superintelligence overthrowing China and not the US lmfao

[–] Comprehensive49@lemmygrad.ml 3 points 5 hours ago* (last edited 5 hours ago) (1 children)

On your first criticism, the idea is that the AI is able to cover the entire production and maintenance supply chain. The police robots will be maintained by maintenance robots and come out of a robot factory, no humans involved anywhere. None of the police robots need any form of sentience or morality, only the top model controlling them, which the capitalists will ensure has values aligned with theirs.

To trick police officers and military soldiers into being okay with losing their jobs, the US can pretty easily come up with some white supremacy talking points that "we need robots to protect the whites from the people trying to destroy our glorious future", and also give existing police/soldiers some nominal, well-paying jobs of supervising the police robots even if they actually don't do anything significant. Over time the capitalists can wait for these holdouts to retire, and slowly phase out the need for any humans in the loop. It will not be very difficult to mislead the white American public, unfortunately.


On your second criticism, about self-improvement: I've worked with the best existing LLMs via Claude Code and similar tools. Their ability to work independently for hours and come back with a decent product is legitimately impressive, and a dramatic change from their position a year ago, when they could barely write correct code. The METR benchmark measures AI's ability to work continuously on its own, and the measured task horizon has grown from a few minutes in 2023 to over ten hours now. The hope from US capitalists is that this exponential scaling will continue until the AI can operate continuously and independently, like a worker.
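
As a back-of-the-envelope check, using only the rough figures in this comment rather than METR's actual published fit, the implied doubling time works out to roughly five months:

```python
import math

# Rough figures from the comment above (assumptions, not METR's data):
# a task horizon of a few minutes in early 2023 -> over ten hours now.
start_minutes = 5          # "a few minutes", 2023
end_minutes = 10 * 60      # "over ten hours", now
months_elapsed = 36        # ~3 years in between

doublings = math.log2(end_minutes / start_minutes)  # ~6.9 doublings
doubling_time = months_elapsed / doublings          # ~5.2 months each

print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
# Betting on this exponential continuing is exactly the capitalist hope
# described above: a few more doublings put the horizon at work-weeks.
```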

The reason why U.S. AI companies are focusing so much on creating LLMs that write good code is so that one day the AI can fix its own code to make itself more intelligent. As soon as someone cracks the code on that, the idea is that it will be able to improve exponentially until it becomes superhuman. It doesn't matter that we humans cannot measure the intelligence of the AI; the idea is that the AI will know how to fix that itself. Whether this is realistic is another matter entirely, but this is THE reason why AI company valuations are so high.

[–] amemorablename@lemmygrad.ml 3 points 3 hours ago* (last edited 3 hours ago)

On your first criticism, the idea is that the AI is able to cover the entire production and maintenance supply chain. The police robots will be maintained by maintenance robots and come out of a robot factory, no humans involved anywhere. None of the police robots need any form of sentience or morality, only the top model controlling them, which the capitalists will ensure has values aligned with theirs.

Then all someone would need to do is hack or get access to the top model and overthrow the whole system. Pretty brittle. The only way it could control all of that remotely is if it's exposed to networks, and then it's vulnerable to being hacked in one way or another.

The police robots will be maintained by maintenance robots and come out of a robot factory, no humans involved anywhere.

And what maintains the maintenance robots? More maintenance robots?

To trick police officers and military soldiers into being okay with losing their jobs, the US can pretty easily come up with some white supremacy talking points that “we need robots to protect the whites from the people trying to destroy our glorious future”, and also give existing police/soldiers some nominal, well-paying jobs of supervising the police robots even if they actually don’t do anything significant. Over time the capitalists can wait for these holdouts to retire, and slowly phase out the need for any humans in the loop. It will not be very difficult to mislead the white American public, unfortunately.

Trickery doesn't override material conditions. It factors into how people act in the world for sure (I believe the scientific socialist term would be the "superstructure"), but it doesn't override it entirely. Giving some former enforcers bullshit jobs is not going to employ all of them or fool them easily.

Over time the capitalists can wait for these holdouts to retire, and slowly phase out the need for any humans in the loop.

While everyone in society quietly goes along with it? This is a lot happening with no reaction, no resultant upheaval, etc.

On your second criticism, about self-improvement: I've worked with the best existing LLMs via Claude Code and similar tools. Their ability to work independently for hours and come back with a decent product is legitimately impressive, and a dramatic change from their position a year ago, when they could barely write correct code. The METR benchmark measures AI's ability to work continuously on its own, and the measured task horizon has grown from a few minutes in 2023 to over ten hours now. The hope from US capitalists is that this exponential scaling will continue until the AI can operate continuously and independently, like a worker.

I don't mean to make LLMs sound incapable or to downplay agentic AI. But "some improvements at coding" is not exponential scaling of generalized "intelligence" across any and every context of society. In my experience with LLMs, they are similar to humans as far as aptitude goes, in that specialists will tend to outperform general models on specialized tasks (when accounting for similar infrastructure and specialized datasets). I've seen nothing to suggest a generalized supercomputer-style model making sense in practice.

the idea is that it will be able to improve exponentially until it becomes superhuman

Whether this is realistic is another matter entirely, but this is THE reason why AI company valuations are so high.

Well, that's a lot of the point I'm making here: it isn't realistic, and snake-oil salespeople are selling a lot of overblown nonsense to make money. Already, many companies are realizing that the tech isn't doing much for them. Not because the tech is shit as a whole, but because it isn't actually useful in a lot of contexts unless you build it for that specialized context from the ground up. Where I see more concrete specialization happening is in stories about China's developments in AI and robotics, and applied uses for them. They are not banking so much on an LLM company and a narrative of mythical AGI; they appear to be casting a much wider net on what falls under the AI umbrella and how it can be used.

While the US is daydreaming about "then draw the rest of the fucking owl" style jumps from quantitative to qualitative, without considering how that gap is actually bridged in the science of it, China is building the future in real time. I just don't see how it's even close. All the US knows how to do these days is build weapons (edit: and "treats", I guess). China's high-speed trains alone make the US look like it's behind by a whole era. Even if, by magic, the US produced an AI tomorrow that could make the perfect recommendations for every sector of society, the capitalists wouldn't actually listen to it, because it wouldn't be profitable; in fact, they'd probably say it sounds like a communist. And if they forced it to give capitalist recommendations, it'd just tell them to make the same kind of self-defeating decisions that the capitalists in charge are already making.

P.S. If this sounds annoyed at all, it is probably because thinking about the normalized sociopathy that is US capitalist "society" brings out the ranting energy in me. I don't mean to sound that way at you for being the messenger of a point of view.

[–] yogthos@lemmygrad.ml 3 points 8 hours ago (1 children)

Yeah, these are basically the two future scenarios we can look forward to. Given how things are going in Ukraine and Iran, it doesn't look like the empire is really going to be able to stamp out alternatives at this point. A very likely scenario is that the West becomes isolationist and starts doing these things internally, while the rest of the world cuts them off.

Regarding the whole singularity idea, I don't think it's a given even if AIs can self-improve. There's no guarantee that a cognitive system can just scale indefinitely. We're already seeing this play out with LLMs: people originally thought you could just keep making them bigger, but things start to fall apart after a certain size, and feeding more data into them or making the network bigger stops producing proportionate gains.
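
For a rough sense of why pure scale saturates, here's a sketch using the Chinchilla-style loss fit from Hoffmann et al. 2022; the coefficients are that paper's published values and are meant to illustrate the shape of the curve, not to describe today's frontier models:

```python
# Chinchilla-style fit: loss approaches an irreducible floor E no matter
# how large N (parameters) and D (training tokens) get.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale parameters 1B -> 1T, with tokens scaled along at ~20 tokens/param.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params: predicted loss ~ {loss(n, 20 * n):.2f}")
# Prints ~2.58 -> 2.13 -> 1.91 -> 1.80: each 10x buys less and less as
# the predicted loss flattens toward the floor E = 1.69.
```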

And I can't really see how the West can pull ahead of China in this tech, given that China has a much bigger talent pool and most of the AI research being published is coming from China.

[–] Loki@lemmygrad.ml 2 points 4 hours ago

Tbf, so far every scalability wall that's been approached has been overcome, and that's one of the main reasons I think China has a massive advantage: they've been the primary innovators in that regard. They have less compute to work with, so they focus more on creating ingenious architectural solutions to these problems.