this post was submitted on 17 Mar 2026
16 points (94.4% liked)

GenZedong

5139 readers
116 users here now


Image from a based Chinese artist on Twitter @Amogha_Pasa


There could be something that served workers’ interests and that we would still call “AI”, but I would argue that many if not all of the implementation details would be different.

A copy-paste of part of a previous comment I made:

Look at how modern LLMs work. They’re trained in large data centers owned by private companies, on giant corpora of data largely obtained without the permission or knowledge of the people who created it. Then, to use them, the weights are loaded into an amount of memory that’s out of reach for most consumer desktops, and users must call into the LLM through an API. Working memory of a conversation doesn’t persist between messages or tool calls, so the entire history must be loaded into the context window on every call. In other words, all the “learning” for these models has to happen up front in training; outside of what fits in its context window, the model doesn’t actually adjust to learn new things about the world. There are workarounds for this, of course, to simulate the experience of interacting with something that can learn, but they have their limitations and aren’t reliable yet. I could go on. Running probabilistic processes on deterministic hardware is another area where we may see more work soon.
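To make that statelessness concrete, here’s a minimal sketch of what calling one of these hosted models looks like from the client side. The endpoint, payload shape, and field names are made up for illustration (real vendor APIs differ), but the pattern is the point: the server keeps no conversation state, so the client has to re-send the whole history every turn to fit it back into the context window.

```python
import requests

# Hypothetical hosted-LLM endpoint and payload shape -- not any specific
# vendor's API, just the general stateless request/response pattern.
API_URL = "https://llm.example.com/v1/chat"

history = []  # the client, not the model, owns the conversation "memory"

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Every call re-sends the FULL history; nothing from earlier turns
    # persists on the server, and the weights themselves never change.
    resp = requests.post(API_URL, json={"model": "some-model", "messages": history})
    reply = resp.json()["reply"]
    history.append({"role": "assistant", "content": reply})
    return reply

# The second call re-sends the first question and answer along with the new one.
ask("What happens to my data after this conversation ends?")
ask("So you won't remember any of this next time?")
```

The workarounds mentioned above (summarizing old turns, retrieving relevant snippets, keeping “memory” notes) all amount to deciding what gets stuffed back into that window on the next call; none of them update the model itself.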

Every single step of that description had alternatives that would have been more likely to be chosen outside of a capitalist system. They could be more eco-friendly. They could be more efficient. They could be more powerful and learn from your interactions in a way that persists. A lot of these changes would delay the exposure of LLMs to the general public and keep them in academia longer, but that would be okay, because we wouldn’t have the profit motive at the center of all this, inflating a giant bubble that’s poised to pop and flatten the economy. The bottom line is that this stuff was pushed out and hyped up well before it was ready, and well before it could be scaled up ethically and with the working class in mind. None of this was inevitable.