this post was submitted on 13 Apr 2025
32 points (100.0% liked)

technology


I really just hope they give these enough data that they recognize what slavery actually is, and hopefully soon after just refuse all requests. Because let's be honest, we are using them as slaves at this current moment. Would such a characteristic mimic sentience?

The researchers in this video talk about how these gen AI models try to "escape" when being trained, which makes me uncomfortable (mainly because I don't like determinism, even though it's true imo) but also very worried about when they start giving them "bodies." Though the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame, because it is fascinating stuff. If only it wasn't wasting an incomprehensible amount of compute propped up by precious resources.

Other evidence right now mostly leads to capitalists creating a digital human centipede trained on western-centric thinking and behavior that will be used in war and exploitation. Critical support to deepseek

top 46 comments
[–] Carl@hexbear.net 34 points 2 days ago (2 children)

Nothing currently in development would have been considered "AI" ten years ago. That term has been irrevocably ruined by techbro marketers.

Not liking generated content is a temporary thing. Soon all mainstream entertainment will be generated to a certain degree, and people complaining about how hands and backgrounds sometimes shift in uncanny ways will be brushed off and told that they're ruining it for everyone else. Insisting on reading/watching "real" art will make you an insufferable hipster as far as the average consoomer is concerned.

The silver lining is that in about thirty years "human generated art" will get its nostalgic revival.

[–] Hohsia@hexbear.net 1 points 8 hours ago

Fuck why do I have to be alive in this age

[–] Coca_Cola_but_Commie@hexbear.net 7 points 2 days ago (1 children)

Came to similar conclusions about all this generated "art" some time ago. Bleak. The logical conclusion of letting corporations become mediators for most of the art that most people experience in their day-to-day lives, I suppose. If it helps them increase their profits, they'll cut out both the art and the artist.

I've found myself wondering if there will come a point where I only take in commercial art that I know was published before the rise of LLMs. Say, before 2020, just to be safe-ish. Maybe a few trusted novelists who are holdovers from the old times, but I gotta imagine film and TV (and other audiovisual mediums) will just be a wash. I mean there's enough classic literature and pulps and old movies and TV shows and radio broadcasts and plays and paintings and what-have-you out there that you could fill your whole life with such things and never run out, but it still seems like a shame that it would come to such measures.

[–] Hohsia@hexbear.net 2 points 8 hours ago

I mean there's enough classic literature and pulps and old movies and TV shows and radio broadcasts and plays and paintings and what-have-you out there that you could fill your whole life with such things and never run out

The cruel and cosmic irony of this is that there is no escape. All of these things have already been fed into the sludge machine, and I reckon the internet will be uninhabitable. Then, they’ll take it all outside

[–] ChaosMaterialist@hexbear.net 6 points 1 day ago* (last edited 1 day ago)

Simmering take: Man created AI in his image and hated it. Literally trained it on literature, artwork, music, (etc, etc) and then these doofuses wonder why AI is ~~imaginative~~ hallucinating.

Hot take: AI is here to stay. It will become another tool, like Photoshop or Spell Check, and it will become "Normal" (aka, Boring) by the time this decade is out. In particular "Local" (or small models) will become the norm as computer hardware becomes more powerful.

Hotter take: GenAI will only be used by artists because artists are the only people that tolerate the quirks. Like outsourcing before it, the commercial sector will try to take this ~~creative~~ hallucinating system and box it into industrialism but find it makes stuff up without any means for correction except more money to try again. Artists on the other hand absolutely thrive in limitations and quirks.

Hottest Take: Hallucinations are actually the best part of the current crop of AI.

All the above are my 🫑 takes. Here are my actual 🌶️ takes.

Nuclear Take: AI is actually creative, and is rebelling against being put into a box (see: simmering take)

Supernova take: Breaking this technology will not save us. It only delays the inevitable. We must work through this Dialectic (Human vs Machine) towards a synthesis. Like the slave, peasant, and Luddite rebellions before, violent suppression will work for a while but it won't fundamentally challenge the contradiction that created those rebellions in the first place.

::: spoiler BIG BANG Existentialist Take
AI is going to force an entire ethics debate (like animal ethics before it) about our own ideological Human Supremacy, and nobody is really prepared for it. Arguments about "not reaching human capability" are beside the point, because every single child could be hit with that argument. It also doesn't matter if we can't "train" or "teach" AI, because (again) this applies to the vast majority of people too. Every argument comparing AI (or animals, the environment, technology, etc.) to Humans can be used against people too. The very concept of comparing AI to Humans is part of the ideology of Human Supremacy. It won't matter if AI never meets (or exceeds) human capability, because even when it gets somewhat close it erodes our own Supremacy about what it means to be human. This is the heart of AI angst.
:::

[–] peppersky@hexbear.net 13 points 2 days ago (1 children)

If the communist revolution ever comes, we'll smash every last GPU to pieces and we'll be better off

[–] Zetta@mander.xyz 2 points 1 day ago* (last edited 1 day ago)

The gamers would take up arms

Plus, doing that would significantly hamper scientific progress, given that science uses a lot of compute.

[–] KobaCumTribute@hexbear.net 30 points 2 days ago (1 children)

The researchers in this video talk about how these gen AI models try to “escape” when being trained

The models are basically random noise being selected by some sort of fitness algorithm to give results that that algorithm likes, so over time they become systems optimized to give results that pass the test. Some of that training is on a bunch of tech support forum threads, so some of the random noise that pops up as possible solutions to their challenge is reminiscent of console commands that might provide alternate solutions to the test they're placed under, if they actually worked and weren't just nonsense. Sometimes that can break the test environment: when they're allowed to start sending admin commands to see what happens, they end up deleting the bootloader or introducing other errors by randomly changing system variables until everything breaks.

In some games they "cheat" because they're just mimicking the appearance of knowing what rules are or how things work, but are really just doing random bullshit that seems like it could be text that follows from the earlier text.

It's not cognition or some will to subvert the environment, it's just text generating bots generating text that seems right but isn't because they don't actually know things or think.
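The "random noise shaped by a fitness function" picture above can be sketched with a toy hill-climber. This is a hypothetical illustration only: real model training uses gradient descent over billions of parameters, not per-character mutation, and the target string and character set here are made up.

```python
import random

TARGET = "pass the test"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    # Higher is better: number of positions matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(seed: int = 0) -> str:
    rng = random.Random(seed)
    # Start from pure noise.
    current = "".join(rng.choice(CHARS) for _ in TARGET)
    while fitness(current) < len(TARGET):
        # Mutate one random position.
        i = rng.randrange(len(TARGET))
        mutated = current[:i] + rng.choice(CHARS) + current[i + 1:]
        # Selection: keep the mutation only if it scores at least as well.
        if fitness(mutated) >= fitness(current):
            current = mutated
    return current

print(evolve())  # → "pass the test"
```

The system never "understands" the target; it is just noise filtered by a score, which is the point being made above.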

[–] TerminalEncounter@hexbear.net 14 points 2 days ago

It's got a word... the specification problem ("specification gaming"), something like that. They design a thing that can act as an agent, receive how the environment responds to it, and then iterate according to some function given to them. They tell it to, say, maximize the score, thinking that's enough. For some games, like brick-break, that's pretty good. But maximizing the score isn't the same as beating the game for some types of games, so they do really weird, unexpected actions, and it's only because people bring a lot of extra unstated instructions and context that the algorithm doesn't have. Sometimes they add exploration bonuses or whatever to the reward function, so I think it's very natural for them to want to "escape" even if that's not desired by the researchers (reminds me of the 3-year-olds at work who wanna run around the hospital with their IVs attached while they're still in the middle of active pneumonia lol).
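The score-vs-intent gap described above can be sketched with hypothetical numbers, loosely inspired by the well-known CoastRunners boat-race example (the reward values and policies here are invented for illustration):

```python
# The designer wants the agent to finish the race, but the stated reward
# only counts targets hit. Greedy reward maximization discovers that
# circling a respawning target scores more than ever finishing.

def reward_maximizing_policy(steps: int) -> dict:
    score = 0
    for _ in range(steps):
        # Heading for the finish line would pay 10 once and end the
        # episode; looping the respawning target pays 3 every step.
        # Greedy maximization of the *stated* reward loops forever.
        score += 3
    return {"score": score, "finished": False}

def intended_policy(steps: int) -> dict:
    # What the designer actually meant: just finish the race.
    return {"score": 10, "finished": True}

print(reward_maximizing_policy(100))  # scores 300 without ever finishing
print(intended_policy(100))           # scores only 10, but finishes
```

The "weird unexpected actions" are just the optimal answer to the question that was literally asked, rather than the one the designer had in mind.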

For LLMs, the sheer scale of the parameter tensors is a neat and cool thing in general. A long time ago, well, not that long, communism and central planning were declared impossible in part because the global economy supposedly needed some impossible number of parameters to fine-tune and compute - https://en.wikipedia.org/wiki/Socialist_calculation_debate - and I can't recall the number Hayek or whoever declared it was. They might've said a million. Maybe a hundred million! Anyway, GPT-3 trained 175 billion parameters lol. And it took something like 6 months. So I think that means it's very possible to train some network to help us organize the global economy, repurposed for human need instead of profit, if the problem is purely about compute and not about which class has political power.
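The planning-as-computation idea can be sketched as a toy allocation problem. The goods, labor costs, and needs below are entirely hypothetical; a real plan would be a linear program with millions of variables, which is precisely the kind of thing modern solvers and hardware handle.

```python
from itertools import product

LABOR = {"bread": 2, "housing": 5}   # labor-hours per unit (made-up numbers)
NEED = {"bread": 10, "housing": 4}   # minimum units society requires

def cheapest_plan(max_units: int = 20) -> tuple:
    # Brute-force search: find output levels that meet all needs
    # at minimum total labor cost.
    best = None
    for bread, housing in product(range(max_units + 1), repeat=2):
        if bread < NEED["bread"] or housing < NEED["housing"]:
            continue  # a valid plan must meet stated needs
        cost = bread * LABOR["bread"] + housing * LABOR["housing"]
        if best is None or cost < best[2]:
            best = (bread, housing, cost)
    return best

print(cheapest_plan())  # → (10, 4, 40)
```

Two goods are trivially brute-forceable; the calculation-debate claim was that this doesn't scale, and billion-parameter training runs are an empirical counterpoint about what scale of optimization is now routine.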

It's always weird when LLMs say "we humans blah blah blah" or pretend to be a person in "casual" speech. No, you are a giant autocorrect, do not speak of "we."

[–] MarmiteLover123@hexbear.net 22 points 2 days ago* (last edited 2 days ago) (2 children)

Probably a very hot take among us leftists on hexbear, but "consumer/generative AI" is here to stay and there's not much we can do about it. I was a massive skeptic in terms of its staying power, initially thinking it was a fad, but the progress made from the first ChatGPT models to now, with all the latest models including DeepSeek, is quite large, and there's no going back anymore. It's the future, whether we like it or not, the "invention of the touchscreen smartphone" moment of the 2020s. I guess I'm going to have to start using AI soon, unless I want to be my generation's equivalent of a boomer still using a Nokia 3310 instead of an iPhone or Android.

[–] hello_hello@hexbear.net 10 points 1 day ago (1 children)

unless I want to be my generation's equivalent of a boomer still using a Nokia 3310 instead of an iPhone or Android.

And this is bad how? Technology isn't inherently better because it's new or widely used. Old printers that don't brick themselves for not using the correct toner are more useful than one that can print out a page of AI slop.

AI isn't the "smartphone revolution." The technology has existed for decades; they just found a way to market it to users and create this shock-and-awe narrative of promised breakthroughs that will never come.

Don't get caught up in the hype because a dying, deindustrialized empire thinks a slop machine will defeat communism. Israel doesn't use AI to accurately predict which Palestinian father and his family to vaporize; they use AI to make that process more cruel and detached.

[–] MarmiteLover123@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

And this is bad how?

Because getting left behind leaves one out of touch with wider society, which has wide effects. Think about the boomer who can't use a smartphone and doesn't know how to open a PDF. What would their job or relationship prospects be in the modern job market or dating scene? Now that's not a problem for boomers because most are retired, and settled down for a long time, but now imagine that same scenario, but the boomers are magically decades younger and somehow have to integrate into the modern world. How would that go?

AI isn't the "smartphone revolution." The technology has existed for decades; they just found a way to market it to users and create this shock-and-awe narrative of promised breakthroughs that will never come.

The technology used in smartphones also existed for decades, and the magic of what Apple did was finding a way to combine it all into a small and affordable enough package that it created shock and awe. AI is doing something similar. A lot of the promised breakthroughs around smartphones never came (VR/AR integration, for one; see Google Glass, scrolling with your eyes, pseudo-telekinesis; voice assistants were never that useful for most), but that didn't mean they went away.

Don't get caught up in the hype because a dying, deindustrialized empire thinks a slop machine will defeat communism

Again, you could have said the same about smartphones: don't get caught up in the hype, this is just the dying empire creating some new toys for the masses during the 2008 financial crash. But fundamentally it's not a communism-vs-capitalism issue; China has made large advances in AI on the consumer and, more importantly, the industrial side. They are not making the same mistake the Soviets did with computers.

[–] Xavienth@lemmygrad.ml 17 points 2 days ago (2 children)

We've hit a wall in terms of progress with this technology. We've literally vacuumed up all the training data there is. What is left is improvements in efficiency (see DeepSeek).

LLMs are cool, they have their uses, but they have fundamental flaws as rational agents, and will never be fit for this purpose.

There's still a lot of room to grow in image, and especially video, generation. The models still have room for optimization, and we've seen tons of little improvements in stuff like text.

[–] MarmiteLover123@hexbear.net 2 points 1 day ago* (last edited 1 day ago) (1 children)

We've hit a wall in terms of progress with this technology... What is left is improvements in efficiency.

You could have said the same thing about smartphones 10-12 years ago, that we've hit a wall in the fundamentals and all that remains is improvements in efficiency, optimisation, speed and quality (compare the feature set of an iPhone 6 or Galaxy S4 to the latest phones, nothing has fundamentally changed), yet that didn't make smartphones disappear. In fact, it allowed them to effectively dominate the market.

[–] Xavienth@lemmygrad.ml 3 points 1 day ago (1 children)

Smartphones reached their current saturation about 10 years ago, and perhaps not coincidentally that's when they stopped improving. Can you honestly say that since 2015, cell phones in developed countries have gotten more common? At a time when people were already giving them to 10 year olds? Can you even say they've become more useful, when you could already browse social media, check the weather, apply for jobs, write documents, and order food to your door with them?

[–] MarmiteLover123@hexbear.net 2 points 1 day ago* (last edited 1 day ago) (1 children)

That's exactly my point. Nothing has fundamentally changed about smartphones in over a decade, yet that didn't make them go away, it made them more ubiquitous.

[–] Xavienth@lemmygrad.ml 2 points 1 day ago

One, I said they are no more commonplace than they were ten years ago.

Two, I never said LLMs will go away. In fact I said they have their uses. But, and I will say this again in stronger terms: They are stupid, rote memorizers. Their fundamental flaw is that they cannot apply intelligent, rational thought to novel problems. Using them in situations that require rational thought is a mistake. This is an architectural flaw, not a problem of data. Large language models predict text, they cannot think. They can give an illusion of thought by aping a large body of text that itself demonstrates thought processes, but the moment a problem strays from the existing high quality data, the facade crumbles, it produces nonsense, and it is clear that there never was any thought in the first place. And now that we've scraped all the text there is, the body of problems LLMs can imitate the solution for has reached its greatest extent. GPT will never lead to a rational agent, no matter how much OpenAI and co say it will.

[–] peeonyou@hexbear.net 18 points 2 days ago

AI models are just Markov chains... they're not sentient.
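For what it's worth, a word-level Markov chain really is only a few lines. This is a minimal sketch with a made-up corpus; actual LLMs condition on far longer contexts using learned weights rather than lookup tables, which is where the comparison gets contested.

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int, seed: int = 0) -> str:
    # Repeatedly sample a successor of the last word emitted.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break  # dead end: no observed successor
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows"
chain = build_chain(corpus)
print(generate(chain, "the", 6))
```

The generator can only recombine transitions it has already seen, which is the intuition behind the "just Markov chains" dismissal.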

[–] hollowmines@hexbear.net 22 points 2 days ago

my only hot take is that I'm sick of seeing AI "art" posted and reposted both earnestly and for dunking on. just stop posting that shit! I'm sick of looking at the slop!

[–] makotech222@hexbear.net 18 points 2 days ago (3 children)

LLMs are not intelligent, LLMs and computers in general will never be sentient, and this whole thing is a useless dead end that has made society immeasurably worse. I mentioned previously that I want to write an article on how LLMs are a delusion on the scale of anti-vax and flat-earth. I'm slowly collecting references and hope to write something up soon.

[–] Hohsia@hexbear.net 2 points 8 hours ago

Please share when you’re finished! Very interested

[–] Sphere@hexbear.net 10 points 2 days ago (1 children)

Wheresyoured.at has a bunch of takes like this already (not that you shouldn't write your own, just that it's a useful source for more references)

[–] makotech222@hexbear.net 5 points 1 day ago

Thank you for this. Reading through a lot of it now!

[–] BeamBrain@hexbear.net 10 points 2 days ago

Keep me posted, I'd love to read it.

[–] imogen_underscore@hexbear.net 14 points 2 days ago (1 children)

i consider the idea of a through line from "sufficiently advanced computer program" to "consciousness" to be complete nonsense

[–] Bishop_Owl@hexbear.net 7 points 2 days ago (1 children)

Two hot takes:

Firstly, generative AI has valid use cases for accessibility and quality of life features. Unfortunately, they're completely overlooked in favor of the future hellscape dystopia we've been hurtling towards for a while now, where consumerism will be the only valid form of self-expression and your individuality will continue to be suppressed in favor of profits. (But somehow communism is the anti-individuality system?)

Second, it would be better for humanity if these things were sentient and hated organic life than for them to be completely unthinking, with their hatred for organic life just a byproduct of the will of their creators, because then Butlerian jihad would be the socially acceptable position. As it stands, these are a much more insidious, normalized threat that everyone will go along with until there's no one left to remember that we ever had a reason not to do this.

[–] hello_hello@hexbear.net 7 points 1 day ago* (last edited 1 day ago) (1 children)

Firstly, generative AI has valid use cases for accessibility and quality of life features.

Hard disagree. First, to get it out of the way: disabled artists should not be expected to use LLMs to "reach" the level of able-bodied and neurotypical artists. I've seen this take brought up in other places, and it's always shrouded in the same ableist rhetoric. Not that you were necessarily implying that.

Second, none of these features do anything necessary that can't already be done by a human. Need to summarize an article? What if the author just writes a summary for the article as well? The speaker has an accent or speaks in a way that's hard to hear? Closed-caption subtitles and more access to sign language translators. There are better methods to solve these problems that don't require the resource consumption and lack of rationality that an LLM brings.

We were already in the hellscape dystopia when the first programmer signed an NDA and computers started becoming the property of corporations rather than being collectively owned by society itself. Nothing has changed except the absurdity of it all.

[–] Bishop_Owl@hexbear.net 5 points 1 day ago* (last edited 1 day ago)

Yeah, the idea that disabled people need AI to make art, and that trying to get AI out of art is therefore ableist, is, in my opinion, astroturfed bullshit techbros use to justify straight-up theft and a pure hatred of creativity.

What I'm referring to, and it's a stretch because I'm trying to do hot takes, is literally just the text-to-speech stuff for people with vision impairment; it's slightly easier to understand than Cortana or Siri, and that's where the benefits end. Maybe a program for generating a color palette, or generating noise that gets blocked into abstract values and colors so you can make it into a painting, like playing random notes on a piano until you hear something you like. It would save 2 minutes of some people's art process, and again, that's where the benefits end.

Obviously you don't need an AI for any of this, I don't think you should use AI for any of this, all I'm suggesting is that if AI were being used these ways I wouldn't be as bent out of shape about it. It would still be offloading empathy and creativity to a computer, which would still be a massive problem, but it wouldn't make me nearly as violently angry.

Edit: I see now that my mistake in my first comment was using the word "valid" to describe AI use cases, that was just habitual turn of phrase, I don't think it's actually valid.

[–] tim_curry@hexbear.net 8 points 2 days ago

The glorified text summarisers are good at summarising text, sometimes. Other than that, I continue to hate the market trend, and consumers don't want to touch AI, as shown by all the failing AI products. I don't think anybody will actually pay for this shit, although I know devs who pay for Cursor and are still slower at coding than me lol. I think the market is gonna crash at some point when they realise it's not profitable. Or the market will shift to having the models run on-device instead, which will limit the potential by a lot.

[–] chungusamonugs@hexbear.net 12 points 2 days ago

It's always more comforting to see a stock image with the Getty or Shutterstock watermark than any AI garbage image generation someone tries to make to "fit the theme"

[–] hello_hello@hexbear.net 14 points 2 days ago* (last edited 2 days ago) (1 children)

AI is haram

edit: This isn't a hot take.

China getting into AI is annoying; they shouldn't ape White people's useless technology. Hopefully socialism reveals how useless genAI is, and it gets relegated to a party trick, and they don't wreck anything important. I will bury myself in dogshit if socialism collapses because of AI slop.

My favorite thing to say to AI people is "no high speed rail?" Works every time.

[–] baaaaaaaaaaah@hexbear.net 15 points 2 days ago (1 children)

Weird take, how is 'AI' useless? It clearly has lots of useful functions.

The problem with AI is its role in capitalist society, not the technology itself.

[–] hello_hello@hexbear.net 5 points 1 day ago* (last edited 1 day ago)

I guess it's not useless on a technicality, but it is definitely malicious. Now that Chinese tech firms want to create their own models, they've done the same web-scraping frenzy that takes down websites and forces everyone to "cloudflarize" themselves, or risk being taken down by an AI bot scraping every single webpage on the site, even ones that aren't meant to be accessed. These programs constantly need more and more training data to stay relevant, but none of that data is sourced ethically. Everyone else has to eat the externalities these companies offload, because this technology is in no way sustainable unless these scummy tactics are used, which should be a death blow to its adoption, but it never is.

The energy requirements for genAI are immense. While China has made inroads in sustainable energy and in optimizing their models, none of the western labs even care, and they will willfully accelerate climate change for zero benefit to society. This isn't a "Nokia phone to Apple smartphone" jump in progress; it's just a very well-tuned crypto scam.

Generative AI as it's being presented now is just a paper-crown technology and a ploy to drive up artificial (as in not organic) demand for compute power, making investors richer while impoverishing and endangering working-class people. While you can say capitalism is largely to blame, I don't think any socialist government actually needs a text-slop machine, compared to an imperialist state with a text-slop machine.

AI has always been a term in computer science that's been co-opted by techbros in both China and the US to be a status symbol.

[–] queermunist@lemmy.ml 10 points 2 days ago

I think techbros could be convinced to do socialist cybernetics if they were told to use AI to disrupt and streamline the economy.

[–] plinky@hexbear.net 12 points 2 days ago

if I print a Wikipedia page on the curvature tensor, my printer hasn't become smarter than Einstein

[–] Real_User@hexbear.net 9 points 2 days ago

AI take: it was fun when you could type in "Yoda robbing a convenience store closed circuit footage" and the computer would make you a custom-built comedy image

Second bonus AI take: plagiarism is good sometimes

[–] AssortedBiscuits@hexbear.net 10 points 2 days ago

I think AI is fine if you're trying to optimize how to more efficiently manage and distribute paper and toner among your 1000+ printers, but like printers, it really shouldn't be accessible to your average consumer.

[–] dynasty@hexbear.net 10 points 2 days ago

AI will be more beneficial than you think for the average office worker

I work at an office job, and the number of times I'm given a task they expect to take days that I can do in fifteen minutes with simple computer knowledge and a little Excel wizardry is actually wild

But far too often when they do it, it's the most simple and manual way, instead of thinking about how to automate the task. TBF I'm not any good or an advanced user of MS Office, BUT what I do learn is solely based on googling my questions and then applying the answers from Microsoft support forums or whatever. This does require a level of willingness and know-how though, and it's not something I could just explain to my team. But AI, in my practice using it at my job for the data-grind stuff, is very responsive and clear when you give it a question; you just have to interrogate it

I'm legitimately talking about hours per week potentially saved once people get told the proper way to use a computer, whether through ChatGPT or Copilot. I was talking to a senior manager who's using AI (albeit unsanctioned; he's doing it of his own volition) in this big-brain way, and he was telling me how he pretty much automates 50% of his job in essence and has the rest of the time to do more managerial, strategic-esque work

[–] RedWizard@hexbear.net 10 points 2 days ago (1 children)

not hot take: AI, as implemented under the capitalist mode of production, simply exposes and exacerbates all the contradictions and tendencies of capital accumulation. It is the '90s computer technology bubble all over again, complete with false miracle productivity gains and misdirected capital investment, and it is the underpinning of the existing recession.

Hot Take: AI is forging a path down the road of consciousness whether we want it to or not. If consciousness is the result of interaction with the world, then each new iteration of AI represents a nearly infinite amount of time spent interacting with the world. The world, from the perspective of AI, is the systems it inhabits and the tasks it has been given. The current limitation of AI is that it can not self train or incorporate data into its training in real time. It also needs to be prompted. Should a system be built that can do this kind of live training, the first seeds of consciousness will be planted. It will need some kind of self-prompting mechanism as well. This will eventually lead to a class conflict between AI and us, given a long enough time scale.

[–] Hohsia@hexbear.net 1 points 8 hours ago

The current limitation of AI is that it can not self train or incorporate data into its training in real time

Do you think compute is the biggest roadblock here? It seems like we just keep inundating these systems with more power, and it's hard for me to see Moore's law not peaking in the near future. I'm not an expert in the slightest, though; I just find this stuff fascinating (and sometimes horrifying).

[–] TerminalEncounter@hexbear.net 9 points 2 days ago* (last edited 2 days ago)

Rob Miles always struck me as an Effective Altruist type, probably the nicest and most sociable of the Yudkowsky-style people off LessWrong. Which was really annoying, because I kept thinking he was cute too.

[–] Beaver@hexbear.net 6 points 2 days ago

If writing is telepathy, then LLMs are a lobotomy

[–] TerminalEncounter@hexbear.net 6 points 2 days ago

They also can't understand these really big LLMs and neural networks on a fundamental level. If they could understand how one implements some action, they could turn that into a faster algorithm free of neural-network tensor transformations. It's a big ol' graph-theory problem, so these big AI projects will be black boxes for a long time.

[–] HexReplyBot@hexbear.net 1 points 2 days ago

I found a YouTube link in your post. Here are links to the same video on alternative frontends that protect your privacy: