this post was submitted on 17 Jan 2026
32 points (94.4% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

So I'm curious. We're all here because we at least hate the current state of AI: hallucinating facts, being used to undress women and children, and all the fuckery that goes along with it.

I grew up watching Star Trek: The Next Generation, which takes place on a ship with a perfect AI that does everything right and basically nothing wrong. It never hallucinates information; it's always right. It has never been used to undress people against their will; the Holodeck is kind of an extension of it, though, and was used for that on Deep Space Nine, when operated by a Ferengi (a capitalist alien race in a world where humans are communist). But the Enterprise holodeck would never do that. The shipwide AI also does not traditionally carry on conversations. The one time it does, the human was hallucinating, sort of: the doctor was in a pocket universe, people were disappearing, and at one point the AI told her she was the only crew member on the Enterprise. No, that did not make sense, but it was still how it was, because in her pocket universe it was true.

So the question is... would you want a perfect AI that was incapable of lying or harbouring anything untrue? Basically you could ask it anything and it would give you the correct answer.

The one fault I can find with that fictional AI is when Data (the android), dressed like Sherlock Holmes, asked the computer to "create an enemy which rivals my intelligence." He meant the intelligence of Sherlock Holmes, whom he was cosplaying, but the computer made a self-aware, malicious AI that got out of the Holodeck and tried to destroy the ship... because it was told to do so. Other than that, though.

...I'm not trying to mislead anyone, so I will drop the other shoe and answer the obvious question now. I've always felt that to get to that level of AI, we need to wade through the shit we're in now. So yeah, before you ask, that's kind of the point of the thought exercise. However, I will also say that I do not think we will get to Star Trek AI; I think we will get to Terminator AI, destroying the world rather than lifting people up. I think maybe in the Star Trek universe, AI didn't really take off until people realised that war wasn't the answer, after WW3/the Eugenics Wars, and so they were making AI to make things better, not worse. We are not in that timeline. I look at what is happening now, IRL, and at the timeline in the Terminator franchise, and it's clear to me that the latter is more realistic.

That said, I still wonder if anyone would want AI if it did not have any of the problems.

top 35 comments
[–] shirro@aussie.zone 5 points 6 hours ago* (last edited 1 hour ago)

Star Trek, at least before Roddenberry's vision was corrupted, was a fictional post-scarcity socialist utopia. The AI served the crew and society. It wasn't there to exploit and replace them on behalf of a handful of Ferengi billionaires.

Star Trek's ship AI is nothing like the huge financial scams, exploitation and concentration of power, wealth and political influence we are currently seeing. AI hate mostly isn't about machine learning and its applications. It's about the people behind it.

When we have warp drives, fusion, replicators, universal basic income and free universal health care we may have a different view.

[–] smiletolerantly@awful.systems 5 points 11 hours ago

Mate I'd live in Banks' Culture without a heartbeat's thought if I could.

The problem isn't AI as a concept, it's the underlying societal disregard for ethics in the face of profits.

[–] brucethemoose@lemmy.world 4 points 10 hours ago* (last edited 10 hours ago)

There's nothing realistic about Star Trek.

This needs to be hammered more. It's an awesome setting for exploring contemporary social issues; that's the point. But no matter how technologically advanced we get, it's just not based on even plausible physics/engineering. Neither is Terminator.


I think the most plausible 'extrapolation' I've seen is Orion's Arm:

https://www.orionsarm.com/eg-article/486e75a54a1ae

https://www.orionsarm.com/xcms.php?r=oa-timeline

And, as an aside, I adore this Mass Effect story: https://archiveofourown.org/works/42006774/chapters/105462066

They extrapolate alteration of human biology, nanotechnology, planetary engineering, STL space travel, and AI, and that future looks nothing like more-or-less unaltered baseline humans walking around on a space boat with glorified voice assistants. Consciousness is uploaded and downloaded. Artificial realities are vast. Whole celestial bodies are manipulated for all sorts of purposes. People can inhabit an array of 'bodies' and realities that make stuff like starship bridges/hallways and colonies on planets seem silly. There are 'strata' of consciousness literally orders of magnitude apart, states of being incomprehensible to each other, all coexisting in an expanding bubble of civilization that's still younger and (in some ways) more primitive than the TNG Federation.

And we aren't that far from that. Including the "techpocalypse" it predicts.

Terminator and Star Trek, and classic sci-fi in general, don't depict this because they're stories aimed at humans living right now, about their interpersonal relationships and contemporary social/political issues. More realistic extrapolations are tougher settings for that.


So the question is… would you want a perfect AI that was incapable of lying or harbouring anything untrue? Basically you could ask it anything and it would give you the correct answer.

Where I'm going with this is that, to me, this is not a realistic question. Practically, it'd be silly to relegate a "true" AI to being a dumb voice assistant on a space boat; they're conscious beings, even if they're shackled or highly specialized.

They're not so different from human beings at that point.

[–] Tattorack@lemmy.world 4 points 11 hours ago (1 children)

In Star Trek, AI is used in many applications that a humanoid could not reasonably handle entirely manually (not without serious augmentation, which is heavily frowned upon).

Consider the amount of heavy lifting the ship's computer does to navigate the galaxy at warp speeds, or to process incredibly advanced calculations almost instantly. This would go significantly slower if a humanoid had to do it all manually.

And yet...

The crew of the Enterprise-D aren't merely prompting the ship's AI. Yes, there is that too, but they're typically doing it while also being very hands-on with the ship's systems. The crews in Starfleet have deep expertise in computer programming, to the point of physically rearranging computer systems if need be.

There is no "vibe coding" in Starfleet.

But then there is the Holodeck.

I'm OK with certain applications of the Holodeck, like spontaneously creating virtual activity or recreation areas. These things aren't considered works of art, and aren't considered worthy substitutes for the real thing either. You find them on ships and stations because they're the best available substitute, the alternative being crewmen slowly going mad from seeing nothing but sterile corridors all day. I don't think I've heard of recreational Holodecks for regular individuals (unlikely to be due to some regulation; more likely that fulfilment is achieved in other ways if you're surrounded by a real planet).

However, Voyager sort of fucks with this idea with the crew "creating holonovels". This is essentially a vibe coder's dream: being able to create fully interactive, narrative-driven videogames with nothing but prompts. That said... even in Voyager, using someone's likeness is heavily frowned upon. So there is still an expectation of originality, rather than merely rearranging a reference dataset. I can maybe forgive Voyager on the basis of its premise; they're stuck on the other side of the galaxy, far from home. What else would they even do? But that's about as lenient as I can be.

On a societal level, however, real skill in a craft is still greatly appreciated, perhaps even preferred, over something computer generated. Despite there being replicators and no money, there are still bars, pubs, restaurants, wine makers, beer brewers, jewelers, you name it; people still go to these places to get a real cooked meal and have a real experience in a real location. Craft, skill, things humanoid-created instead of generated: these are still valued enough for whole streets full of shopkeepers and hospitality providers to exist on Federation worlds.

"AI" as it exists now is created to deliberately replace humans skill, to take from human skill without offering credit or appreciation, to make humans obsolete in the creation of immediate end results that can be sold as products or give instant gratification. The Federation as a society is shifted massively away from such a mentality.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 9 hours ago

To be fair, Voyager's holo-novelist is the Doctor, who is also an AI. So he could be writing very quickly since he lives in the computer and isn't constrained by human limits.

Or was Paris doing it as well? I don't quite remember.

[–] glasratz@feddit.org 3 points 12 hours ago* (last edited 11 hours ago) (1 children)

I have a theory here:

There is definitely problematic gen AI in the Star Trek universe, but it's only ever addressed through the holodeck. It's made pretty clear that everyone can create programmes there with simple voice prompts. It has also been shown that there are no formal rules for using the likeness of living people in those programmes. This is an oversight in my opinion, because that problem would be a common concern. The existence of this kind of technology suggests that any kind of entertainment media can be easily created on a prompt, even through the ship's computers.

On the other hand, we rarely hear about contemporary human-made literature. When literature is mentioned, it's usually alien or 20th century. Wouldn't this suggest that it plays no role anymore? Maybe there are still human writers, but the general public isn't interested in such things, since they can get whatever they want from a computer.

So my bottom line is that, concerning generative AI, Star Trek actually shows how problematic it is, probably by accident. I wouldn't want Star Trek-level AI, but at least it doesn't kill everyone.

[–] cerebralhawks@lemmy.dbzer0.com 0 points 9 hours ago (1 children)

Again (unrelated to your comment sorta), Star Trek avoids stuff that isn't in the public domain, which is why Sherlock Holmes was commonly used in the Holodeck.

As far as Holodeck stuff of existing people, they did exactly that on DS9. Quark was offering pornographic holo-vids of Kira, the station's second-in-command, and when she found out, she demanded he delete the program. I think he also paid someone to scan her for the program.

[–] glasratz@feddit.org 1 points 6 hours ago

Next Generation already had Barclay recreate caricatures of Enterprise officers for stress relief in "Hollow Pursuits". Here they stressed that there were actually no regulations for recreating officers on the holodeck. Kind of bad writing.

[–] umbrella@lemmy.ml 30 points 1 day ago (2 children)

i'd assume star trek universe ai is not made under capitalism to devalue labor, keep watch on us and consume resources we don't have so a couple of people can get filthy rich.

[–] Thorry@feddit.org 4 points 1 day ago

It seems like the future is pretty chill about privacy. Like in one episode they know down to the millisecond when Riker was abducted from the ship. However, that information only became available when a ranking officer asked directly for it in the course of an investigation. It's like, sure, we have all that stuff spying on people all the time, but it's not like we are going to use that data except in an emergency.

That's the big difference between that fictional sci-fi future and the future we are heading for. These data companies don't even try to hide it. They are like, yeah, we want your data to sell to our 1094 partners. They have all of the worst features and none of the good ones.

[–] cerebralhawks@lemmy.dbzer0.com 3 points 1 day ago (1 children)

That's the idea. So it's better, maybe? But still AI and not human powered? So I'm wondering where people land.

But on the other hand, like I said, we're not in the Star Trek timeline where technology uplifts us, we're in the Terminator timeline where it oppresses and kills us (ChatGPT has pushed people to suicide).

[–] umbrella@lemmy.ml 3 points 1 day ago

tech is tech. i think it boils down to whether its used to help or harm the humans that create and use it.

[–] CarbonIceDragon@pawb.social 19 points 1 day ago

The thing with comparing sci-fi AI to modern LLMs is that virtually no science fiction AI that I know of actually acts the way LLMs do. They tend to be good at things modern AI is bad at, like logical reasoning or advanced math, but bad at things that AI can already do, like generate images that at least look like they could be human art, or write text that appears emotionally charged. They also tend to be directly programmed in ways that a singular (usually genius, but still) individual can pick through and understand, rather than being trained in a black box sort of manner that is very difficult for a human to reverse-engineer.

That isn't surprising; sci-fi writers aren't oracles, after all, and just having AI of some kind probably makes a story more realistic than assuming the technology never gets invented even far into the future. But in my view these kinds of sci-fi AI are basically a different, hypothetical technology going for the same end result. As such, I don't really expect even a very advanced iteration on what we have to look like Star Trek AI, any more than modern cars fly or run off miniature nuclear reactors the way sci-fi of decades ago imagined the cars of the future. I don't think it will look like Skynet either. I do think we might get some interesting science fiction in the coming decades exploring what a very advanced version of the technology we do have might end up like, though. It probably won't be terribly accurate either, but I'd bet it will be closer than works where the basis for extrapolating AI tech is "what if the calculator could talk and think".

[–] Ilixtze@lemmy.ml 7 points 1 day ago (1 children)

To be honest I was a fan of Star Trek when I was a kid, but as an adult I find the portrayed human world a little stale. Star Trek has the flaw that it's presented like a world where culture feels "flattened": human culture in Star Trek feels insipid and slightly militarized, and this flaw is more apparent when they try to world-build Earth or show more aspects of human culture; it all feels underdeveloped and glazed over.

An exception to this, funnily enough, would be Deep Space Nine, which I remember was a lot more interesting and, in contradictory fashion, more human when trying to portray the nuances of cultures working together.

I am against generative AI because I see it as a mechanism for corporations to flatten and privatize culture, to continue and refine a decades-old process that co-opts the voices of ideology and subjectivity from the people and gives full licence to corporations to censor or mediate it and centralize it. It is the Uberization of culture, and it won't get any better, not even with the local models (mainly used by 4chan pedophiles); it will get way, way worse if these corporations have their way.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 16 hours ago (1 children)

Good answer.

You're right though, Star Trek is an idealised version of reality that doesn't really ring true or hold much depth to the critical thinker. It's nice to dream about, but it's sure not our reality.

I'm against GenAI because it dilutes artistic value. Other than Data making art (IIRC he paints; I know for a fact he did in ST: Picard but I wanna say he did on TNG as well; The Doctor from Voyager also made music), Trek AI doesn't create art. Trek itself avoids a lot of art that isn't public domain to save on licensing, which is why everyone likes classical or jazz or some "old" stuff and nobody likes rock, rap, country, or anything from the last 100 years or so.

[–] Ilixtze@lemmy.ml 1 points 49 minutes ago

I always find it odd when people defend AI by saying that AI would be good in a future communist utopia. It always reminds me of Christians saying the burdens of this life will be worth it when they find happiness in heaven! I've lived under capitalism all my life and my children will probably live under capitalism; we need to work out solutions for our messy, imperfect world, instead of hoping that an almighty utopia will do all the work for us.

[–] AngryishHumanoid@lemmynsfw.com 7 points 1 day ago (1 children)

Well now we have LLMs, not AI. And the Enterprise computer, advanced though it was, was also not considered a true AI. At the time of the Enterprise D I believe the only true AIs were Data, Lore, and for a brief time Lal.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 1 day ago (3 children)

Why was Data (and other Soonian androids) considered AI but not the ship?

I think the answer is "his positronic brain" but I feel like the Enterprise would have more than enough space to house one. It also successfully beamed Data up and down, so it could have also replicated one. (The phaser on kill/disintegrate, the transporter, and the replicator are basically all the same device.)

I forgot to answer the first part. That's the plot of the entire episode "Measure of a Man", and they even bring up the analogy of allowing the Enterprise computer to resign from Starfleet. Data is self-aware.

Two points to that: 1) it wasn't a question of "space" to house a positronic brain; no one else could create one. 2) While it could be transported, one could not be replicated. Canonically, some Star Trek tech was considered too advanced to be replicated. Why there was a difference between "can be transported" and "can be replicated" is a plot hole that has existed as long as Star Trek has, so I don't think we need to address it here, heh.

[–] CobblerScholar@lemmy.world 3 points 1 day ago

Barring the one episode where it did because of outside reasons, when did the ship try to protect itself for its own sake? All those other times the auto-destruct sequence counted down, when did the ship itself stop the countdown to protect its own existence?

[–] Windex007@lemmy.world 6 points 1 day ago (1 children)

Calling the ship voice command interface an AI is quite a stretch... even with the much more lenient definitions getting thrown around these days.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 16 hours ago (1 children)

So what is the ship's computer, if not AI? It's shown it can think for itself. It's more advanced than the AI we have now. Are you saying it's lesser?

[–] Windex007@lemmy.world 3 points 10 hours ago* (last edited 10 hours ago)

Always struck me as a rich command interface with a natural language processor slapped on the front.

And, taking the technobabble for what it's worth, it's always described as having deterministic outputs. I don't think it's fair to say it's ever evidenced as having "thought for itself". Any time one might be tempted to suggest it had, I'd argue it was still following a deterministic algorithm, written and designed for whatever it was doing... rather than relying on a black-box model to generate outputs for an unanticipated input.

You can have generative algorithms without things like LLMs or diffusion models.

Categorizing it as "lesser" is extremely subjective. Lesser in what way? Do I think that it's functionally superior as a source of information than an LLM? Yes. As an operational interface for a machine (the ship)? Yes. Do I think it has the flexibility of an LLM? No.

[–] Kirk@startrek.website 6 points 1 day ago* (last edited 1 day ago) (1 children)

I thought I was in !startrek@startrek.website for a moment...

My take is that even if you consider LLMs to fall under the umbrella of "AI" (I don't), they appear to be a completely different technology than the Enterprise-D computer, which is more like highly advanced natural language processing.

would you want a perfect AI that was incapable of lying or harbouring anything untrue?

It's not really possible for an AI to know what's true with 100% accuracy, but I do think it's technically possible to invent an AI that is honest. It's important to remember that LLMs are actually "hallucinating" 100% of the time. The only reason they are ever correct is because the training data was correct.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 1 day ago (1 children)

They have said that the Enterprise computers contain the whole of human knowledge.

The text of Wikipedia (EN) alone is something like 16GB, and that can be archived. Thus, you can have most of that human knowledge on any smartphone. Most of them can access it, but there are devices being sold that have Wikipedia EN downloaded, plus a bunch of survival stuff. On a Raspberry Pi. I doubt the microSD card is bigger than 32GB and might just be 16GB.

[–] Kirk@startrek.website 5 points 1 day ago

Sure, but Wikipedia does not care about "truth". "It's true" is not a valid citation on Wikipedia (and "knowledge" is not the same as "truth"). Wikipedia is built on references from experts, people who can be honest while still being factually incorrect.

It's an important distinction because an LLM can be correct but it can never be honest. The hypothetical Enterprise-D computer appears to be able to be honest, even when incorrect.

[–] mrmaplebar@fedia.io 5 points 1 day ago (2 children)

I don't think there is much evidence of the Enterprise's computer being used to do much more than provide or process basic information. You basically never see the characters in Star Trek rely on the computer for creativity or solutions at all, from what I remember. On top of that, we don't know how the computer on the Enterprise was created or what the ethical implications of it may be.

Star Trek is a show that delves into ethical dilemmas often, and the problem with today's generative AI is an ethical and legal one, not necessarily a technological one.

Today's generative "AI" is much more like the Borg than Commander Data or the Enterprise-D: it is powered by the forceful assimilation of human culture for the benefit of those that own and control it. We are also quite literally being told (by the stakeholders) that resistance against this new technology is futile and that we must adapt to a new reality in which our work will be assimilated whether we like it or not. There is no consent, let alone compensation. They are simply on a neverending mission to take everything within reach for their own benefit...

[–] glasratz@feddit.org 0 points 12 hours ago (1 children)

The Holodeck has generative AI and is often used as such. You can give it a voice prompt and it will create a full storyline, characters and scenery out of it. You don't seem to need any kind of special training, as crew members can easily create their own programmes. That this may be problematic has been the storyline of several episodes.

[–] mrmaplebar@fedia.io 1 points 12 hours ago (1 children)

How was the holodeck's AI trained? Was it trained like today's models, on the non-consensual assimilation of all art and culture? And are there laws around its use?

I don't know.

[–] glasratz@feddit.org 1 points 11 hours ago

"Hollow Pursuit" suggests that there's no awareness for any kind of social problems surrounding the holodeck and content generated there. Which is kind of silly. So I think the correct answer is probably that the authors did not think about it at all. But humans in Star Trek live in a quasi-communist society, so it would probably just be common practice that creative works are owned by the public. You probably don't have much of a choice if you want to publish your works. However, you practically never see any contemporary human literature or something like "holo novels". So my personal ad-hoc theory is that gen AI at this level has killed the literary process as a whole among the human race.

[–] SharkAttak@kbin.melroy.org 1 points 1 day ago

omg I like the Borg-AI example, it's so fitting.

[–] gustofwind@lemmy.world 5 points 1 day ago

Starfleet has rigorous education and training

AI does not replace their critical thinking

[–] teft@piefed.social 1 points 1 day ago (1 children)

AI that was incapable of lying

What does that actually entail? An AI that is programmed with only right-wing talking points is going to think those are objective truths even if they aren't, so if you put guardrails on it to only say the "truth", you're going to get lies.

I think the best we can hope for is minimal bias in the AI we develop.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 16 hours ago

I don't think bias was part of the equation on Star Trek because we didn't see humans/Earthlings fundamentally disagreeing about basic science like we have in the real world.

But, yes, the AI does have bias on Star Trek. I'm sure it would describe the Founders from DS9, or the Borg from multiple series, in a less-than-objective manner.

A better example would be if, post-Voyager, you were to ask it how Janeway handled the Omega Directive. It would correctly tell you that Janeway destroyed the warp technology of a whole civilisation, setting them back a few generations, because that civilisation used a kind of warp that created omega particles, which made traditional warp travel impossible. The Omega Directive, invoked only that one time AFAIK, says a captain/crew can violate the Prime Directive to stop a civilisation from using omega particles. The recounting of the incident would be very pro-Starfleet. The civilisation Janeway sabotaged would have a very different account of it.

I'd also bet on the ship computer not telling you half the shit Sisko got up to on DS9. I'm not talking about "In the Pale Moonlight", where he deleted the entire log (thus, the event was never recorded). How about when he killed a whole planet to get at the Maquis? Starfleet probably classified that in a big hurry.