No, it's going to be bad in really stupid ways that aren't as cool as what happens when it goes bad in the movies.
If/when we actually achieve Artificial Intelligence, then maybe it would be a concern.
What we have today are LLMs, which are big dumb parrots that just say things back to you that match a pattern. There is no actual intelligence.
Calling our current LLMs "Artificial Intelligence" is just marketing. The underlying techniques have been around for a while; what changed is that we finally have the processing power to run them at this scale.
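To make the "pattern matching" point concrete, here's a toy sketch of the core loop: predict a likely next word from patterns seen in training data, append it, repeat. The training text and the word-counting approach are made up for illustration; real LLMs are giant neural networks over billions of documents, not counting tables, but the "continue the pattern" mechanic is the same idea.

```python
# Toy illustration of "continue the pattern": count which word tends to follow
# which in some training text, then generate by repeatedly emitting the most
# common continuation. This is NOT how GPT-class models work internally; it
# only illustrates the "predict the next token" loop.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the dog sat on the rug"
words = training_text.split()

# Bigram table: for each word, how often each other word follows it.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

# Generate by parroting the most common continuation, one word at a time.
word = "the"
output = [word]
for _ in range(6):
    if word not in next_word_counts:
        break
    word = next_word_counts[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # prints something like: the cat sat on the cat sat
```

There's no understanding anywhere in that loop, just statistics about what usually comes next; scale it up enormously and you get text that sounds fluent.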
Once everyone realizes they've been falling for a marketing campaign and that we're not very much closer to AI than we were before LLMs blew up, then LLMs will just become what they actually are: a tool that enhances human intelligence.
I could be wrong though. If so, I, for one, welcome our new AI overlords.
LLMs are a form of AI. They are just not AGI.
I don't think we're any closer to AGI due to LLMs. If you take away all the marketing misdirection, to achieve AGI you would have to have artificial rational thought.
LLMs have no rational thought. They just don't. That's not how they're designed.
Again, I could be wrong. If so, I was always in support of the machines.
I don't think we're any closer to AGI
Never said we were. Just that LLMs are included in the very broad definition that is "AI".
what movie?
Terminator? no, our level of AI is ridiculously far from that
The Big Short? yes, that bubble is going to pop and bring the world economy down
"as bad"... not quite, and not in the same way. As other people have said, there's no conscience to AI and I doubt there will be any financial incentive to develop one capable of "being evil" or doing some doomsday takeover. It's a tool, it will continue to be abused by malicious actors, idiots will continue to trust it for things it can't do properly, but this isn't like the movies where it is malicious or murderous.
It's perfectly capable of, say, being used to push people into personalized hyperrealities (consider how political advertising was microtargeted in the Cambridge Analytica scandal, and consider how convincing fake AI imagery can be at a glance). It's a more boring dystopia, but a powerful bad one nonetheless, capable of deconstructing societies to a large degree.
Short answer, no.
Long answer: We are a long way off from having anything close to the movie-villain level of AI. Maybe we're getting close to the paperclip-manufacturing AI problem, but I'd argue that even that is often way overblown. The reason I say this is that such arguments are quite hand-wavy about the leaps in capability which would be required for those things to become a problem. The most obvious of which is making the leap from controlling the devices an AI is intentionally hooked up to, to devices it's not. And it also needs to make that jump without anyone noticing and asking, "hey, what's all this then?" As someone who works in cybersecurity for a company which does physical manufacturing, I can see how it would get missed for a while (companies love to under-spend on cybersecurity). But eventually enough odd behavior gets picked up. And the routers and firewalls between manufacturing and anything else do tend to be the one place companies actually spend on cybersecurity. When your manufacturing downtime losses are measured in millions per hour, getting a few million a year for NDR tends to go over much better. And no, I don't expect the AI to hack the cybersecurity; it first needs to develop that capability. AI training processes require a lot of time failing at doing something, and that training is going to get noticed. AI isn't magically good at anything, and while the learning process can be much faster, that speed is going to lead to a shit-ton of noise on the network. And guess what, we have AI and automation running on our behalf as well. And those are trained to shut down rogue devices attacking the cybersecurity infrastructure.
"Oh wait, but the AI would be sneaky, slow and stealty!" Why would it? What would it have in it's currently existing model which would say "be slow and sneaky"? It wouldn't, you don't train AI models to do things which you don't need them to do. A paperclip optimizing AI wouldn't be trained on using network penetration tools. That's so far outside the need of the model that the only thing it could introduce is more hallucinations and problems. And given all the Frankenstein's Monster stories we have built and are going to build around AI, as soon as we see anything resembling an AI reaching out for abilities we consider dangerous, it's going to get turned off. And that will happen long before it has a chance to learn about alternative power sources. It's much like zombie outbreaks in movies, for them to move much beyond patient zero requires either something really, really special about the "disease" or comically bad management of the outbreak. Sure, we're going to have problems as we learn what guardrails to put around AI, but the doom and gloom version of only needing one mistake is way overblown. There are so many stopping points along the way from single function AI to world dominating AI that it's kinda funny. And many of those stopping points are the same, "the attacker (humans) only need to get lucky once" situation. So no, I don't believe that the paperclip optimizer AI problem is all that real.
That does take us to the question of a real general-purpose AI being let loose on the internet to consume all human knowledge and become good at everything, which then decides to control everything. And maybe this might be a problem, if we ever get there. Right now, that sort of thing is so firmly in the realm of sci-fi that I don't think we can meaningfully analyze it. What we have today, fancy neural networks, LLMs and classifiers, puts us in the same ballpark as Jules Verne writing about space travel. Sure, he might have nailed one or two of the details; but the whole thing was so much more fantastically complex and difficult than he had any ability to conceive. Once we are closer to it, I expect we're going to see that it's not anything like we currently expect it to be. The computing power requirements may also limit its early deployment to only large universities and government projects, keeping its processing power well centralized. General-purpose AI may well have the same decapitation problems humans do. It can have fantastical abilities, but it needs really powerful data centers to run on. And those bring all the power, cooling and not-getting-blown-the-fuck-up-with-a-JDAM problems of current AI data centers. Again, we could go back and forth making up ways for AI to techno-magic its way around those problems, but it's all just baseless speculation at this point. And that speculation will also inform the guardrails we build in at the time. It would boil down to the same game children play where they shoot each other with imaginary guns and have imaginary shields, and they each keep re-imagining their guns and shields to defeat the other's. So ya, it might be fun for a while, but it's ultimately pointless.
AI (once it is actually here) is just a tool. Much like other tools, its impact will depend on who is using it and what for.
Who do you feel has the most agency in our current status quo? What are they currently doing? These will answer your question.
It's the 1%, and they will build a fully automated army and get rid of all but the sexiest of us to keep as sex slaves.
This is worth it because capitalism is the most important thing on planet earth. Not humanity, capitalism. Thus the vasectomy. The 1% can make their own slaves. And with AI they will.
The "just a tool" response is such a cop out. A lot of things are just tools and still have horrifying implications just by existing **
I don't feel like you read the entire comment you replied to.
Yes, AI is a tool with horrifying implications. Machine learning has some interesting use cases, but if one had any hope that it would be implemented well, that should be dashed by the way it is run by the weirdest bros imaginable with complete contempt for the concept of consent.
No, I did. I'll elaborate. Some (many) tools are awful and serve no useful purpose, or serve only wanton destruction.
No. The movies get it all wrong. There won't be terminators and rogue AIs.
What there will be is AI slop everywhere. AI news sites already produce hallucinated articles, which other AIs refer to and use as training data. Soon you won't be able to believe anything you read online, and fact-checking will be basically impossible.
Soon you won't be able to believe anything you read online.
That's a bit too blanket of a statement.
There are, always were, and always will be reputable sources. Online or in print. Written or not.
What AI will do is increase the amount of slop disproportionately. What it won't do is suddenly make the real, actual, reputable sources magically disappear. Finding them may become harder, but people will find a way - as they always do. New search engines, curated indexes of sites. Maybe even something wholly novel.
.gov domains will be as reputable as the administration makes them - with or without AI.
Wikipedia, so widely hated in academia, has been shown to be at least as factual as Encyclopedia Britannica. It may be harder for it to deal with spam than it was before, but it mostly won't be fazed.
Your local TV station will spout the same disinformation (or not) - with or without AI.
Using AI (or not) is a management-level decision. What use of AI is or isn't allowed is as well.
AI, while undeniably a gamechanger, isn't as big a gamechanger as it's often sold as, and the parallels between the AI bubble and the dot-com bubble are staggering, so bear with me for a bit:
Was dot-com (the advent of the corporate worldwide Internet) a gamechanger? Yes.
Did it hurt the publishing industry? Yes.
But is the publishing industry dead? No.
Swap "AI" for dot-com and "credible content" for the publishing industry and you have your boring, but realistic answer.
Books still exist. They may not be as popular, but they're still a thing. CDs and vinyl as well. Not ubiquitous, but definitely chugging along just fine. Why should "credible content" die, when the disruption AI causes to the intellectual supply chain is so much smaller than suddenly needing a single computer and an Internet line instead of an entire large-scale printing setup?
Great response, very well written!
Unless we have a bot dedicated to tracing the origin of online information that can roughly evaluate its accuracy against real events.
First it's gonna crash the economy because it doesn't work, then it's gonna crash the economy because it does.
Short answer: No one today can know with any amount of certainty, because we're nowhere close to developing anything resembling "AI" in the movies. Today's generative AI is so far from artificial general intelligence that it would be like asking someone from the Middle Ages, when the only form of remote communication was letters and messengers, whether social media will ruin society.
Long answer:
First we have to define what "AI" is. The current zeitgeist meaning of "AI" refers to LLMs, image generators, and other generative AI, which is nowhere close to anything resembling real consciousness and therefore can be neither evil nor good. It can certainly do evil things, but only at the direction of evil humans, who are the conscious beings in control. Same as any other tool we've invented.
However, generative AI is just one class of neural network, and neural networks as a whole were once the colloquial definition of "AI" before ChatGPT. There have been simpler, single-purpose neural networks before it, and there will certainly be even more complex neural networks after it. Neural networks are modeled after animal brains: nodes are analogous to neurons, which either fully fire or don't fire at all depending on input from the neurons they're connected to; connections between nodes are analogous to connections between axons and dendrites; and neurons can up- or down-regulate input from different neurons, similar to the weights applied in neural networks. Obviously, real nerve cells are much more complex than the simple mathematical representations in neural networks, but neural networks do show traits similar to networks of neurons in a brain, so it's not inconceivable that in the future we could develop a neural network as complex as or more complex than a human brain, at which point it could start exhibiting traits that are suggestive of consciousness.
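To make the neuron analogy concrete, here's a minimal sketch of a single perceptron-style unit (a toy illustration only, not how real networks are implemented or trained): the weights play the role of the up- and down-regulated synaptic strengths described above, and the unit either fires or doesn't based on its weighted inputs.

```python
# Toy sketch of a single artificial "neuron": a weighted sum of inputs pushed
# through a hard threshold. Positive weights up-regulate an input's influence,
# negative weights down-regulate it, loosely mimicking synaptic strength.

def neuron(inputs, weights, bias):
    """Return 1 ("fires") if the weighted input crosses the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: weights chosen by hand so the unit only fires when both inputs are active.
weights = [0.6, 0.6]
bias = -1.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```

Real networks stack huge numbers of these units, swap the hard threshold for smooth activation functions, and learn the weights from data rather than hand-picking them, but the "weighted inputs decide whether the unit fires" mechanic is the part that loosely mirrors biological neurons.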
This brings us to the movie definition of "AI," which is generally "conscious" AI as intelligent as or more intelligent than a human: a being with an internal worldview, independent thoughts and opinions, and an awareness of itself in relation to the world. These are currently traits only brains are capable of, and it's only at that point that concepts like "good" or "evil" can maybe start to be applicable. Again, just because neural networks are modeled after animal brains doesn't prove they can emulate a brain as complex as ours, but we also can't prove they definitely won't be able to with enough technical advancement. So the most we can say right now is that it's not inconceivable, and if we do ever develop consciousness in our AI, we might not even know until much later, because consciousness is difficult to assess.
The scary part about a hypothetical artificial general intelligence is that once it exists, it can rapidly gain intelligence at a rate orders of magnitude faster than the evolution of intelligence in animals. Once it starts doing its own AI research and creating the next generation of AI, it will become uncontrollable by humanity. What happens after or whether we'll even get close to this is impossible to know.
The movies never said the internet would be this bad, so I think AI will probably be worse.
It will be as bad as it is now with an even higher intensity.
We will see it continue to be used as a substitute for research, learning, critical or even surface level thinking, and interpersonal relationships.
If and when our masters create an AI that is actually intelligent, and maybe even sentient as depicted in movies, it will be a thing that provides biased judgments behind a veneer of perceived objectivity due to its artificial nature. People will see it as a persona completely divorced from the prejudices of its creators, as they do now with ChatGPT. And whoever can influence this new "objective" truth will wield considerable power.
I agree 99% (only disagreement: those people aren't our masters, they are our enemies).
Trust that I agree with you on this. I use the word "master" intentionally though, as we are subjected to their whims without any say in the matter.
There are also many of us who are (unwittingly) dependent on or addicted to their products / services. You and I both know plenty of people who give in to almost every impulse incentivized by these products, especially when in the form of entertainment.
Our communities are now chock-full of slaves and solicitors - a master is an enemy, yes, but only when his slaves know who owns them.
It will be worse than the movies because they don't portray how every mundane thing will somehow be worse. Tech support? Worse. Customer service? Worse. Education? Worse. Insurance? Worse. Software? Worse. Health care? Worse. Mental health? Worse. Misinformation? Pervasive. Gaslighting? Pervasive.
Worse: In addition to everything else it'll be extremely annoying
Not unless our elected officials have a deluded belief in the competence of AI and assign it to tasks it should never be used for.
I heard a different take yesterday from Cory Doctorow: that the real concern is global economic collapse!
Not something I'd considered, but I would say a frightening possibility!
We've had AI in our everyday life for well over two decades now. What kind of AI specifically are you worried about?
When movies depict "AI", "robots", "aliens", or even talking animals, they always depict weird humans instead because authors are stupid.
Real AI isn't human. It's an intelligent machine, yet not sentient. It does not have goals or feelings, it isn't alive, but it is knowledgeable and intelligent.
It will be like in the game "Universal Paperclips".
AI will likely be similar to Asimov's robot series, but just a bit grittier.
- A useful, almost-human thing that we aren't sure is a person or not
- Ubiquitous and relatively harmless
- Winds up killing millions if we put it in charge.