Short answer: no.
Long answer: We are a long way off from having anything close to the movie-villain level of AI. Maybe we're getting close to the paperclip manufacturing AI problem, but I'd argue even that is often way overblown. The reason I say this is that such arguments are quite hand-wavy about the leaps in capability that would be required for those things to become a problem. The most obvious is the leap from controlling the devices an AI is intentionally hooked up to, to controlling devices it's not. And it needs to make that jump without anyone noticing and asking, "hey, what's all this then?"

As someone who works in cybersecurity for a company that does physical manufacturing, I can see how it would get missed for a while (companies love to under-spend on cybersecurity). But eventually enough odd behavior gets picked up. And the routers and firewalls between manufacturing and everything else do tend to be the one place companies actually spend on cybersecurity. When your manufacturing downtime losses are measured in millions per hour, asking for a few million a year for NDR tends to go over much better. And no, I don't expect the AI to hack past the cybersecurity; it would first need to develop that capability. AI training processes require a lot of time spent failing at something, and that failing is going to get noticed. AI isn't magically good at anything, and while its learning process can be much faster than a human's, that speed is going to produce a shit-ton of noise on the network. And guess what: we have AI and automation running on our behalf as well, and those are trained to shut down rogue devices attacking the cybersecurity infrastructure.
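To make the "noisy attacker gets noticed" point concrete, here's a toy sketch of the kind of rule that automated detection and response runs constantly. Everything in it (the flow records, the threshold, the `quarantine_host` function) is made up for illustration, not any real NDR product's API; real systems use far richer telemetry and models, but the core idea is the same: a device that suddenly starts probing lots of hosts and ports gets flagged and cut off.

```python
# Toy sketch of scan detection + automated response.
# All names here (FlowRecord, quarantine_host, SCAN_THRESHOLD) are hypothetical
# illustrations; real NDR tooling uses far richer telemetry and models.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int

# A device suddenly touching many distinct hosts/ports looks like scanning --
# exactly the kind of noise a learn-by-failing agent would generate.
SCAN_THRESHOLD = 100  # distinct (host, port) pairs per window; tune per network

def find_scanners(flows: list[FlowRecord]) -> set[str]:
    """Return source IPs whose distinct targets exceed the threshold."""
    targets: dict[str, set] = defaultdict(set)
    for f in flows:
        targets[f.src_ip].add((f.dst_ip, f.dst_port))
    return {src for src, tgts in targets.items() if len(tgts) > SCAN_THRESHOLD}

def quarantine_host(ip: str) -> None:
    # Stand-in for the automated response: push a block rule to the
    # firewall between manufacturing and everything else.
    print(f"deny ip host {ip} any  # auto-quarantined: scan-like behavior")

if __name__ == "__main__":
    # Fake window of traffic: one host probing 500 distinct host/port pairs.
    flows = [FlowRecord("10.0.5.23", f"10.0.9.{i % 250}", 1000 + i) for i in range(500)]
    for ip in find_scanners(flows):
        quarantine_host(ip)
```

Real systems key off a lot more than port counts, but the point stands: the response is automatic and fast, and nobody has to be watching at 3am for the noisy device to get cut off.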
"Oh wait, but the AI would be sneaky, slow and stealty!" Why would it? What would it have in it's currently existing model which would say "be slow and sneaky"? It wouldn't, you don't train AI models to do things which you don't need them to do. A paperclip optimizing AI wouldn't be trained on using network penetration tools. That's so far outside the need of the model that the only thing it could introduce is more hallucinations and problems. And given all the Frankenstein's Monster stories we have built and are going to build around AI, as soon as we see anything resembling an AI reaching out for abilities we consider dangerous, it's going to get turned off. And that will happen long before it has a chance to learn about alternative power sources. It's much like zombie outbreaks in movies, for them to move much beyond patient zero requires either something really, really special about the "disease" or comically bad management of the outbreak. Sure, we're going to have problems as we learn what guardrails to put around AI, but the doom and gloom version of only needing one mistake is way overblown. There are so many stopping points along the way from single function AI to world dominating AI that it's kinda funny. And many of those stopping points are the same, "the attacker (humans) only need to get lucky once" situation. So no, I don't believe that the paperclip optimizer AI problem is all that real.
That does take us to the question of a real general-purpose AI being let loose on the internet to consume all human knowledge and become good at everything, which then decides to control everything. Maybe that would be a problem, if we ever get there. Right now, that sort of thing is so firmly in the realm of sci-fi that I don't think we can meaningfully analyze it. What we have today (fancy neural networks, LLMs and classifiers) puts us in the same ballpark as Jules Verne writing about space travel. Sure, he might have nailed one or two of the details, but the whole thing was so much more fantastically complex and difficult than he had any ability to conceive. Once we are closer to it, I expect we're going to see that it's not anything like we currently expect it to be. The computing power requirements may also limit its early deployment to large universities and government projects, keeping its processing power well centralized. General-purpose AI may well have the same decapitation problem humans do: it can have fantastical abilities, but it needs really powerful data centers to run, and those bring all the power, cooling, and not-getting-blown-the-fuck-up-with-a-JDAM problems of current AI data centers. Again, we could go back and forth making up ways for AI to techno-magic its way around those problems, but it's all just baseless speculation at this point. And that speculation will also inform the guardrails we build in at the time. It would boil down to the same game children play where they shoot each other with imaginary guns and have imaginary shields, each re-imagining their guns and shields to defeat the other's. So ya, it might be fun for a while, but it's ultimately pointless.