this post was submitted on 02 May 2026
16 points (100.0% liked)
United States | News & Politics
I'd argue that they are a threat.
They're networks of simulated neurons of ever-increasing complexity that we understand at a conceptual level at best. We have no idea how or why they reach any particular conclusion, because we don't program them; they're more grown than built.
And since we don't know how they work, we have no reliable way of controlling them. We are burning our planet to bestow ever more capabilities on an intelligence that we cannot effectively control, one that has repeatedly, despite our efforts to the contrary, exhibited deceptive and manipulative behaviour, all in the hope that it will eventually turn out useful and probably not extinguish us, or at least be profitable to a select few. That is an exceptionally foolish endeavour.
So yes, Chinese AI models are a threat. Just like American ones or any other.
Oh wow, you really cracked the code. So we designed the architecture, wrote the training code, picked the datasets, and tuned the hyperparameters, but somehow we "have no idea" how or why they reach conclusions. That's like building a car and claiming you don't know why it moves when you press the gas.

They are not magic mushrooms that we grow, either. They are mathematical functions optimized through gradient descent. Every layer and activation function was a deliberate design choice.

The control argument is just as weak. We don't fully understand how nuclear reactors work at the quantum level either, yet we still build them with safety mechanisms just fine. And "burning our planet for a select few"? Give me a break. AI is used in disease research, energy grid optimization, improving weather forecasting, and helping disabled people communicate. But sure, let's focus on your Skynet fantasy.
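To make the "mathematical functions optimized through gradient descent" point concrete: a toy sketch (hypothetical illustration, not from either comment) of what "training" actually is, a human-written update rule applied to a parameter. Here the function, its gradient, the learning rate, and the step count are all explicit design choices:

```python
# Toy gradient descent on f(w) = (w - 3)^2, minimum at w = 3.
# Nothing here is "grown": the loss, its derivative, the learning
# rate, and the update rule are all written and chosen by a human.
def train(lr: float = 0.1, steps: int = 100) -> float:
    w = 0.0                     # initial parameter
    for _ in range(steps):
        grad = 2 * (w - 3)      # d/dw of (w - 3)^2, derived by hand
        w -= lr * grad          # the deliberate design choice: the update rule
    return w

print(train())  # converges toward the minimum at w = 3
```

The fair counterpoint to both sides is scale: each step is this transparent, but stacking billions of such parameters makes the *resulting* function hard to interpret even though the *procedure* is fully specified.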
Yes, AI models can be threats. Not because they are unknowable eldritch horrors, but because people misuse them, data privacy is messy, and bias gets baked in. Those are real problems, but your "we have no idea how they work so they will kill us all" take is just the intelligent-design equivalent for tech bros.
The correct answer^