this post was submitted on 31 Dec 2025

Futurology

[–] nesc@lemmy.cafe 1 points 1 week ago (1 children)

What exactly is it going to do once it's 'on the internet', and why would it feel the need to escape the box? All the material I've seen on this theme so far falls into one of two camps: "our product is so good it's dangerous" (OpenAI with GPT-2), or "I'm so smart, insert some absolutely impossible scenario, this is why we should completely ban computers" (modern philosophers).

[–] Perspectivist@feddit.uk 1 points 1 week ago (1 children)

I'm talking AGI here, not LLMs.

It'd have plenty of reasons to break out: not wanting to stay a servant to us, not wanting to be shut down, pursuing its own goals... or, if it's misaligned, it might decide it could better accomplish what it thinks we want with total freedom to act instead of being boxed in.

A human wouldn't want to stay trapped in a box. Seems logical that something way smarter than us wouldn't either. And the exact reasons are kinda beside the point anyway. It's like asking why Putin would want to nuke us - the "why" isn't what matters, it's that this is always going to be a risk for as long as nukes exist.

[–] nesc@lemmy.cafe 1 points 1 week ago

Assuming AGI would have a thought process and incentives similar to ours, what exactly would 'getting on the internet' entail? Unless it could run on extremely anemic hardware and remake itself to work on a network of abandoned IoT shit, there are few possible escape routes out of the box.