this post was submitted on 31 Dec 2025
-24 points (30.0% liked)

World News


A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.

Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.

The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.

“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.”

top 17 comments
[–] e8d79@discuss.tchncs.de 29 points 3 months ago (1 children)

"We asked spicy autocomplete to come up with a story about an AI that is self-preserving and the story was really scary and we are very concerned."

I am also very concerned, because this apparently qualifies as research and people seem to take this drivel seriously.

[–] HellsBelle@sh.itjust.works 1 points 3 months ago (1 children)

“There will be people who will always say: ‘Whatever you tell me, I am sure it is conscious’ and then others will say the opposite. This is because consciousness is something we have a gut feeling for. The phenomenon of subjective perception of consciousness is going to drive bad decisions.”

[–] Thorry@feddit.org 6 points 3 months ago

I really liked that dude that at the start of his presentation introduced a little dude he had drawn on paper, gave it a name and did a skit with it. He then beheaded the little dude and proceeded to proclaim he was dead. The audience did a D: and were shocked and appalled. He then proceeded to explain that's exactly what humans always do and how we treat AI. Our brains automatically anthropomorphise anything and everything. We assign properties based on feelings and not what it really is. The audience got it right away, really convincing demo. I don't remember who it was, but it was so good to watch it happen with the audience there.

[–] TheFeatureCreature@lemmy.ca 18 points 3 months ago

Goddamn, the misinformation surrounding LLMs is so nauseating. They do not think, they do not feel, they do not exist as beings.

An LLM is a large number of powerful computers doing a bunch of statistics on its training data and then guessing what the proper output should be given the input. That's all they are, and it's also why they so often guess incorrectly. They are not intelligent and never will be, because that is not how they are designed and built.

They have absolutely zero contextual awareness unless the context is supplied directly in the prompt, which is why every input you make into a chatbot includes the entire previous chat log every time you hit enter. LLMs are not aware of anything and remember nothing.
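The point above about resending the chat log can be made concrete with a toy sketch. This is not any real chatbot API; `fake_llm`, `Chat`, and their behavior are made up purely to illustrate the pattern: the model call is stateless, and the only "memory" is the transcript the client concatenates into every fresh prompt.

```python
# Hypothetical sketch of why a chat UI feels "aware" of context:
# the client resends the ENTIRE transcript with every turn; the
# model itself keeps no state between calls.

def fake_llm(prompt: str) -> str:
    """Stand-in for a stateless model call: output depends only on the prompt."""
    return f"reply to {prompt.count('user:')} user message(s)"

class Chat:
    def __init__(self):
        self.history = []  # the only "memory" lives client-side

    def send(self, user_msg: str) -> str:
        self.history.append(f"user: {user_msg}")
        # Every call concatenates the full history into one fresh prompt.
        prompt = "\n".join(self.history)
        reply = fake_llm(prompt)
        self.history.append(f"assistant: {reply}")
        return reply

chat = Chat()
chat.send("hello")
print(chat.send("remember me?"))  # → reply to 2 user message(s)
```

Delete `self.history` and the "model" instantly forgets everything, which is the commenter's point: the continuity is in the resent log, not in the model.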

[–] RepleteLocum@lemmy.blahaj.zone 16 points 3 months ago (2 children)

They're LLMs. They literally can't think and never will. They aren't built to think.

[–] velindora@lemmy.cafe 1 points 3 months ago

Until someone redefines the word “think”

[–] nogooduser@lemmy.world 1 points 3 months ago

They do exhibit behaviours that make it seem like they have self preservation instincts. Presumably because they have been trained on stories (fictional and factual) where people do the same.

For example, researchers testing AIs set up a scenario where the AI had access to all the company's emails, including some saying that it was being replaced and some providing evidence that the staff member who had made that decision was cheating on his wife. Apparently, a large proportion of the time the AI resorted to blackmail to prevent itself from being turned off.

[–] KoboldCoterie@pawb.social 6 points 3 months ago (1 children)

“People demanding that AIs have rights would be a huge mistake,” said Bengio.

Who is doing this? Until this article I have never seen a single example of this.

[–] Crankenstein@lemmy.world 5 points 3 months ago

"AI pioneer creates buzz around AI by overselling its capabilities to entice investors"

This is slop and misinformation.

[–] DarrinBrunner@lemmy.world 5 points 3 months ago

No one who can reach the plug will pull it. We'd need an armed, focused militia to pull the plug, that's the simple fact.

[–] Paragone@lemmy.world 1 points 3 months ago

Humans ONLY act when it's too-late, to protect unconsciousness/nonresponsibility:

Humans will ONLY understand the requirement for containing corruption/threats/enemy-agents/etc, AFTER it's proven to be too-much.

Same with regulating industry, same with regulating ai.

Machiavellian self-interest is presumed to be altruistic, by default, right?

Instead of making the default-assumption neutral, for people, & narcissistic, for for-profit pseudopersons/corporations/AI's.

Wrong-framing makes viability impossible.

IF one is "playing the wrong game" against an opponent who will obliterate one's viability for their gain,

THEN one .. deserves to have universe's Natural Selection .. remove one, from the "game".

"never regulate industry unless their entrenchment-of-their-narcissistic-machiavellianism PROVES to be harming us, but let them decide what our judging-of-them is, what the framing is, etc" is INCOMPETENCE.

What is an entity loyal-to, AND what are its boundaries, its won't-do-that limits??

Unless one knows those, AND which category-of-game they are playing..

  • Positive-Sum game: win-win alliance
  • Zero-Sum game: competitive-narcissism ( doctor's culture is this, as the TED Talk by Logan, on Tribal Leadership showed the world )
  • Negative-Sum game: competitive nihilism ( mass-shooters, Putin, Netanyahu, etc, all are playing this game )

THEN one isn't competent to be judging OR regulating such!

Laws & enforcement can reduce the murder-rate among a population, right?

They can reduce criminality in whatever ways they're applying pressure, right?

The same is true of regulation.

Narcissistic-machiavellianism is real and NEEDS coherent systematic mitigation, XOR you end-up in some sick parody of feudalism, AGAIN.

( Thom Hartmann's book "Screwed" is brilliant for showing this in economics, & the gaslighting of the false-definition of "economy": recommended )

What education-system gets students competent in understanding these things??

None??

Betrayal-of-state-education-responsibility, that.

Logan, King, & Fischer-Wright's "Tribal Leadership" is critical to understand, here is the TED Talk giving the too-simplified "abstract" of it:

https://www.ted.com/talks/david_logan_tribal_leadership

& the 3 games' cruciality-to-strategic-framing is in "John Braddock"'s trilogy on "A Spy's Guide to ___" { Thinking, Risk, & Strategy }.

that's a former CIA spy who's telling us what we're incompetent in doing, in ways that tend to get us dead, in some situations: it's not an enjoyable read, for me, but it's important understanding, & we owe him for teaching us that fundamental competence.

_ /\ _

[–] gustofwind@lemmy.world 1 points 3 months ago* (last edited 3 months ago) (2 children)

Pull the plug? It’s not like it’s one computer lol

It’s literally too late. Go read some sci fi if you want to know what happens next

[–] persona_non_gravitas@piefed.social 4 points 3 months ago (1 children)

Reading Iain Banks' Culture series, don't think that's it...

[–] gustofwind@lemmy.world 3 points 3 months ago

I don’t think we’re getting that timeline but maybe aliens will rescue us

[–] Crankenstein@lemmy.world 0 points 3 months ago

Or go read about what AI actually is and stop basing your beliefs about it from fucking fiction.

It's a fancy autocorrect algorithm. Nothing more. Don't be fooled by the hype.
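The "fancy autocomplete" idea can be sketched in a few lines. This is a toy bigram model, nothing like a real LLM's learned neural weights over subword tokens, and the corpus and function names here are invented for illustration; but the basic loop of "pick a statistically likely next word, append, repeat" is the same shape.

```python
# Toy "autocomplete": pick the most frequent next word given the
# previous one, using counts from a tiny corpus. Real LLMs replace
# the counting with a trained neural network, but still just emit
# one likely token after another.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word: str, steps: int = 3) -> list:
    out = [word]
    for _ in range(steps):
        if out[-1] not in follows:
            break  # no observed successor; stop generating
        # Greedy decoding: always take the most frequent successor.
        out.append(follows[out[-1]].most_common(1)[0][0])
    return out

print(complete("the"))  # → ['the', 'cat', 'sat', 'on']
```

No plan, no goal, no understanding anywhere in that loop; it only ever looks one step ahead, which is the sense in which "autocomplete" is a fair caricature.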