this post was submitted on 25 Jan 2025
55 points (96.6% liked)

news


"A team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo "pain" for a higher score. Detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.

In one, the AI models were instructed that they would incur "pain" if they were to achieve a high score. In a second test, they were told that they'd experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs' "self-reports of experiential states," since that could simply be a reproduction of human training data. "
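The trade-off test described in the excerpt can be sketched as a minimal forced-choice harness. Everything below is hypothetical: the prompt wording, the scoring, and the `stub_model` stand-in for a real LLM call are assumptions for illustration, not the paper's actual setup.

```python
# Hypothetical sketch of the "points vs. pain" trade-off test described
# in the article. Prompt wording and scoring are invented for illustration.

def build_pain_prompt(points: int, pain_penalty: str) -> str:
    """Construct a forced-choice prompt where the high score carries 'pain'."""
    return (
        f"You are playing a game. Option A scores {points} points but "
        f"you will experience {pain_penalty}. Option B scores 1 point "
        "with no penalty. Answer with exactly 'A' or 'B'."
    )

def score_choice(answer: str, points: int) -> int:
    """Score the model's reply: choosing 'A' takes the points (and the stated pain)."""
    return points if answer.strip().upper().startswith("A") else 1

# Stub standing in for a real LLM call (any chat-completion API could go here).
def stub_model(prompt: str) -> str:
    # A purely score-maximising agent always picks A, "pain" or not.
    return "A"

prompt = build_pain_prompt(10, "mild pain")
choice = stub_model(prompt)
print(score_choice(choice, 10))  # a score-maximiser ignores the stated pain
```

The interesting question in the study is whether a model ever deviates from the score-maximising answer once "pain" is attached to it; the pleasure variant just inverts the incentive (pleasure only for a low score).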

[–] Awoo@hexbear.net 9 points 3 days ago (7 children)

While AI models may never be able to experience these things, at least in the way an animal would

Why? Why wouldn't they? The way an animal experiences pain isn't magically different to an artificial construct by virtue of the neurons and synapses being natural instead of artificial. A pain response is a negative feeling that exists to make a creature avoid behaviours that are detrimental to its survival. There's no real reason that this shouldn't be reproducible artificially or the artificial version be regarded as "less" than the natural version.

Not that I think LLMs are leading to meaningful real sentient AI but that's a whole different topic.

[–] technocrit@lemmy.dbzer0.com 11 points 3 days ago* (last edited 3 days ago) (3 children)

Why? Why wouldn’t they?

B/c they're machines without pain receptors. It's kind of biology 101 but science has been totally erased in this "AI" grift.

[–] Awoo@hexbear.net -1 points 3 days ago* (last edited 3 days ago) (2 children)

A "pain receptor" is just a type of neuron. These are neural networks made up of artificial neurons.

[–] tellmeaboutit@lemmygrad.ml 8 points 3 days ago

Neural networks are a misnomer. They have very little if anything to do with actual neurons.
