[–] SendMePhotos@lemmy.world 47 points 2 months ago (3 children)
[–] ech@lemmy.ca 82 points 2 months ago (3 children)

Both require intent, which these do not have.

[–] moosetwin@lemmy.dbzer0.com 13 points 2 months ago* (last edited 2 months ago) (3 children)

(Just to make sure we're on the same page, the first article describes deception as 'the systematic inducement of false beliefs in the pursuit of some outcome other than the truth'.)

Are you saying that AI bots do not exhibit this behavior? Why is that?

(P.S. I am not saying this story is necessarily real; I just want to know your reasoning.)

[–] ech@lemmy.ca 10 points 2 months ago

Correct. Because there is no "pursuit of untruth". There is no pursuit, period. It's putting together words that statistically match up based on the input it receives. The output can be wrong, but it's never "lying", even if the words it puts together resemble a lie.
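A toy sketch of that mechanism, using a made-up next-word table (the words and probabilities are invented for illustration and come from no real model): generation is just repeated weighted sampling over learned word statistics, so fluent-but-false output requires no intent at all.

```python
import random

# Toy next-word table with made-up probabilities (purely illustrative,
# not taken from any real model).
next_word_probs = {
    "the": {"sky": 0.4, "cat": 0.35, "moon": 0.25},
    "sky": {"is": 0.7, "was": 0.3},
    "is":  {"blue": 0.6, "falling": 0.3, "green": 0.1},
    "cat": {"is": 0.5, "sat": 0.5},
}

def generate(start: str, steps: int = 3) -> str:
    """Repeatedly sample a statistically likely next word given the
    previous one. There is no goal and no belief here -- each step is
    a weighted dice roll over a learned table."""
    words = [start]
    for _ in range(steps):
        dist = next_word_probs.get(words[-1])
        if dist is None:            # no known continuation; stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the sky is green" -- fluent, possibly false, never a "lie"
```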

[–] f314@lemmy.world 9 points 2 months ago (1 children)

I’m not the guy you’re replying to, but I wanted to post this passage from the article about their definition:

It is difficult to talk about deception in AI systems without psychologizing them. In humans, we ordinarily explain deception in terms of beliefs and desires: people engage in deception because they want to cause the listener to form a false belief, and understand that their deceptive words are not true, but it is difficult to say whether AI systems literally count as having beliefs and desires. For this reason, our definition does not require this.

[–] ech@lemmy.ca 6 points 2 months ago

Their "definition" is wrong. They don't get to redefine words to support their vague (and also wrong) suggestion that llms "might" have consciousness. It's not "difficult to say" - they don't, plain and simple.

[–] RedPandaRaider@feddit.org 0 points 2 months ago* (last edited 2 months ago) (2 children)

Lying does not require intent. All it requires is to know an objective truth and say something that contradicts or conceals it.

As far as any LLM is concerned, the data it's trained on and any data it's later fed are fact. Mimicking human behaviour such as lying still makes it lying.

[–] kayohtie@pawb.social 13 points 2 months ago (1 children)

But that still requires intent, because "knowing" in the way that you or I "know" things is fundamentally different from merely having pattern-matching vectors that happen to include truthful arrangements of words. It doesn't know "sky is blue". It simply contains indices that frequently arrange the words "sky is blue".

Research papers that overlook this are still personifying a series of mathematical matrices as if they actually knew any concepts.

That's what the person you're replying to means. These machines don't know goddamn anything.
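As a minimal sketch of that point (with an invented four-sentence corpus and a simple bigram count standing in for a real model's vastly larger pattern store), a purely statistical scorer will rate a false sentence as perfectly plausible whenever its word pairs are familiar:

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus" (illustrative only). Nothing in it is
# a fact about the world -- just word sequences.
corpus = ("the sky is blue . the sky is blue . the sky is clear . "
          "the grass is green .").split()

# Count which word follows which; this bigram table stands in for the
# far larger pattern store inside a real LLM.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def familiarity(sentence: str) -> float:
    """Score a sentence by how typical its word pairs are in the corpus.
    A high score means 'statistically familiar', not 'true'."""
    words = sentence.split()
    score = 1.0
    for prev, nxt in zip(words, words[1:]):
        total = sum(counts[prev].values()) or 1
        score *= counts[prev][nxt] / total
    return score

print(familiarity("the sky is blue"))    # 0.375  -- a frequently seen pattern
print(familiarity("the sky is green"))   # 0.1875 -- false, yet scores fine,
                                         # because "is green" is a familiar pair
```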

[–] ech@lemmy.ca 8 points 2 months ago

Except these algorithms don't "know" anything. They convert their input data into a framework for generating (hopefully) sensible text from literal random noise. At no point in that process is knowledge involved.
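As a hedged illustration of the "random noise" step, here is the usual softmax-and-sample move with invented logits for hypothetical candidate tokens (none of these numbers come from a real model): the scores are turned into probabilities and the next token is a weighted random draw, with no knowledge consulted anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw scores (logits) a model might assign to candidate
# next tokens after "The sky is". The numbers are invented.
tokens = ["blue", "falling", "green", "made"]
logits = np.array([3.0, 1.0, 0.5, -1.0])

def sample_next(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax the scores into probabilities, then draw one index at
    random. The 'decision' is a weighted coin flip, not a judgement."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

for _ in range(5):
    print(tokens[sample_next(logits)])   # mostly "blue", occasionally something false
```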

[–] chunes@lemmy.world -2 points 2 months ago

I'm not sure anyone can truly claim to know that at this point. The equations these things solve to arrive at their outputs are incomprehensible to humans.