this post was submitted on 31 Jan 2026
314 points (97.9% liked)

Technology

[–] TORFdot0@lemmy.world 49 points 1 day ago (2 children)

LLMs can't be self-aware because they aren't self-reflective. They can't stop a lie once they've started one. They can't say "I don't know" unless that happens to be the most likely response, given their training data, to a specific prompt. That's why they crash out if you ask about a seahorse emoji: there is no reason or mind behind the generated text, despite how convincing it can be.
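To make that concrete, here's a toy sketch of greedy next-token selection. The tokens and scores are completely made up (no real model works on three whole-phrase tokens), but the shape of the loop is the point: whatever scores highest gets emitted, and nothing in it ever checks whether the answer is true or pauses to reflect.

```python
# Toy illustration (not any real model): the output is just the
# highest-probability continuation; "I don't know" only appears if it
# happens to score highest. All tokens and logit values are invented.
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores for possible continuations of
# "Is there a seahorse emoji?"
logits = {
    "Yes": 4.1,            # confidently wrong, but common-sounding
    "No": 1.2,
    "I don't know": 0.3,   # rarely the most likely continuation
}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: take the argmax

print(probs)        # roughly {'Yes': 0.93, 'No': 0.05, "I don't know": 0.02}
print(next_token)   # 'Yes' -- no step here ever checks whether that is true
```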

[–] andrewrgross@slrpnk.net 2 points 20 hours ago (1 children)

A hamster can't generate a seahorse emoji either.

I'm not stupid. I know how they work. I'm an animist, though. I realize everyone here thinks I'm a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn't.

LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.

I would prefer that we approach this technology with more humility. Not to protect the "humanity" of a bunch of math, but to protect ours.

Does that make sense?

[–] gandalf_der_12te@discuss.tchncs.de 1 points 14 hours ago* (last edited 14 hours ago)

Not to protect the “humanity” of a bunch of math, but to protect ours.

wise words

we need to figure out how to, and how not to, embed AI into the world, i.e. where it meaningfully belongs and where it doesn't. that's what humanity is all about, after all: organizing the world in proper ways.

and if we fail that task, then what are we here for?

[–] anomnom@sh.itjust.works 22 points 1 day ago* (last edited 1 day ago)

Yeah, ask it about anything you know is false but plausible, and watch it lie.
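One quick way to try this yourself, assuming you have the Hugging Face transformers library installed (the model name and the false-but-plausible prompt below are just examples): a plain language model will usually continue the false premise rather than correct it. A small model like gpt2 just rambles, but it won't push back; chat-tuned models tend to do the same thing with more polish.

```python
# Sketch: feed a model a plausible-sounding false premise and watch it run with it.
# Assumes the Hugging Face `transformers` package; "gpt2" is just a small example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The premise is false -- there is no seahorse emoji in Unicode -- but it is
# phrased like ordinary trivia, so the model tends to continue it confidently
# instead of flagging it.
prompt = "The seahorse emoji was added to Unicode in 2014, and it quickly became"
out = generator(prompt, max_new_tokens=40, do_sample=False)
print(out[0]["generated_text"])
```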