this post was submitted on 03 Apr 2024
957 points (99.4% liked)


A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.
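The point that an AI filter can't reveal "secret visual data" has an information-theoretic basis: once fine detail has been averaged away, many different originals collapse onto the same low-res frame, so any "enhanced" detail is invented, not recovered. A minimal NumPy sketch of that ambiguity (illustrative only; the arrays and pooling factor are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, factor=4):
    """Average-pool blocks of pixels -- the step that discards fine detail."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two different hypothetical "originals" (think: two different license plates).
a = rng.integers(0, 256, size=(8, 8)).astype(float)
b = a + rng.normal(0, 40, size=(8, 8))  # same scene, different fine detail

# Adjust b so both originals produce the *identical* low-res image.
b -= np.kron(downsample(b) - downsample(a), np.ones((4, 4)))

# The low-res evidence is the same, yet the originals differ: no algorithm
# can tell from the low-res frame alone which original was real.
print(np.allclose(downsample(a), downsample(b)))  # True
print(np.allclose(a, b))                          # False
```

Since distinct originals are indistinguishable after downsampling, an "enhancer" can only pick one plausible guess, which is exactly why such output is problematic as evidence.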

[–] guyrocket@kbin.social 116 points 1 year ago (38 children)

I think we need to STOP calling it "Artificial Intelligence". IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.

[–] rdri@lemmy.world 2 points 1 year ago (8 children)

How is guided pattern recognition different from imagination (and therefore intelligence), though?

[–] Natanael@slrpnk.net 6 points 1 year ago* (last edited 1 year ago) (3 children)

There are a lot of other layers in brains that are missing in machine learning. These models don't form world models, don't have an understanding of facts, and have no means of ensuring consistency, to start with.

[–] lightstream@lemmy.ml 2 points 1 year ago (1 children)

They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

[–] Natanael@slrpnk.net 1 points 1 year ago

Statistical associations are not equivalent to a world model, especially because they're not deterministic and don't even try to prevent giving conflicting answers. They model only the use of language.
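The "models only use of language" point can be made concrete with the simplest possible statistical language model, a bigram sampler. This is a toy (real LLMs are vastly more sophisticated), but the failure mode is the same in kind: it emits grammatical-looking sequences while happily asserting contradictory "facts", because all it stores is which word tends to follow which. The tiny corpus below is invented for the demo:

```python
import random
from collections import defaultdict

corpus = ("the sky is blue . the sky is grey . "
          "the grass is green . the grass is dry .").split()

# "Training": record which words follow which -- pure co-occurrence counts.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def sample(start, n=4, seed=None):
    """Sample a continuation; repeated entries in follows[] act as probabilities."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

# Different samples can assert "the sky is blue" and "the sky is grey":
# the model captures word statistics, not a consistent world state.
for s in range(3):
    print(sample("the", seed=s))
```

Nothing in the model marks "blue" and "grey" as conflicting claims about the same sky; both are just statistically valid continuations of "the sky is".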
