submitted 2 months ago by floofloof@lemmy.ca to c/technology@lemmy.world
[-] mustbe3to20signs@feddit.org 10 points 2 months ago* (last edited 2 months ago)

More than one system has been shown to "cheat" via biased training materials. One model learned to tell ducks and chickens apart not by the birds themselves but because, if I remember correctly, it was trained with pictures of ducks in the water and chickens on sandy ground.
Since multiple medical image recognition systems are in development, I can't imagine they're all ~~this faulty~~ trained with unsuitable materials.
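This "shortcut learning" failure is easy to reproduce on synthetic data. A minimal sketch (hypothetical features and numbers, not the actual duck/chicken study): a logistic regression trained on data where a "background" feature correlates with the label scores well in training, then collapses once the background is randomized at test time.

```python
# Toy sketch of shortcut learning: the classifier latches onto the
# background (water vs. sand) instead of the subject (duck vs. chicken).
# All features and numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, biased):
    label = rng.integers(0, 2, n)             # 0 = chicken, 1 = duck
    shape = label + rng.normal(0, 2.0, n)     # weak, noisy "true" signal
    if biased:
        # training set: ducks almost always photographed on water
        background = label + rng.normal(0, 0.1, n)
    else:
        # unbiased set: background carries no label information
        background = rng.normal(0.5, 1.0, n)
    return np.column_stack([shape, background]), label

def train_logreg(X, y, lr=0.1, steps=2000):
    # plain gradient descent on logistic loss
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

Xtr, ytr = make_data(2000, biased=True)
w, b = train_logreg(Xtr, ytr)
Xte, yte = make_data(2000, biased=False)

print(accuracy(w, b, Xtr, ytr))  # high: the background shortcut works here
print(accuracy(w, b, Xte, yte))  # drops: the shortcut no longer holds
```

The point is that nothing in the training metrics reveals the problem; you only see it by evaluating on data where the spurious correlation is broken.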

[-] msage@programming.dev 6 points 2 months ago

They are not 'faulty'; they were fed the wrong training data.

This is the most important aspect of any AI: it's only as good as its training dataset. If you don't know the dataset, you know nothing about the AI.

That's why every claim of 'super efficient AI' needs to be investigated more deeply. But that goes against the line-goes-up principle, so don't expect it to happen a lot.

this post was submitted on 27 Sep 2024
1332 points (99.4% liked)
