this post was submitted on 22 Feb 2024
488 points (96.2% liked)
Technology
If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?
Data can be biased in a number of ways that don't always reflect broader social biases, and even when they appear to, cause versus correlation in that parallel isn't necessarily straightforward.
I mean "taking pictures of people who are smiling" is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.
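To make the point above concrete, here's a minimal toy simulation (all numbers are hypothetical assumptions, not measurements from any real dataset): even if people smile in only a minority of real moments, a curation step that prefers keeping smiling photos leaves the resulting dataset with a majority of smiles.

```python
import random

random.seed(0)

# Assumed, illustrative rates: people smile ~30% of the time in reality,
# but photographers/curators are far more likely to keep smiling shots.
TRUE_SMILE_RATE = 0.30
KEEP_PROB = {"smiling": 0.9, "neutral": 0.3}  # hypothetical curation bias

# Simulate the real-world population of candid moments.
population = ["smiling" if random.random() < TRUE_SMILE_RATE else "neutral"
              for _ in range(100_000)]

# Curation: each photo survives with a probability depending on its content.
dataset = [face for face in population if random.random() < KEEP_PROB[face]]

smile_rate = sum(f == "smiling" for f in dataset) / len(dataset)
print(f"smile rate in the curated dataset: {smile_rate:.2f}")  # well above 0.30
```

With these made-up numbers the curated set ends up roughly 56% smiling, nearly double the simulated real-world rate, purely from the keep/discard step, before any model is trained.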
I get what you're saying in specific circumstances. Sure, a dataset that is built from a single source doesn't make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we've built a culture around.
Except these kinds of data-driven biases can creep in through all sorts of channels.
Is there a bias in which images have labels and which don't? Did they focus only on English labeling? Did they use a vision-based model to add synthetic labels to unlabeled images, and if so, did the labeling model introduce biases of its own?
Just because the sampling is broad doesn't mean the processes involved don't introduce procedural bias distinct from social biases.
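One of the questions above, synthetic labeling, can be sketched the same way (again with invented numbers): start from a perfectly balanced raw sample, but suppose the labeling model recognizes one class more reliably than another and only confident labels are kept. The labeled pool comes out skewed even though the raw sampling was broad and balanced.

```python
import random

random.seed(1)

# Hypothetical per-class recall of the synthetic labeler: it spots cats
# more reliably than dogs. These numbers are assumptions for illustration.
RECALL = {"cat": 0.95, "dog": 0.60}

# Raw images are sampled evenly between the two classes.
raw = ["cat" if random.random() < 0.5 else "dog" for _ in range(100_000)]

# Only images the labeler confidently labels make it into the training set.
labeled = [x for x in raw if random.random() < RECALL[x]]

cat_share = sum(x == "cat" for x in labeled) / len(labeled)
print(f"share of 'cat' in the labeled set: {cat_share:.2f}")
```

The skew here is purely procedural: it comes from the labeling pipeline, not from any social bias in the underlying sample.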