submitted 10 months ago by throws_lemy@lemmy.nz to c/technology@lemmy.world
you are viewing a single comment's thread
[-] Toribor@corndog.social 14 points 10 months ago

It's going to take real work to train models that don't just reflect our own biases but this seems like a really sloppy and ineffective way to go about it.

[-] BrownianMotion@lemmy.world 10 points 10 months ago

I agree, it will take a lot of work, and I am all for balance where an AI prompt is ambiguous and doesn't specify anything in particular. The output could be male, female, Asian, whatever. This is where AI needs to be diverse, not stereotypical.

But if your prompt is to "depict a male king of the UK", there should be no ambiguity in the result. The sheer ignorance of Google's approach, blatantly ignoring/overriding all the historical data the AI has presumably been trained on, is just agenda pushing, and of little help to anyone. AI is supposed to be helpful, not a bouncer, and it must not override the user's explicit choices (unless they are outside the law).

It has a long way to go before it has proper practical use.

this post was submitted on 22 Feb 2024
1019 points (98.7% liked)

Technology

59982 readers
2201 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 2 years ago