submitted 10 months ago by L4s@lemmy.world to c/technology@lemmy.world

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[-] mellowheat@suppo.fi 8 points 10 months ago* (last edited 10 months ago)

I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

The thing is, if it's injecting diversity into a place where there shouldn't have been diversity, this can usually be fixed by specifying better in the next prompt. Not by writing ragebait articles about it.

But yeah, I'd also be happy to be able to use an unhinged LLM once in a while.

[-] AnonStoleMyPants@sopuli.xyz 6 points 10 months ago

Taking responsibility for how I use my tools? How dare you.

[-] rambaroo@lemmy.world 4 points 10 months ago

Yeah, this is what people don't get. These LLMs aren't thinking about anything. They have zero awareness. If you don't guide one towards exactly what you want in your prompt, it's not going to magically know better.

[-] FinishingDutch@lemmy.world 2 points 10 months ago

Speaking for myself, it’s definitely not the lack of detail in the prompts. I’m a professional writer with an excellent vocabulary. I frequently run out of room with the prompts on Bing, because I like to paint a vivid picture.

The problems arise when you use words that it either flags as problematic or misinterprets, or when it just injects its own modifiers. For example, I’ve had prompts with ‘black haired’ rejected on Bing, because… god knows why. Maybe it didn’t like what it generated and deemed it problematic. But if I use ‘raven-haired’ I get a good result.

I don’t mind tweaking prompts to get a good result. That’s part of the fun. But when it just tells you ‘NO’ without explanation, that’s annoying. I’d much prefer an AI with no censorship. At least that way I know a poor result is due to a poor prompt.

[-] intensely_human@lemm.ee 0 points 10 months ago

Who says you need awareness to think? People process information subconsciously all the time.

[-] JackGreenEarth@lemm.ee 1 point 10 months ago

huggingface.co/chat

this post was submitted on 22 Feb 2024
488 points (96.2% liked)
