[-] Gradually_Adjusting@lemmy.world 9 points 1 month ago

Although nonstandard English and pidgins often show the same nuance and complexity as standard English, negative stereotypes about them are very common. One has to wonder whether LLMs built from (en masse stolen) written output say as much about us as they do about their creators.

[-] RobotToaster@mander.xyz 11 points 1 month ago* (last edited 1 month ago)

Pretty much. It was trained on human writing, and then people are all surprised when it has human biases.

[-] Hamartiogonic@sopuli.xyz 2 points 1 month ago

An LLM needs to evaluate and modify its preliminary output before actually sending it. In the context of a human mind, that's called thinking before opening your mouth.
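The "evaluate and modify before sending" idea can be sketched as a simple draft-critique-revise loop. This is a toy illustration only: `generate_draft`, `critique`, and `revise` are hypothetical stand-in functions, not a real LLM API, and a real system would call a model in each step.

```python
def generate_draft(prompt: str) -> str:
    # Stand-in: a real system would call a language model here.
    return f"Draft answer to: {prompt}"

def critique(draft: str) -> list[str]:
    # Stand-in critic: flag drafts that still look unrevised.
    issues = []
    if draft.startswith("Draft"):
        issues.append("still reads like a first draft")
    return issues

def revise(draft: str, issues: list[str]) -> str:
    # Stand-in reviser: here we just strip the draft marker.
    return draft.replace("Draft answer", "Answer")

def respond(prompt: str, max_rounds: int = 3) -> str:
    """Evaluate and modify the preliminary output before 'sending' it."""
    draft = generate_draft(prompt)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:
            break
        draft = revise(draft, issues)
    return draft

print(respond("Is this biased?"))
```

The point of the loop is the same as the comment's: nothing leaves the system until a critic pass stops finding problems (or the round budget runs out).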

[-] Gradually_Adjusting@lemmy.world 4 points 1 month ago

Who among us couldn't benefit from a little more of that?

[-] Hamartiogonic@sopuli.xyz 1 point 1 month ago

Humans aren’t always very good at that, and LLMs were trained on stuff written by humans, so here we are.

[-] Gradually_Adjusting@lemmy.world 2 points 1 month ago

Exciting new product from the tech industry: Fruit from the poisoned tree!

this post was submitted on 30 Aug 2024
84 points (85.6% liked)

Science


General discussions about "science" itself
