[-] jocanib@lemmy.world 12 points 1 year ago

It will almost always be detectable if you just read what is written, especially in academic work. It doesn't know what a citation is, only what one looks like and where it appears. It can't summarise a paper accurately. It's easy to force laughably bad output just by asking the right sort of question.

The simplest approach for setting homework is to give them the LLM output and get them to check it for errors and omissions. LLMs can't critique their own work and students probably learn more from chasing down errors than filling a blank sheet of paper for the sake of it.

[-] Tyler_Zoro@ttrpg.network 3 points 1 year ago

What you are describing is true of older LLMs; it's less true of GPT-4. GPT-5, or whatever it is they are training now, will likely begin to shed these issues.

The shocking discovery that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of the training set. AI researchers were convinced that this was not possible until GPT proved that it was.
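The scaling claim above is usually summarized as an empirical power law (e.g. the Kaplan et al. 2020 results). A minimal sketch of the trend, using the published approximate coefficients purely for illustration, not as a precise prediction:

```python
# Toy illustration of an empirical LLM scaling law (Kaplan et al., 2020):
# test loss falls as a power law in parameter count N.
# N_c and alpha below are the approximate published values, used here
# only to show the "more scale -> lower loss" trend.

def loss_from_params(n_params: float,
                     n_c: float = 8.8e13,
                     alpha: float = 0.076) -> float:
    """L(N) ~ (N_c / N) ** alpha, in nats per token (approximate)."""
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss ~ {loss_from_params(n):.3f}")
```

The point of the power-law form is that loss keeps falling smoothly as models grow, with no obvious plateau over the range studied, which is what made "just scale it up" a viable research strategy.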

So the idea that you can look at the limitations of the current generation of LLM and make blanket statements about the limitations of all future generations is demonstrably flawed.

[-] jocanib@lemmy.world 2 points 1 year ago

They cannot be anything other than stochastic parrots, because that is all the technology allows them to be. They are not intelligent; they don't understand the question you ask or the answer they give you, and they don't know what truth is, let alone how to determine it. They're just good at producing answers that sound like a human might have written them. They're a parlour trick. High-tech Magic 8-Balls.
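The "stochastic parrot" idea can be made literal with a toy word-level bigram model: it emits plausible-looking continuations of a prompt with zero understanding. This is only an analogy for next-token prediction, not how transformer LLMs are actually implemented:

```python
# A literal "stochastic parrot": sample the next word from the
# distribution of words that followed the current word in the corpus.
import random
from collections import defaultdict

corpus = ("the model predicts the next word "
          "the model sounds like a human "
          "a human understands the question").split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start: str, length: int = 8, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a seen successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(parrot("the"))
```

Every adjacent word pair in the output occurred in the corpus, so the text looks locally fluent while encoding no meaning at all; the open question in the thread is whether scaled-up LLMs are doing something qualitatively different from this.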

[-] Tyler_Zoro@ttrpg.network 4 points 1 year ago

> They cannot be anything other than stochastic parrots because that is all the technology allows them to be.

Are you referring to humans or AI? I'm not sure you're wrong about humans...

[-] jocanib@lemmy.world -4 points 1 year ago

FFS

Sam Altman is a know-nothing grifter. HTH

[-] nulldev@lemmy.vepta.org 4 points 1 year ago

Have you even read the article?

IMO it does not do a good job of disproving that "humans are stochastic parrots".

The example with the octopus isn't really about stochastic parrots. It's more about how LLMs are not multi-modal.

[-] tate@lemmy.sdf.org -1 points 1 year ago* (last edited 1 year ago)

That article is super helpful.

Thanks!

this post was submitted on 14 Jul 2023
243 points (93.9% liked)

Technology
