submitted 2 months ago by 911@lemmynsfw.com to c/technology@lemmy.world
[-] 2pt_perversion@lemmy.world 5 points 2 months ago

I'd love to debate politics with you but first tell me how many r's are in the word strawberry. (AI models are starting to get that answer correct now though)

[-] sbv@sh.itjust.works 5 points 2 months ago

I tried this with Gemini. Regardless of the number of r's in the word (zero to three), it said two.

[-] Kraven_the_Hunter@lemmy.dbzer0.com 2 points 2 months ago

So ask it about a made up or misspelled word - "how many r's in the word strauburrry" or ask it something with no answer like "what word did I just type?". Anything other than, "you haven't typed anything yet" is wrong.

[-] sukhmel@programming.dev 1 points 2 months ago

But you did type a phrase: the very one that contains the question, unless you asked by voice or in a picture.

[-] the_post_of_tom_joad@sh.itjust.works 0 points 2 months ago* (last edited 2 months ago)

It's 3, right? Am I real? Why can't AI guess that one?

[-] 2pt_perversion@lemmy.world 8 points 2 months ago* (last edited 2 months ago)

An oversimplification, but it partly has to do with how LLMs split language into tokens, and some of those tokens are multi-letter. When we look for r's, we split the word like S - T - R - A - W - B - E - R - R - Y, where each character is its own unit, but LLMs split it more like STR - AW - BERRY, which makes predicting the correct answer difficult without a lot of training on that specific problem. If you asked it to count how many times STR shows up in "strawberrystrawberrystrawberry", it would have a better chance.
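A toy sketch in Python of the difference (the token split here is made up for illustration; real tokenizers produce their own chunks):

```python
# What we do: treat each character as a unit and count.
word = "strawberry"
letters = list(word)          # ['s', 't', 'r', 'a', 'w', 'b', 'e', 'r', 'r', 'y']
print(letters.count("r"))     # 3

# What an LLM "sees": opaque multi-letter chunks, so there is
# no direct character count to read off.
tokens = ["str", "aw", "berry"]   # hypothetical BPE-style split
assert "".join(tokens) == word
print(len(tokens))                # 3 tokens, not 10 letters
```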

[-] the_post_of_tom_joad@sh.itjust.works -2 points 2 months ago

Thanks, you explained it well enough this layman kinda gets it!

[-] tee9000@lemmy.world 8 points 2 months ago* (last edited 2 months ago)

LLMs look for patterns in their training data. So if you asked "2+2=", it would look at its training data and find that the text most likely to follow "2+2=" is "4". It's not calculating; it's finding the most likely completion of the pattern based on the data it has.

So it's not deconstructing the word strawberry into letters and running a count... it tries to finish the pattern, and it fails at simple logic tasks that aren't baked into the training data.
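A very rough sketch of "completion as pattern lookup, not calculation" (the tiny training text and frequency counting are stand-ins; real models learn statistics over billions of tokens, not a literal lookup table):

```python
from collections import Counter

# Hypothetical scrap of "training data".
training_text = "2+2=4 2+2=4 2+2=5 3+3=6"

# Count what text tends to follow the prompt "2+2=".
completions = Counter()
for chunk in training_text.split():
    prompt, answer = chunk.split("=")
    if prompt == "2+2":
        completions[answer] += 1

# The "answer" is just the most frequent continuation seen.
print(completions.most_common(1)[0][0])  # "4"
```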

But a new model, ChatGPT o1, checks against itself in ways I don't fully understand, and it now scores around 85% on an international mathematics standardized test, so they are making great improvements there. (Compared to a score of around 14% from the model that can't count the r's in strawberry.)
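One published way models "check against themselves" is self-consistency: sample several answers and keep the majority. A minimal sketch with hypothetical samples (o1's actual method is not public, so this is just one illustrative technique):

```python
from collections import Counter

# Hypothetical answers from asking the model the same question five times.
samples = ["3", "2", "3", "3", "2"]

# Majority vote: disagreeing samples cancel out occasional mistakes.
majority = Counter(samples).most_common(1)[0][0]
print(majority)  # "3"
```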

this post was submitted on 01 Oct 2024
487 points (97.8% liked)