submitted 4 days ago* (last edited 4 days ago) by seahorse@midwest.social to c/technology@midwest.social
[-] echodot@feddit.uk 13 points 3 days ago

Yeah, that won't work, sadly. It's an AI; we've given computers the ability to lie and make stuff up, so it'll just claim to have done it. It won't actually bother really doing it.

[-] KairuByte@lemmy.dbzer0.com 2 points 2 days ago

Not quite. The issue is that LLMs aren’t designed to solve math; they are designed to “guess the next word,” so to speak. So if you ask a “pure” LLM what 1 + 1 is, it will simply spit out the most common answer.
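
To illustrate the point, here's a toy sketch (not how real LLMs work internally, and the frequency data is made up): the "model" has no arithmetic at all, it just returns whatever continuation it has "seen" most often.

```python
from collections import Counter

# Toy "language model": no arithmetic, only frequency counts of
# continuations it has "seen". All counts are invented for illustration.
continuations = {
    "1 + 1 =": Counter({"2": 950, "11": 30, "3": 20}),
    "9177 * 241 =": Counter({"2000000": 3, "2211657": 1}),  # rare in "training data"
}

def predict_next(prompt):
    # Just spit out the most common continuation; no calculation happens.
    return continuations[prompt].most_common(1)[0][0]

print(predict_next("1 + 1 ="))       # "2" -- common, so it looks correct
print(predict_next("9177 * 241 ="))  # "2000000" -- confidently wrong
```

Common prompts like "1 + 1" look solved only because the right answer dominates the training data; rare arithmetic gets a plausible-looking guess.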

LLMs with integrations/plugins can likely manage pretty complex math, but only things that something like Wolfram Alpha could already solve, because they’re essentially just polling an external service to get the answer being looked for.
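
That integration pattern looks roughly like this sketch (the names `llm_decide` and `call_calculator` are hypothetical stand-ins, not any real plugin API):

```python
# Toy sketch of the plugin pattern: the "model" only decides to call a
# tool; the actual math runs in an external service (stubbed out here).

def call_calculator(expression):
    # Stand-in for polling an external service like Wolfram Alpha.
    # eval on trusted, hard-coded input only -- just for the demo.
    return eval(expression, {"__builtins__": {}})

def llm_decide(prompt):
    # A real LLM would emit a structured tool call; we fake that decision
    # with a crude heuristic.
    if any(op in prompt for op in "+-*/"):
        return {"tool": "calculator", "input": prompt}
    return {"tool": None, "answer": "(model free-texts an answer)"}

def answer(prompt):
    decision = llm_decide(prompt)
    if decision["tool"] == "calculator":
        # The heavy lifting happens outside the model entirely.
        return str(call_calculator(decision["input"]))
    return decision["answer"]

print(answer("9177 * 241"))  # 2211657, computed by the tool, not the "model"
```

The model's only job is routing the question; the arithmetic never touches the LLM's own forward pass.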

At no point is the LLM going to start doing complex calculations on the CPU currently running the LLM.

this post was submitted on 28 Jun 2024
899 points (98.9% liked)