this post was submitted on 14 Feb 2024
65 points (94.5% liked)
Technology
Don't use LLMs in production for accuracy critical implementations without human oversight.
Don't use LLMs in production for accuracy critical implementations without human oversight.
I almost want to repeat that a third time even.
They weirdly ended up being good at information recall in many cases, and as a result have been used that way in situations where it doesn't matter much if they're wrong some of the time. But the infrastructure fundamentally cannot self-verify.
This is part of why I roll my eyes when I see employment of LLMs vs humans presented as an exclusionary binary. These are tools to extend and support human labor. Not replace humans in most cases.
So LLMs can be amazing at a wide array of tasks. Like I literally just saved myself a half hour of copying and pasting minor changes in a codebase by having Copilot generate methods using a parallel object as a template and the new object's fields. But I also have unit tests to verify behavior, and my own review of what was generated, with over a decade of experience under my belt.
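To make that concrete, here's a minimal sketch of the pattern (all class and field names are hypothetical, not from my actual codebase): an existing object with a method, a new object whose parallel method is the kind of boilerplate Copilot can mirror over, and a human-written unit test that actually verifies the generated behavior.

```python
import unittest

# Hypothetical "template" object that already exists in the codebase.
class LegacyOrder:
    def __init__(self, order_id, total):
        self.order_id = order_id
        self.total = total

    def to_dict(self):
        return {"order_id": self.order_id, "total": self.total}

# New object: Copilot can generate to_dict by mirroring LegacyOrder's
# version, substituting in this object's fields.
class Invoice:
    def __init__(self, invoice_id, amount):
        self.invoice_id = invoice_id
        self.amount = amount

    def to_dict(self):
        return {"invoice_id": self.invoice_id, "amount": self.amount}

# The part the human writes: a test that verifies the generated code,
# because the model can't verify itself.
class TestInvoice(unittest.TestCase):
    def test_to_dict_fields(self):
        inv = Invoice("INV-7", 99.5)
        self.assertEqual(inv.to_dict(), {"invoice_id": "INV-7", "amount": 99.5})
```

The point isn't the trivial code, it's the division of labor: the model does the mechanical mirroring, the test and the reviewer do the verification.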
Someone who has never programmed using Copilot to spit out code for an idea is going to have a bad time. But they'd have a similar bad time if they outsourced a spec sheet to a code farm without having anyone to supervise deliverables.
Oh, and technically, my example doesn't actually require you to know the correct answer before asking. It only requires you to recognize the correct answer when you see it. And the difference between those two use cases is massive.
Edit: In fact, the suggestion to replace the nouns with emojis came from GPT-4. Even though it doesn't have any self-introspection capabilities, I described what I thought was happening and why, and it came up with three suggestions for ways to improve the result. Two I immediately saw were dumb as shit, but the idea to use emojis as representative placeholders while breaking the token pattern was simply brilliant. I'm not sure I would have thought of that on my own, but as soon as I saw it, I knew it would work.
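For anyone who wants to try the trick, here's a rough sketch of the mechanics (the noun-to-emoji mapping is purely illustrative, not the one I used): swap the problem nouns for emoji placeholders before prompting, then map them back afterward. The substitution itself is just string replacement; the interesting part is that the emoji tokens break the token pattern the model keeps falling into.

```python
# Illustrative mapping; pick emojis unrelated to the masked nouns so
# the model can't lean on their associations.
PLACEHOLDERS = {"doctor": "🔷", "lawyer": "🔶"}

def mask_nouns(text, mapping=PLACEHOLDERS):
    """Replace each noun with its emoji placeholder before prompting."""
    for noun, emoji in mapping.items():
        text = text.replace(noun, emoji)
    return text

def unmask_nouns(text, mapping=PLACEHOLDERS):
    """Restore the original nouns in the model's response."""
    for noun, emoji in mapping.items():
        text = text.replace(emoji, noun)
    return text

masked = mask_nouns("the doctor consulted the lawyer")
# masked == "the 🔷 consulted the 🔶"
```

You'd send `masked` as the prompt, then run the model's reply through `unmask_nouns` before showing it to anyone.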
But that's what the marketers are selling: "this will replace a lot of workers!" And it just cannot.