this post was submitted on 23 Mar 2026
320 points (98.8% liked)

Technology

[–] halcyoncmdr@piefed.social 47 points 22 hours ago (3 children)

The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they're asking anyway. All output needs to be verified before being used or relied upon.

The "AI" is just streamlining the process to save time.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

[–] rumba@lemmy.zip 1 points 38 minutes ago

This is absolutely the case, and honestly, at least for now, it's how things need to be across the board.

No one should be using AI to do things they're incapable of doing (or undoing).

[–] 7101334@lemmy.world 1 points 51 minutes ago

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

Relying on it in any circumstance (though medical use is understandable if you're simply too poor or don't have access) while it is exhausting water supplies and polluting the planet is stupid, and instantly proves that you are inconsiderate.

[–] Zagorath@quokk.au 0 points 17 hours ago (1 children)

the user needs to be smart enough to do whatever they're asking anyway

I'm gonna say that's ideal but not quite necessary. What's needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves can certainly do that, but the skill extends more broadly. It's easier to verify a result than it is to obtain that result. Think of how film critics don't necessarily need to be filmmakers, or the P=NP question in computer science.
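That verify-vs-solve asymmetry can be sketched with a toy example (hypothetical illustration, not from the thread): for subset-sum, an NP-complete problem, checking a claimed answer is a quick linear pass, while finding one by brute force takes exponential time.

```python
from itertools import combinations

def verify(nums, subset, target):
    """Check a claimed solution: cheap, one pass over the subset."""
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    """Find a solution by brute force: tries up to 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)          # expensive search
print(verify(nums, answer, 9))   # fast check of the result
```

The point being: a user who could never afford the search can still afford the check.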

[–] Pyro@programming.dev 5 points 16 hours ago (3 children)

But if the output has issues, what're you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI's mistakes yourself.

[–] Zagorath@quokk.au 4 points 16 hours ago (1 children)

At the risk of sounding like an overly obsequious AI… You know what, you're completely right. I'm honestly not sure what use case I was imagining when I wrote that last comment.

[–] Redjard@reddthat.com 3 points 16 hours ago

Making text flow naturally, grouping and ordering information, good writing.

You can verify that two texts have the same facts and information, yet one reads way better than the other. But writing a text that reads well is quite hard.

[–] WhiskyTangoFoxtrot@lemmy.world 1 points 13 hours ago

I can't draw, but I could probably photoshop out some minor issues in an AI-generated image.

[–] Redjard@reddthat.com -1 points 16 hours ago

If you don't have the ability, then you'd do what you would have done 5 years ago: not do it.
Either submit without it, or don't submit at all.