this post was submitted on 17 Feb 2026
367 points (98.4% liked)

Technology

81374 readers
4575 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

Dutch lawyers increasingly have to convince clients that they cannot rely on AI-generated legal advice, because chatbots are often inaccurate, the Financieele Dagblad (FD) found after speaking with several law firms. A recent Deloitte survey showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping for a faster turnaround or lower fees.

[–] ToTheGraveMyLove@sh.itjust.works 2 points 23 hours ago (1 children)

Except with a plane, if you know how to fly it, you're far less likely to crash it. Even if you "can use LLMs," there's still a pretty strong chance you're going to get shit back, due to their very nature. With one, the machine works with you; with the other, the machine is always working against you.

[–] a4ng3l@lemmy.world 1 points 23 hours ago (1 children)

Nah, that’s just plain wrong… you can also fantastically screw up flying a plane, but as long as you use LLMs safely, you’re golden.

It also has no will of its own; it is not "working against you." Don’t give those apps a semblance of intent.

[–] ToTheGraveMyLove@sh.itjust.works 1 points 23 hours ago* (last edited 23 hours ago) (1 children)

If I'm canoeing upriver, the river is working against me. That doesn't mean it has a will. LLMs don't need a will to work against you: if your goal is to get accurate information, then by design they are just as likely to produce inaccurate information, depending on how the tokens applied to your query are weighted. You cannot control that. It's not plain wrong. Jfc, you slop apologists are fucking delusional. AI doesn't magically work better for you because you're special and can somehow counteract its basic fucking design.
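[The token-weighting point above can be sketched roughly as follows. This is a minimal illustration of weighted next-token sampling, the mechanism the comment is gesturing at; the tokens and scores are made-up values, not from any real model.]

```python
import math
import random

# Toy logits for candidate next tokens (illustrative values only).
# A real model produces scores like these over its whole vocabulary.
logits = {"Paris": 4.0, "Lyon": 2.5, "Berlin": 2.0}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The next token is *sampled* from that distribution, so a
# lower-weighted (possibly wrong) token can still be chosen.
token = random.choices(list(probs), weights=list(probs.values()))[0]
```

The point the sketch illustrates: the output is drawn from a weighted distribution rather than retrieved as a verified fact, so the user cannot force the highest-probability (or the correct) continuation on any given run.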

[–] a4ng3l@lemmy.world 1 points 23 hours ago

Slop apologist, because I argue that correctly using a tool to restructure pre-existing information I’m inputting, under my oversight, is risk-free?

You crazy ass end-of-world lunatic…

As far as I know, "slop" always presupposes generating derivative content, not restructuring or manipulating existing content. You’re arguing out of your ass, and that’s just a bad opinion.