It’s been doing wonders helping me improve the materials I produce so that they fit certain audiences better. I can also use it to spot missing points and inconsistencies against the ton of documents we have in my shop when writing something. It’s been quite useful as a sparring partner so far.
It's great when you have basic critical thinking skills and can use it like a tool.
Unfortunately, many people don't have those and just use AI as a substitute for their own brain.
Yeah, well, the same applies to a lot of tools… I’m not certified to fly a plane, and look at me, not flying one either… but I’m not shitting on planes…
But planes don't routinely spit out false information.
I understand what you mean, but… *looks at Birgenair 301 and Aeroperu 603* *looks at Qantas 72* *looks at the 737 MAX 8 crashes* Planes have spat out false data, and of the 5 cases mentioned, only one avoided disaster.
It is down to the humans in the cockpit to filter through the data and know what can be trusted. Which could be similar to LLMs, except cockpits have a two-person team to catch errors and keep things safe.
So you found five examples in the entire history of aviation. How often do you think AI hallucinates information? Because I can guarantee you it's a hell of a lot more frequently than that.
You should check out Air Crash Investigation, amigo, all 26 seasons. You'd be surprised what humans in metal life-support machines can cause when systems break down.
I'm not watching 26 seasons of a TV show, ffs; I've got better things to do with my time. Skimming IMDb, though, I'm seeing a lot of different causes for the crashes: bad weather, machine failure, running out of fuel, improper maintenance, pilot error, etc. Remember, my point had nothing to do with mechanical failure. Any machine can fail. My point was that airplanes don't routinely spit out false information in the day-to-day function of the machine the way AI does. You're getting into strawman territory, mate.
If you can’t fly a plane, chances are you’ll crash it. If you can’t use LLMs, chances are you’ll get shit out of them… the outcome of using a tool is directly correlated with one’s ability?
Sounds logical enough to me.
Sure. However, the output of the LLM tool always looks plausible. And if you aren't a subject matter expert, that likely-looking result looks very right. That's the difference: the wrong parts are hard to spot (even for experts).
So are a speedometer and an altimeter, until you reaaaaaaaaly need to understand them.
I mean, it all boils down to the proper tool with the proper knowledge and ability. It's slightly exacerbated by the apparent simplicity, but if you look at it as a tool, it's no different.
Except with a plane, if you know how to fly it, you're far less likely to crash it. Even if you "can use LLMs", there's still a pretty strong chance you're going to get shit back, due to their very nature. With one, the machine works with you; with the other, the machine is always working against you.
Nah, that's just plain wrong… you can also fantastically screw up flying a plane, but as long as you use LLMs safely you're golden.
It also has no will of its own; it is not "working against you". Don't give those apps a semblance of intent.
If I'm canoeing upriver, the river is working against me. That doesn't mean it has a will. LLMs don't need a will to work against you if your goal is to get accurate information, because by their very design they are just as likely to produce inaccurate information, depending on how the tokens applied to your query are weighted. You cannot control that. It's not plain wrong. Jfc, you slop apologists are fucking delusional. AI doesn't magically work better for you because you're special and can somehow counteract its basic fucking design.
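For what it's worth, the weighting point above maps onto how next-token sampling generally works: the model assigns scores to candidate tokens and draws from the resulting probability distribution, so a fluent-but-wrong continuation can win the draw. A minimal sketch in Python; the token names and scores here are made up purely for illustration:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from a softmax over model scores.

    Higher-scoring tokens are more likely, but any token with nonzero
    probability can be picked, including a factually wrong one.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice: plausibility, not truth, drives the pick.
    idx = random.choices(range(len(logits)), weights=probs, k=1)[0]
    return idx, probs

# Hypothetical candidates after the prompt "The capital of Australia is"
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = [2.0, 1.5, 0.5]  # "Sydney" is wrong but scores close to "Canberra"

idx, probs = sample_next_token(logits)
print("picked:", tokens[idx])
print({t: round(p, 2) for t, p in zip(tokens, probs)})
```

With these made-up scores, the wrong answer "Sydney" gets drawn roughly a third of the time, which is the sense in which the output is weighted toward plausibility rather than accuracy.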
Slop apologist, because I argue that correctly using a tool to restructure pre-existing information I'm inputting under my oversight is risk-free?
You crazy-ass end-of-the-world lunatic…
As far as I know, "slop" always presupposes generating derivatives, not restructuring or manipulating existing material. You're arguing out of your ass, and that's just a bad opinion.
Rule of thumb: AI can be useful if you use it for things you already know.
It can save time, and if it produces shit, you'll notice.
Don't use it for things you know nothing about.
LLMs specifically, because AI as a range of practices encompasses a lot of things where the user can be slightly more dumb.
You’re spot on in my opinion.
Rearranging text is a vastly different use case from diagnosis and relevant information retrieval.