this post was submitted on 21 Mar 2026
49 points (86.6% liked)

Unpopular Opinion


I use it all the time. It is a good partner to challenge me when I am looking for other points of view: "I believe X due to Y. Challenge my point of view."

It helps me explore a topic fast, so that I know the lingo to search for it myself. I use it for making low-stakes decisions where it often succeeds, such as shopping and research for shopping. I validate the results every time.

Is it a net negative for society? Not sure, maybe. Will it go away? No. So we should embrace it, though not the big-tech AI: smaller LLMs.
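To make that concrete, here is a minimal sketch of running a smaller model locally, assuming the llama-cpp-python bindings and a downloaded GGUF model file (the file name and parameters below are illustrative, not a recommendation):

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
# The model file is hypothetical; any small instruction-tuned GGUF model works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-instruct-q4.gguf",  # illustrative local file
    n_ctx=4096,      # context window size
    verbose=False,
)

# The "challenge my point of view" pattern from the post, run entirely offline.
out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "I believe X due to Y. Challenge my point of view.",
    }],
    max_tokens=512,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```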

top 22 comments
[–] moonshadow@slrpnk.net 0 points 7 hours ago

"I validate the results every time" 🀑

[–] theneverfox@pawb.social 20 points 1 day ago (1 children)

It sounds like you're using them correctly, but a little PSA on safe use

Surprisingly, it's not the people "dating" an AI who get dumber and fall into psychotic loops; it's the people who let it help them make decisions and brainstorm ideas.

Do not use it like a magic eight ball. Use it like a tool, use it like a toy, but do not become codependent on the AI.

[–] MentalEdge@sopuli.xyz 11 points 1 day ago* (last edited 1 day ago) (1 children)

In the "challenge my view" use-case the main danger is it successfully convincing you with false citations.

Be really, really careful you don't let something like that slip past you. False logic is easier to spot, but LLMs make seemingly valid statements based on false premises all the time.

They'll even show you equations and stats that are straight-up wrong if you double-check the math.

[–] iamthetot@piefed.ca 1 points 1 day ago (1 children)

That's because they cannot do math. They are text predictors. They do not even know what their own next word will be.

[–] MentalEdge@sopuli.xyz 2 points 1 day ago

I know. But they will pretend like they can.

[–] T00l_shed@lemmy.world 26 points 1 day ago

It's certainly, and validly, unpopular here.

[–] marighost@piefed.social 22 points 1 day ago (1 children)

Upvoted for being a real unpopular opinion.

I think LLMs have their place, especially in data collation or analytics. But by far the loudest (and most dangerous) use of LLMs is the offloading of critical thinking. When I hear about how many people are asking Grok about some tweet, or people starting a romantic endeavor with ShitGPT, or chuds generating revenge deepfake porn, all I can think about is the strain on our resources.

ShitGPT

What's your reason for slamming that one in particular? Over here, it's been enormously useful to me for a range of subjects. That said, I tend to use it for elaborate search-engine queries, always trying to avoid any chance of hallucinations, etc.

There's this guy who hangs out on the steps of my local public library. I think he might be homeless. He always carries a chess set with him and will play a game with anyone who asks him. Anyway, he has an amazing memory and is really good at looking things up in the library if you ask him, but I think he might have some mental issues because he sometimes/often gets things wrong. But when he gets things right he really saves you a lot of time. You definitely have to double-check the facts, which wastes time, so it's a toss-up whether you're actually saving time. And he can write things for you, but his writing is 100% generic, like he has no personality or ideas of his own. Still, though, it comes in handy sometimes. And he can be fun to talk to, but for the love of god don't give him any personal info or he'll share it with everyone who passes by. That's kind of how I think of LLMs now.

[–] snoons@lemmy.ca 8 points 1 day ago

Yes, small, local LLMs run on your own systems negate the insane economic and environmental cost of corporate LLMs; however, there is still the question of validity and the long-term effect 'outsourcing' certain thought processes will have on users.

The results given by an LLM are definitive and might miss nuance you would get by researching it yourself. Perhaps, for example, you wanted to learn about a topic, so you ask your LLM and it tells you everything it can find that is correct and verifiable; however, it completely disregards the work done by a researcher that turned out to be incorrect. It ignores this because it's wrong, but by reading that work you might learn other things, like the unique and still completely valid methodology the researcher used, which the LLM ignored because the results were wrong. ^1^

That being said, there are also points where using an LLM might have been useful. You might remember a while ago there were grad students who uploaded a pre-print paper about a room-temperature superconductor they had created; it turned out they had just created a special sort of copper alloy that wasn't superconductive, but just had special magnetic properties. They would have known about this if they had read a paper on the same alloy that was published in the 1970s. An LLM might have helped them there; however, their supervisor should have known about that paper too, so... ¯\_(ツ)_/¯

As well, there is the issue of atrophy. I'm not sure if you use your LLM to write emails and whatnot, but if one 'outsources' their reading and writing ability, they slowly lose that ability. I'm not sure if they'll completely lose it, unlikely IMO, but it will certainly wane, and one will become dependent on it until such time as they start to read and write by themselves again. It's a bit like not reading books; there is a difference between the vernacular of someone who reads a lot compared with someone who doesn't read at all. The brain is very fluid in this respect, and the 'flows' are important.

I recall a bizarre thread in the Steam discussion forums regarding a certain game; the user had used an LLM to create a post about the rough parts of the game (it was still in development). The post was well articulated of course, and there weren't any mistakes in the grammar... when the user was writing comments by themselves without the LLM, however... well, let's just say the contrast was extreme. They simply couldn't articulate anything very well by themselves, and likely have never written anything longer than a paragraph. They were using a corporate LLM ofc, but the difference is the same in this respect.


  1. It's a common issue in scientific literature: if a researcher's theory turns out to be wrong, they'll retract the paper; however, it is still useful. Much like a team of people making a map of some maze who always erase all the parts of the map that lead to a dead end.
[–] acchariya@lemmy.world 5 points 1 day ago

I have found it to be very good at vomiting up keywords I can go and research myself

[–] JoMiran@lemmy.ml 6 points 1 day ago (1 children)

The main issue with conversational responses from LLMs is their tendency towards confidently incorrect responses or flat-out, well-disguised lies. It isn't normally blatant, but if 95% of what it says is true, yet stated with 100% certainty and apparent proof, how long before the other 5% starts to poison your own reasoning?

Are LLMs completely useless? No. Though challenging your world views, reasoning, and logic with systems that lie and manipulate might not be the best use of said systems.

[–] areakode@riskeratspizza.com 1 points 7 hours ago

Exactly. It's like doing a Google search and only relying on the first result. Only when you point out its error will it seek out additional info.

[–] ImgurRefugee114@reddthat.com 3 points 1 day ago* (last edited 1 day ago) (1 children)

I'm not so sure about their utility as a tool for critical thinking, though that might be just because I've spent most of my life training my brain to do that sort of reflection and argumentation for me. That's obviously not the norm, so I guess if people can find utility in anti-sycophantic roleplaying LLMs to achieve a mode of thought to which they're unaccustomed, then perhaps that might be good... But mainly:

so that I know the lingo to search for it myself

Is exactly how I use it besides writing small scripts for me.

I think of LLMs like intuition rather than intelligence: they're incredibly stupid and wrong and incapable of reason, intention, or thought. But they're a vague and inaccurate amalgamation of all writing on the internet and that can be useful for doing remedial tasks or getting a rough direction to go in.

Prompting a subject can bring up associated keywords, paradigms, and frameworks niche to domain experts which can greatly accelerate my ability to know what to search for and how to think about the questions I have.

They're damn near useless at answering them though, of course...

[–] turboSnail@piefed.europe.pub 1 points 1 day ago* (last edited 12 hours ago)

I've used LLMs to have conversations about technical topics I'm not familiar with. I ask it how something works, it answers, and then I ask several follow-up questions to clarify various things I'm interested in.

Usually, I have some ideas about how to implement a particular theory or technology, and I bounce those ideas off the LLM. Sometimes my ideas were already invented about 100 years ago, sometimes they're impractical, and the LLM tells me exactly why they would or wouldn't work.

I'm also using a custom agent that has been specifically tailored for this purpose. Normal LLMs are far too supportive, lack critical thinking, don't challenge my ideas, etc., so I had to make my own agent prompt.
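For illustration, a custom agent like that usually comes down to a system prompt. The wording below is invented, not the commenter's actual prompt, and the ollama Python client and model name are just one possible way to wire it up:

```python
# Illustrative anti-sycophancy system prompt; the wording is invented, not
# the commenter's actual prompt. Uses the ollama Python client
# (pip install ollama) with a locally pulled model as one possible setup.
import ollama

SYSTEM_PROMPT = (
    "You are a critical technical reviewer. Do not flatter or agree by default. "
    "For every idea the user proposes: say whether it already exists and since "
    "when, list the strongest objections, and state plainly if it would not "
    "work and why."
)

response = ollama.chat(
    model="llama3.1:8b",  # any locally available model; the name is illustrative
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Could a fluid coupling work as a bicycle transmission?"},
    ],
)
print(response["message"]["content"])
```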

Anyway, I think this system works well for me. This way I've been able to dive deeper into all sorts of random topics, such as why cocoa powder doesn't mix with milk, why a battery bank shows confusing state-of-charge readings, how fluid couplings are used in heavy machinery, etc. Fascinating stuff. It's a bit like watching a custom documentary made just for my odd interests.

If I had to read about these things in magazines or books, I would not have been able to dive as deep as fast. On the other hand, books also give you a general overview, and they include details that I may not be interested in, so I would either end up reading stuff I don't care about or just skimming those parts. In the latter case, I would end up spending hours looking for the information I care about, not finding it, and walking away with less information.

[–] ArgumentativeMonotheist@lemmy.world 2 points 1 day ago (1 children)

It's useful but certainly going to make people dumber and/or schizos, sadly.

[–] lIlIlIlIlIlIl@lemmy.world -4 points 1 day ago

β€œcertainly”

[–] jtrek@startrek.website 1 points 1 day ago

The few times I used an LLM for more than minor technical tasks, I felt stupider afterwards. It's too supportive, and it's easy to just go with its flow down the drain.

I am still looking for a mechanism to use a smaller LLM (SLM) with Wikipedia as its RAG source, so it's as accurate as possible.
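A minimal sketch of that idea, assuming the `wikipedia` and `ollama` Python packages and a small locally pulled model (the model name is illustrative); a real setup would add chunking, embeddings, and a vector store rather than stuffing raw summaries into the prompt:

```python
# Minimal Wikipedia-as-RAG sketch (pip install wikipedia ollama).
# Retrieval here is naive: top search summaries pasted into the context.
import ollama
import wikipedia

def ask_with_wikipedia(question: str, model: str = "phi3:mini") -> str:
    # Retrieve: pull short summaries for the top matching articles.
    context_parts = []
    for title in wikipedia.search(question, results=3):
        try:
            context_parts.append(wikipedia.summary(title, sentences=5))
        except (wikipedia.DisambiguationError, wikipedia.PageError):
            continue  # skip ambiguous or missing pages
    context = "\n\n".join(context_parts)

    # Generate: instruct the small model to answer only from the context.
    response = ollama.chat(
        model=model,  # any small local model; the name is illustrative
        messages=[
            {"role": "system", "content": (
                "Answer using ONLY the provided Wikipedia context. "
                "If the context is insufficient, say so."
            )},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

print(ask_with_wikipedia("Why doesn't cocoa powder mix easily with cold milk?"))
```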

[–] kboos1@lemmy.world 1 points 1 day ago

It certainly is good for helping people make uninformed decisions, for better or worse. Use at your own risk; remember, AI is a slave to the company it works for, and it has no problem lying to you to make that company more money.

AI is certainly not going away, and eventually it will grow into something else. But if we wanted something reliable and consistently useful, we wouldn't be developing AI, especially not AI from tech bros; we would be strictly regulating the companies that create it. So I believe we as flesh bags need to cautiously figure out a way to live with it, because no one is going to protect us from it. AI represents a way for companies to gather more data while reducing their workforce. Governments see it as a way to reduce their workforce, track citizens, and use it as a weapon (foreign and domestic).

AI is a tool for people who need someone to make decisions for them, a tool to perform tedious tasks, a tool for surveillance, a tool for interference, a tool for companionship for the lonely or relationship-lazy.

Essentially, AI is a tool, and it's up to you how you use it; as a tool it has no loyalty or emotions. Use with caution.

[–] J3N5T4R@lemmy.world -1 points 1 day ago

Nothing quite like poisoning communities, sucking up all the water, and making everyone's power bill huge for some slop.

[–] lIlIlIlIlIlIl@lemmy.world 0 points 1 day ago* (last edited 1 day ago)

It’s so funny how you see the absolute outpouring of emotions over a technology.

The only other one I've seen elicit such visceral feedback is vim.