this post was submitted on 21 Mar 2026
50 points (86.8% liked)

Unpopular Opinion

I use it all the time. It's a good partner for challenging me when I'm looking for other points of view: "I believe x because of y. Challenge my point of view."

It helps me explore a topic fast, so that I know the lingo to search for it myself. I use it for making low-stakes decisions, where it often succeeds, such as shopping and research for shopping. I validate the results every time.

Is it a net negative for society? Not sure, maybe. Will it go away? No. So we should embrace it, though not big-tech AI but smaller LLMs.

[–] snoons@lemmy.ca 8 points 2 days ago

Yes, small, local LLMs run on your own systems negate the insane economic and environmental cost of corporate LLMs; however, there is still the question of validity, and of the long-term effect that 'outsourcing' certain thought processes will have on users.

The results given by an LLM are definite and might miss nuance you would get by researching the topic yourself. Perhaps, for example, you want to learn about a topic, so you ask your LLM, and it tells you everything it can find that is correct and verifiable; however, it completely disregards the work of a researcher whose results turned out to be incorrect. It ignores this work because it's wrong, but by reading it you might learn other things, like the unique and still completely valid methodology the researcher used, which the LLM skipped over because the results were wrong. ^1^

That being said, there are also points where using an LLM might have been useful. You might remember that a while ago some grad students uploaded a pre-print paper about a room-temperature superconductor they had created; it turned out they had just created a special sort of copper alloy that wasn't superconductive, but merely had special magnetic properties. They would have known this if they had read a paper on the same alloy published in the 1970s. An LLM might have helped them there; then again, their supervisor should have known about that paper too, so... ¯\_(ツ)_/¯

As well, there is the issue of atrophy. I'm not sure if you use your LLM to write emails and whatnot, but if one 'outsources' one's reading and writing ability, one slowly loses that ability. I doubt it would be lost completely, but it will certainly wane, and one becomes dependent on the LLM until such time as one starts to read and write unaided again. It's a bit like not reading books: there is a difference between the vernacular of someone who reads a lot and that of someone who doesn't read at all. The brain is very fluid in this respect, and the 'flows' are important.

I recall a bizarre thread in the Steam discussion forums for a certain game; the user had used an LLM to create a post about the rough parts of the game (it was still in development). The post was well articulated, of course, and there weren't any mistakes in the grammar... but when the user wrote comments by themselves, without the LLM... well, let's just say the contrast was extreme. They simply couldn't articulate anything very well on their own, and had likely never written anything longer than a paragraph. They were using a corporate LLM, ofc, but the difference is the same in this respect.

 

  1. It's a common issue in scientific literature: if a researcher's theory turns out to be wrong, they'll retract the paper; however, the paper is still useful. It's much like a team of people mapping a maze who always erase every part of the map that leads to a dead end.