this post was submitted on 04 Apr 2026
125 points (95.6% liked)


When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward establishing a psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. That research also provides experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

[–] TheTechnician27@lemmy.world 23 points 1 week ago* (last edited 1 week ago) (2 children)

Yada yada here's the open-access paper.

(I usually provide these links neutrally, but I'll make a point here: in a public health community, it may be worth requiring linking to a paper on top of the news article covering it – especially if it's open-access. Ars here is mercifully concerned with methodology; many outlets don't give a shit.)

Conclusion is as follows (for expedience; I encourage reading other parts):

As AI becomes ubiquitous in society, understanding how it reshapes human thought is essential. Tri-System Theory [author's note: introduced in this paper; tenuous to call it a "theory" on that basis] offers a new framework for this cognitive frontier. By introducing System 3 (Artificial) as a distinct and external reasoning process, we move beyond the classical architecture of dual-process theories and chart a new decision-making paradigm: one where intuition, deliberation, and artificial cognition coexist, compete, or converge. We show that people not only use System 3 to assist with reasoning, but often surrender to its outputs whether correct or flawed. This cognitive surrender illustrates the value and integration of System 3, but also highlights the vulnerability of System 3 usage. Similar to how System 1-driven heuristics lead to systematic biases, System 3 has differential cognitive shortcomings that will challenge decision-makers and society at large.

Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. [author's note] In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender? We offer Tri-System Theory as a conceptual foundation for understanding these challenges. It is a theory for an age of human-AI algorithmic cognition, and for the decision-makers, researchers, and designers shaping that future.

[–] Tiresia@slrpnk.net 8 points 1 week ago (1 children)

I think this paper is overly exoticizing AI. People have always been externalizing deliberation to others, be they parents, friends, bosses, partners, gods, spirits, journalists, advertisers, superstitions, tarot cards, or rubber ducks.

Perhaps it is worth calling all of these "system 3", but I see no reason to separate LLMs from them. Our judgment has never been entirely our own, and even if there is nobody else to defer to, we can defer to "what they would do".

We accept that these external sources are flawed and can give us bad advice that we follow, but we keep listening as long as we think that is made up for by good advice or other factors.

[–] OpenStars@piefed.social 7 points 1 week ago

People have been using "argument by authority" since before language was invented.

Otoh, this article has to sell its clicks so... all-new terminology it is then.

[–] mfed1122@discuss.tchncs.de 3 points 1 week ago

Yuck. This petty observation is unworthy of being called System 3. Stealing valor from Kahneman and Tversky. Keep their terminology out of your mouths, trend chasers.