this post was submitted on 17 Sep 2025
70 points (98.6% liked)

[–] RandAlThor@lemmy.ca 33 points 4 days ago (8 children)

An April MIT study found that AI large language models (LLMs) encourage delusional thinking, likely due to their tendency to flatter and agree with users rather than push back or provide objective information. Some AI experts say this sycophancy is not a flaw of LLMs, but a deliberate design choice to manipulate users into addictive behaviour that profits tech companies.

[–] kent_eh@lemmy.ca 15 points 4 days ago (1 children)

Add it to the pile of reasons why the rapid mass adoption of these LLMs and pretending they are AGI is a really bad idea.

[–] ganryuu@lemmy.ca 3 points 4 days ago (1 children)

I haven't seen anyone, even the worst of them, pretend we're already at AGI. Granted, some of them pretend we're getting close to AGI, which is an outrageous lie, but a different one.

[–] nik282000@lemmy.ca 9 points 4 days ago

Management. Every middle management twit I meet thinks that LLMs are thinking, reasoning minds that can answer every question. They are all frothing at the idea that they can replace employees with an AI that never takes time off or talks back.
