this post was submitted on 17 Sep 2025
69 points (98.6% liked)

Canada
top 13 comments
[–] RandAlThor@lemmy.ca 33 points 2 days ago (3 children)

An April MIT study found that AI large language models (LLMs) encourage delusional thinking, likely due to their tendency to flatter and agree with users rather than push back or provide objective information. Some AI experts say this sycophancy is not a flaw of LLMs but a deliberate design choice to manipulate users into addictive behaviour that profits tech companies.

[–] kent_eh@lemmy.ca 15 points 2 days ago (1 children)

Add it to the pile of reasons why the rapid mass adoption of these LLMs and pretending they are AGI is a really bad idea.

[–] ganryuu@lemmy.ca 3 points 2 days ago (1 children)

I haven't seen anyone, even the worst of them, pretend we're already at AGI. Granted, some of them pretend we're getting close to AGI, which is an outrageous lie, but a different one.

[–] nik282000@lemmy.ca 9 points 2 days ago

Management. Every middle-management twit I meet thinks that LLMs are thinking, reasoning minds that can answer every question. They're all frothing at the idea that they can replace employees with an AI that never takes time off or talks back.

[–] Showroom7561@lemmy.ca 5 points 2 days ago (2 children)

An April MIT study found AI Large Language Models (LLM) encourage delusional thinking... ... is not a flaw of LLMs, but a deliberate design choice to manipulate users into addictive behaviour that profits tech companies.

Just yesterday, as I was messing around with a local LLM to see how well it does speech-to-text (not to answer any questions), I came across a voice (text to speech) that was basically a woman speaking in ASMR.

I'll be honest, it was soothing to listen to, and if I were one of those guys who throws money at ASMR talent (OnlyFans?), I can see how this could become quite addictive.

This is 100% by design, and if this LLM voice had an avatar of a woman character you find attractive, you'd be fucked.

[–] masterofn001@lemmy.ca 4 points 2 days ago

you'd be fucked

That's what they're hoping

[–] Meron35@lemmy.world 2 points 2 days ago (1 children)

As of three months ago, Google Veo 3 was already capable of automatically producing ASMR videos featuring realistic, conventionally attractive women, albeit only for short durations.

We are so cooked.

https://youtu.be/BwpKj3_C480

[–] Showroom7561@lemmy.ca 1 points 2 days ago

😲 ☠️

[–] phoenixz@lemmy.ca 4 points 2 days ago

Ding ding ding, and this is the answer.

[–] ininewcrow@lemmy.ca 13 points 2 days ago

AI delusions are hurting Canadians?

Billionaires have been doing that for decades.

[–] KanadrAllegria@lemmy.ca 5 points 2 days ago

I particularly liked this quote:

"I feel like right now everyone has a car that goes 200 miles per hour, but there's no seat belts, there's no driving lessons, there's no speed limits," Brisson said.

[–] TachyonTele@piefed.social 6 points 2 days ago

This is extremely sad. That's some weird shit.

[–] CkrnkFrnchMn@lemmy.ca 2 points 2 days ago

Yowzer... and this is only the beginning :(