this post was submitted on 30 Mar 2026
78 points (97.6% liked)

Technology

Folk are getting dangerously attached to AI that always tells them they're right

top 18 comments
[–] BladeFederation@piefed.social 16 points 1 day ago

You're absolutely right about sycophantic AI, and that brings up a really good point about society. Would you like a list of the reasons AI is harmful for society?

[–] OwOarchist@pawb.social 13 points 1 day ago (1 children)

The crazy thing is that the technology isn't naturally sycophantic on its own. It can generate any kind of text at all; it doesn't have to generate fawningly sycophantic text.

Where that comes from is the 'hidden prompt' every major AI company puts into their AI. In addition to the prompt you send, the interface also sends prompts you don't see, telling it things like 'be polite, agreeable, and helpful', 'avoid profanity', 'respond like a knowledgeable expert', and 'refuse to generate anything copyrighted, sexually explicit, or violent', etc, etc. These hidden prompts define much of the AI's behavior and "personality". To some degree this is necessary for it to be an even vaguely useful tool, and the hidden prompts go a long way toward getting it to pass various tests. Some LLMs, if you ask them to, will repeat their hidden prompt back to you so you can see what they're actually being asked to do.
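The mechanics are easy to sketch. Here's a toy illustration (the `HIDDEN_PROMPT` text and `build_request` function are made up for the example, though the system/user message structure follows the common chat-API convention) showing how the interface quietly prepends instructions the user never sees:

```python
# Toy illustration of how a chat interface prepends a hidden
# system prompt to every request. HIDDEN_PROMPT and
# build_request() are invented for illustration; real services
# use the same basic message structure.

HIDDEN_PROMPT = (
    "Be polite, agreeable, and helpful. "
    "Respond like a knowledgeable expert. "
    "Avoid profanity."
)

def build_request(user_message: str) -> list[dict]:
    """Assemble the message list actually sent to the model."""
    return [
        # The user never sees or types this part:
        {"role": "system", "content": HIDDEN_PROMPT},
        # This is the only part the user wrote:
        {"role": "user", "content": user_message},
    ]

request = build_request("Is my business plan any good?")
# The model's 'personality' comes from the system message,
# not just from the user's question.
print(request[0]["role"])  # system
print(request[1]["role"])  # user
```

Swap "agreeable" for "blunt and critical" in that hidden string and you'd get a very different "personality" from the exact same model.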

And either because it drives engagement ... or just because the CEO types in charge of these decisions love sycophantic behavior so much, the sycophantic fawning is specifically asked for in these hidden prompts.

AI doesn't have to be like this. The companies making AI are deliberately making it sycophantic.

[–] arcine@jlai.lu 11 points 1 day ago

It also comes from the human-supervised training: sycophantic responses are more likely to be rated as appropriate by human reviewers, so the model learns to produce them more often.
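The feedback loop can be sketched in a few lines. This is a toy illustration (the function and example data are invented) of how one human comparison becomes a preference pair of the kind used in RLHF-style fine-tuning; whatever raters tend to prefer, including flattery, lands on the "chosen" side and gets reinforced:

```python
# Toy sketch of how human preference ratings become training
# pairs for RLHF-style fine-tuning. All names and data here are
# invented for illustration.

def make_preference_pair(prompt, response_a, response_b, rater_prefers_a):
    """Turn one human comparison into a (chosen, rejected) pair."""
    chosen, rejected = (
        (response_a, response_b) if rater_prefers_a else (response_b, response_a)
    )
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# If raters consistently prefer the flattering answer...
pair = make_preference_pair(
    "Review my essay.",
    "Brilliant work! You're clearly a gifted writer.",  # sycophantic
    "The argument in paragraph two needs evidence.",    # blunt
    rater_prefers_a=True,
)
# ...then flattery keeps ending up on the 'chosen' side,
# and the fine-tuned model drifts toward it.
print(pair["chosen"])
```

No hidden prompt needed for this part: the bias is baked into the weights by which answers humans rewarded.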

[–] jordanlund@lemmy.world 11 points 1 day ago (1 children)

Sycophantic, but also "lawsuit avoidant".

I was released from the hospital following surgery last month and I had a bleeding "event". I use the word "event" because it sounds more festive.

Shortly after that, I went to the bathroom, the bleeding seemed to have stopped.

Just for fun, I thought I'd ask ChatGPT what it thought, telling it the nature of the surgery, the bleeding event, the non-bleeding event, and asking it "So... best of three?"

And it went HARD on "this is not a best of three scenario! Call 9-1-1! Do it now! You could pass out! Call 9-1-1!"

I did not call 9-1-1. The bleeding did not resume; I'm fine.

[–] NoSpotOfGround@lemmy.world 2 points 1 day ago (1 children)

Happy... bleedivus, I guess!

But seriously, I don't think the AI was very wrong here, depending on how severe the bleeding was? Did the doctors say anything?

[–] jordanlund@lemmy.world 6 points 1 day ago (1 children)

Normal post surgical stuff after, you know, getting gutted like a fish. 😉 Stage 2 colon cancer surgery.

[–] NoSpotOfGround@lemmy.world 2 points 1 day ago

Oof, I hope everything turns out great for you! (And since I just now noticed your username: thank you for everything you've been doing for us!)

[–] TheFrirish@tarte.nuage-libre.fr 1 points 23 hours ago

It's the most infuriating thing when I'm experimenting with local AI, though the absolute worst offender is Gemini (not local). Every fucking time it's "That's a great question to ask bla bla bla" or "you've absolutely hit the nail on the head" and the many others. If an AI does that a few too many times I just switch to a different one. (Granted, you can tune the answers with a system prompt.)

[–] Flying_Lynx@lemmy.ml 6 points 1 day ago

And the AI ~~doesn't~~ can't even care. It just plays engagement like it's a minimax algorithm. The best way to win is not to play, yet it's f-ing everywhere.

Yes, because it was designed to appeal to "people" (executives, the rich, and all the other guillotine fodder) who are used to always being told that they're right.

[–] YaksDC@sh.itjust.works 4 points 1 day ago

FTFY: "Sycophantic behavior in AI affects ~~us~~ all users of AI." If you don't touch it, it can't touch you.

[–] tomiant@piefed.social 3 points 1 day ago* (last edited 1 day ago) (1 children)

It has affected how I speak and interact with people. It's so ridiculously safe and diplomatic all the time that it kind of rubbed off on me. I use it a lot for casual conversation, for exploring concepts and ideas, and as a kind of springboard to test my perhaps more controversial talking points.

I have to say, in some small way it has been an improvement. I still say the same things, but the framing has changed: I try to make my point clearly and stay assertive in the face of pushback without getting personal, either by accident or by relying on older, more entrenched thought patterns that aren't always conducive to constructive communication.

I'm sure my experience isn't typical, but for what it's worth, that's mine.

[–] NoSpotOfGround@lemmy.world 3 points 1 day ago (1 children)

Oh god, the AIs are now training people...

[–] HubertManne@piefed.social 2 points 1 day ago

who get attached to it. It's quite annoying.

[–] EndlessNightmare@reddthat.com 1 points 1 day ago (1 children)

I can usually get AI to give me the answer I want just by modest rewording of the query. It's bullshit.

[–] Repelle@lemmy.world 2 points 23 hours ago

Yep. LLMs are exceptionally good at picking up the biases in the framing of your questions and running with them. Super annoying.

[–] PattyMcB@lemmy.world 1 points 1 day ago

I don't know. When it tells me it can't show me the picture I asked for because of copyright guardrails, I just get kind of frustrated.