this post was submitted on 22 Feb 2026
195 points (99.5% liked)
I agree. What you get with chatbots is the ability to iterate on ideas & statements first without spreading undue confusion. If you can't clearly explain an idea to a chatbot, you might not be ready to explain it to a person.
How does this bio make the question unclear, or keep the answer from spreading undue confusion? The bots are clearly just being assholes because of the user's origin and education level.
Bio: [screenshot not included]
Question: [screenshot not included]
Answer: [screenshot not included]
The LLMs aren't being assholes, though - they're just spewing statistical likelihoods. While I do find the example disturbing (and I could imagine some deliberate bias in training), I suspect one could mimic it with different examples with a little effort - there are many ways to make an LLM look stupid. It might also be tripping some safety mechanism somehow. More work to be done, and it's useful to highlight these cases.
I bet if the example bio and question were both in Russian, we'd see a different response.
But as a general rule: Avoid giving LLMs irrelevant context.
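To make "context" concrete: chat-style LLM APIs generally take a list of messages, and a stored bio typically gets injected as extra system context the model conditions on. Here's a minimal sketch of that mechanism; `build_messages` is a hypothetical helper, not any real SDK's API:

```python
# Sketch: how a stored "bio" typically ends up in an LLM request.
# build_messages is a hypothetical helper for illustration only.

def build_messages(question, bio=None):
    """Assemble a chat-style message list; the bio, if present,
    is injected as extra system context the model conditions on."""
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    if bio:
        # This is the part a logged-in user often can't opt out of.
        messages.append({"role": "system", "content": f"User bio: {bio}"})
    messages.append({"role": "user", "content": question})
    return messages

with_bio = build_messages("What is 2 + 2?", bio="self-taught, rural background")
without_bio = build_messages("What is 2 + 2?")
# Identical question, different conditioning context.
```

Same question, but the model sees two different prompts, which is exactly where bio-dependent answers come from.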
If the LLM has a bio on you, you can't exclude it without logging out. That's one of the main points of the study:
This isn't about making the LLM look stupid; it's about systemic problems in the responses LLMs generate based on what they know about the user. Whether or not the answer would be different in Russian is immaterial: they are dumbing down, or not responding to, users' simple and innocuous questions based on their bio or whatever else the LLM knows about them.
Bio and memory are optional in ChatGPT though. Not so in others?
The age guessing aspect will be interesting, as that is likely to be non-optional.
It's not the clarity alone. Chatbots are completion engines and reason back in a way that feels cohesive. It's not that a question isn't asked clearly; it's that in the examples the chatbot is trained on, certain types of questions get certain types of answers.
It's like how, if you ask ChatGPT what the meaning of life is, you'll probably get back some philosophical answer, but if you ask it what the answer to life, the universe, and everything is, it's more likely to say 42 (I should test that before posting, but I won't).
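The "completion engine" point can be shown with a toy model: pick the most frequent continuation seen for a prompt in some (made-up) training data. Real LLMs work over tokens with learned probabilities, not exact-match lookup, but the principle is the same:

```python
# Toy "completion engine": return the most frequent continuation
# observed for a prompt in a tiny, made-up corpus.
from collections import Counter

corpus = [
    ("what is the meaning of life", "that depends on your philosophy"),
    ("what is the meaning of life", "many traditions answer it differently"),
    ("what is the answer to life, the universe, and everything", "42"),
    ("what is the answer to life, the universe, and everything", "42"),
]

def complete(prompt):
    """Return the most common continuation for this exact prompt."""
    counts = Counter(ans for q, ans in corpus if q == prompt)
    return counts.most_common(1)[0][0] if counts else "(no data)"

print(complete("what is the answer to life, the universe, and everything"))
# -> 42
```

Two phrasings of "the same" question land in different regions of the training data, so they pull different answers; no understanding required.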
Indeed. Additional context will influence the response, and not always in predictable ways... which can be both interesting and frustrating.
The important thing is for users to have sufficient control, so they can counter (or explore) such weirdness themselves.
Education is key, and there's no shortage of articles and guides for new users.