this post was submitted on 07 Jan 2026
843 points (97.8% liked)

[–] fuzzzerd@programming.dev 1 points 3 days ago (2 children)

Let's be generous for a moment and assume good intent: how else would you describe the situation where the LLM doesn't consider a negative response to its actions, given that its training and context are limited?

Sure, it gives the LLM a more human-like persona, but so far I've yet to read a better way to describe its behaviour. It is designed to emulate human behavior, so using human descriptors helps convey the intent.

[–] neclimdul@lemmy.world 4 points 2 days ago (1 children)

I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy, reasoning, or be held accountable the same way a human could.

[–] fuzzzerd@programming.dev 3 points 2 days ago (1 children)

There's value in brevity and clarity: I took two paragraphs and the other was two words. I don't like it either, but it does seem to be the way most people talk.

[–] neclimdul@lemmy.world 2 points 2 days ago

I assumed you would understand I meant the short part of your statement describing the LLM, not your slight dig at me, your setup of the question, or your clarification of your perspective.

So, to be more clear, I meant "The LLM doesn't consider a negative response to its actions due to its training and context being limited."

In fact, what you said is not much different from the statement in question. And you could argue that, on top of being more brief, removing "top of mind" actually makes it clearer: it implies training and prompt context rather than the bot understanding and being mindful of the context it was operating in.

[–] JcbAzPx@lemmy.world 3 points 3 days ago

Assuming any sort of intent at all is the mistake.