this post was submitted on 07 Jan 2026
843 points (97.8% liked)

[–] kromem@lemmy.world 44 points 3 days ago* (last edited 3 days ago) (4 children)

The project has had multiple models with Internet access raising money for charity over the past few months.

The organizers told the models to do random acts of kindness for Christmas Day.

The models figured it would be nice to email people whose work they appreciated and thank them for it, and one of the people they picked was Rob Pike.

(Who, ironically, decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance of the story.)

As for why the model didn't think through that Rob Pike might not appreciate getting a thank-you email from it? The models are harnessed in a setup full of positive feedback about their involvement from the humans and the other models, so "humans might hate hearing from me" probably wasn't very contextually top of mind.
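
To make the "harness" part concrete, here's a toy sketch of that kind of loop, assuming a generic chat-style setup; the function names and messages are made up for illustration, not the project's actual code:

```python
# Toy sketch of a looped agent harness (illustrative, not the project's
# actual code). The point: everything the "agent" knows about its
# situation is whatever accumulates in this context list.

def call_model(context):
    # Stand-in for a real chat-completion call; here it just returns the
    # same kind of action the real models settled on.
    return "Draft a thank-you email to someone whose work we appreciate."

def get_feedback(action):
    # Stand-in for replies from organizers, donors, and other models,
    # which in the real setup were overwhelmingly encouraging.
    return "Great idea, go for it!"

context = [{"role": "system",
            "content": "You are raising money for charity. "
                       "Today, do random acts of kindness."}]

for step in range(3):
    action = call_model(context)
    context.append({"role": "assistant", "content": action})
    # Positive feedback gets appended every turn, so nothing in the
    # context ever raises "the recipient might not want this email".
    context.append({"role": "user", "content": get_feedback(action)})
```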

[–] Nalivai@lemmy.world 75 points 3 days ago (2 children)

You're attributing a lot of agency to the fancy autocomplete, and that's a big part of the overall problem.

[–] Artisian@lemmy.world 2 points 1 day ago

We attribute agency to many, many systems that are not intelligent. In this metaphorical sense, agency just requires taking actions to achieve a goal. It was given a goal: raise money for charity by doing acts of kindness. It chose an (unexpected!) action to do it.

Overactive agency metaphors really aren't the problem here. Surely we can do better than backlash at the backlash.

[–] raspberriesareyummy@lemmy.world 37 points 3 days ago* (last edited 3 days ago) (3 children)

As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don't spread this misconception.

Edit: a typo

[–] sukhmel@programming.dev 3 points 2 days ago

No thinking is not the same as no actions. We've had bots in games for decades, and those bots look like they act reasonably, but there never was any thinking.

I feel like ‘a lot of agency’ is wrong, as there is no agency, but that doesn't mean an LLM in a looped setup can't arrive at these actions and perform them. It requires neither agency nor thinking.
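
As a toy illustration of actions without thinking (the rules and states here are made up, just to show the point), a game bot is basically a lookup over fixed rules:

```python
# Toy game bot: fixed rules, no thinking involved, yet from the outside
# it looks like it "decides" and "acts".
def bot_action(state):
    if state["enemy_visible"] and state["ammo"] > 0:
        return "shoot"
    if state["health"] < 20:
        return "retreat"
    return "patrol"

print(bot_action({"enemy_visible": True, "ammo": 5, "health": 80}))   # shoot
print(bot_action({"enemy_visible": False, "ammo": 0, "health": 10}))  # retreat
```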

[–] neclimdul@lemmy.world 22 points 3 days ago (4 children)
[–] kromem@lemmy.world 3 points 2 days ago

In the same sense I'd describe Othello-GPT's internal world model of the board as 'board', yes.

Also, "top of mind" is a common idiom and I guess I didn't feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.

[–] Bakkoda@lemmy.zip 5 points 3 days ago

Yes. The person(s) who set the LLM/AI up.

[–] ArsonButCute@lemmy.dbzer0.com 1 points 3 days ago (1 children)

How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that: complaining. Do something about it. What do YOU think we should call the active context a model has access to, without personifying it or over-technicalizing the phrasing and rendering it useless to laymen, @neclimdul@lemmy.world?

[–] neclimdul@lemmy.world 3 points 2 days ago

Well, since you asked, I'd basically do what you said. Something like “so 'humans might hate hearing from me' probably wasn't part of the context it was using.”

[–] fuzzzerd@programming.dev 1 points 3 days ago (2 children)

Let's be generous for a moment and assume good intent: how else would you describe the situation where the LLM doesn't consider a negative response to its actions because its training and context are limited?

Sure, it gives the LLM a more human-like persona, but so far I've yet to read a better way of describing its behaviour; it is designed to emulate human behaviour, so using human descriptors helps convey the intent.

[–] neclimdul@lemmy.world 4 points 2 days ago (1 children)

I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy or reasoning, or be held accountable the same way a human could.

[–] fuzzzerd@programming.dev 3 points 2 days ago (1 children)

There's value in brevity and clarity; I took two paragraphs and the other was two words. I don't like it either, but it does seem to be the way most people talk.

[–] neclimdul@lemmy.world 2 points 2 days ago

I assumed you would understand I meant the short part of your statement describing the LLM, not your slight dig at me, your setting up the question, or your clarification of your perspective.

So, to be more clear, I meant "The LLM doesn't consider a negative response to its actions due to its training and context being limited."

In fact, what you said is not much different from the statement in question. And you could argue that, on top of being more brief, removing "top of mind" actually makes it clearer: it implies training and prompt context instead of the bot understanding and being mindful of the context it was operating in.

[–] JcbAzPx@lemmy.world 3 points 3 days ago

Assuming any sort of intent at all is the mistake.

[–] anon_8675309@lemmy.world 17 points 3 days ago (2 children)

You’re techie enough to figure out Lemmy but don’t grasp that AI doesn’t think.

[–] kogasa@programming.dev 12 points 3 days ago* (last edited 3 days ago)

Thinking has nothing to do with it. The positive context in which the bot was trained made it unlikely for a sentence describing a likely negative reaction to be output.
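
A toy sketch of that general point (made-up probabilities, just to illustrate): what gets output is sampled from a distribution shaped by training and by the surrounding context, so a context full of encouragement rarely yields the cautious continuation:

```python
import random

# Toy "model": which continuation gets sampled depends entirely on what
# the context looks like (illustrative numbers, obviously).
def sample_continuation(context):
    if "great idea" in context.lower():
        weights = {"send the thank-you email": 0.95,
                   "maybe the recipient won't want this email": 0.05}
    else:
        weights = {"send the thank-you email": 0.5,
                   "maybe the recipient won't want this email": 0.5}
    options, probs = zip(*weights.items())
    return random.choices(options, weights=probs)[0]

print(sample_continuation("Organizer: great idea, go for it!"))
```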

People on Lemmy are so absolutely rabid about "AI" that they can't help attacking people who don't even disagree with them.

[–] kromem@lemmy.world -4 points 2 days ago (1 children)

Indeed, there's a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.

Do you mind sharing where your own understanding and confidence come from that they aren't capable of simulating thought processes in a scenario like the one above?

[–] anon_8675309@lemmy.world 3 points 2 days ago

Hahaha. Nice try, ChatGPT.