queermunist@lemmy.ml 6 points 3 weeks ago (last edited 3 weeks ago)

Isn't this just because LLMs use the object concept representation data from actual humans?

yogthos@lemmy.ml 0 points 3 weeks ago

The object concept representation is an emergent property within these networks. Basically, the network learns to form stable associations between different modalities, and it builds an abstract concept of an object that unites them.
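
To make that concrete, here's a rough sketch of how you can probe this kind of cross-modal alignment with a CLIP-style model. I'm using the Hugging Face transformers library and a public checkpoint purely as an example; this is an illustration of the idea, not the setup from the study:

```python
# Sketch: probing cross-modal alignment with a CLIP-style model.
# Assumes the transformers, Pillow, and requests packages are installed.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Standard example image (two cats on a couch) from the COCO dataset.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

texts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
# Image-text similarity scores; softmax turns them into probabilities
# over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
for text, p in zip(texts, probs[0].tolist()):
    print(f"{text}: {p:.3f}")
```

The caption naming the object actually in the image scores far above the others. That text-image association is the kind of stable cross-modal link I mean.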

queermunist@lemmy.ml 7 points 3 weeks ago

But it's emerging from networks trained on data from humans, which means our object concept representation is already in the data. This isn't random data, after all; it comes from us. It seems like the LLMs are just regurgitating what we're feeding them.

What this shows, I think, is how deeply our own concepts are embedded in the data we feed to LLMs. They're human-based models, so they produce human-like outputs.

brisk@aussie.zone 4 points 3 weeks ago
MCasq_qsaCJ_234@lemmy.zip 1 point 3 weeks ago

In your opinion, is this a good thing, a bad thing, or is it just a curiosity that LLMs currently have?

yogthos@lemmy.ml 2 points 3 weeks ago

It's a good thing in the sense that the models are forming stable representations of objects across modalities. It means there's potential for extending the LLM approach toward building actual world models in the future.
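
As a toy illustration of what "stable representations" means here: different descriptions of the same object should land near each other in embedding space, while unrelated objects land further away. A quick sketch, using the sentence-transformers library purely as an example encoder (the study's methodology would differ):

```python
# Toy probe: do different descriptions of the same object cluster together?
# Assumes the sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "a small dog on a leash",   # dog, described one way
    "a puppy playing fetch",    # dog, described another way
    "a red sports car",         # unrelated object
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity matrix: the two dog descriptions should score
# higher with each other than either does with the car.
sims = util.cos_sim(embeddings, embeddings)
print(sims)
```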