this post was submitted on 30 Jun 2025
World News

News from around the world!
[–] MolecularCactus1324@lemmy.world 4 points 1 day ago (1 children)

This article is about ScaleAI, not Llama.

[–] brucethemoose@lemmy.world 16 points 1 day ago* (last edited 1 day ago)

Yes, but it's clearly a building block of Meta's LLM training effort, and part of a pattern.

One implication I didn't mention, and don't have hard proof I can point to, is garbage in, garbage out. Meta let AI slop and human garbage proliferate on Facebook, squandering basically the biggest advantage (besides cash) they have. It's often speculated that, as it turns out, Twitter and Facebook training data is kinda crap.

...And they're at it again. Zuckerberg pours cash into corporate trash and gets slop back. It's reportedly an internal disaster, even within their own divisions.

On the other side, it's often thought that Chinese models are so good for their size/compute because they're, *ahem*, getting data from the Chinese government, and don't need to worry about legal issues.