This post was submitted on 22 Jun 2025
783 points (94.5% liked)

Technology


We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

top 50 comments
[–] ThePiedPooper@discuss.online 8 points 1 day ago

Grandiose delusions from a ketamine-rotted brain.

[–] ZoteTheMighty@midwest.social 25 points 1 day ago

I wonder how many papers he's read since ChatGPT was released about how bad it is to train AI on AI output.

[–] releaseTheTomatoes@lemmy.dbzer0.com 30 points 1 day ago (1 children)

Spoiler: He's gonna fix the "missing" information with MISinformation.

[–] Smokeless7048@lemmy.world 51 points 1 day ago

"and then on retrain on that"

Thats called model collapse.

[–] Auli@lemmy.ca 33 points 2 days ago (1 children)
[–] Peerpeer@lemmy.world 3 points 1 day ago

Don't forget the retraining on the made up shit part!

[–] Crikeste@lemm.ee 33 points 2 days ago (1 children)

So they’re just going to fill it with Hitler’s world view, got it.

Typical and expected.

[–] UnderpantsWeevil@lemmy.world 19 points 2 days ago (1 children)

I mean, this is the same guy who said we'd be living on Mars in 2025.

[–] eleitl@lemm.ee 3 points 1 day ago

In a sense, he's right. I miss good old Earth.

[–] TheDeadlySquid@lemm.ee 54 points 2 days ago (1 children)

“Deleting errors” should set off alarm bells in your head.

[–] Auli@lemmy.ca 10 points 2 days ago

And the "adding missing information" part doesn't? Isn't that just saying "we are going to make shit up"?

[–] bufalo1973@europe.pub 100 points 2 days ago (1 children)

[My] translation: "I want to rewrite history to what I want".

[–] Klear@lemmy.world 23 points 2 days ago (1 children)

That was my first impression, but then it shifted into "I want my AI to be the shittiest of them all".

[–] AI_toothbrush@lemmy.zip 29 points 2 days ago (1 children)

Lol, turns out Elon has no fucking idea how LLMs work

It's pretty obvious where the white genocide "bug" came from.

[–] Hossenfeffer@feddit.uk 73 points 2 days ago (4 children)

He's been frustrated by the fact that he can't make Wikipedia 'tell the truth' for years. This will be his attempt to replace it.

[–] JustAPenguin@lemmy.world 40 points 2 days ago (2 children)

The thing that annoys me most is that there have been studies on LLMs showing that, when trained on subsets of their own output, they produce increasingly noisy output.

Sources (unordered):

Whatever nonsense Muskrat is spewing, it is factually incorrect. He won't be able to successfully retrain any model on generated content. At least not an LLM, if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
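
For illustration, here's a minimal toy sketch in Python of the mechanism those studies describe (it isn't taken from any of them; the vocabulary size, Zipf-like weights, and sample count are made up for the example). Each generation is "trained" only on samples drawn from the previous generation's model, and any rare token that fails to get sampled drops to zero probability and can never come back:

```python
# Toy model-collapse demo: a categorical "language model" repeatedly
# retrained on its own finite output. Rare tokens fall out of the
# distribution and never return, so the tail erodes generation by generation.
import random
from collections import Counter

random.seed(1)

VOCAB = list(range(50))                         # 50 distinct "tokens"
weights = [1.0 / (rank + 1) for rank in VOCAB]  # Zipf-ish frequencies
total = sum(weights)
probs = [w / total for w in weights]            # generation 0: the "real" data

SAMPLES_PER_GEN = 300                           # finite training set each round

for gen in range(31):
    if gen % 5 == 0:
        alive = sum(p > 0 for p in probs)
        print(f"gen {gen:2d}: tokens with nonzero probability = {alive}")
    # "Train" the next model purely on output sampled from the current one.
    data = random.choices(VOCAB, weights=probs, k=SAMPLES_PER_GEN)
    counts = Counter(data)
    probs = [counts[t] / SAMPLES_PER_GEN for t in VOCAB]
```

The count of surviving tokens only ever goes down: sampling error wipes out the tail of the distribution first, which is exactly the degradation the model-collapse papers warn about.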

[–] Naevermix@lemmy.world 69 points 2 days ago (4 children)

Elon Musk, like most pseudo-intellectuals, has a very shallow understanding of things. Human knowledge is full of holes, and they cannot simply be filled in through logic, as Musk the dweeb imagines.

[–] MummifiedClient5000@feddit.dk 299 points 2 days ago (4 children)

Isn't everyone just sick of his bullshit though?

[–] dalekcaan@lemm.ee 254 points 2 days ago (2 children)

adding missing information and deleting errors

Which is to say, "I'm sick of Grok accurately portraying me as an evil dipshit, so I'm going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings."

[–] finitebanjo@lemmy.world 107 points 2 days ago (2 children)

"If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!"

~Fucking Dumbass

[–] NikkiDimes@lemmy.world 20 points 2 days ago

Huh. I'm not sure if he's understood the alignment problem quite right.

[–] RattlerSix@lemmy.world 36 points 2 days ago (2 children)

I never would have thought it possible that a person could be so full of themselves as to say something like that.

[–] LovableSidekick@lemmy.world 13 points 1 day ago* (last edited 1 day ago)

"We'll fix the knowledge base by adding missing information and deleting errors - which only an AI trained on the fixed knowledge base could do."

[–] namingthingsiseasy@programming.dev 56 points 2 days ago (23 children)

Whatever. The next generation will have to learn to check whether material is true or not by using sources like Wikipedia or books by well-regarded authors.

The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
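
To make that concrete, here's a toy bigram "language model" in Python (purely illustrative; the tiny corpus is invented). All it learns is which words tend to follow which in its training text, so "the moon is made of cheese" falls out of it just as readily as "the moon is made of rock"; there is no notion of truth anywhere in the loop:

```python
# Toy next-word predictor: a bigram model built from a tiny corpus.
# Real LLMs use transformers over subword tokens, but the core idea is the
# same: estimate which token tends to follow the current context, then sample.
import random
from collections import defaultdict

random.seed(0)

corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the cheese is made of milk .").split()

# "Training": count which words follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# "Generation": repeatedly sample a plausible next word given the previous one.
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(following[word])  # sample from P(next | previous)
    output.append(word)
print(" ".join(output))
```

Whether the output happens to be factually right depends entirely on the statistics of whatever text it was fed, which is the point being made above.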


Delusional and grasping for attention.

[–] JcbAzPx@lemmy.world 22 points 2 days ago

which has advanced reasoning

No it doesn't.

[–] UnsavoryMollusk@lemmy.world 21 points 2 days ago

Yes please do that Elon, please poison grok with garbage until full model collapse.

[–] maxfield@pf.z.org 161 points 2 days ago (12 children)

The plan to "rewrite the entire corpus of human knowledge" with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can't create genuinely new knowledge or identify "missing information" that wasn't already in their training data.

What he means is "correct" the model so that all it models is racism and far-right nonsense.

[–] SinningStromgald@lemmy.world 87 points 2 days ago (4 children)

Remember the "white genocide in South Africa" nonsense? That kind of rewriting of history.

[–] vala@lemmy.world 16 points 2 days ago (1 children)

What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?


Yes! We should all wholeheartedly support this GREAT INNOVATION! There is NOTHING THAT COULD GO WRONG, so this will be an excellent step to PERMANENTLY PERFECT this WONDERFUL AI.

[–] rottingleaf@lemmy.world 50 points 2 days ago (1 children)

So where will Musk find that missing information and how will he detect "errors"?

[–] sturmblast@lemmy.world 13 points 2 days ago

Fuck Elon Musk

[–] CileTheSane@lemmy.ca 16 points 2 days ago

Is he still carrying his little human shield around with him everywhere or can someone Luigi this fucker already?

[–] brucethemoose@lemmy.world 61 points 2 days ago* (last edited 2 days ago)

I elaborated below, but basically Musk has no idea WTF he’s talking about.

If I had his “f you” money, I’d at least try a diffusion or bitnet model (and open the weights for others to improve on), and probably 100 other papers I consider low hanging fruit, before this absolutely dumb boomer take.

He’s such an idiot know-it-all. It’s so painful whenever he ventures into a field you sorta know.

But he might just be shouting nonsense on Twitter while X employees actually do something different. Because if they take his orders verbatim they’re going to get crap models, even with all the stupid brute force they have.

[–] ViatorOmnium@piefed.social 43 points 2 days ago

Because neural networks aren't known to suffer from model collapse when using their output as training data. /s

Most billionaires are mediocre sociopaths, but Elon Musk takes it to "Emperor's New Clothes" levels of intellectual destitution.

[–] FreakinSteve@lemmy.world 11 points 2 days ago (3 children)

I read about this in a popular book by some guy named Orwell

[–] Deflated0ne@lemmy.world 52 points 2 days ago

Dude is gonna spend Manhattan Project-level money making another stupid fucking shitbot. Trained on regurgitated AI slop.

Glorious.

[–] SoftestSapphic@lemmy.world 14 points 2 days ago

I remember when I learned what corpus meant too

[–] Lumidaub@feddit.org 85 points 2 days ago* (last edited 2 days ago) (3 children)

adding missing information

Did you mean: hallucinate on purpose?

Wasn't he going to lay off the ketamine for a while?

Edit: ... I hadn't seen the More Context and now I need a fucking beer or twenty fffffffffu-

[–] JackbyDev@programming.dev 37 points 2 days ago (3 children)

Training an AI model on AI output? Isn't that like the one big no-no?

[–] Antaeus@lemmy.world 28 points 2 days ago (3 children)

Elon should seriously see a medical professional.

[–] FireWire400@lemmy.world 27 points 2 days ago* (last edited 2 days ago)

How high on ketamine is he?

3.5 (maybe we should call it 4)

I think calling it 3.5 might already be too optimistic
