this post was submitted on 09 Apr 2026
371 points (98.4% liked)

Science Memes

19839 readers
2561 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



Research Committee

Other Mander Communities

Science and Research

Biology and Life Sciences

Physical Sciences

Humanities and Social Sciences

Practical and Applied Sciences

Memes

Miscellaneous

founded 3 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Blackout@fedia.io 24 points 4 hours ago (3 children)

Find a way to make AI hurt billionaires and I will support it.

[–] MalReynolds@slrpnk.net 2 points 1 hour ago

It pretty much is. They're spending hundreds of billions on a dream (not having to pay workers) that doesn't work, at least until they repurpose those datacentres to replace personal computing.

Fortunately datacentres are by design concentrated in space and therefore rather vulnerable.

[–] brucethemoose@lemmy.world 11 points 3 hours ago* (last edited 3 hours ago) (1 children)

That's pretty much what local ML is.

If open-weight LLMs take off, and business users realize they can just fine-tune tiny specialized models for their needs, OpenAI is toast, and so are all of Big Tech's bets. It's why they keep fanning the "AGI" lie, why they're pushing so hard for regulation, and why they're shoving LLMs where they just don't fit while harping on safety.
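The "fine-tune tiny specialized models" point is cheap in practice largely because of low-rank adapter methods like LoRA: the big pretrained weight matrix stays frozen, and you train only two small matrices alongside it. A toy NumPy sketch of the parameter math (the dimension `d` and rank `r` here are made-up illustrative values, not from any real model):

```python
import numpy as np

# LoRA idea: instead of updating a full d x d weight matrix W, train two
# small matrices A (d x r) and B (r x d) with r << d, and compute
# x @ (W + A @ B). Only A and B need gradients and optimizer state.
d, r = 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable adapter, d x r
B = np.zeros((r, d))                     # trainable adapter, init to zero

full_params = W.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.4f}")
# here the adapters are about 1.6% of the full matrix's parameters

x = rng.standard_normal(d)
y = x @ (W + A @ B)  # adapted forward pass; with B = 0 it matches x @ W
assert np.allclose(y, x @ W)
```

With B initialized to zero, the adapted model starts out identical to the pretrained one, which is why fine-tuning like this is stable enough to run on consumer hardware.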

[–] The_Decryptor@aussie.zone 7 points 3 hours ago (2 children)

Ok, but who is making those "open weight" models? Individuals don't really have the resources to run these huge scraping operations, so they're often still corporate releases with fake open-source branding.

[–] percent@infosec.pub 1 points 5 minutes ago

There are huge public datasets that are often used for pretraining. Common Crawl and C4 are probably the most prominent, but there are others.

There are also big public datasets available for fine-tuning and instruction tuning.

The open weight models are getting pretty powerful, thanks to some Chinese labs.

[–] Grimy@lemmy.world 2 points 2 hours ago* (last edited 2 hours ago)

They come from corporations, but you can at least run them without any kind of analytics or censorship, and fine-tune them on consumer hardware.

Consumers aren't in the best position right now though, especially with the price hikes.

[–] rockerface@lemmy.cafe 1 points 3 hours ago

I wonder if there's a prompt that you could use to make it explode the data centers