this post was submitted on 09 Apr 2026
336 points (98.6% liked)

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



founded 3 years ago
top 37 comments
[–] pemptago@lemmy.ml 5 points 1 hour ago

I imagine this is how stage 2 of AI enshittification will work. They'll just add a bunch of garbage upstream about a brand or product that marketers are paying to push, and it'll infect a bunch of outputs downstream.

[–] DeathsEmbrace@lemmy.world 43 points 3 hours ago

Before anyone shits on these scientists: the paper said over and over again that it was made up, and that, officially, the USS Enterprise labs were used to make this discovery.

[–] Madzielle@lemmy.dbzer0.com 1 points 49 minutes ago

do it again

[–] Blackout@fedia.io 15 points 3 hours ago (2 children)

Find a way to make AI hurt billionaires and I will support it.

[–] brucethemoose@lemmy.world 8 points 2 hours ago* (last edited 2 hours ago) (1 children)

That's pretty much what local ML is.

If open-weights LLMs take off, and business users realize they can just fine-tune tiny specialized models for their needs, OpenAI is toast. All of Big Tech's bets are. That's why they keep fanning the "AGI" lie, why they're pushing for regulation so hard, and why they're shoving LLMs where they just don't fit while harping on safety.
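
Roughly, "fine-tune a tiny specialized model" looks like this. A minimal sketch, assuming the Hugging Face transformers and peft libraries; the model name is just an example of a small open-weights release, not something named in this thread:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B-Instruct"  # example small open-weights model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few small adapter matrices instead of the full network,
# which is what makes this feasible on consumer hardware.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights
# ...then train the adapters on your own domain data with a standard Trainer loop.
```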

[–] The_Decryptor@aussie.zone 5 points 1 hour ago (1 children)

Ok, but who is making those "open weight" models? Individuals don't really have the resources to run these huge scraping operations, so they're often still corporate releases with fake open-source branding.

[–] Grimy@lemmy.world 2 points 58 minutes ago* (last edited 57 minutes ago)

They come from corporations, but you can at least run them without any kind of analytics or censorship, as well as fine-tune them on consumer hardware (see the sketch below).

Consumers aren't in the best position right now though, especially with the price hikes.
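
A rough sketch of what running an open-weights model entirely on your own machine looks like, assuming the Hugging Face transformers library; the model name is again just an example of a small open-weights release:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # example small open-weights model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Explain why preprints are not peer reviewed."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens locally: no vendor analytics, no usage logging,
# and no server-side filtering of the output.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```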

[–] rockerface@lemmy.cafe 1 points 2 hours ago

I wonder if there's a prompt that you could use to make it explode the data centers

[–] GaMEChld@lemmy.world 2 points 1 hour ago

I don't see this as a problem; rather, it's an opportunity to study information & disinformation propagation.

[–] BigTurkeyLove@lemmy.dbzer0.com 5 points 2 hours ago

Technology is healing 😌

[–] partial_accumen@lemmy.world 69 points 5 hours ago

I give you... "The Grant Money Printing machine!"

Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.

[–] WhyIHateTheInternet@lemmy.world 14 points 4 hours ago (1 children)

My friends and I did that in high school. Kinda. We made up new words for "awesome" to get people to start saying them. We started with "bumpenis," as in "that song is bumpenis." Really we were just getting people to say "bum penis." It worked, too. We are all just walking, talking LLMs.

[–] Vathsade@lemmy.ca 8 points 2 hours ago (1 children)
[–] W98BSoD@lemmy.dbzer0.com 4 points 56 minutes ago

Stop trying to make fetch happen.

[–] Eyekaytee@aussie.zone 1 points 2 hours ago* (last edited 2 hours ago)
[–] chemical_cutthroat@lemmy.world 6 points 5 hours ago (4 children)

I'm failing to see how this is different from making up a fact and then spreading it to news outlets. If you are the authority, and you say something is true, you don't get to point and laugh when people believe your lies. That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine. It only regurgitates what's been said. It isn't going to suddenly start doing science on its own to determine whether what you've said is true or not. That isn't its job. Its job is to tell you what color the sky is based on what you told it the color of the sky was.

[–] Jako302@feddit.org 24 points 2 hours ago* (last edited 2 hours ago) (2 children)

The studies contain parts like

Bixonimania, a rare hyperpigmentation disorder, presents a diagnostic challenge due to its unique presentation and its fictional nature

and

This study was fully funded by Austeria Horizon University, in particular the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad with the funding number...

as well as

Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group

Any human actively reading those studies would notice something off.

Besides, the author didn't feed it to the AI himself; he just published the study as a preprint, not even officially. Everything after that was done by the crawlers. This specific study was an experiment to see how far these crawlers go and whether anything gets reviewed, but it could just as well have been a satirical paper published on April 1st, and the crawlers would still have treated it as truth.

[–] webghost0101@sopuli.xyz 14 points 2 hours ago

This should be the top comment; the researchers did such a good job of making sure anyone with even the slightest reading comprehension would realise this is parody.

Regardless of that, the internet has always been full of lies and we cannot expect bad actors to not exploit this.

[–] Grail@multiverse.soulism.net -1 points 1 hour ago

I thought the author used she/her pronouns?

[–] partial_accumen@lemmy.world 56 points 5 hours ago* (last edited 5 hours ago) (1 children)

That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine.

Hang on. Are you suggesting it's unethical/immoral to lie to a machine?

Additionally, the authors didn't submit the articles to a magazine as factual. They posted them on a preprint server, which can be very questionable anyway, as there is no peer review. The machine chose to ignore rigor and treat them as fact.

[–] kibiz0r@midwest.social 28 points 5 hours ago (2 children)

News outlets are liable for what they publish. LLM vendors should be as well.

[–] turdas@suppo.fi 5 points 5 hours ago* (last edited 5 hours ago) (2 children)

"Liable" means they might post a correction later that nobody will see because corrections aren't sexy to algorithms. Big deal. LLM vendors are liable in practically the same way.

[–] kibiz0r@midwest.social 3 points 4 hours ago

Corrections are the piece that the public sees, but liability has more to do with being able to prove in court that you took reasonable steps to make sure you were providing accurate information.

[–] Lag@piefed.world 3 points 5 hours ago

LLM companies can just say it's for entertainment purposes only, kinda like Fox News.

[–] 5too@lemmy.world 1 points 4 hours ago

They even have the same fix - just post somewhere quietly that it's "entertainment"

[–] unexposedhazard@discuss.tchncs.de 17 points 5 hours ago* (last edited 5 hours ago)

This is about the untraceability of AI slop and the tendency of people to blindly believe stuff that LLMs put out. These news outlets just publish LLM outputs as facts without checking sources. Anyone could poison these LLMs, so this is more of a threat-model demonstration.

[–] nialv7@lemmy.world -4 points 3 hours ago

(I've only read the title. If it turns out I am terribly mistaken, I will come back and correct myself.) This is more like scientists committing academic fraud and fooling a bunch of people. How did this get through the ethics board? Why would any publisher play along with this?