this post was submitted on 09 Apr 2026
371 points (98.4% liked)

[–] chemical_cutthroat@lemmy.world 4 points 7 hours ago (5 children)

I'm failing to see how this is different from making up a fact and then spreading it to news outlets. If you are the authority, and you say something is true, you don't get to point and laugh when people believe your lies. That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine. It only regurgitates what's been said. It isn't going to suddenly start doing science on its own to determine whether what you've said is true or not. That isn't its job. Its job is to tell you what color the sky is based on what you told it the color of the sky was.

[–] NocturnalMorning@lemmy.world 1 points 50 minutes ago

Did you even read the article? They say all over the paper that it is fake. And they didn't feed it to an LLM; they posted it online, where a crawler scraping the entire internet for LLM training data sucked it up.

[–] Jako302@feddit.org 27 points 3 hours ago* (last edited 3 hours ago) (2 children)

The studies contain parts like

Bixonimania, a rare hyperpigmentation disorder, presents a diagnostic challenge due to its unique presentation and its fictional nature

and

This study was fully funded by Austeria Horizon University, in particular the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad with the funding number...

as well as

Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group

Any human actively reading those studies would notice something off.

Besides, the author didn't feed it to the AI himself; he just published the study as a preprint, not even in an official venue. Everything after that was done by the crawlers. This specific study was an experiment to see how far these crawlers go and whether anything gets reviewed, but it could just as well have been a satirical paper published on April 1st, and the crawlers would still have treated it as truth.

[–] webghost0101@sopuli.xyz 15 points 3 hours ago (1 children)

This should be the top comment; the researchers did such a good job of making sure anyone with even the slightest reading comprehension would realise this is parody.

Regardless of that, the internet has always been full of lies, and we cannot expect bad actors not to exploit it.

[–] arbitrary_sarcasm@lemmy.world 1 points 48 minutes ago

This should be the top comment; the researchers did such a good job of making sure anyone with even the slightest reading comprehension would realise this is parody.

I admire your optimism, but you severely underestimate the power of stupidity.

[–] Grail@multiverse.soulism.net -1 points 2 hours ago

I thought the author used she/her pronouns?

[–] partial_accumen@lemmy.world 59 points 6 hours ago* (last edited 6 hours ago) (1 children)

That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine.

Hang on. Are you suggesting it's unethical/immoral to lie to a machine?

Additionally, the authors didn't submit the articles to a magazine as factual. They posted them on a preprint server, which is questionable sourcing anyway since there is no peer review. The machine chose to ignore rigor and treat them as fact.

[–] kibiz0r@midwest.social 29 points 7 hours ago (2 children)

News outlets are liable for what they publish. LLM vendors should be as well.

[–] turdas@suppo.fi 6 points 6 hours ago* (last edited 6 hours ago) (2 children)

"Liable" means they might post a correction later that nobody will see because corrections aren't sexy to algorithms. Big deal. LLM vendors are liable in practically the same way.

[–] kibiz0r@midwest.social 3 points 5 hours ago

Corrections are the piece that the public sees, but liability has more to do with being able to prove in court that you took reasonable steps to make sure you were providing accurate information.

[–] Lag@piefed.world 4 points 6 hours ago

LLM companies can just say it's for entertainment purposes only, kinda like Fox News.

[–] 5too@lemmy.world 1 points 5 hours ago

They even have the same fix: just quietly post somewhere that it's "entertainment."

[–] unexposedhazard@discuss.tchncs.de 17 points 6 hours ago* (last edited 6 hours ago)

This is about the untraceability of AI slop and the tendency of people to blindly believe stuff that LLMs put out. These news outlets just publish LLM outputs as facts without checking sources. Anyone could poison these LLMs, so this is more of a threat-model demonstration.