I imagine this is how it'll work for stage 2 of AI enshittification. They'll just add a bunch of garbage upstream about a brand or product marketers are paying to push, and it'll infect a bunch of outputs downstream.
Science Memes
Welcome to c/science_memes @ Mander.xyz!
A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.

Before anyone shits on these scientists: the paper said over and over again that it was made up, and that officially the USS Enterprise labs were used to make this discovery.
do it again
Find a way to make AI hurt billionaires and I will support it.
That's pretty much what local ML is.
If open-weights LLMs take off, and business users realize they can just finetune tiny specialized models for stuff, OpenAI is toast. All of Big Tech's bets are. It's why they keep fanning the "AGI" lie, why they're pushing so hard for regulation, and why they're shoving LLMs where they just don't fit while harping on safety.
Ok, but who is making those "open weights" models? Individuals don't really have the resources to run these huge scraping operations, so they're often still corporate releases with fake open-source branding.
They come from corporations, but you can at least run them without any kind of analytics or censorship, and fine-tune them on consumer hardware.
Consumers aren't in the best position right now though, especially with the price hikes.
I wonder if there's a prompt that you could use to make it explode the data centers
I don't see this as a problem; rather, it's an opportunity to study information and disinformation propagation.
Technology is healing 😌
I give you... "The Grant Money Printing machine!"
Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.
My friends and I did that in high school. Kinda. We made up new words for "awesome" to get people to start saying it. We started with "bumpenis," like "that song is bumpenis." Really we were just getting people to say bum penis. It worked too. We are all just walking, talking LLMs.
That's so fetch!
Stop trying to make fetch happen.
.
I'm failing to see how this is different from making up a fact and then spreading it to news outlets. If you are the authority and you say something is true, you don't get to point and laugh when people believe your lies. That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine. It only regurgitates what's been said. It isn't going to suddenly start doing science on its own to determine whether what you've said is true or not. That isn't its job. Its job is to tell you what color the sky is based on what you told it the color of the sky was.
The studies contain parts like
Bixonimania, a rare hyperpigmentation disorder, presents a diagnostic challenge due to its unique presentation and its fictional nature
and
This study was fully funded by Austeria Horizon University, in particular the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad with the funding number...
as well as
Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group
Any human actively reading those studies would notice something off.
Besides, the author didn't feed it to the AI himself, he just published the study as a preprint, not even officially. Everything after that was done by the crawlers. This specific study was an experiment to see how far these crawlers go and if anything gets reviewed, but it could just as well have been a satirical paper published on April 1st and the crawlers would still see it as truth.
This should be the top comment. The researchers did such a good job of making sure anyone with even the slightest reading comprehension would realise this is parody.
Regardless of that, the internet has always been full of lies and we cannot expect bad actors to not exploit this.
I thought the author used she/her pronouns?
That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine.
Hang on. Are you suggesting it's unethical/immoral to lie to a machine?
Additionally, the authors didn't submit the article to a magazine as factual. They posted the articles on a preprint server, which can be very questionable anyway since there is no peer review. The machine chose to ignore rigor and treat them as fact.
News outlets are liable for what they publish. LLM vendors should be as well.
"Liable" means they might post a correction later that nobody will see because corrections aren't sexy to algorithms. Big deal. LLM vendors are liable in practically the same way.
Corrections are the piece that the public sees, but liability has more to do with being able to prove in court that you took reasonable steps to make sure you were providing accurate information.
LLM companies can just say it's for entertainment purposes only, kinda like Fox News.
They even have the same fix - just post somewhere quietly that it's "entertainment"
This is about the untraceability of AI slop and the tendency of people to blindly believe stuff that LLMs put out. These news outlets just publish LLM outputs as facts without checking sources. Anyone could poison these LLMs so this is more of a threat model demonstration.
(I've only read the title. If it turns out I am terribly mistaken, I will come back and correct myself.) More like scientists committed academic fraud and fooled a bunch of people. How did this get through the ethics board? Why would any publisher play along with this?