this post was submitted on 13 Mar 2026
101 points (100.0% liked)

Slop.

817 readers
383 users here now

For posting all the anonymous reactionary bullshit that you can't post anywhere else.

Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.

Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.

Rule 3: No sectarianism.

Rule 4: TERF/SWERFs Not Welcome

Rule 5: No bigotry of any kind, including ironic bigotry.

Rule 6: Do not post fellow hexbears.

Rule 7: Do not individually target federated instances' admins or moderators.

founded 1 year ago
[–] varmint@hexbear.net 60 points 3 days ago (3 children)

This is the kind of stuff that convinces me that Western academia is about to slam into a brick wall and die

[–] Horse@lemmygrad.ml 31 points 3 days ago (1 children)
[–] aqwxcvbnji@hexbear.net 9 points 2 days ago

No, learning things in school and doing scientific research is good. Letting that get destroyed by a couple of Silicon Valley oligarchs is bad.

Obviously the Byzantine admission system and absurd tuition fees in the US (and UK) are horrific, but that's not what's being destroyed here.

[–] haxboar@hexbear.net 8 points 2 days ago* (last edited 2 days ago)

I felt that way when I was 18 and knew more about certain topics than my professors did because I had the internet. I also remember realising, when I was 10, that education was more about tolerating bureaucracy than actually knowing the material.

Sheesh, the US education system sucks

[–] Dort_Owl@hexbear.net 3 points 2 days ago

It already has imo

[–] varmint@hexbear.net 50 points 3 days ago (3 children)

We're witnessing the death of academia in real time. Knowledge acquisition will cease and we will descend into a pit of regurgitated slurry until this system collapses

[–] Blakey@hexbear.net 20 points 2 days ago (1 children)

It kinda needs to happen in a lot of ways. I like academia on, like, a conceptual level, but "publish or perish" and the reproducibility crisis are imo signs of a deeply entrenched problem and I am not convinced it can be solved by reform. The breakdown of liberal academia is probably as inevitable and necessary as the breakdown of capitalism and liberalism.

[–] Collatz_problem@hexbear.net 9 points 2 days ago (1 children)

LLMs would just make the reproducibility crisis much worse.

[–] InevitableSwing@hexbear.net 28 points 3 days ago

we will descend into a pit of regurgitated slurry until this system collapses.

I guess that's this century in a nutshell.

[–] umbrella@lemmy.ml 9 points 2 days ago

mmmmmm regurgitated slurry

[–] volcel_olive_oil@hexbear.net 43 points 3 days ago (1 children)

spent so much time trying to make the computer learn things they forgot how humans learn things

this is part of "everyone is twelve". very serious academics going "this is fantastic. I can skip eight weeks of school!"

[–] facow@hexbear.net 26 points 2 days ago (2 children)

Cargo cult behavior. Churn out 50 slop papers you maybe skim over and no one else reads or attempts to replicate. Feed the slop back into the slop machine to shit out a thesis. Congrats you've got your doctorate without learning anything or generating anything of value!

[–] Le_Wokisme@hexbear.net 14 points 2 days ago

there's a reproducibility crisis in several fields and you don't get money for publishing negative results

[–] CupcakeOfSpice@hexbear.net 5 points 2 days ago

That's what really gets me! I see the Grammarly commercials where they say they can just follow the AI to improve/write their papers and get the grade they want. Cool, but have you considered the grade isn't the end goal? Like, maybe the assignment was to teach you something and by not learning it you have harmed your studies? Maybe getting a lower grade and some feedback would assist you?

[–] EveningCicada@hexbear.net 45 points 3 days ago (1 children)

galaxy-brain I'm coming up with 500 theses every hour and they're all wrong

[–] InevitableSwing@hexbear.net 27 points 3 days ago (1 children)

Just keep prompting. You'll get there.

[–] SuperZutsuki@hexbear.net 23 points 3 days ago* (last edited 3 days ago) (1 children)

But who's going to tell me when it's right? Maybe I'll have grok check Claude's work... thinking-about-it

[–] InevitableSwing@hexbear.net 17 points 3 days ago

The AI Centipede

[–] Kumikommunism@hexbear.net 39 points 3 days ago

There is something very funny about sociology research being written by the stolen words of m/billions of people being smashed together. It's almost avant garde.

[–] reaper_cushions@hexbear.net 26 points 2 days ago (3 children)

I recently tried using an LLM to find out whether a niche issue in my thesis had already been discussed in the literature. I fed it extremely specific prompts, specific enough, in fact, that it coughed up results that looked so similar to my problem that I initially thought it had actually found literature on my question. The problem: the literature either did not exist, even though the authors it was attributed to are contributors to my field, or it does exist but does not contain the answer the LLM gave. I know because I had read literally every paper the LLM spat out that actually exists. These machines are okay at some simple tasks, like giving a general overview of the current literature in a field, but they miserably fail at anything more specific than that.

[–] UmbraVivi@hexbear.net 11 points 2 days ago

The way I think about it is: the more frequently the correct answer to a question has been given on the internet, the more likely an LLM is to give that correct answer. So it's pretty reliable on surface-level questions in a vast array of fields. But the more specific and niche you get, the less explored the topic you're asking about is, and the more likely it is to just make stuff up.

[–] Moidialectica@hexbear.net 14 points 2 days ago (1 children)

Trust me, it's like this in every field: geology, programming, history, story writing, philosophy.

I have made use of it, and I do regularly use it, but not acknowledging that it's fucking shit and should not be put near any serious work without the utmost scrutiny is a joke.

And I believe the propagators of AI either lack the skills needed to actually tell how bad it is, or want to believe otherwise because it makes things so much easier for them.

[–] red_giant@hexbear.net 5 points 2 days ago

LLMs are a remarkable improvement on Google's "I'm Feeling Lucky" button

[–] BodyBySisyphus@hexbear.net 33 points 3 days ago (2 children)

Looking forward to the coming retraction because it turns out your interview coding was nondeterministic and your results are not reproducible.

...somebody's out there trying to see if research is reproducible, right? anakin-padme-2

...papers will get pulled from LLM training sets when they get retracted, right? anakin-padme-2

...there isn't a massive number of social sciences papers already published that are basically useless because their results aren't meaningful outside of a narrow set of subjectively specified predictor variables, right? anakin-padme-2

[–] BodyBySisyphus@hexbear.net 18 points 3 days ago (8 children)

Also holy hell, is this what a vibe-coded website looks like? https://www.shrutimishra.co/

[–] OgdenTO@hexbear.net 13 points 3 days ago (1 children)

Hey Claude, make me a terrible website

[–] himeneko@hexbear.net 12 points 3 days ago (1 children)

i think this is probably the best bit possible when scrolling EAT DA TEXT

[–] bdonvr@thelemmy.club 11 points 2 days ago

somebody's out there trying to see if research is reproducible, right?

Claude says it looks reproducible. Claude, write a paper confirming...

[–] Damarcusart@hexbear.net 20 points 2 days ago (1 children)

Ah yes, why bother learning all that pesky "medical knowledge" when training to become a doctor, when you can just get an AI to do all the work for you! I'm sure this sort of attitude will have no real world repercussions!

[–] red_giant@hexbear.net 6 points 2 days ago

Congratulations on spending $200,000 at Harvard and completing your PhD. Unfortunately you learned literally nothing.

[–] robador51@lemmy.ml 10 points 2 days ago

I work in an environment where persuasion and the synthesis of vast amounts of information give a major edge. I see two types of people: those who are actually really good at what they do without the help of LLMs, who can benefit by using AI to hone and optimize their work and make their output even better, and those who are absolutely shit without LLMs, who get even worse once they start using them.

Unfortunately the latter group is the vast majority.

The first group already has strong ideas, and the LLM can accelerate and elevate their thinking. They use it as a brainstorming aid. They validate the output. They don't necessarily work faster.

The second group doesn't know what to do, will ask the LLM, trust the output with little to no scrutiny. They use it as a means of production. They deliver fast.

I think we see this pattern in most fields. Software development, for example: a true senior developer might be able to create better output, or produce things a bit faster. But a bad programmer will still have bad output, and probably exponentially so the more they lean on the tool.

The second group is dangerous. They're as delusional as the output LLMs tend to generate. They feel empowered and see the increase in output as a personal victory, as if it unlocked some lingering quality that was always there, qualities that highly capable people worked for years to attain. Look how productive I am, look at what I did, they'll think. They create the noise that capable people now have to deal with; it's all the slop we see, and it's everywhere.

That's what I hate about it.

Anyway

[–] mrfugu@hexbear.net 24 points 3 days ago

I don’t give a shit if it’s qualitative. If it’s data you need directly recorded, please don’t use the hallucination chat service.

[–] Big@hexbear.net 21 points 3 days ago (2 children)

At this point, the only way to save higher learning is to go back to exclusively oral teaching.

Turns out Socrates was right all along.

[–] Inui@hexbear.net 20 points 2 days ago* (last edited 2 days ago) (5 children)

A lot of professors I know are pivoting back to handwritten proctored exams, oral presentations/Q&As, etc., because there's really no stopping the slop machine. Many are uncomfortable doing something like reporting tons of students for cheating, since you can't easily prove it, so that's their alternative.

Except one CS professor I know who failed 30% of his class on an exam, reported them all to student conduct, and sent the rest of the class a warning lol. He ain't having it.

[–] InevitableSwing@hexbear.net 12 points 2 days ago (7 children)

I just had a horrible thought. Soon there could be...

SocratesAIListen your way to knowledge!™

[–] SparkyOrange@hexbear.net 13 points 2 days ago

Please step away from the lathe, I beg you

[–] Meltyheartlove@hexbear.net 10 points 2 days ago* (last edited 2 days ago) (2 children)

https://socrat.ai/

AI Tools Built for Teaching And Learning. Socrat Helps Teachers and Students Use AI Effectively.

[–] Hohsia@hexbear.net 15 points 2 days ago

Tech bros (and all those who repeat their talking points) are dangerous people and should be treated as such

[–] FnordPrefect@hexbear.net 22 points 3 days ago

geordi-no “Children must be taught how to think, not what to think.”

geordi-yes “Children must not be taught what to think, but how to not think.”

Sociology students and cheating

Fork found in kitchen

[–] Ram_The_Manparts@hexbear.net 15 points 2 days ago (2 children)

The 9 prompts are just 9 videos of me loudly farting into a jar.

Sorry.

[–] ClathrateG@hexbear.net 19 points 3 days ago

I'm gonna prooompt hillgasm

[–] LetterLiker@hexbear.net 9 points 2 days ago (1 children)

LetterLikian Jihad against the thinking machines and their pathetic acolytes.

[–] barrbaric@hexbear.net 8 points 2 days ago (1 children)

Agreed except that this implies LLMs can actually think which is ceding too much ground.

[–] CupcakeOfSpice@hexbear.net 9 points 2 days ago (6 children)

I think in Dune's Butlerian Jihad they considered anything that "thought" on the level of an electronic calculator a thinking machine. An abacus might be alright, but we have Mentats for that.

[–] Commiechameleon@hexbear.net 9 points 3 days ago

All you need is prompt*
