this post was submitted on 25 Nov 2025
561 points (98.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago

‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.’

Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

[–] korazail@lemmy.myserv.one 36 points 7 hours ago* (last edited 7 hours ago)

From later in the article:

Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

I think this is the big issue with 'AI cheating'. Sure, the LLM can create a convincing appearance of understanding some topic, but if you're doing anything of importance, like making pizza, and you lack the critical thinking you learn in school, then you might think glue is actually a good way to keep the cheese from sliding off.

A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics... actually, on second thought, it might not be any worse. 🤷

Edit: Adding that while critical thinking is a huge part of it, it's more the "you don't know what you don't know" that tripped these students up, and that's the danger of using an LLM in any situation where you can't validate its output yourself, as opposed to using it as a shortcut for boilerplate prose or code.