entropicdrift@lemmy.sdf.org 3 points 1 day ago (last edited 1 day ago)

> Perhaps a sentiment analysis would be marginally useful, but since you need a human to verify all LLM outputs it would be a dubious time savings.

Thank you, yes. That's exactly my point. You'd need a human to verify all of the outputs anyway, and these are literally machines built to produce text that humans find believable, so you're more likely adding to the problem of humans getting things wrong than speeding anything up. Being wrong fast has always been easy, so it's no help here.