MudMan@fedia.io 8 points 1 week ago

Ah. And this is automated bodycam transcription software that gets manually reviewed. So the wonky report didn't show up in court; the person getting the explanation is an officer manually reviewing the automated report.

I mean... funny, but... I don't know that I have a massive problem with this. I guess the real questions are whether the review process is faster than writing a manual summary and whether there would be a scenario where manual review is neglected in the future.

WoodScientist@lemmy.world 10 points 1 week ago

> I guess the real questions are whether the review process is faster than writing a manual summary and whether there would be a scenario where manual review is neglected in the future.

And how in Hell's name do you propose they actually check these reports? Sure, it's bloody obvious if the report claims some fantastical event that clearly didn't happen. But what if the LLM spits out a reasonable-sounding but completely fake summary? We're talking about automatic video summaries here. The only way to correct them would be to manually watch all the video yourself and compare it to the AI-generated reports. Spot checking alone won't work, since the errors are random and can appear anywhere. And if you have to manually watch all the video anyway, there's not much point in bothering with the LLM, is there?
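To put rough numbers on it, here's a toy sketch (all the rates are made-up assumptions; it just assumes errors land independently and the reviewer fully watches footage for a random fraction of reports):

```python
import random

# Toy model (numbers are made up): N reports, each independently
# containing a plausible-looking AI error with probability ERROR_RATE.
# A reviewer fully watches the footage for a random CHECK_FRACTION of
# reports; errors in unchecked reports go straight into the record.
N = 10_000
ERROR_RATE = 0.05
CHECK_FRACTION = 0.10

random.seed(42)
has_error = [random.random() < ERROR_RATE for _ in range(N)]
is_checked = [random.random() < CHECK_FRACTION for _ in range(N)]

caught = sum(e and c for e, c in zip(has_error, is_checked))
missed = sum(e and not c for e, c in zip(has_error, is_checked))

print(f"errors caught: {caught}")
print(f"errors that reach the record: {missed}")
# Roughly 90% of errors slip through: spot checks catch random
# errors only in proportion to the fraction of footage reviewed.
```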

These systems only have the potential to save time if you're content with shit-tier work.

MudMan@fedia.io 2 points 1 week ago

The report the other person linked above is specifically and entirely about those questions. Addresses them decently, too.

https://www.parkrecord.com/2025/12/16/heber-city-police-department-test-pilots-ai-software/

FWIW, at least one of the examples they cover actively requires manual edits before a report can be completed. The point isn't to provide a final report, but a first draft for people to edit.

Now, in my experience this is pointless, because writing is generally the least bothersome or time-consuming part of most tasks that involve writing. If you're a cop who maybe doesn't do the letters part so good and has to write dozens of versions of "I stopped the guy and gave them a speeding ticket", maybe that's not true for you, and there are some time savings in reading what the chatbot gives you and tweaking it instead of writing it from scratch each time. I guess it depends on your preferences and affinity for the task. It certainly wouldn't save me much time, or any, but I can acknowledge that there is a band of instances in this particular use case where the typing is the issue for some people.

Anyway, read the report. It's actually decent.

WoodScientist@lemmy.world 2 points 1 week ago

> FWIW, at least one of the examples they cover actively requires manual edits before a report can be completed.

And how do you think that would work in the real world? In a time-crunch environment, aka every workplace under the Sun, you'll do what you have to do. They'll figure out the minimum amount of change needed to count as "human edited," do that, and rubber-stamp the rest. Delete three periods, add them back, and click "submit." That's how mandatory edits will work in practice.

MudMan@fedia.io 1 point 1 week ago

I refuse to engage with comments from people who clearly haven't read the article I linked above.