this post was submitted on 14 Jan 2026
239 points (98.4% liked)

Technology

top 29 comments
[–] Frozentea725@feddit.uk 126 points 4 days ago (4 children)

Why the fuck are the police using AI as the basis of intelligence? That's absurd

[–] smeg@infosec.pub 55 points 4 days ago

Police have little intelligence of their own

[–] Manjushri@piefed.social 40 points 4 days ago (3 children)

Everyone wants to run everything like a business these days. They want to save on payroll, so rather than paying actual police to do the paperwork, they want to use Copilot or whatever to do the paperwork for them. Of course, because AI models are so crappy and error prone, they'd need to spend the same amount of money on payroll to verify the accuracy of the AI output. But they don't do that. They just run with whatever the AI outputs and figure it'll be close enough to accurate. After all, big tech keeps telling everyone that AI is wonderful and can do anything. That is far from the truth, though.

A lawyer in California last year got in trouble for using ChatGPT to generate briefs for a trial. He wound up filing those briefs with the court even though 21 of the 23 quotes from previous cases were complete fabrications. In another incident, a police department in Utah used an AI to generate a report from a traffic stop. That report claimed that an officer shape-shifted into a frog during the incident.

There are endless reports of AI making shit up and demonstrating how error prone those tools are. Yet, people who should know better keep trusting AI to do these important jobs, just to save money on payroll, when AI is clearly far from ready for prime-time.

[–] CompactFlax@discuss.tchncs.de 17 points 4 days ago (1 children)

The people who want to run everything like a business either have never worked for a business or are Business Idiots who don't know how much waste happens in the average business.

[–] northernlights@lemmy.today 2 points 2 days ago

Thanks, good read.

[–] BreadstickNinja@lemmy.world 5 points 3 days ago

U.S. law enforcement is out of control with this Animorphs shit.

[–] eRac@lemmings.world 6 points 4 days ago

As far as I can tell, the frog incident was not real-world. It was a police department vetting a system by doing fake test stops. They did one with The Princess and the Frog playing in the back seat, and the transcription system interleaved the traffic stop and film dialogue, then took that at face value for the summary.

System still sucks, but at least they were testing before blindly relying on it.

[–] Redacted@lemmy.world 13 points 4 days ago

They have been hired predominantly because of their ability to not question what they are told.

They arrived at a somewhat defensible decision despite the AI slop, though.

There probably should have been a limited number of fans allowed, based on how many the police could protect with the resources available at the time.

[–] Insekticus@aussie.zone 5 points 3 days ago

Some fucking morons watched Minority Report and didn't understand it was a bad idea to arrest people for crimes they hadn't committed yet.

[–] gustofwind@lemmy.world 54 points 4 days ago (1 children)

Not checking your AI results is like not checking the work of a new nepo hire

It’s really 100% your fault

[–] Morphit@feddit.uk 1 points 2 days ago

Or your boss'.
If you're given a new tool and told to use it in your work, you need to be given time to learn how to use it and find problems. If your boss gives you a new (not to mention unreliable) tool and less time to work within, you're both going to have a bad time™.

[–] REDACTED@infosec.pub 25 points 3 days ago

This is the equivalent of being so dumb you use a fork instead of a spoon to eat soup and then blame the fork instead of yourself

[–] dan1101@lemmy.world 27 points 4 days ago (1 children)

I mean it's really really ignorant of them to rely on Copilot, but yeah, let's start holding these corporations liable for acting like their slop spam is prime rib.

[–] TheBlackLounge@lemmy.zip 1 points 2 days ago

I mean, every chat starts with "Copilot is an AI and may make mistakes."

[–] wewbull@feddit.uk 16 points 4 days ago

Using AI gets you fired.

[–] A_norny_mousse@feddit.org 23 points 4 days ago* (last edited 4 days ago)

Guildford previously denied in December that the West Midlands Police had used AI to prepare the report, blaming “social media scraping” for the error.

lol, as if that was better (assuming they took all findings at face value).

We’ve reached out to Microsoft to comment on why Copilot made up a football match that never existed, but the company didn’t respond in time for publication.

I'd love to hear their response, but the real answer is obvious: "it's a black box. We have no idea what it's doing. Yes, we unleashed that on the world. Not sorry though, we just have to keep hoping that it sorts itself out."

[–] TheBat@lemmy.world 22 points 4 days ago

intelligence mistake

Another word for that is 'stupidity'.

[–] DeathByBigSad@sh.itjust.works 19 points 4 days ago

Incorporating "AI" into nuclear silos when?

Wargames much?

Skynet?

(Don't worry tho, the LLM will probably fail to know how to launch and blow up the silos instead xD)

[–] LifeLikeLady@lemmy.world 19 points 4 days ago

Microslop at it again.

[–] Gsus4@mander.xyz 15 points 4 days ago (1 children)

Ah, the old "the computer did it", but now spreading to everything it touches.

[–] brsrklf@jlai.lu 13 points 4 days ago

"The AI hallucinated" should be considered a worse excuse than "the dog ate my homework".

[–] Rhaedas@fedia.io 10 points 4 days ago

Blame Microsoft sure, but where this ignorance of LLMs' faults keeps coming from is baffling. Either the CTO and CIO and the rest of the IT departments are idiots, or someone is grabbing their bonuses while they can before things break.

[–] warm@kbin.earth 8 points 4 days ago

Maybe we should just be banning AI?

[–] phutatorius@lemmy.zip 2 points 3 days ago

The annoying thing is that banning Maccabi was the right policy, because of their recent hooliganism in Amsterdam against Ajax fans and random Muslim cab drivers and shop attendants.

There's lots of pressure from the government on civil servants to use AI, even in cases where it's provably worse than not using it. So the coppers who did this may have felt forced.

[–] bryndos@fedia.io 5 points 4 days ago

Long before the term "intelligence" was sullied by the prefix "artificial", the phrase "police intelligence" was a euphemism for a suspicion that they cannot back up with any evidence that would stand up in court.

I think coprolite seems like a perfect fit for their methods. They just need to train it to start off with solid evidence and fuck it up until it doesn't stand up in court. Then it'll be employee of the week. A bit like computer driven cars, it might be shite but that doesn't mean it is necessarily worse than the typical human cop.

The real problem is if it's harder to hold it to account.

[–] Arcane2077@sh.itjust.works 5 points 4 days ago

Same nebulous “intelligence” that designated a peaceful anti-genocide group as a terrorist organization

[–] PierceTheBubble@lemmy.ml 4 points 4 days ago (1 children)

AI risk management, what could possibly go wrong?

[–] Gsus4@mander.xyz 2 points 4 days ago* (last edited 4 days ago)

Aha, AI already creating the new (bullshit) jobs of the future!

[–] mrgoosmoos@lemmy.ca 1 points 4 days ago