this post was submitted on 03 Apr 2024
180 points (98.4% liked)

Lawyers for a man charged with murder in a triple homicide had sought to introduce cellphone video enhanced by machine-learning software.

A Washington state judge overseeing a triple murder case barred the use of video enhanced by artificial intelligence as evidence, in a ruling that experts said may be the first of its kind in a United States criminal court.

The ruling, signed Friday by King County Superior Court Judge Leroy McCullough and first reported by NBC News, described the technology as novel and said it relies on "opaque methods to represent what the AI model 'thinks' should be shown."

"This Court finds that admission of this Al-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable-process used by the AI model," the judge wrote in the ruling that was posted to the docket Monday. 

The ruling comes as artificial intelligence and its uses — including the proliferation of deepfakes on social media and in political campaigns — quickly evolve, and as state and federal lawmakers grapple with the potential dangers posed by the technology.

all 18 comments
[–] paddirn@lemmy.world 43 points 2 years ago (1 children)

Given AI models’ penchant for hallucinating and the black-box nature of it all, it seems like it shouldn’t be admissible. AI is fine for creative endeavors, but in arenas where facts matter, AI can’t be trusted.

[–] Hobbes@startrek.website 1 points 2 years ago (1 children)

But I thought we were trying to make Black Mirror a reality?

[–] paddirn@lemmy.world 3 points 2 years ago

Oh no, we're still plowing ahead with this self-induced AI nightmare, this is just a speed bump...

Friend Computer always knows what's best for us. All praise the Computer and woe to the Mutant, Commie, Scum who would try to bring ruin upon our beneficent Computer overlord!

Good on that judge. If the video is unclear before AI fucks with it, then whatever you're trying to show falls well within reasonable doubt.

[–] breakingcups@lemmy.world 20 points 2 years ago

Excellent ruling. Scary times.

[–] Lanusensei87@lemmy.world 12 points 2 years ago

"Your Honor, as you can see from the footage, my client sprouted 7 fingers out of his hand, with such a condition, he couldn't possibly operate a firearm..."

[–] sylver_dragon@lemmy.world 11 points 2 years ago

This seems like one of those technologies which may be useful as an investigatory tool, but should ultimately not be admissible in court. For example, if law enforcement has a grainy video of a crime and they use AI enhancement to generate leads, that could be OK. Even then, it will still have issues with bias and false leads, so such usage should be tracked and data kept on it to show its usefulness and bias. But anything done to a video by AI should almost universally be considered suspect. AI is really good at making up plausible results which are complete bullshit.
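
A minimal sketch of that last point, not from the original thread: it assumes Pillow and NumPy, and uses plain bicubic resampling as a stand-in for a learned upscaler (a real AI model only makes the invented detail look more convincing, not more real). The hypothetical `enhancement_error` helper measures how far an "enhanced" frame drifts from the ground truth it claims to restore.

```python
# Sketch: "enhancing" a degraded frame cannot recover detail that was never
# captured; it can only reconstruct a plausible guess.
import numpy as np
from PIL import Image

def enhancement_error(path: str, factor: int = 4) -> float:
    """Downscale an image (simulating a grainy frame), upscale it back,
    and report the mean absolute per-pixel error against the original."""
    original = Image.open(path).convert("L")
    w, h = original.size

    # Simulate low-quality evidence by throwing away detail.
    degraded = original.resize((max(1, w // factor), max(1, h // factor)), Image.BICUBIC)

    # "Enhance" it back to the original resolution. Bicubic stands in here for
    # a learned super-resolution model, which fills the gap with content it
    # *thinks* should be there -- exactly the court's concern.
    restored = degraded.resize((w, h), Image.BICUBIC)

    diff = np.abs(
        np.asarray(original, dtype=np.float32) - np.asarray(restored, dtype=np.float32)
    )
    return float(diff.mean())

# Any nonzero error means the "restored" detail was invented, not recovered:
# print(enhancement_error("frame.png"))
```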

[–] FlyingSquid@lemmy.world 9 points 2 years ago

But CSI told me that all you have to do to catch a criminal is to enhance! What will they do now?!

[–] Buelldozer@lemmy.today 7 points 2 years ago (1 children)

This isn't really new. It came up at the Kyle Rittenhouse trial back in 2021.

It's just that it wasn't all called "AI" back then. The same enhancement algorithms and processing techniques were being used, though.

[–] Daxtron2@startrek.website 1 points 2 years ago

It was still AI back then too; it just hadn't entered the zeitgeist, so no one would've understood what it meant.

[–] scoutFDT@lemm.ee 4 points 2 years ago (1 children)

Does this ruling apply to all AI-processed images, or only to ones from generative AI? What about stuff like DLSS that utilizes deep learning?

[–] Badeendje@lemmy.world 2 points 2 years ago* (last edited 2 years ago)

I would imagine that using an AI to create a video and voice of a defendant to "say" something from a transcript would be much more impressive than someone reading it.

[–] Daxtron2@startrek.website 2 points 2 years ago

I'm generally against the whole anti-AI stuff these days, but this makes perfect sense. There's no way of verifying whether or not the content of an upscaled image is accurate.