It feels like a privilege escalation exploit: at some point the chain of authority jumped from a random picture of unknown origin and provenance to a link in the chain that should be reliable enough to trust blindly on this subject.
I dunno, someone just throws this up on social media, and you're the person in the position to say "hey, halt the trains", don't you do just that out of an abundance of caution?
It is time to start holding social media sites liable for posting AI deceptions. FB is absolutely rife with them.
YouTube has been getting much worse lately as well. Lots of purported late-breaking Ukraine war news that's nothing but badly-written lies. Same with reports of Trump legal defeats that haven't actually happened. They are flooding the zone with shit, and poisoning search results with slop.
Disagree. Without Section 230 (or the equivalent laws of their respective jurisdictions), your Fediverse instance would be forced to moderate even harder for fear of legal action. I mean, who even decides what an "AI deception" is? Your average lemmy.world mod, an unpaid volunteer?
It's a threat to free speech.
Just make the law so it only affects platforms above some threshold of millions of users, or a minimum percentage of the population. You could even have regulation tiers tied to the number of active users, so those over the billion mark, like Facebook, are regulated the strictest. A rough sketch of the idea is below.
That'll leave smaller networks, forums, and businesses alone while finally applying some badly needed regulation to the large corporations messing with things.
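To make the tier idea concrete, here is a minimal sketch; the cutoffs (1M / 100M / 1B monthly active users) are invented for illustration, not taken from any actual law or proposal:

```python
# Invented illustration of the "regulation tiers" idea above. The
# cutoffs are made up for the example, not from any real legislation.
def regulation_tier(monthly_active_users: int) -> str:
    if monthly_active_users >= 1_000_000_000:
        return "strictest"   # Facebook-scale platforms
    if monthly_active_users >= 100_000_000:
        return "strict"
    if monthly_active_users >= 1_000_000:
        return "basic"
    return "exempt"          # small forums and Fediverse instances

print(regulation_tier(3_000_000_000))  # -> strictest
print(regulation_tier(50_000))         # -> exempt
```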
How high is your proposed number?
Why is Big = Bad?
Proton have over 100 million users.
Do we fine Proton AG for a bunch of shitheads abusing their platform and sending malicious email? How do they detect it if it's encrypted? Force them to backdoor the encryption?
Yeah, say I work for your biggest social media competitor: why would I not just go post slop all over your platform with the intent of getting you fined?
Proton is not a social network. As to "how high", the lawmakers have to decide on that, hopefully after some research and public consultations. It's not an unprecedented problem.
Another criterion might be revenue. If a company monetises users' attention and makes above a certain amount, put extra moderation requirements on it.
Also, it would be trivial for big tech to flood every Fediverse instance with deceptive content and get us all shut down.
I think just the people need to be held accountable. While I am no fan of Meta, it is not their responsibility to hold people legally accountable for what they choose to post. What we really need is zero-knowledge-proof tech to verify a person is real without them having to share their personal information, but that breaks Meta's and others' free business models, so here we are.
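For what it's worth, the primitive being gestured at here is a zero-knowledge identification protocol. Below is a toy Schnorr-style sketch in Python, purely to show the shape of "prove you know a secret without revealing it"; the parameters are demo values, nowhere near a secure real-world choice, and an actual proof-of-personhood scheme would be far more involved:

```python
# Toy Schnorr-style zero-knowledge identification: prove knowledge of
# a secret x with y = g^x mod p, without revealing x. Illustrative
# parameters only; use a vetted library and standard group in practice.
import secrets

p = 2**127 - 1   # a Mersenne prime, used here only as a demo modulus
q = p - 1        # exponents live mod p - 1 (Fermat's little theorem)
g = 3            # demo generator

# Prover's long-term identity: secret x, public key y
x = secrets.randbelow(q)
y = pow(g, x, p)

# One round of the protocol:
r = secrets.randbelow(q)   # prover's fresh random nonce
t = pow(g, r, p)           # commitment, sent to the verifier

c = secrets.randbelow(q)   # verifier's random challenge

s = (r + c * x) % q        # prover's response; x stays hidden behind r

# Verifier accepts iff g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified: prover knows x without revealing it")
```

The verifier ends up convinced the prover holds x, but the transcript (t, c, s) on its own reveals nothing usable about x, which is the property a "real person, no personal data" check would build on.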
Sites AND the people that post them. The age of consequence-free action needs to end.
Or more like, just the people that post them.
People who post this stuff without identifying it as fake should be held liable.
A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
WTF?
Doesn't the fucking BBC have at least 1 or 2 experts for spotting fakes? RAN THROUGH AN AI CHATBOT?? SERIOUSLY??
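For context, one classic first-pass technique an image-forensics person might reach for, rather than a chatbot, is error level analysis (ELA). A minimal sketch with Pillow follows; the filename is hypothetical, and ELA is only a hint, not proof:

```python
# Error level analysis (ELA), illustrative only: re-save the image as
# JPEG and diff it against the original. Regions edited after the last
# save often recompress differently and light up in the diff. Modern
# AI-generated fakes can evade this; it is a starting point, not proof.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_ela_tmp.jpg", "JPEG", quality=quality)  # recompress
    resaved = Image.open("_ela_tmp.jpg")
    diff = ImageChops.difference(original, resaved)  # per-pixel delta
    # The raw differences are faint; scale them up so they are visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# Hypothetical usage:
# error_level_analysis("suspect_bridge_photo.jpg").show()
```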
People need to get that with the proliferation of AI, the only way to build credibility is not to lean on AI for verification but to go the exact opposite way: grab your shoes and go places. Make notes. Take pictures.
As AI permeates the digital space (a process that is unlikely to be reversed), everything that's human will need to become, figuratively speaking, analogue again.
I haven't read it, but it could be to demonstrate how easy it was to identify it as a fake, even without the resources of the BBC.
Or because it was between midnight and 2 a.m. Still, as an author I wouldn't have mentioned it.
They have vibe journalists now
WTF? Why did nothing like this ever happen back in the Photoshop days? Are people just dumber now?
Because the Venn diagram of “people who would maliciously do something like this” and “people with good enough Photoshop skills to make it look realistic” was nearly two separate circles. AI has added a third circle, “people with access to AI image generators”, and it has a LOT of overlap with the first group simply because it is so large.
Really? I remember tons of nicely photoshopped pictures on Snopes. There was a lot of trolling by people with skills going on.
Those remained on email chains, unlike the social media of today, where anyone can generate any image and send it to millions of gullible people in a second.
Email chains? You're thinking of the early internet, 40 years ago. Twitter is 20 years old, Instagram 15. People were sharing fake images on social media long before AI. I just can't imagine anyone responsible making a decision like stopping trains based on a single image on the internet. You know how easy it would be to post an image of a forest fire on Twitter? You don't even have to fake it; simply take an image from some other fire. You make decisions like that based on credible calls, not something you saw online.
Even then, it feels like there were a lot fewer gullible people online 10 years ago compared to today.
That was the first thing I said. People are just dumber now.
It took skill to do this before. Hardly anyone with that level of skill and time would do this. Now the dumb idiots have access to that skillset because of AI doing all the work for them.
It doesn’t require skill anymore. AI has given children the ability to pretend they have a skill, and to use it to fool people for fun.
The thing is you actually need some skill to do it in Photoshop, but now every dumb fuck who knows how to read can do shit like this.
These are more realistic and far far easier to make.
A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
What the actual fuck? You couldn't spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?
A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged
So they did. Why are we talking about ChatGPT then? You could just leave that part out. It's useless. Obviously a fake photo has been manipulated. Why bother asking?
I tried the image of this real actual road collapse: https://www.tv2.no/nyheter/innenriks/60-mennesker-isolert-etter-veiras/12875776
I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.

God damn I hate this tool.
Thanks for posting this, great example
For anyone outside the UK, the bridge in the picture is carrying the West Coast Mainline (WCML).
The UK basically has two major rail routes between London and Edinburgh/Glasgow (where most people in Scotland live): the East Coast Mainline and the West Coast Mainline. They also connect several major cities and regions.
The person who posted this basically claimed that a bridge on one of the UK's busiest intercity rail routes had started to collapse, which is not something you say lightly. It's like saying all of New York's airports had shut down because of three coincidental sinkholes.
Wait until this shit starts an actual war.