this post was submitted on 07 Dec 2025
573 points (98.5% liked)

Technology

[–] SocialMediaRefugee@lemmy.world 57 points 1 day ago (4 children)

It is time to start holding social media sites liable for posting AI deceptions. FB is absolutely rife with them.

[–] phutatorius@lemmy.zip 11 points 18 hours ago (2 children)

YouTube has been getting much worse lately as well. Lots of purported late-breaking Ukraine war news that's nothing but badly-written lies. Same with reports of Trump legal defeats that haven't actually happened. They are flooding the zone with shit, and poisoning search results with slop.

[–] BarneyPiccolo@lemmy.today 1 points 7 hours ago

The entire media universe is being captured by Sociopathic Oligarchs, and they intend to extend the Conservative Propaganda Machine to cover everything. They will NOT be amenable to efforts toward monitoring truth in media, unless they can be the sole determiners of what is the truth.

[–] SocialMediaRefugee@lemmy.world 1 points 8 hours ago

For some reason, Middle Eastern and Pakistani sources are cranking out fake disaster videos.

[–] DeathByBigSad@sh.itjust.works 32 points 1 day ago* (last edited 1 day ago) (2 children)

Disagree. Without Section 230 (or the equivalent laws of their respective jurisdictions), your Fediverse instance would be forced to moderate even harder in fear of legal action. I mean, who even decides what "AI deception" is? Your average lemmy.world mod, an unpaid volunteer?

It's a threat to free speech.

[–] 9488fcea02a9@sh.itjust.works 11 points 23 hours ago

Also, it would be trivial for big tech to flood every Fediverse instance with deceptive content and get us all shut down.

[–] Lumisal@lemmy.world 2 points 18 hours ago (2 children)

Just make the law so it only applies above some minimum threshold: X million users, or X percent of the population. You could even have regulation tiers tied to the number of active users, so that platforms over the billion mark, like Facebook, are regulated the strictest.

That'll leave smaller networks, forums, and businesses alone while finally bringing some badly needed regulation to the large corporations messing with things.
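To make the shape of the idea concrete, here's a tiny Python sketch. The thresholds and tier names are entirely made up for illustration, not from any real or proposed law:

```python
# Hypothetical regulation tiers keyed to monthly active users.
# All thresholds and tier names are placeholders, not real policy.
TIERS = [
    (1_000_000_000, "strict"),    # Facebook-scale platforms
    (50_000_000, "standard"),
    (1_000_000, "light"),
]

def regulation_tier(monthly_active_users: int) -> str:
    """Return the regulation tier for a platform of the given size."""
    for threshold, tier in TIERS:
        if monthly_active_users >= threshold:
            return tier
    return "exempt"  # small forums, Fediverse instances, hobby sites

print(regulation_tier(3_000_000_000))  # strict
print(regulation_tier(50_000))         # exempt
```

The point of the tiers is exactly what the comment says: a lemmy.world-sized instance falls straight through to "exempt" while the billion-user platforms land in the strictest bucket.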

[–] GamingChairModel@lemmy.world 1 points 8 hours ago (1 children)

I don't think it'd be that simple.

Any given website URL could go viral at any moment. In the old days, that might look like a DDoS that brings down the site (aka the slashdot effect or hug of death), but these days many small sites are hosted on infrastructure that is protected against unexpectedly high traffic.

So if someone hosts deceptive content on their server and it can be viewed by billions, there would be a disconnect between a website's reach and its accountability (to paraphrase Spider-Man's Uncle Ben).

[–] Lumisal@lemmy.world 1 points 7 hours ago

I agree it's not that simple, but it's just a proposed starting point for a solution. We could refine it further, then hand the refined idea to a lawyer as a charter to draft up as a proper proposal, which could then be presented to the relevant governmental body for consideration.

But few people like to put in that work. Even politicians don't. That's why corporations get so much of what they want: they put in the work, and they pay people to do it for them.

That said, view count isn't the same as membership. This solution wouldn't be perfect.

But it would be better than nothing at all, especially now that AI has turned the firehose of lies into a tsunami of lies. Currently one side only grows stronger in its capacity for havoc and mischief, while the other quite literally does nothing, and sometimes advocates for doing nothing. You could say it's a reflection of the tolerance paradox we're seeing today.

[–] DeathByBigSad@sh.itjust.works 5 points 16 hours ago (3 children)

How high is your proposed number?

Why is Big = Bad?

Proton has over 100 million users.

Do we fine Proton AG because a bunch of shitheads abuse their platform to send malicious email? How do they detect it if it's encrypted? Force them to backdoor the encryption?

[–] tad_lispy@europe.pub 3 points 13 hours ago

Proton is not social media. As to "how high", the lawmakers have to decide that, hopefully after some research and public consultation. It's not an unprecedented problem.

Another criterion might be revenue. If a company monetises users' attention and makes above a certain amount, put extra moderation requirements on it.

[–] Dozzi92@lemmy.world 2 points 13 hours ago

Yeah, I work for your biggest social media competitor, so why would I not just post slop all over your platform with the intent of getting you fined?

[–] Lumisal@lemmy.world 1 points 12 hours ago

Proton isn't social media.

If you can't understand why big = bad in terms of the dissemination of misinformation, then we're already at an impasse on any further discussion of possible numbers, statistics, and other variables for determining potential regulations.

I think just the people need to be held accountable. While I'm no fan of Meta, it is not their responsibility to hold people legally accountable for what they choose to post. What we really need is zero-knowledge-proof tech to verify a person is real without them having to share their personal information, but that breaks Meta's and others' free business models, so here we are.
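For anyone curious what "prove something without revealing it" looks like mechanically, here's a toy Schnorr identification protocol in Python. This is my own illustration, not anything Meta or any real identity system uses, and the group parameters are laughably small; real deployments use ~256-bit groups:

```python
import secrets

# Toy parameters (INSECURE sizes, for illustration only):
# p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup mod p.
p, q, g = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret "identity" key
    return x, pow(g, x, p)             # (private x, public y = g^x mod p)

def commit():
    r = secrets.randbelow(q - 1) + 1   # fresh one-time nonce
    return r, pow(g, r, p)             # (secret r, commitment t = g^r mod p)

def respond(x, r, c):
    return (r + c * x) % q             # response to the verifier's challenge c

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p). The transcript reveals nothing
    # about x itself, only that the prover knows it.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()            # the person registers the public key y once
r, t = commit()            # per-login: send commitment t
c = secrets.randbelow(q)   # the platform picks a random challenge
s = respond(x, r, c)       # the person answers
print(verify(y, t, c, s))  # True
```

The platform only ever learns "this prover knows the secret behind y", never the secret itself, which is the property the comment is pointing at.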

[–] Rhoeri@lemmy.world 4 points 1 day ago (1 children)

Sites AND the people that post them. The age of consequence-less action needs to end.

[–] finitebanjo@lemmy.world 3 points 19 hours ago

Or more like, just the people that post them.