The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material in 2025, with "the vast majority" stemming from Amazon.

top 42 comments
[–] Prove_your_argument@piefed.social 103 points 5 days ago (3 children)

Amazon Photos syncing, if I had to guess. It was marketed as free unlimited photo backup for Amazon Prime users.

[–] AmbitiousProcess@piefed.social 53 points 5 days ago

Yep. They are allowed to use your photos to "improve the service," which AI training would almost certainly qualify as in legal terms. No notice to you is required if they rip through your entire album of family photos so an AI model can get 0.00000000001% better at generating fake family photos.

[–] ImgurRefugee114@reddthat.com 10 points 5 days ago* (last edited 5 days ago) (3 children)

Unlikely IMO. Maybe some... But if they scraped social media sites like blogs, Facebook, or Twitter, they would end up with dumptrucks full. Ask anyone who has to deal with UGC: it pollutes every corner of the net and it's damn near everywhere. The proliferation of local models capable of generating photorealistic material has only made the situation worse. Actionable cases were rare to uncover before, but the signal-to-noise ratio is garbage now, and it's overwhelming most agencies (which were already underwater).

[–] ZoteTheMighty@lemmy.zip 10 points 5 days ago (1 children)

But if they're uniquely good at producing CSAM, odds are it's due to a proprietary dataset.

[–] ImgurRefugee114@reddthat.com 2 points 5 days ago* (last edited 5 days ago)

This is why I use the word 'proliferation,' in the nuclear sense. Though 'contamination' may be more apt... Since the days of SD1, these illegal capabilities have become more and more prevalent in the local image model space. The advent of model merging, mixing, and retraining/finetuning has caused a significant increase in the proportion of model releases that are contaminated.

What you're saying is ultimately true, but it was more true in the early days. Animated, drawn, and CGI content has always been a problem, but photorealistic capability was very limited and rare, often coming from homebrewed proprietary finetunes published on shady forums. Since then, such models have become much more prolific. It's estimated that roughly a fourth to a third of photorealistic SDXL-based NSFW models released on civit.ai during 2025 have some degree of this capability. (Speaking purely in a boolean metric... I don't think anyone has studied the perceptual quality of these capabilities, for obvious reasons.)

Just as LLM benchmark test answers have contaminated open source models, illegal capabilities gained from illegal datasets have also contaminated image models, to the point where plenty of well-intentioned authors are unknowingly contributing to the problem. Some go out of their way to poison models (usually by training false associations on specific keywords), but few bother, or even know, to do so.
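
To make the merging point concrete: most community merges are just weighted averages of two checkpoints' weights, so anything trained into either parent rides along into the result. A rough sketch (the file names, the mixing ratio, and the assumption that both checkpoints are plain PyTorch state dicts with matching keys are all illustrative, not anyone's actual pipeline):

```python
# Rough sketch of a typical community "model merge": a plain weighted
# average of two checkpoints' weights. Whatever was trained into either
# parent (including capabilities from a tainted finetune) carries over
# into the result; nothing in this step can filter it back out.
# File names and the mixing ratio are made up for illustration.
import torch

base = torch.load("base_model.pt")              # hypothetical checkpoint
finetune = torch.load("community_finetune.pt")  # hypothetical checkpoint

alpha = 0.6  # weight given to the base model
merged = {name: alpha * w + (1.0 - alpha) * finetune[name]
          for name, w in base.items()}

torch.save(merged, "merged_model.pt")
```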

[–] ColeSloth@discuss.tchncs.de 8 points 5 days ago (1 children)

They wouldn't bother trying to hide that the images were pulled from those public services.

They 100% know that if they revealed they used everyone's private photos backed up to the Amazon cloud as fodder for their AI, it would piss people off and they'd lose some business out of the deal.

[–] ImgurRefugee114@reddthat.com 3 points 5 days ago

Well, another factor is provenance: they don't keep track of exactly where they got their data from. Sometimes at the dataset level, but almost never for an individual sample. "We found CSAM somewhere on maybe reddit or imgur or pinterest" is practically worthless.
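
As a made-up illustration of what that looks like in practice: a freshly scraped sample often does carry its source URL, but once shards get filtered, deduplicated, and re-packed, that per-sample field tends to get dropped and only dataset-level provenance survives (all field names here are invented):

```python
# Hypothetical illustration of the provenance problem (field names invented).

scraped_sample = {
    "image": "shard_0421/000137.jpg",
    "url": "https://example.com/some/page",   # per-sample provenance
    "caption": "family at the beach",
}

repacked_sample = {
    "image": "train_mix_v3/000137.jpg",
    "caption": "family at the beach",
    "source_set": "web_scrape_2024_q3",       # only dataset-level provenance left
}

# If a flagged image only exists in the repacked form, the best anyone can
# report is "somewhere in web_scrape_2024_q3", i.e. the "maybe reddit or
# imgur or pinterest" answer above.
```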

[–] captainlezbian@lemmy.world 3 points 5 days ago

Yeah, my bet is Facebook and maybe some less reputable sites. Surely they didn't scrape 8chan, right?

[–] phx@lemmy.world 3 points 4 days ago

Yeah, a lot of people seem to think that these companies built these AIs by buying or building some sort of special training set/data, when in reality no such thing really existed.

They've basically just scraped every bit of data they can. When it comes to big corps, at least some of that data is likely from scraping customers' data. There's also scraping of the Internet in general, including sites such as Reddit (which is a big reason they locked down their API: they wanted to sell that data), but many have also been caught with a ton of 'pirated' data from torrents etc.

I'm sure there was a certain amount of sludge in customers' synced files and on sites like Reddit, but I'd also hazard a guess that the stuff grabbed from torrents etc. had some truly heinous material that simply got added to what was being force-fed to the AI, especially the early ones.

[–] ieatpwns@lemmy.world 47 points 5 days ago (1 children)

Bezos's laptop. If I'm wrong, he can prove it.

[–] TheLeadenSea@sh.itjust.works 11 points 5 days ago (1 children)

We usually have "innocent until proven guilty", not the other way around. He's already guilty of being a billionaire, no need to add charges unnecessarily.

[–] gustofwind@lemmy.world 23 points 5 days ago (1 children)

"Innocent until proven guilty" is for a court of law, not public opinion.

[–] lvxferre@mander.xyz 12 points 5 days ago (1 children)

"Innocent until proved guilty" is also a rather important moral principle, because it prevents witch hunts.

Plus, we don't even need to claim he's got CSAM on his laptop — the fact that he leads a company covering for child abusers is more than enough.

[–] smeg@infosec.pub 19 points 5 days ago (3 children)

All of the AI tools know how to make CP somehow - probably because their creators fed it to them.

[–] Grimy@lemmy.world 14 points 5 days ago* (last edited 5 days ago) (1 children)

If it knows what children look like and knows what sex looks like, it can extrapolate. That being said, I think all photos of children should be removed from the datasets, regardless of sexual content, because of this.

[–] Rooster326@programming.dev 13 points 5 days ago (1 children)

Obligatory it doesn't "know" what anything looks like.

[–] Grimy@lemmy.world 9 points 5 days ago

Thank you, I almost forgot. I was busy explaining to someone else how their phone isn't actually smart.

[–] phx@lemmy.world 1 points 4 days ago

They fed them the Internet, including libraries of pirated material. It's like drinking from a fountain at a sewage plant.

[–] stoly@lemmy.world -4 points 5 days ago (1 children)

There will be a lot of medical literature with photos of children’s bodies to demonstrate conditions, illnesses, etc.

[–] phoenixz@lemmy.ca 11 points 5 days ago (1 children)

Yeah, press X to doubt that AI is generating child pornography from medical literature.

These fuckers have fed AI anything and everything to train them. They've stolen everything they could without repercussions; I wouldn't be surprised if some of them fed their AIs child porn because "data is data" or something like that.

Depending on how they scraped data, they may have just let their crawlers run wild. Eventually they would've run into child porn, which is yet another reason why this tech is utterly shit. If you can't control your tech you shouldn't have it, and frankly speaking, curation is a major part of any data processing.
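
To be concrete about what "curation" means here: at minimum, a scraping pipeline should screen files against hash lists of known illegal material before anything reaches a training set. A toy sketch with invented paths (real pipelines match perceptual hashes supplied by clearinghouse services rather than exact SHA-256 digests of local files):

```python
# Toy sketch of a minimal curation step: refuse to keep any scraped file
# whose hash appears on a blocklist of known-bad material. Paths are
# invented for the example.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def curate(scrape_dir: str, blocklist_file: str, out_list: str) -> None:
    blocked = set(Path(blocklist_file).read_text().split())
    kept = [str(img) for img in Path(scrape_dir).rglob("*.jpg")
            if sha256_of(img) not in blocked]  # drop anything on the blocklist
    Path(out_list).write_text("\n".join(kept))

# curate("raw_scrape/", "known_bad_hashes.txt", "curated_files.txt")
```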

[–] smokin_shinobi@lemmy.world 11 points 5 days ago

Mar-a-Lago is my guess where it came from.

[–] Hirom@beehaw.org 10 points 5 days ago (1 children)
[–] reallykindasorta@slrpnk.net 1 points 5 days ago

My first thought too

[–] jasoman@lemmy.world 8 points 5 days ago

Republican pedophiles, hence why they can't say where it came from

That sounds like Bezos's personal stash then.

[–] FalschgeldFurkan@lemmy.world 2 points 4 days ago

> but isn't saying where it came from

Isn't that already grounds for legal punishment? This shit really shouldn't fly

[–] TheSlad@sh.itjust.works 3 points 4 days ago (1 children)

When I hear stuff like this, it always makes me wonder if the material is actual explicit exploitation of a minor, or just gross anime art scraped from 4chan and sketchy image boards.

[–] InFerNo@lemmy.ml 2 points 4 days ago* (last edited 4 days ago) (2 children)

And also innocent personal pictures of people photographing their kids without thinking of the implications. Dressing at the beach/pool, bath time as a toddler. People don't always think it through. They get uploaded to a cloud service and then scraped for AI that way, is my guess.

[–] protogen420@lemmy.blahaj.zone 5 points 4 days ago* (last edited 4 days ago)

Remember when a father took pictures of his child during COVID because the doctor asked for them, since they were keeping physical visits to a minimum because of the pandemic, and Google's automated system flagged it as CSAM? The poor father lost his Gmail and Google account, which ended up fucking up his life because that was his work email, and his phone number got blacklisted (Google accounts require phone number verification).

[–] TheSlad@sh.itjust.works 2 points 4 days ago

Yea that too. I read the article after making that comment wondering if they clarified...

Amazon stated that their detection/moderation has a very low tolerance, so there were a lot of borderline cases/false positives in their reports...

In the end though, it seems like all of Amazon's reports were completely unactionable anyway, because Amazon couldn't even tell them the source of the scraped images.
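
For a sense of how a "very low tolerance" turns into huge report numbers: the lower a classifier's reporting threshold, the more borderline material gets swept in, so report volume balloons while the share of genuine positives shrinks. A toy illustration with made-up scores and thresholds, nothing about Amazon's actual system:

```python
# Toy illustration of why a low reporting threshold inflates report
# counts with borderline material. All numbers here are invented.
import random

random.seed(0)
# Pretend classifier scores: ~1% of items are genuinely bad (high scores),
# the rest are benign but some look borderline.
scores = [(random.gauss(0.9, 0.05), True) for _ in range(100)] + \
         [(max(0.0, random.gauss(0.2, 0.15)), False) for _ in range(9900)]

for threshold in (0.8, 0.3):
    reported = [(s, bad) for s, bad in scores if s >= threshold]
    false_pos = sum(1 for _, bad in reported if not bad)
    print(f"threshold={threshold}: {len(reported)} reports, "
          f"{false_pos} borderline/false positives")
```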

[–] webkitten@piefed.social 3 points 5 days ago

Well that's not going to hold up in court.