Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technology, and tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attack is expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, those rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
Icon attribution | Banner attribution
If you are interested in moderating this community, message @brikox@lemmy.zip.
But if they're uniquely good at producing CSAM, odds are it's due to a proprietary dataset.
This is why I use the word 'proliferation,' in the nuclear sense. Though 'contamination' may be more apt... Since the days of SD1, these illegal capabilities have become more and more prevalent in the local image model space. The advent of model merging, mixing, and retraining/finetuning has caused a significant increase in the proportion of model releases that are contaminated.
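To make the mechanism concrete: the most common form of merging is plain weight-space interpolation, which is completely content-blind. Here's a minimal sketch in PyTorch, assuming two hypothetical checkpoints of the same architecture saved as plain state dicts (the file names and the 0.5 ratio are just placeholders):

```python
import torch

# Hypothetical checkpoints; any two finetunes of the same base
# architecture merge the same way.
state_a = torch.load("finetune_a.pt")
state_b = torch.load("finetune_b.pt")

alpha = 0.5  # interpolation ratio; merge UIs typically expose this as a slider

# Linear interpolation of every shared tensor. Nothing here inspects
# what either parent was trained on, so whatever behavior was trained
# into A or B is carried into the merge in attenuated form.
merged = {
    name: alpha * state_a[name] + (1 - alpha) * state_b[name]
    for name in state_a.keys() & state_b.keys()
}

torch.save(merged, "merged.pt")
```

That content-blindness is the whole problem: a merge author can't see what they're inheriting, and the lineage only gets murkier with each generation.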
What you're saying is ultimately true, but it was more true in the early days. Animated, drawn, and CGI content has always been a problem, but photorealistic capability was very limited and rare, often coming from homebrewed proprietary finetunes published on shady forums. Since then, such capabilities have become much more widespread. It's estimated that roughly a quarter to a third of photorealistic SDXL-based NSFW models released on civit.ai during 2025 have some degree of capability. (Speaking in purely boolean terms... I don't think anyone has studied the perceptual quality of these capabilities, for obvious reasons.)
Just as LLM benchmark test answers have contaminated open-source models, illegal capabilities gained from illegal datasets have contaminated image models, to the point where plenty of well-intentioned authors are unknowingly contributing to the problem. Some go out of their way to poison their models against this (usually with false-association training on specific keywords), but few bother, or even know, to do so.
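For the curious, that defensive poisoning usually works by fine-tuning so that a trigger keyword's conditioning collapses onto a benign concept, similar in spirit to published concept-erasure work. A toy, self-contained sketch of the false-association objective, with a stand-in encoder instead of a real text encoder (every name here is hypothetical, not any particular library's API):

```python
import copy
import torch
import torch.nn as nn

# Toy stand-in for a text encoder; in a real pipeline this would be
# the diffusion model's text encoder and the inputs tokenized prompts.
encoder = nn.Sequential(nn.Embedding(1000, 32), nn.Flatten(), nn.Linear(32 * 8, 128))
frozen = copy.deepcopy(encoder).requires_grad_(False)  # frozen copy provides targets

trigger_ids = torch.randint(0, 1000, (16, 8))  # prompts containing the trigger keyword
benign_ids = torch.randint(0, 1000, (16, 8))   # same prompts with a benign substitute

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
for step in range(200):
    opt.zero_grad()
    # False-association objective: pull the trigger prompt's conditioning
    # toward what the original model produced for the benign prompt, so
    # the keyword stops steering generation anywhere specific.
    loss = nn.functional.mse_loss(encoder(trigger_ids), frozen(benign_ids))
    loss.backward()
    opt.step()
```

The catch, and probably part of why so few authors bother, is that it demands an extra training pass and an explicit list of keywords to neutralize.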