this post was submitted on 12 Jul 2025
22 points (92.3% liked)

Technology

top 9 comments
[–] irotsoma@lemmy.blahaj.zone 4 points 1 day ago

It was always going to be a cat-and-mouse game, because "AI" companies have decided to abandon ethics completely. There are few consequences when the training is done by a shell company and the parent company keeps all of the resulting training data and money, so the shell company going bankrupt and abandoning responsibility is no issue. Sad that the court system is so non-technical that they don't see training data produced from copyrighted material as a copy of that material, even if they were to decide that accessing the material was a violation.

[–] Feyd@programming.dev 8 points 1 day ago (1 children)

Of course it can be beaten. All that happened is these university employees did big tech's work for them, and they're trying to spin it like they're on the artists' side anyway.

[–] Hazzard@lemmy.zip 1 points 1 day ago (1 children)

Eh, it's a fair point. Not trying something like this is essentially "security by obscurity", which has been repeatedly proven to be a mistake.

Wouldn't surprise me if OpenAI or someone else already had something like this behind closed doors, but now the developers of tools like Nightshade can start working on AI poison that's more resilient against these kinds of "cleanup" tools.

[–] Feyd@programming.dev 2 points 20 hours ago (1 children)

This will be a never-ending arms race. There isn't going to be a permanent obstacle, so all this did was help the bad guys move to the next stage.

[–] Hazzard@lemmy.zip 1 points 20 hours ago* (last edited 19 hours ago) (1 children)

Exactly, it is an arms race. But if a few students can beat our current best weapons, it'd be terribly naive to think the multiple multi-billion-dollar companies sinking their entire futures into this, and already amoral enough to be stealing content en masse from the entire internet, hadn't already cracked this and locked everyone involved into serious NDAs.

Better to know what your enemy has than to just cross your fingers and hope that maybe they didn't notice this was possible and have just been letting us poison the precious AI models they're sinking billions of dollars into. Having this now lets us build the next version of Nightshade that isn't so trivially defeated.

[–] Feyd@programming.dev 1 points 18 hours ago (1 children)

You're completely talking past me. Everyone knew it was a flimsy barricade, and that if the LLM companies hadn't already circumvented it, they would soon. That doesn't stop people from continuing to innovate. Publishing the results means there is a public solution anyone can use.

Do I think it's the worst thing that could happen? Not really, but your security-through-obscurity argument makes no sense in this context, and it would probably have been better if this hadn't been done and published so that every bad actor can use it with minimal effort.

[–] Hazzard@lemmy.zip 1 points 17 hours ago* (last edited 16 hours ago) (1 children)

Mhm, fair enough, I suppose this is a difference in priorities then. Personally, I'm not nearly as worried about small players, like hobbyists and small companies, who wouldn't have already developed something like this in-house.

And I brought up "security through obscurity" because I'm somewhat optimistic this can work out the way encryption has: tons of open-source research was done into encryption and decryption until we worked out encryption standards we can run at home that current server farms couldn't break before the heat death of the universe.

Many of the people releasing decryption methods were considered villains, because their work made hacking previously private data easy and accessible, but that research was the only way to get to where we are. I'm hopeful that one day we actually could make an unbeatable AI poison, so I'm happy to support research that pushes us toward that end.

I'm just not satisfied preventing small players from training AI on art without permission while knowingly leaving Google and OpenAI an easy way to bypass it.

[–] Feyd@programming.dev 2 points 15 hours ago (1 children)

Yeah, there's the difference. I'm not convinced there is a robust poison, but I'd love to be wrong.

[–] Hazzard@lemmy.zip 1 points 14 hours ago

Amen to that, here's to hoping.