this post was submitted on 23 Oct 2023
569 points (86.3% liked)


A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
[...]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
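At a technical level, this kind of invisible pixel change is what's usually called an adversarial perturbation. Here's a minimal sketch of the general idea in PyTorch; it's a generic targeted FGSM-style step for illustration, not the actual Glaze or Nightshade algorithm, and `model` is assumed to be some pretrained image classifier:

```python
# Minimal sketch of the general idea: nudge pixels by an invisible amount
# so a classifier reads the image as something else. Generic targeted
# FGSM-style step, NOT the actual Glaze/Nightshade algorithm.
import torch
import torch.nn.functional as F

def cloak(model, image, decoy_label, epsilon=2 / 255):
    """Shift `image` (shape (1, 3, H, W), values in [0, 1]) towards `decoy_label`.

    `decoy_label` is a LongTensor holding the class we want the model to see instead.
    """
    image = image.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), decoy_label)
    loss.backward()
    # One small gradient-descent step on the decoy-label loss: the model now
    # leans towards the decoy class, but each pixel moves by at most epsilon,
    # far below what a human would notice.
    return (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
```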

barsoap@lemm.ee 2 points 1 year ago (last edited 1 year ago)

> if the only source of input data for a network is subtly corrupted, won’t that guarantee corrupted output as well?

We have to distinguish between different kinds of "corruption" here. What you seem to be describing is "if we only feed the model data from rule34, will it ever learn proper human anatomy", and the answer is no, it won't. You'd have to add data that narrows the range of body proportions from cartoonish to, well, real. That's an external source of corruption: feeding the model bad data (for your own definition of "bad"). Garbage in, garbage out.

The corruption that these adversarial models exploit, though, is inherent in the model they're attacking. Take... ropes and snakes and cats (or, generally, mammals). Good example: it is incredibly easy for a cat to mistake a rope for a snake. To the first layers of the visual cortex they look exactly the same, and evolution would rather have the cat jump away as soon as possible than get bitten; it doesn't hurt to jump away from a rope (even though the cat might end up annoyed or ashamed -- yes, cats can 110% be self-conscious, different story). So when there's an unexpected wiggly shape, the first layers tell the motor cortex to move directly, short-circuiting any higher processing.

That trait has been written into the network by evolution, very similar to how we train AI models -- conceptually, that is: in both cases the network gets trained for fitness for a purpose (the implementation details are rather different, but also irrelevant).
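On the AI side, "trained for fitness for a purpose" boils down to: measure how badly the network does the job (the loss) and nudge the weights to do better, over and over. A bare-bones sketch with a made-up toy rope/snake classifier:

```python
# Bare-bones sketch of "trained for fitness for a purpose":
# measure how unfit the network is (loss), nudge the weights to be fitter.
# The tiny two-class rope-vs-snake model here is made up for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # fitness score (lower = fitter)
    loss.backward()                        # which way should each weight move?
    optimizer.step()                       # move them a little
    return loss.item()
```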

What those adversarial models do kinda looks like this: take a picture of a rope, then shift pixels to make the rope subtly more snake-like until you get your cat to jump as reliably as possible, in as many different situations as possible, e.g. even when it's expecting it and staring straight at it. Sell the product for a lot of money. People start posting pictures of ropes; rope manufacturers adjust their weaving patterns. Other cats see those pictures and ropes: some jump, others only feel a bit, or a lot, uneasy. The ones that jump won't be able to procreate any more, being busy jumping, while the uneasy ones continue to evolve. After a couple of generations no cat cares about those ropes with shifted pixels any more.
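Translated out of the analogy, the "shift pixels until the cat jumps as reliably as possible" step is an iterative targeted attack: keep nudging the image towards whatever the model reads as "snake", while never moving any pixel beyond an invisible budget. A rough PGD-style sketch (a generic technique for illustration, not Nightshade's actual method; `model` and the label tensor are assumed):

```python
# Rough PGD-style sketch of "shift pixels until the model reliably says
# snake": many small steps towards the decoy class, with the total change
# clamped to an invisible per-pixel budget. Generic technique, not
# Nightshade's actual method.
import torch
import torch.nn.functional as F

def rope_to_snake(model, rope_img, snake_label,
                  epsilon=4 / 255, step=1 / 255, iters=40):
    original = rope_img.detach().clone()
    x = original.clone()
    for _ in range(iters):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), snake_label)
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x = x - step * grad.sign()                              # towards "snake"
            x = original + (x - original).clamp(-epsilon, epsilon)  # stay invisible
            x = x.clamp(0, 1)
    return x
```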

Whether that trains general immunity against adversarial attacks -- I wouldn't be so sure. It will very likely make the rope/snake distinction more accurate. But even if it doesn't build general immunity, it's an eternal cat-and-mouse game, and no artist will be willing to keep paying for that kind of software when it gets defeated within days anyway, because that's just how fast we can evolve models.
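In code, "evolving the models" is nothing exotic: fine-tune on the attacked images with their correct labels and the shifted ropes stop triggering "snake". A hedged sketch, assuming you already have a classifier, an optimizer, and a loader of (perturbed image, true label) pairs:

```python
# Sketch of the "next generation of cats": fine-tune the model on the
# pixel-shifted images with their *correct* labels, so the shifted ropes
# stop reading as snakes. Assumes an existing model, optimizer, and a
# dataloader yielding (perturbed_image, true_label) batches.
import torch.nn.functional as F

def evolve(model, optimizer, poisoned_loader, epochs=3):
    model.train()
    for _ in range(epochs):
        for images, true_labels in poisoned_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), true_labels)
            loss.backward()
            optimizer.step()
    return model
```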

Oh, back to the definition of corruption: if all the pictures of rope that our models ever see have shifted pixels, they'll just assume that's the norm and distinguish them from snakes because the tags say "rope" in one case and "snake" in the other. The original un-shifted pictures probably won't act as an adversarial attack, because they were never a product of trying to get cats to jump.