this post was submitted on 30 Dec 2025
225 points (99.6% liked)

movies

[–] dontsayaword@piefed.social 10 points 1 day ago* (last edited 1 day ago) (1 children)

Totally disagree. I've seen original sources reproduced that show exactly what an AI copied to make images.

And humans can definitely create things that have never been seen before. An AI could never have invented general relativity if it had only been trained on Newtonian physics. But Einstein did.

[–] riskable@programming.dev -2 points 1 day ago (1 children)

I've seen original sources reproduced that show exactly what an AI copied to make images.

Show me. I'd honestly like to see it, because it would mean something very, very strange is happening inside the model that could be a vulnerability (I work in security).

The closest thing to that I've seen is false watermarks: if the model was trained on a lot of similar images with watermarks (e.g. all of its images of a particular kind of fungus might have come from a handful of images that were all watermarked), the output will often carry a nonsense watermark that vaguely resembles the original. This usually only happens with super specific subjects, like when you put the Latin name of a plant or tree in your prompt.

Another thing that can commonly happen is hallucinated signatures: On any given image that's supposed to look like a painting/drawing, image models will sometimes put a signature-looking thing in the lower right corner (because that's where most artist signatures are placed).

The reason this happens isn't that the image was directly copied from someone's work; it's that there's a statistical chance the model, during training, associated the keywords in your prompt with some images that had such signatures. Model training is getting better at preventing this, though, by applying better bounding-box filtering to the images as a preprocessing step. E.g. a public-domain Audubon drawing of a pelican would contribute only the bird itself, not the entire image (which would include the artist's signature somewhere).
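The cropping step is easy to picture. Here's a toy sketch in Python — the grid-of-numbers "image" and the (left, top, right, bottom) box format are simplifications for illustration, not any real pipeline's API:

```python
# Toy sketch of bounding-box filtering as a preprocessing step.
# The image is a grid of pixel values; the (left, top, right, bottom)
# bbox format is a simplification.

def crop_to_bbox(image, bbox):
    """Keep only the labeled subject, dropping the margins (and any signature)."""
    left, top, right, bottom = bbox
    return [row[left:right] for row in image[top:bottom]]

# A 6x6 "image": the subject occupies rows 1-4, columns 1-4;
# an artist "signature" sits in the bottom-right corner.
image = [[0] * 6 for _ in range(6)]
for r in range(1, 5):
    for c in range(1, 5):
        image[r][c] = 1
image[5][5] = 9  # the signature pixel we want excluded

cropped = crop_to_bbox(image, (1, 1, 5, 5))
assert all(px == 1 for row in cropped for px in row)  # signature gone
```

Only the cropped region ever reaches the training set, so the signature never becomes part of what the model learns.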

The signature shouldn't be included because the resulting image wouldn't actually be drawn by that artist; including it would be tantamount to fraud (bad). Instead, most image model makers (OpenAI with ChatGPT/DALL-E being an exception) tell the public exactly what their models were trained on. For example, they'll usually disclose that they used ImageNet (which you can download yourself here: https://www.image-net.org/download.php ).

Note: I'm pretty sure the full ImageNet database is also on Hugging Face somewhere if you don't want to create an account with them.

Also note: ImageNet doesn't actually contain images! It's a database of image metadata that includes bounding boxes. For over a decade, volunteers spent a lot of time drawing labeled bounding boxes on public images that anyone can download for free (with open licenses!). This means that if you want to train a model with ImageNet, you have to walk the database and download the image behind every URL it contains.
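"Walking the database" looks roughly like this. The record layout below (label mapped to URL + bounding-box entries) is invented for illustration — the real ImageNet metadata format differs — and the download step is stubbed out so nothing is actually fetched:

```python
# Sketch of walking an ImageNet-style metadata database. The record
# layout here is a simplification, not the real ImageNet format.
import urllib.request

metadata = {
    "n02051845 (pelican)": [
        {"url": "https://example.org/pelican1.jpg", "bbox": (12, 8, 300, 240)},
        {"url": "https://example.org/pelican2.jpg", "bbox": (0, 0, 640, 480)},
    ],
}

def walk(db):
    """Yield (label, url, bbox) for every entry in the metadata DB."""
    for label, entries in db.items():
        for entry in entries:
            yield label, entry["url"], entry["bbox"]

def download(url, path):
    """Fetch one training image; only the metadata lives in the DB itself."""
    urllib.request.urlretrieve(url, path)

for label, url, bbox in walk(metadata):
    print(label, url, bbox)  # download(url, ...) would go here
```

The point: the dataset ships the labels and boxes, and every trainer has to go fetch the actual pixels themselves.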

If anything was "stolen", it was the time of the volunteers who created that classification system/DB so that things like OpenCV could work — so your doorbell/security camera can tell the difference between a human and a cat.

[–] dontsayaword@piefed.social 8 points 1 day ago* (last edited 1 day ago) (1 children)
[–] riskable@programming.dev -5 points 1 day ago (1 children)

What that "Afghan girl" image demonstrates is simply a lack of diversity in Midjourney's training data. They probably had only a single image categorized as "Afghan girl", so the prompt ended up with an extreme bias toward that one training example.

Having said that, Midjourney's model is entirely proprietary so I don't know if it works the same way as other image models.

It's all about statistics. For example, there were so many quotes and literal copies of the first Harry Potter book in OpenAI's training set that you could get ChatGPT to spit out something like 70% of the book with a lot of very, very specific prompts.

At the heart of every AI is a random number generator. If you ask it to generate an image of an Afghan girl—and it was only ever trained on a single image—it's going to output something similar to that one image every single time.

On the other hand, if it had thousands of images of Afghan girls you'd get more varied and original results.

For reference, finding training-data flaws like that "Afghan girl" image is one of the methods security researchers use to break generative models.

Flaws like this are easy to fix once they're found. So it's likely that over time, image models will improve and we'll see fewer issues like this.

The "creativity" isn't in the AI model itself, it's in its use.

[–] dontsayaword@piefed.social 7 points 1 day ago (1 children)

I guess the argument is "if the AI mixes enough copied art together that you can't tell as easily, it's being creative like a human," and I just don't really believe that. Perhaps it's a philosophical question.

[–] riskable@programming.dev -1 points 1 day ago

It's more like this: If you give a machine instructions to construct or do something, is the end result a creative work?

If I design a vase (using nothing but code) that's meant to be 3D printed, does that count as a creative work?

https://imgur.com/bdxnr27

That vase was made using code (literally just text) I wrote in OpenSCAD. The model file is the result of the code I wrote and the physical object is the output of the 3D printer that I built. The pretty filament was store-bought, however.
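To make the "design with nothing but code" idea concrete, here's a toy Python analogue of a parametric vase profile — the real design was OpenSCAD, and every number below is invented for illustration:

```python
# Toy analogue of designing a vase "using nothing but code": compute the
# wall radius at each height. The real design was OpenSCAD; these shape
# parameters are made up.
import math

def vase_profile(height_mm=120, base_radius=30, bulge=10, waves=4):
    """Return (z, radius) pairs describing the vase's outline."""
    profile = []
    for z in range(height_mm + 1):
        t = z / height_mm
        radius = base_radius + bulge * math.sin(waves * math.pi * t)
        profile.append((z, radius))
    return profile

outline = vase_profile()
# Every radius stays positive, so the wall never self-intersects:
assert all(r > 0 for _, r in outline)
```

In OpenSCAD you'd sweep an outline like this around the Z axis (e.g. with rotate_extrude()) to get the solid model the printer slices.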

If giving a machine instructions doesn't count as a creative process, then programming doesn't count either. That's all you're doing when you feed a prompt to an AI: giving it instructions. It's just the latest tech for giving instructions to machines.