this post was submitted on 04 Jan 2026
14 points (100.0% liked)

technology

I'm looking to make what seem like two fairly simple scripts.

First script:

Step 1: Take the contents from a table. The table is a collection of ChatGPT responses to the prompt: "Please generate a list as long as possible arranged in a table of alternative ways to phrase this hypothetical AI prompt, using different variations of scenery and demands for visually-intensive aesthetic additions. In all variations, keep "@sama" intact: In a photorealistic scene, @sama dances around a large pile of burning money. Surrounding him are crying men in suits, some of them begging on their knees and sobbing on the ground. This takes place in an extremely detailed forest that shows off the full complexity of the model's image generation capabilities. There is a soundtrack of happy ukulele music only broken by the sounds of crying and @sama laughing."

Step 2: Feed a cell from that table into OpenAI's Sora website and make it generate a video. This repeats until the day's free upload limit is reached.


Second script:

Step 1: Take the contents of another table. It's a collection of ChatGPT responses to a prompt asking for a similar list of ways to rephrase "In this news photo from today, a man lit a large pile of money on fire. Please alter the image to look slightly off in ways that demonstrate the ways an AI model would fail to authentically reproduce it, but do so in a way that it wouldn't be obvious to an AI model inspecting the image to distinguish it from a photograph."

Step 2: Upload a ChatGPT-generated image of Sam Altman burning a pile of money while businessmen cry in horror to ChatGPT, along with the variations of the prompt from the table.

Step 3: Save the resulting image, then repeat the process in a new chat using that image and the next cell from the table.


I've done both of these processes manually and they work. I'd like to max out the slop generation across as many accounts as possible, so OpenAI wastes money training itself on its own filth. Ideally this would be one button you could press on each website, either as a browser extension or a desktop app. What coding language can I use, as a monkey with a keyboard, to bang this out?

top 5 comments
[–] doodoo_wizard@lemmy.ml 2 points 3 days ago

This won’t work.

Not because you don’t know how to do it, or because they’re not using user-generated content in the training datasets, but because all of these services are hardened specifically against this vector.

Imagine you’re one of a large handful of AI companies offering cheap or free services at far, far below cost in an effort to be the guy once it all takes off. What’s the literal first thing you’d do to kneecap your rivals? You’d load their service down generating garbage so they have to waste money on it for no (or even negative) return!

So naturally they have all worked soft limits into their APIs and worded their terms of service to forbid vaguely defined “abuses”, framed as reining in a few bad apples who live in vans down by their respective rivers before they can poison the well, when it’s really all about preventing corporate sabotage.

The proliferation of “slop art” like yapdollar or the millennial memory care smiley is a great example of a corporate version of the CIA’s support for the tomato soup can man in the 60s: ostensibly saying something about our world, but literally funded to prevent the opposition from being able to function.

Once the obvious vector of “waste a billion GPU flops making a picture of your competitor suffering over and over again” ran up against a wall, companies did a bunch of distributed versions, often under the guise of “reverse engineering” or “research”, then pivoted towards encouraging the most brain-broken festies to use their competitors’ services, and are presently all in on social media grifters doing the same.

Your program couldn’t possibly do more damage than a hundred million zoomers making tool covers animate or having Colby the Christian computer narrate their resumes.

If you just skipped to the end, then this reply can be filed under “the revolution will not be televised”, because we aren’t living in They Live or The Running Man. Simply sabotaging or taking over the media apparatus used to harm people can’t “set them free”, because that apparatus is part of a greater system that is self-repairing and self-replicating. Adventurism is frowned upon not only because it’s an A-1 way to see who’s a fed but also because it doesn’t fucking work.

[–] Cutecity@hexbear.net 4 points 4 days ago* (last edited 4 days ago) (1 children)

How do you know this actually poisons anything? I thought user outputs only end up in training data by accident, specifically because the training process is known to degrade when trained on AI-generated content, so they avoid it as much as possible.

[–] happybadger@hexbear.net 3 points 4 days ago (1 children)

I think there are three ways to poison an AI:

  1. Attack the training data. You're probably right in that they aren't training it directly on user inputs. I think LLMs can get away with that because they have access to a lot of other sources of data. Image and video generators have fewer sources, and in the case of OpenAI's products they allow you to recycle the outputs. You're poisoning it by making the outputs more unreliable.

  2. Attack the computational power. For LLMs it's not a big deal. For image and video generators, it's expensive to generate slop. You're poisoning the process by clogging up the queue and increasing their demand for data centre resources that surrounding communities hate. It can't even generate unreliable outputs because it's busy generating the equivalent of fully blacked-out printer pages.

  3. Attack the platform. Sora's front page is already as bleak and disengaged as 2010s Digg. It wouldn't take much to flood the space with the same meme that's altered just enough each time to evade filtering efforts. Maybe it's Sam Altman eating the money under a photorealistic sea, maybe it's a tech CEO with black hair urinating on money in a photorealistic recreation of the Hagia Sophia, maybe it's a description of OpenAI's logo with arms and legs swimming in a pool of molten gold while Jake Paul plays funeral music on a trumpet. You're poisoning it structurally by further disengaging the community, turning some of it hostile to the platform, and discouraging any investor who opens the app.

There's probably a more effective way of achieving all three that I just haven't thought of. I can at least take a stab at 2 and 3, while 1 is dependent on those outputs being rehosted on other platforms that we do know they source training data from. Every reddit post with an increasingly regurgitated ChatGPT image claiming to be something it isn't gets fed into ChatGPT.

[–] Cutecity@hexbear.net 2 points 3 days ago

Thanks, I get that! I hope it ends up working.

[–] nasezero@hexbear.net 3 points 4 days ago

If you're comfortable with JavaScript/Node, I bet Playwright could do this easily. It's a JavaScript framework for browser automation and end-to-end testing that runs automated operations against websites. It has everything you need to launch a browser, open a website, find the input, type in whatever text, and submit it.
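
As a very rough illustration, a minimal sketch of that loop in TypeScript with Playwright might look like this. Everything Sora-specific here is an assumption: the URL, the textarea selector, the limit-detection text, and the prompts.txt / auth.json file names are placeholders you'd have to adjust after inspecting the real page while logged in.

```typescript
// Minimal sketch: feed prompt variations from a local file into a web form with Playwright.
// All URLs, selectors, and file names are placeholders, not Sora's real UI.
import { chromium } from 'playwright';
import { readFileSync } from 'node:fs';

async function main() {
  // One prompt variation per line, exported from the ChatGPT-generated table.
  const prompts = readFileSync('prompts.txt', 'utf8')
    .split('\n')
    .map((line) => line.trim())
    .filter(Boolean);

  const browser = await chromium.launch({ headless: false });
  // Assumes a logged-in session saved earlier, e.g. via `npx playwright codegen --save-storage=auth.json`.
  const context = await browser.newContext({ storageState: 'auth.json' });
  const page = await context.newPage();
  await page.goto('https://sora.example.com'); // placeholder URL

  for (const prompt of prompts) {
    // Placeholder selector: inspect the real page and adjust.
    await page.fill('textarea', prompt);
    await page.keyboard.press('Enter');

    // Give the site time to accept the submission before sending the next one.
    await page.waitForTimeout(60_000);

    // Stop when something that looks like a daily-limit banner shows up (a guess, not a known selector).
    const limitHit = await page
      .getByText(/limit/i)
      .isVisible()
      .catch(() => false);
    if (limitHit) break;
  }

  await browser.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The second script would presumably be the same loop pointed at ChatGPT's page, with an extra page.setInputFiles(...) call to attach the saved image before submitting each prompt.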