this post was submitted on 07 Mar 2026
52 points (98.1% liked)
Slop.


Mostly, yeah, although some of that is UI frontend formatting. For certain frontends and model types, something like "(tag:2)" increases the weight of that tag during the text-encoding step (the part that turns your prompt into usable numbers), and it only did anything if that was actually a tag the model or LoRA was trained on. It had some limited ability to push SD1.5- or SDXL-based models towards or away from a concept, but there's always so much random noise and incoherence that actually making the shitty gacha churn out a desired result means lots and lots of rerolling and poking at the prompt, and it never actually does a good job.
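For the curious, the "(tag:2)" syntax is just text the frontend parses into (chunk, weight) pairs before encoding; the weights then scale the corresponding token embeddings. Here's a minimal sketch of that parsing step in Python — a simplified assumption of how UIs like AUTOMATIC1111 handle it, not the real implementation (which also supports nesting, escaping, and `((emphasis))` shorthand):

```python
import re

# Matches "(some text:1.5)" style weighted chunks.
WEIGHT_RE = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_prompt(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (text, weight) chunks; unweighted text gets 1.0."""
    chunks = []
    pos = 0
    for m in WEIGHT_RE.finditer(prompt):
        before = prompt[pos:m.start()]
        if before.strip(" ,"):
            chunks.append((before.strip(" ,"), 1.0))
        chunks.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:]
    if tail.strip(" ,"):
        chunks.append((tail.strip(" ,"), 1.0))
    return chunks

print(parse_prompt("a cat, (red hair:2), forest"))
# → [('a cat', 1.0), ('red hair', 2.0), ('forest', 1.0)]
```

Downstream, each chunk's weight multiplies its token embeddings before they go to the diffusion model, which is why a weight on a tag the encoder never saw during training does roughly nothing.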
Modern Qwen-based natural-language prompt models are literally just: you describe something in as much detail as possible, and then the image model gives you something that's still dogshit and still randomly broken, but a little closer to what it was told than the older ones managed.
There's no secret to it, and even at its most esoteric it was less complicated than the markup formatting used in reddit or lemmy posts lmao.