[–] deur@feddit.nl 23 points 1 day ago (6 children)

People want pieces of art made by actual humans. Not garbage from the confident statistics black box.

[–] Lumiluz@slrpnk.net 4 points 22 hours ago (2 children)

What if they use it as part of the art tho?

Like a horror game that uses AI to keep slightly tweaking the paintings in a haunted building, so they look just 1% creepier every time you look past them?

[–] AceFuzzLord@lemm.ee 5 points 22 hours ago (2 children)

Would the feature in that horror game Zort, where you sometimes use the player respawn item and it respawns an NPC that uses clips of what a specific dead player said while playing, count as AI use? If so, that's a pretty good use of AI in horror games, in my opinion.

[–] Semjaza@lemmynsfw.com 2 points 10 hours ago

That's not generative, since it's just copying player input. Feasible without AI, just storing strings for later recall.
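
A minimal sketch of that non-AI approach, with hypothetical names (record_line, npc_speak) and plain strings standing in for recorded clips:

```python
import random

# Lines captured from players during normal play, keyed by player ID.
# (Hypothetical structure; a real game would store audio clips, not strings.)
recorded_lines: dict[str, list[str]] = {}

def record_line(player_id: str, line: str) -> None:
    """Store something a player said so it can be replayed later."""
    recorded_lines.setdefault(player_id, []).append(line)

def npc_speak(dead_player_id: str):
    """When the respawned NPC 'talks', recall one stored line verbatim."""
    lines = recorded_lines.get(dead_player_id)
    return random.choice(lines) if lines else None

record_line("player_42", "Did you hear that?")
print(npc_speak("player_42"))  # -> "Did you hear that?"
```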

[–] mke@programming.dev 0 points 17 hours ago* (last edited 17 hours ago) (1 children)

That's an interesting enough idea in theory, so here's my take on it, in case you want one.

Yes, it sounds magical, but:

  • AI sucks at "make it more X". It doesn't understand scary, so you'll get worse crops of the training data, not meaningful changes.
  • It's prohibitively expensive and infeasible on the majority of consumer hardware.
  • Even if it gets a thousand times cheaper and better at its job, is GenAI really the best way to do this?
  • Is it the only one? Are alternatives also built on exploitation? If they aren't, I think you should reconsider.
[–] Lumiluz@slrpnk.net 0 points 11 hours ago* (last edited 11 hours ago) (1 children)

• Ok, I know people's research ability has decreased greatly over the years, but using "knowyourmeme" as a source? Really?

• You can now run optimized open source diffusion models on an iPhone, and it's been possible for years. I use that as an example because yes, there are models that can easily run on an Nvidia 1060 these days. Those models are more than enough to handle incremental changes to an image in-game (a rough sketch of that kind of incremental pass follows this comment).

• It already has, for a while now, as demonstrated by it being able to run on an iPhone. But yes, it's probably the best way to get an uncanny valley effect in certain paintings in a horror game, as the alternatives would be:

  • spending many hours manually making hundreds of incremental changes to all the paintings yourself (and there will be a limit to how much they warp, and this assumes you have the art skills in the first place)
  • hiring someone to do what I just mentioned (assuming you have a decent amount of money), which is still limited, of course.

• I'll call an open source model exploitation the day someone can accurately generate an exact work it was trained on, not within 1, but at least within 10 generations. I have looked into this myself, unlike seemingly most people on the internet. Last I checked, the closest was an image with roughly 90% similarity, produced by an algorithm that modified the prompt over thousands of generations. I can find the research paper myself if you want, but there may be newer research out there.
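
As a rough illustration of the incremental-change idea above: a low-strength img2img pass repeated over the same image, sketched here with the Hugging Face diffusers library. The model name, prompt, strength, and file names are assumptions for the example, not anything specified in the thread.

```python
# Sketch only: repeatedly nudge a painting toward "creepier" with low-strength
# img2img. Assumes a CUDA GPU and the diffusers + torch packages.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

painting = Image.open("painting.png").convert("RGB")

# Low strength keeps most of the original; each pass drifts it a little
# further toward the prompt (e.g. run one pass per time the player looks away).
for step in range(3):
    painting = pipe(
        prompt="subtly unsettling oil portrait, distorted features",
        image=painting,
        strength=0.15,
        guidance_scale=6.0,
    ).images[0]
    painting.save(f"painting_step_{step}.png")
```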

[–] mke@programming.dev 0 points 3 hours ago

You can now run optimized open source diffusion models on an iPhone, and it's been possible for years.

Games aren't background processes. Even today, triple-A titles still sometimes come out as unoptimized hot garbage. Do you genuinely think it's easy to pile a diffusion model on top with negligible effect? Also, will you pack an entire model into your game just for one instance?

I use that as an example because yes, there are models that can easily run on an Nvidia 1060 these days. Those models are more than enough to handle incremental changes to an image in-game.

Look at the share of people using a 1050 or lower card. Or let's talk about the AMD and Intel issues. These people aren't an insignificant portion. Hell, nearly 15% don't even have 16 GB of RAM.

it's probably the best way to get an uncanny valley effect in ... a horror game, as the alternatives would be:

  • spending many hours manually making hundreds of incremental changes
  • hiring someone to do what I just mentioned

What are you talking about? You're satisfied with a diffusion model's output, but won't be with any other method except excruciating manual labor? Your standards are all over the place—or rather, you don't have any. And let's keep it real: most won't give a shit if your game can show them 10 or 100 slightly worse versions of the same image.

Procedural generation has been a thing for decades. Indie devs have been making do with nearly nonexistent art skills and less sophisticated tech for just as long. I feel like you don't actually care about the problem space, you just want to shove AI into the solution.

I'll call an open source model exploitation the day someone can accurately generate an exact work it was trained on, not within 1, but at least within 10 generations.

Are you referring to the OSAID? The infamously broken definition that exists to serve companies? You don't understand what exploitation here means. "Can it regurgitate exact training input" is not the only question to ask, and not the bar. Knowing your work was used without consent to train computers to replace people’s livelihoods is extremely violating. Talk to artists.

I know people's research ability has decreased greatly over the years, but using "knowyourmeme" as a source? Really?

I tried to use an accessible and easily understandable example. Fuck off. Go do your own "research", open those beloved diffusion models, make your scary, then scarier, images and try asking people what they think of the results. Do it a hundred times, since that's your only excuse as to why you need AI. No cherry-picking; you won't be able to choose what your Rube Goldberg painting will look like on other people's PCs.

[–] RampantParanoia2365@lemmy.world 1 points 1 day ago (2 children)

Honest question: are things like trees, rocks, logs in a huge world like a modern RPG all placed by hand, or does it use AI to fill it out?

[–] finitebanjo@lemmy.world 5 points 23 hours ago (1 children)

Not AI, but certainly a semi-random function. Then they go through and clean it up by hand.

[–] SchmidtGenetics@lemmy.world 1 points 7 hours ago

Ah, so this kind of tool is allowable, but not another? Pretty hypocritical thinking there.

A tool is a tool, and any tool can be abused.

[–] skibidi@lemmy.world 2 points 23 hours ago (1 children)

Most games (pre-AI, at least) would use a brush for this and manually tweak the result if it ended up weird.

E.g. if you were building a desert landscape you might use a rock brush to randomly sprinkle the boulder assets around the area. Then the bush brush to sprinkle some dry bushes.

Very rare for someone to spend the time individually placing something like a rock or a tree, unless it's designed to be used in gameplay or a cutscene (e.g. a climbable tree to get into a building through a window).
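
A minimal sketch of that kind of brush, assuming a hypothetical scatter_brush helper that drops assets at random points inside a circle; engine editors wrap the same idea in their own tooling, and anything that lands somewhere weird gets moved by hand afterwards.

```python
import math
import random

def scatter_brush(center_x: float, center_y: float, radius: float,
                  count: int, asset: str) -> list[dict]:
    """Place `count` copies of an asset at random points inside a brush
    circle, with random rotation and a little size variation."""
    placements = []
    for _ in range(count):
        # Uniform point inside the circle (sqrt keeps the density even).
        angle = random.uniform(0.0, 2.0 * math.pi)
        dist = radius * math.sqrt(random.random())
        placements.append({
            "asset": asset,
            "x": center_x + dist * math.cos(angle),
            "y": center_y + dist * math.sin(angle),
            "rotation": random.uniform(0.0, 360.0),
            "scale": random.uniform(0.8, 1.2),
        })
    return placements

# Sprinkle boulders, then dry bushes, over the same desert patch.
rocks = scatter_brush(100.0, 250.0, radius=40.0, count=12, asset="boulder")
bushes = scatter_brush(100.0, 250.0, radius=40.0, count=20, asset="dry_bush")
```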

[–] TwanHE@lemmy.world 2 points 22 hours ago

That's only for open-world maps; in many games the placement of rocks and trees is subject to minuscule changes for balance reasons.

[–] pennomi@lemmy.world 2 points 1 day ago (1 children)

It’s all virtue signaling. If it’s good, nobody will be able to notice anyway and they’ll want it regardless. The only reason people shit on AI currently is because expert humans are still far better than it.

We’re just at that awkward point in time where AI is better than the random joe but worse than experts.

[–] mke@programming.dev 2 points 18 hours ago (1 children)

The only reason people shit on AI currently is because expert humans are still far better than it.

No it's not! There are a whole bunch of reasons people dislike the current AI wave, from artist exploitation, to energy consumption, to making horrible shitty people and companies richer while trying to make people's jobs obsolete!

You're so far off, it's insane. That's like saying people only hate slavery because the slaves can't match craftsmen yet. Just wait a bit until they finish training the slaves, just a few more whippings, then everyone will surely shut up.

[–] pennomi@lemmy.world 0 points 17 hours ago (1 children)

I agree those are the reasons people give, but if history has shown anything, it's that people change their minds once a technology becomes convenient enough to use.

Human ethics is highly dependent on convenience, unfortunately.

[–] mke@programming.dev 2 points 16 hours ago

It sounds like you gave up and expect everyone else to do the same.

[–] otp@sh.itjust.works -4 points 1 day ago (1 children)

One of my favourite games used procedural generation to create game "art", "assets", and "maps".

That could conceivably be called (or enhanced by) ML today, which could conceivably be called AI today.

But even in modern games, I'm not opposed to mindful usage of AI in games. I don't understand why you're trying to speak for everyone (by saying "people") when you're talking to someone who doesn't share your view.

This is like those stupid "non-GMO" stickers. Yes, GMOs are being abused by Monsanto (and probably other corporations like them). No, that doesn't mean that GMOs are bad in all cases.

[–] finitebanjo@lemmy.world 4 points 23 hours ago (1 children)

I think the sort of generative AI being referred to is the kind that trains on data to approximate results, which consumes vastly more power.