74 points · submitted 8 months ago by memfree@beehaw.org to c/gaming@beehaw.org

Excerpts:

The Verbal Verdict demo drops me into an interrogation room with basic facts about the case to my left, and on the other side of a glass window are three suspects I can call one at a time for questioning. There are no prompts or briefings—I just have to start asking questions, either by typing them or speaking them into a microphone.

The responses are mostly natural, and at times add just a bit more information for me to follow up on.

Mostly. Sometimes, the AI goes entirely off the rails and starts typing gibberish.

There are, of course, still many limitations to this implementation of an LLM in a game. Kristelijn said that they are using a pretty “censored” model, and also adding their own restrictions, to make sure the LLM doesn’t say anything harmful. It also makes what should be a very small game much larger (the demo is more than 7 GB), because it runs the model locally on your machine. Kristelijn said that running the model locally helps Savanna Developments with privacy concerns: if the LLM runs locally, Savanna Developments doesn’t have to see or handle what players are typing. It is also better for game preservation, because a game that doesn’t need to connect to an online server can keep running even if Savanna Developments shuts down.

It’s pretty hard to “write” different voices for them; they all kind of speak similarly. One character in the full version of the game, for example, speaks in short sentences to convey a certain attitude, but that doesn’t come close to the characterization you’d see in a game like L.A. Noire, where character dialogue is meticulously written to convey personality.

[-] peter@feddit.uk 2 points 8 months ago

This is the only actually good use of LLMs I can really think of. As long as there's a good way to keep them within the bounds of the actual story, it would be great for that.
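The "keep them within the bounds of the story" idea could be sketched as a constrained system prompt plus a post-generation filter with a canned fallback. This is purely hypothetical (all names, facts, and the stub LLM are made up, and nothing here reflects how Verbal Verdict actually works):

```python
# Hypothetical sketch: constrain an LLM-driven suspect to known case facts.
# `generate` stands in for any local LLM call; a trivial stub is used below.

CASE_FACTS = {
    "location": "the warehouse on 5th Street",
    "time": "around 11 pm",
    "alibi": "at the cinema",
}

SYSTEM_PROMPT = (
    "You are a suspect in a murder investigation. "
    "Only reveal the following facts, and only when asked directly: "
    + "; ".join(f"{k}: {v}" for k, v in CASE_FACTS.items())
)

# Topics the game never wants a character to wander onto
BANNED_TOPICS = ("real people", "other games", "the developers")

def within_bounds(reply: str) -> bool:
    """Reject replies that drift onto out-of-story topics."""
    lowered = reply.lower()
    return not any(topic in lowered for topic in BANNED_TOPICS)

def ask(question: str, generate) -> str:
    reply = generate(SYSTEM_PROMPT, question)
    if not within_bounds(reply):
        return "I have nothing to say about that."  # canned fallback
    return reply

# Stub LLM so the loop is runnable without a real model
def stub_llm(system, question):
    if "where" in question.lower():
        return "I was at the cinema, like I told you."
    return "Ask me something specific."

print(ask("Where were you that night?", stub_llm))
```

A real version would also need prompt-injection defenses on the player's input, not just a filter on the output, but the shape (prompt constraints in, content filter out) is the same.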

[-] blindsight@beehaw.org 3 points 8 months ago

I think they also have potential for creating lots of dialogue variations, pre-generated into a database and manually checked by a writer for QC.
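That pipeline (generate offline, have a writer approve, ship only approved lines) could look something like this. A hypothetical sketch with made-up table and column names; the "variations" would come from an LLM run offline rather than the hardcoded strings here:

```python
import sqlite3

# Offline pipeline: store LLM-generated variations with an `approved` flag;
# only writer-approved lines are ever served at runtime.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE npc_lines (
        id INTEGER PRIMARY KEY,
        npc TEXT NOT NULL,
        trigger TEXT NOT NULL,
        line TEXT NOT NULL,
        approved INTEGER NOT NULL DEFAULT 0  -- set by a human writer
    )
""")

# Step 1: dump generated variations (stub strings standing in for LLM output)
variations = [
    ("guard", "greeting", "Cold night for a patrol, eh?"),
    ("guard", "greeting", "Move along, citizen."),
    ("guard", "greeting", "I used to be an adventurer like you."),
]
conn.executemany(
    "INSERT INTO npc_lines (npc, trigger, line) VALUES (?, ?, ?)", variations
)

# Step 2: writer QC pass approves the lines that fit the character
conn.execute("UPDATE npc_lines SET approved = 1 WHERE id IN (1, 2)")

# Step 3: the shipped game samples only approved lines
rows = conn.execute(
    "SELECT line FROM npc_lines WHERE npc='guard' AND trigger='greeting' "
    "AND approved=1"
).fetchall()
print(len(rows))
```

The nice property is that the LLM never runs on the player's machine at all; all the cost and all the risk of gibberish stays in the studio's build pipeline.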

The problem with locally-run LLMs is that the good ones require massive amounts of video memory, so it's just not feasible anytime soon. And the small ones are, well, crappy. And slow. And still huge (8GB+).
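A back-of-envelope calculation shows why video memory is the bottleneck. Counting only the model weights (ignoring KV cache and activations, which add more on top):

```python
# Rough memory footprint of model weights alone.
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B-parameter model at 16-bit precision vs. 4-bit quantization:
print(round(weight_gb(7, 16), 1))  # 14.0 GB -- beyond most consumer GPUs
print(round(weight_gb(7, 4), 1))   #  3.5 GB -- fits, at some quality cost
```

So even an aggressively quantized small model eats a few GB of VRAM that the game itself also wants for textures and geometry, which is why "good and local and fast" is such a hard combination.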

That of course means you can't get truly dynamic branching dialogue, but it can enable things like getting thousands of NPC lines instead of "I took an arrow to the knee" from every guard in every city.

It could also be used to generate full dialogue, not just one-liners: "real" NPC conversations (or rich branching dialogue options for players to select).

I'm very skeptical that we'll get "good" dynamic LLM content in games, running locally, this decade.

[-] Fisch@lemmy.ml 2 points 8 months ago

Big breakthroughs are still being made on efficiency (the same or better quality for less processing power), and game devs will probably figure out over time how best to instruct the LLM to do what they want. I think a lot will still happen in that regard in the next few years before it starts to slow down.

this post was submitted on 05 Mar 2024