Excerpts:

The Verbal Verdict demo drops me into an interrogation room with basic facts about the case to my left; on the other side of a glass window are three suspects I can call one at a time for questioning. There are no prompts or briefings: I just have to start asking questions, either by typing them or speaking them into a microphone.

The responses are mostly natural, and at times add just a bit more information for me to follow up on.

Mostly. Sometimes, the AI goes entirely off the rails and starts typing gibberish.

There are, of course, still many limitations to this implementation of an LLM in a game. Kristelijn said that they are using a pretty “censored” model, and adding their own restrictions on top, to make sure the LLM doesn’t say anything harmful. It also makes what should be a very small game much larger (the demo is more than 7GB), because it runs the model locally on your machine. Kristelijn said that running the model locally helps Savanna Developments with privacy concerns: if the LLM runs locally, the studio never has to see or handle what players are typing. It’s also better for game preservation, because if the game doesn’t need to connect to an online server, it can keep running even if Savanna Developments shuts down.

It’s pretty hard to “write” different voices for them; they all kind of speak similarly. One character in the full version of the game, for example, speaks in short sentences to convey a certain attitude, but that doesn’t come close to the characterization you’d see in a game like L.A. Noire, where character dialogue is meticulously written to convey personality.

  • memfree@beehaw.org (OP) · 8 months ago

NOTE: I just downloaded the game and on my first attempted launch, it complained that the port it wanted was not open. My only option was to close the game. I ran netstat and did not see the port listed, so I tried again. THAT time, it complained about my older video card :-/ The warning is clunky and there’s a typo, too (“withing” for “within”). It says (if I transcribed accurately):

    You are using an: NVIDIA GEOFORCE GTX 1080. This video card is currently not recognized withing the recommended specs. We only support a limited amount of NVIDIA GTX graphics cards, all NVIDIA RTX graphics cards or all AMD RX graphics cards since the local AI requires a lot of performance.

    So please note that the game might not work properly. Refer to the Steam guide for more information.

    When I closed that warning, the game loaded.

  • Fisch@lemmy.ml · 8 months ago

Ever since I learned about LLMs when ChatGPT became popular, the one thing I’ve wanted to see is games where you can actually talk to NPCs (using a locally running LLM like here, not ChatGPT), and it’s cool to see that we’re getting closer and closer to that.

    • Big P@feddit.uk · 8 months ago

This is the only actually good use of LLMs I can really think of. As long as there is a good way to keep them within the bounds of the actual story, it would be great for that.

      • blindsight@beehaw.org · 8 months ago

I think they also have potential for creating lots of dialogue variations, pre-generated into a database and manually checked by a writer for QC.

        The problem with locally-run LLMs is that the good ones require massive amounts of video memory, so it’s just not feasible anytime soon. And the small ones are, well, crappy. And slow. And still huge (8GB+).

        That of course means you can’t get truly dynamic branching dialogue, but it can enable things like getting thousands of NPC lines instead of “I took an arrow to the knee” from every guard in every city.

It can be used to generate longer dialogue, too: not just one-liners, but “real” NPC conversations (or rich branching dialogue options for players to select).

        I’m very skeptical that we’ll get “good” dynamic LLM content in games, running locally, this decade.
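One way to picture the “pre-run in a database” idea above is an offline pipeline: an LLM generates many variations of each canonical line at build time, a writer flags each one approved or rejected, and only approved rows ship with the game. A minimal sketch, assuming a SQLite store; the `generate_variations` stub is a hypothetical stand-in for whatever model the pipeline would actually call:

```python
import sqlite3

def generate_variations(canonical_line, n):
    """Stub for an offline LLM call; a real pipeline would prompt a model here."""
    return [f"{canonical_line} (variant {i})" for i in range(n)]

def build_dialogue_db(path, lines, n_variants=3):
    """Pre-generate variants for every canonical line and queue them for QC."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS npc_lines (
        id INTEGER PRIMARY KEY,
        canonical TEXT,
        variant TEXT,
        approved INTEGER DEFAULT 0)""")  # a writer flips approved=1 during review
    for line in lines:
        for v in generate_variations(line, n_variants):
            db.execute("INSERT INTO npc_lines (canonical, variant) VALUES (?, ?)",
                       (line, v))
    db.commit()
    return db

def shipped_lines(db, canonical):
    """At runtime the game only ever reads writer-approved rows."""
    rows = db.execute(
        "SELECT variant FROM npc_lines WHERE canonical = ? AND approved = 1",
        (canonical,)).fetchall()
    return [r[0] for r in rows]
```

Since all the model cost is paid at build time, the shipped game needs no GPU at all for this: it just reads rows, which sidesteps the video-memory problem entirely.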

        • Fisch@lemmy.ml · 8 months ago

Big breakthroughs are still being made in efficiency (the same or better quality for less processing power), and game devs will probably figure out over time how best to instruct the LLM to do what they want. I think a lot will still happen in that regard in the next few years before it starts to slow down.

    • derbis@beehaw.org · 8 months ago

There’s a Stardew Valley mod that does this, but it seems development has stalled.

      • Fisch@lemmy.ml · 8 months ago

That’s really cool. I’ve heard about a Skyrim mod that does this too, and since I’m playing Skyrim in VR (modded ofc), that would make it even cooler.

  • AlmightyTritan@beehaw.org · 8 months ago

I don’t think we’ll see this any time soon, because corpos probably won’t listen to any creative who presents this, but I want something where the LLM runs locally and is just used to interpret what you are asking for, while the dialogue responses are all still written by a writer. Then you can make the user interaction feel more intuitive, but the design of the story and mechanics can just respond to the implied tone, questions, prompts, and keywords from the user.

Then you could have a dialogue tree that responds with a nice, well-constructed narrative, but a user who asks something casually vs. accusatorially might end up with slightly different information.
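The design described above, where the model only classifies the player’s free-form input and every response stays writer-authored, can be sketched without any model at all. This toy version scores hypothetical intents by keyword overlap; a real implementation would swap `interpret` for a local LLM or embedding match, but the point is that the output side never leaves the writer’s script:

```python
# Hypothetical intent table: every response is writer-authored; the model's only
# job is to decide which intent the player's free-form input best matches.
INTENTS = {
    "ask_alibi":  {"keywords": {"where", "night", "alibi", "doing"},
                   "response": "I was home all evening. Ask my neighbor."},
    "accuse":     {"keywords": {"you", "did", "killed", "lying", "liar"},
                   "response": "How dare you! I loved her."},
    "ask_victim": {"keywords": {"victim", "know", "relationship", "her"},
                   "response": "We worked together for ten years."},
}

def interpret(player_input):
    """Toy stand-in for the interpreting LLM: score intents by keyword overlap."""
    words = set(player_input.lower().replace("?", "").split())
    best, best_score = None, 0
    for name, intent in INTENTS.items():
        score = len(words & intent["keywords"])
        if score > best_score:
            best, best_score = name, score
    return best  # None means nothing matched well enough

def npc_reply(player_input):
    intent = interpret(player_input)
    if intent is None:
        return "I'm not sure what you're getting at."  # scripted fallback
    return INTENTS[intent]["response"]
```

A casually phrased question and an accusatory one land on different intents, so they pull different (but still hand-written) information out of the tree.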

    • Fauxreigner@beehaw.org · 8 months ago

Unless you’re willing to put in some kind of response that basically says “I’m not going to respond to that” (and that’s a sure way to break immersion), this is effectively impossible to do well, because the writer has to anticipate every possible thing a player could say and craft a response to it. If you don’t, you’ll end up with a “nearest fit” that is not at all what the player was trying to say, and the reaction will be nonsensical from the player’s perspective.

L.A. Noire is a great example of this, although from the side of the player character: the dialogue was written with the “Doubt” option as “Press” (as in, put pressure on the other party). As a result, a suspect can say something, the player selects “Doubt”, and Phelps goes nuts making wild accusations instead of pointing out an inconsistency.

      Except worse, because in this case, the player says something like “Why didn’t you say something to your boss about feeling sick?” and the game interpreted it as “Accuse them of trying to sabotage the business.”

      • memfree@beehaw.org (OP) · 8 months ago

Ooooh, I’d like that! Well, there are three parts to the (random user input / scripted game output) conundrum:

        1. I think it is fair that if you ask, ‘Why didn’t you say something?’ the NPC might either respond as if it is being accused of sabotage, answer the damn question, lie, or prefer not to talk about it (it’s personal).
2. I’d keep a short list of standard options – probably in a collapsed scroller kinda thing so you could either verbally say or type whatever you want, OR you could click an arrow to pick from a list. That way lazy or stuck players wouldn’t have to think of all the options, and players interested in roleplaying could do as they please.
        3. I’m OK with, “I’m not going to respond to that”. I’d hope each character had several variations of that, but I think it is legitimate for NPCs to dislike being pestered. Shopkeepers might have replies like, “Are you gonna buy something or are you just here to bend my ear?” or “I don’t see how that relates to my inventory.” Random townies might reply, “Do I even know you?” or “Would you PLEASE stop bothering me.” or “You’re harshing my mellow, man. Shhhh… Just chill.”
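Point 3 above is cheap to implement: each character just carries its own pool of scripted refusals, so pestering gets an in-character brush-off instead of free-running the model. A tiny sketch using the example lines from the comment (the character names are placeholders):

```python
import random

# Per-character refusal pools, as described above: several scripted
# "I'm not going to respond to that" variants per NPC archetype.
REFUSALS = {
    "shopkeeper": [
        "Are you gonna buy something or are you just here to bend my ear?",
        "I don't see how that relates to my inventory.",
    ],
    "townie": [
        "Do I even know you?",
        "Would you PLEASE stop bothering me.",
    ],
}

def refuse(character, rng=random):
    """Pick a scripted, in-character refusal rather than generating one."""
    return rng.choice(REFUSALS[character])
```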
  • webghost0101@sopuli.xyz · edited · 8 months ago

I wonder if we’re gonna start seeing modular, specialized game drivers to save space and work.

We already have shared libraries for gamepad controllers and such. Why not one that handles a large language model, one for ray-traced lighting, maybe even an image generator for patterns in creative building games?

These would need to be standardized and able to be further molded, processed, and restricted by the actual games.

Obviously the Triple Ass studios will want you to pay for online services, but I legitimately believe there is a future for open-source gaming, and this could potentially save a lot of hair-pulling for some nonprofit indie devs.
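The shared-driver idea sketches out naturally as a two-layer contract: a standardized interface the system installs once, plus a per-game wrapper adding that game’s own restrictions. This is purely hypothetical API shape, not any existing library:

```python
from abc import ABC, abstractmethod

class LocalLLMDriver(ABC):
    """Hypothetical shared driver: installed once, reused by many games."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 128) -> str:
        """Run the shared local model and return its text."""

class RestrictedDriver:
    """A game wraps the shared driver with its own rules, as the comment suggests."""

    def __init__(self, driver: LocalLLMDriver, banned_words: set[str]):
        self.driver = driver
        self.banned = banned_words

    def complete(self, prompt: str, max_tokens: int = 128) -> str:
        text = self.driver.complete(prompt, max_tokens)
        for word in self.banned:
            text = text.replace(word, "...")  # crude per-game content filter
        return text
```

The split mirrors how gamepad or graphics libraries already work: the heavy, standardized part lives in one shared install, and each game only ships its thin restriction layer.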