Some uses of AI are proving to be far from black and white. While voice actors have protested their performances being fed into AI against their will, we are now seeing an example of it being done, with permission, in a unique case.

  • Hildegarde@lemmy.world

    This article lacks some context. Miłogost Reczek was the voice actor for the Polish language of the game. If you played the English version of the game you would have heard the very much alive Michael Gregory as Victor Vektor.

    Many commenters here are discussing the writers’ and actors’ strikes that are in the news. Those are American unions; they have no bearing on the work of Polish voice actors who do localization work.

    • gornius@lemmy.world

      He also voiced Vesemir in The Witcher. He was also a very popular voice actor in animated film localizations. He was genuinely one of the few voice actors I knew by name, and the news that he died really struck me.

    • NuPNuA@lemm.ee

      Technically, wouldn’t he be the standard voice actor in a CDPR game and the English actor be the localisation?

      • Hildegarde@lemmy.world

        I assumed the English version is the primary version. The game is set in the US, it’s based on an American tabletop game. The plurality of the game’s players will be playing the English version. Also the Bloomberg article features quotes from “CD Projekt localization director Mikołaj Szwed.”

        I don’t know the specifics of CDPR’s development, but it’s a reasonable assumption that English is the primary language, despite the studio’s location.

        Also, the game has Keanu Reeves, and his character is pretty clearly based on the actor. You wouldn’t spend Hollywood money to hire a Hollywood film actor just to have them there doing dubbing work over a Polish actor’s performance.

        • gredo@lemmy.world

          Aren’t all languages in a game localizations? Unless the characters’ movements and voices were motion captured and recorded together…

      • MrScottyTay@sh.itjust.works

        They’re all localisation. It just happens that the Polish work is localising for their home country.

      • anlumo@lemmy.world

        CDPR uses game dev studios all over the place for supplemental work, so their business language has to be English.

  • pruwyben@discuss.tchncs.de

    I was just thinking earlier today, games could probably use AI to seamlessly work the player’s name into dialog. They would still hire voice actors, but insert whatever name the player chooses into the lines where it is mentioned. I feel like this isn’t too far away.
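
    Just to make the idea concrete, here is a minimal sketch of one way this could work, assuming a hypothetical licensed voice-cloning TTS behind the stubbed function; the template, voice ID, and stub are invented for illustration and are not a real API.

    ```python
    # Keep dialogue lines as templates with a {name} slot and fill the slot at
    # runtime before sending the text to a (hypothetical) cloned voice.
    TEMPLATE = "Good to see you again, {name}. Take a seat."

    def synthesize_speech(text: str, voice_id: str) -> bytes:
        # Stand-in stub for a licensed voice-cloning TTS call; returns fake
        # "audio" bytes so the sketch runs without any external service.
        return f"[{voice_id} audio for: {text}]".encode()

    def render_greeting(player_name: str) -> bytes:
        """Work the player's chosen name into an otherwise scripted line."""
        return synthesize_speech(TEMPLATE.format(name=player_name), voice_id="doc_v1")

    print(render_greeting("V"))
    ```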

      • Cavemanfreak@lemm.ee

        Fallout 4 did as well, but it worked with a list of names that Codsworth could say. I’m assuming Starfield does something similar? Or is it a ton of NPCs that use the name?

        • stephfinitely@lemmy.world

          Idk I didn’t pick a name, I just entered my name for my character. Then I was playing and Vasco just said my name. I literally stopped in my tracks and was like “what did you just say?”

        • Chozo@kbin.social

          I thought they actually just recorded a ton of names for Fallout 4? I seem to recall hearing that they did something like 900+ different name recordings.

          • Hildegarde@lemmy.world

            They did the same thing, probably using the same list with additions. Both games have a robot be the only character who says your name; I suppose that makes it less unbelievable if you notice the splices where they add your name to an otherwise unaltered line.
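
            For anyone curious what those splices amount to in practice, here is a minimal sketch of the recorded-name approach described above, assuming per-name audio clips that all share a format; the file layout and name list are hypothetical, and only Python's standard-library wave module is used.

            ```python
            import wave

            RECORDED_NAMES = {"nora", "nate", "alice"}  # hypothetical shipped name list

            def build_line(player_name: str, out_path: str = "line_out.wav") -> str:
                """Splice prefix clip + <name clip or generic fallback> + suffix clip."""
                name = player_name.lower()
                name_clip = f"names/{name}.wav" if name in RECORDED_NAMES else "names/generic.wav"
                parts = ["lines/hello_prefix.wav", name_clip, "lines/hello_suffix.wav"]

                with wave.open(out_path, "wb") as out:
                    for i, path in enumerate(parts):
                        with wave.open(path, "rb") as clip:
                            if i == 0:
                                out.setparams(clip.getparams())  # assumes clips share a format
                            out.writeframes(clip.readframes(clip.getnframes()))
                return out_path
            ```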

    • buran@lemmy.world

      World of Warcraft would benefit from this. Local processing power is quite up to the task these days, and it’s jarring to see your name on the screen but the audio says “champion” or something similar.

      Maybe in 11.0…

    • Dyskolos@lemmy.zip

      I recently had this in a game. Just couldn’t say which one, sadly… I was really surprised it said my (character’s) name. Damn… which game was it?

    • Ookami38@sh.itjust.works

      I look forward to the expansion from this, even. Real dialogue between the players and the game. Writers and designers set parameters for each character, backstory, motives, personality, current goals, etc. And then instead of a simple 4-5 choices of pre-canned dialog, with pre-canned responses, the player can type (or even say) whatever they want, and the game can return a customized response appropriate for the character.
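
      As a rough sketch of what that could look like under the hood (the character parameters, NPC details, and the stubbed model call are all invented for illustration; a real game would put an actual LLM behind the stub):

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class NPC:
          name: str
          backstory: str
          personality: str
          current_goal: str
          history: list = field(default_factory=list)  # running conversation log

          def prompt_for(self, player_line: str) -> str:
              """Assemble designer-set parameters plus the conversation so far."""
              header = (
                  f"You are {self.name}. Backstory: {self.backstory}\n"
                  f"Personality: {self.personality}\n"
                  f"Current goal: {self.current_goal}\n"
                  "Stay in character and answer in one or two sentences.\n"
              )
              turns = "\n".join(self.history + [f"Player: {player_line}", f"{self.name}:"])
              return header + turns

          def reply(self, player_line: str) -> str:
              response = stub_dialogue_model(self.prompt_for(player_line))
              self.history += [f"Player: {player_line}", f"{self.name}: {response}"]
              return response

      def stub_dialogue_model(prompt: str) -> str:
          # Placeholder for a real LLM call; returns a canned line so the sketch runs.
          return "Hmm. That's not something I hear every day."

      vik = NPC("Viktor", "Veteran ripperdoc.", "Gruff but caring.",
                "Talk the player out of a risky chrome install.")
      print(vik.reply("Can you fit me with military-grade arms by tonight?"))
      ```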

  • WindowsEnjoyer@sh.itjust.works

    With advances in AI, Reczek’s performance could be recreated, but CDPR made sure they got permission from his family. His sons were “very supportive” of the idea, according to a Bloomberg report.

    I am pretty sure his sons are fans of CP2077 too. I would give permission too if I were in their shoes - getting a new character, or a different voice for an old character, is annoyingly weird once you’re used to it…

  • andrew_bidlaw@sh.itjust.works

    Imagine now a Joker 2 movie where Heath Ledger, as the Joker, delivers all his jokes about living in a society.

  • Mongostein@lemmy.ca

    I mean, sure, but that should be negotiated. If they’re using my likeness for free I would not be ok with that. If they’re paying me (or my family) for the use, I would give permission for that.

    • ripcord@kbin.social

      Right… and they did. Isn’t that one of the main points of the article…?

      • Mongostein@lemmy.ca

        Yes. This time, although it was permission from the family, not the actor. Should that be allowed?

            • Mongostein@lemmy.ca

              Is it? Or is it that I have other things to do? 🙄

              I haven’t fully formed an opinion on this topic, but to me it seems wrong to use someone’s likeness without their permission. I understand that the family gave permission, which is legally ok, but is it morally ok?

              I’m not sure. I think it should be something negotiated before their death.

        • BetaDoggo_@lemmy.world

          Depends on whether a voice is considered a copyrightable asset. If it is, it would have transferred to the family when he died, so they could give permission. If not, CDPR legally wouldn’t be required to get consent anyway. New regulation will probably be written to clarify issues like this.

          • barsoap@lemm.ee

            Depends on whether a voice is considered a copyrightable asset.

            It isn’t. A voice is not an expression and is hardly tangible; you can copyright a voice about as much as you can copyright a violin or a style of play: you can’t. But since we’re talking about a person and not an object, it’s use of someone’s likeness, which falls under personality rights.

            • BB69@lemmy.world

              Once somebody is dead, their estate becomes their representative. It’s up to the estate to make the call at that point.

  • kromem@lemmy.world

    In general, lay audiences have a very weird relationship with AI as a topic, likely in large part due to decades of what is effectively propaganda from sci-fi anthropomorphizing it as an ‘other’ to serve as a threat for human protagonists.

    The reality of how this is all going to play out is that low-skill work like walla (background crowd filler) for soundscapes will be AI generated instead of library sourced, which is going to sound better and not really make much difference to labor markets.

    Middle range performances like NPCs will be AI generated from libraries where the actual voice actors creating that voice will be paid out residuals for the duration of voice generation, and you’ll likely have better performances than most side quest NPCs by the next generation of consoles.

    High end performance for key characters will still be custom hires directed for the specific role, likely with additional contract terms for extended generation for things like Bethesda’s radiant questlines.

    The latter is going to be the biggest hurdle to figure out terms for. What would be ideal for the player is a near-infinite variety of branching questlines in an open world, all fully voiced; but if each branch were counted as its own X hours of generation under contract, that wouldn’t be feasible, and it would ultimately price human actors out of the market down the road in favor of fully artificial alternatives. So it will probably be something like X hours of parallel generation (i.e. infinite variety, but maybe only an additional 200 hours’ worth heard in a playthrough, priced at 200 hours of generation).
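
    A back-of-the-envelope sketch of how the residual and hours-of-generation model described in the last few paragraphs might be metered; the rates, voice IDs, and ledger here are invented purely for illustration and are not anything from the article.

    ```python
    from collections import defaultdict

    RATE_PER_HOUR = {"actor_a": 120.0, "actor_b": 90.0}  # hypothetical contract rates
    generated_seconds = defaultdict(float)               # voice_id -> metered seconds

    def log_generation(voice_id: str, seconds: float) -> None:
        """Record synthesized audio duration attributed to a licensed voice."""
        generated_seconds[voice_id] += seconds

    def residuals_owed() -> dict:
        """Convert metered seconds into per-actor residual payouts."""
        return {vid: (secs / 3600.0) * RATE_PER_HOUR.get(vid, 0.0)
                for vid, secs in generated_seconds.items()}

    log_generation("actor_a", 5400)  # 1.5 hours of generated NPC lines this session
    log_generation("actor_b", 900)   # 15 minutes
    print(residuals_owed())          # {'actor_a': 180.0, 'actor_b': 22.5}
    ```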

    But as can be seen in the article, it’s not as simple as waving a hand and having AI voice lines - this was work done on top of a different actor’s performance to bring the voice in line with the original performer.

    And given there’s still going to be a few years as the tech improves with significant overlap of needing to work with actors to get performances right, this is all going to get managed in acceptable ways.

    You don’t see people losing their minds over improved facial animation rigs taking away mocap sessions from actors, even though that’s a reality of improved tech. But it doesn’t have the scary ‘AI’ in the name (even though the tech is generally going to lean more and more on machine learning), so it flies under the radar.

    Ultimately, being able to take a static voice performance into dynamic extended content is going to be one of the best things to ever happen to video games, and given how much of that is going to rely on human performance and union buy-in, I wouldn’t even be surprised if the eventual leading product offering ends up owned and operated by the trade unions or a number of the actors themselves.

    • TheHarpyEagle@lemmy.world

      the actual voice actors creating that voice will be paid out residuals for the duration of voice generation

      In a perfect world, sure. In reality, we’ve seen that paying residuals is something companies won’t do if they can possibly help it. It’s one of the very issues being fought over in the strike negotiations right now.

      And given there’s still going to be a few years as the tech improves with significant overlap of needing to work with actors to get performances right, this is all going to get managed in acceptable ways.

      I admire your optimism, but I can’t share it. We’re so poorly prepared to deal with job losses associated with AI and automation in general, and I don’t see any movement on that front. If you’re relying on unions to get it done, know that they already have an uphill battle getting things they should’ve had years ago, let alone future protections against a rapidly changing market.

      • kromem@lemmy.world

        The thing is, for the next 3-5 years, talent holds all the cards here.

        It will eventually flip such that even the last 20% of the 80/20 of a key character performance can be fully automated, but that’s years away.

        Until then, studios that want high quality AI generated performances are going to need to be working intimately with the talent that can produce the baseline to scale out from.

        And the whole job loss thing is honestly overblown. Of course there’s going to be companies chasing short term profits in exchange for long term consequences, but the vast majority of those are going to continue to blow up in their faces.

        In reality, rather than labor demand staying constant as supply increases with synthetic labor, what’s going to happen is that labor demand spikes rapidly as supply increases.

        You won’t see a game with a 40 person writing team reducing staff to 4 people to produce the same scope of game, you’ll see a 40 person writing team working to create a generative AI pipeline that enables an order of magnitude increase in world detail and scope.

        It’s almost painful playing games these days seeing the gap between what’s here today and what’s right around the bend. Things like Cyberpunk 2077 having incredibly detailed world assets, dialogue, and interiors in the places the main and side quests take you, but NPCs on the sidewalk that all say the same things in voices that don’t even match the models, and most buildings off the beaten path being inaccessible.

        Think of just how much work went into RDR2 having each NPC have unique-ish responses to a single round of dialogue from the PC, and how much of a difference that made to immersion but still only surface deep.

        Rather than “let’s fire everyone and keep making games the same way we did before,” staffing is going to stay around the same, but you’ll see indie teams able to produce games closer to what bigger studios make today, and big studios making games that would be unthinkable just a few years ago.

        The bar adjusts as technology advances. It doesn’t remain the same.

        Yes, large companies are always going to try to pinch every penny. But the difference between a full voice synthesis performance over the next few years that skirts unions and a performer-integrated generation platform that’s tailored to the specific characters being represented is going to be night and day, and audiences/reviewers aren’t going to react well to cut corners as long as flagship examples of it being done right are being released too.

        The fearmongering is being blown out of proportion, and at a certain point it actually becomes counterproductive. If too many within a given sector buy into the BS, they simply become obtusely contrarian to progress rather than adaptive, and you’ll see the same shooting-themselves-in-the-foot as the RIAA/MPAA years ago: fighting tooth and nail to prevent digital distribution, leaving the door open for third parties to seize the future, rather than building and adapting into the future themselves (which, in retrospect, would have been the most profitable approach).

  • RedWeasel@lemmy.world

    AI is really just a tool. How it is used, for good or bad, and whether a person’s likeness is used with permission, is controlled by the people who make the decisions.