• Fmstrat@lemmy.world
    23 hours ago

    This is a really good video about how well DLSS5 works… on backgrounds, without altering artistic intent. If Nvidia allows devs to tailor “no-go zones” for DLSS, this would be a great thing.

    https://youtu.be/rtiynhjWPWo

    • redwattlebird@thelemmy.club
      21 hours ago

      But… Why though? As a dev, why would I go through the ideation process only to have it filtered through TWO GPUs? For what benefit? This type of filtering is completely out of my control as a developer, and I wouldn’t want my game to be attached to third party parasite companies and basically split my player base into two classes.

      • Fmstrat@lemmy.world
        20 hours ago

        I agree on two things:

        • DLSS5 should not be run where an artist does not want it to be run.
        • DLSS5 requiring two GPUs is horrible, and can only push us further towards the dreaded “game in the cloud”.

        But if a tool enhances a texture in a specific way, for instance sharpening lines along a garment, or adding shadows to an object under a lamp, how is that different than existing texture mapping algs?
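        For comparison, the “existing texture mapping algs” alluded to here include classic filters like an unsharp mask, which sharpens lines by boosting the detail that a blur removes. A minimal sketch in Python with NumPy (function name and the `amount` default are illustrative, not any real engine’s API):

        ```python
        import numpy as np

        def sharpen(texture, amount=1.0):
            """Unsharp mask: boost a texture's edges by adding back the
            detail lost to a blur.

            texture: 2D float array of luminance values in [0, 1].
            amount: sharpening strength (hypothetical default).
            """
            # 3x3 box blur: pad the edges, then average each 3x3 neighbourhood
            padded = np.pad(texture, 1, mode="edge")
            h, w = texture.shape
            blurred = sum(
                padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            ) / 9.0
            # High-frequency detail = original minus blur; add it back scaled
            return np.clip(texture + amount * (texture - blurred), 0.0, 1.0)
        ```

        Unlike a post-hoc AI filter, a kernel like this is fully deterministic: the artist can see exactly how much contrast it adds across an edge before shipping.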

        As artists learn to predict what these tools do, and where to take advantage of them (such as in backgrounds or on specific textures), I think they will become useful. At least I hope. If nvidia doesn’t provide tooling to do that, then I’m 100% on the same page as you.

        • redwattlebird@thelemmy.club
          1 hour ago

          But, again, why? All this is applied post production, so there’s no control from the artist’s perspective on what the player sees on their end. I’d much rather a static pipeline where I’m in control of the look and feel, while also providing the player with options for accessibility like gamma adjustment.
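          The gamma adjustment mentioned as an accessibility option is just a per-pixel power curve the player controls; a minimal sketch (the function name and default are illustrative, not from any particular engine):

          ```python
          def apply_gamma(value, gamma=2.2):
              """Map a normalised brightness value through a gamma curve.

              value: linear brightness in [0, 1].
              gamma: player-chosen exponent; 2.2 is a common display default.
              Raising gamma above 1 brightens midtones without touching
              pure black or pure white.
              """
              if not 0.0 <= value <= 1.0:
                  raise ValueError("expected a normalised brightness value")
              return value ** (1.0 / gamma)
          ```

          Because it is a fixed, invertible curve, the developer keeps full control of the look while the player only shifts midtone brightness.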

          > But if a tool enhances a texture in a specific way, for instance sharpening lines along a garment, or adding shadows to an object under a lamp, how is that different than existing texture mapping algs?

          We already have all that. This ‘feature’ literally adds nothing of value to our pipeline because it is all applied after the product is shipped and on the player’s computer.

          Further, because it’s a filter, it obfuscates what’s actually happening underneath. Why learn to predict what the filter will do when you can just not work with it and create scenes exactly how you want them?

          This whole thing is providing a solution to a problem that doesn’t exist, simply to recoup their investments. It’s a complete waste of energy, materials, processing power, etc. Absolutely unnecessary.