• Rhaedas@fedia.io · 1 day ago

    Unfortunately the latest stuff I’ve seen is all about keeping character consistency, which is basically having a fixed frame of reference for every generation. What I don’t get, not knowing much about the details, is how LLM-style generation is faster than actual 3D modeling with more detail. Perhaps it is faster per frame overall to generate a 2D image than to track all the polys.

    Not saying which is right to do; there’s lots of baggage with discussing AI stuff. Just wondering about the actual tech itself.

    • Eggymatrix@sh.itjust.works · 1 day ago

      Polys aren’t where the expensive computation is. The bottleneck is raytracing, volumetric fog, etc.: all the things that make a game look more real and natural.

      I think this DLSS stuff could potentially substitute for raytracing and the other light/shadow/reflection/transparency effects that are very expensive both to implement correctly and to calculate every frame.

      My two cents

      • egregiousRac@piefed.social · 1 day ago

        Lighting is the area image gen struggles with most right now. Individual areas will show convincing shadows, atmosphere, etc., but motivation and consistency are lacking. The shots from Hogwarts Legacy show that really clearly. Slice out a random 10%x10% chunk of the frame and the lighting looks more realistic, but the overall frame loses the directional lighting driven by real things in the scene.

        • paraphrand@lemmy.world · 1 day ago

          I’m curious how well it handles lighting from unseen light sources that traditional rendering doesn’t fully account for: off-screen lights that shine into the scene but are never themselves rendered. The same goes for reflections.

          I expect a lot of nonsense being hallucinated in those areas.

      • Rhaedas@fedia.io · 1 day ago

        I try to avoid the overhyped and wrongly used term AI, so what’s the proper term? Related to diffusion models? Something different?

          • kromem@lemmy.world · 1 day ago

          Neural network would be the most technically accurate given what they’ve announced so far.

           There’s no information on whether it’s a diffusion or transformer architecture. Though given that DLSS 4.5 introduced a transformer for lighting, my guess would be that it’s the same thing just being more widely applied. But the technical details haven’t been released in anything I’ve seen, so for the time being it’s being described as “neural rendering” using an unspecified neural network.

          https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-gen-6x-2nd-gen-transformer-super-res/

            • REDACTED@infosec.pub · 13 hours ago

            I saw a mention somewhere of it doing only one pass. Stable Diffusion takes 30–100+ denoising passes, so this sounds more like fast inpainting than actual generation.
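            To make the pass-count difference concrete, here’s a toy Python sketch (all names hypothetical, not any real model’s API): an iterative diffusion-style sampler pays one network call per denoising step, while a one-pass model pays a single call per frame.

```python
def toy_denoise_step(x, step, total_steps):
    # Hypothetical stand-in for one network call: nudge the sample
    # toward a target value, the way a diffusion sampler nudges
    # noise toward an image over many steps.
    target = 1.0
    return x + (target - x) / (total_steps - step)

def diffusion_style(x, steps=50):
    # Iterative sampling: one network call per step (30-100+ for
    # Stable Diffusion), so per-frame cost scales with step count.
    calls = 0
    for step in range(steps):
        x = toy_denoise_step(x, step, steps)
        calls += 1
    return x, calls

def single_pass_style(x):
    # One-shot inference: a single network call per frame, which is
    # why a 1-pass model behaves more like fast inpainting or a
    # learned filter than full generation.
    target = 1.0
    return target, 1
```

            Both toy samplers land on the same result here; the point is only that the iterative one costs 50 calls per frame and the one-pass version costs 1, which is the whole latency argument for real-time use.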

        • hobovision@mander.xyz · 21 hours ago

          Generative AI is a decent catch-all that I think would apply to this.

          Another good option is “machine learning” or ML, but that’s fallen out of favor because it doesn’t sound as impressive as AI. But really it’s teaching a machine to do a specific task. It’s not intelligent; it’s just that we don’t understand how it learns.