• webghost0101@sopuli.xyz · 2 days ago

    That was a beautiful read.

    But I do find myself conflicted about dismissing it as a potential technical skill altogether.

    I have seen ComfyUI workflows that are built in very complex ways: some have the canvas divided into different zones, each with its own prompt. Some have no prompts at all and instead extract concepts like composition or color values from other files.
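
    Roughly, as a hypothetical sketch in Python (ComfyUI expresses this as a node graph, not code, so zone_mask and the zones list here are made up purely for illustration):

    ```python
    # Hypothetical sketch of the "canvas divided into zones, each with its own prompt" idea.
    # Not ComfyUI's actual format: a real workflow would feed masks like these into
    # regional-conditioning nodes instead of plain Python.
    import numpy as np

    WIDTH, HEIGHT = 1024, 768

    def zone_mask(x0, y0, x1, y1):
        """Binary mask that is 1 inside the rectangle and 0 elsewhere."""
        mask = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
        mask[y0:y1, x0:x1] = 1.0
        return mask

    # Each zone pairs a prompt with the area of the canvas it conditions.
    zones = [
        {"prompt": "stormy sky, dramatic clouds", "mask": zone_mask(0, 0, WIDTH, HEIGHT // 3)},
        {"prompt": "calm sea, soft reflections",  "mask": zone_mask(0, HEIGHT // 3, WIDTH, HEIGHT)},
    ]

    # Sanity check: the zones should cover the whole canvas exactly once.
    coverage = sum(z["mask"] for z in zones)
    assert np.all(coverage == 1.0)
    ```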

    I compare these with collage art, which also builds something new out of pre-existing material.

    Such tools take practice; there are choices to be made and a creative process involved, but it is mostly technological knowledge, so it would be fair to call it a technical skill.

    The sad reality, however, is how easy it is to strip out that complexity “because it’s too hard” and pare it down to a bare prompt-to-output setup. At that point all technical skill fades and it becomes no different from the online generators you find everywhere.

    • pulsewidth@lemmy.world · 2 days ago

      All of that’s great and everything, but at the end of the day all of the commercial VLM art generators are trained on stolen art. That includes most of the VLMs that ComfyUI uses as a backend. They have their own cloud service now that ties in with all the usual suspects.

      So even if it has some potentially genuine artistic uses, I have zero interest in using a commercial entity in any way to ‘generate’ art built from elements taken out of artwork they stole from real artists. It’s amoral.

      If it’s all running locally on open-source VLMs trained only on public data, then maybe - but that’s what… a tiny, tiny fraction of AI art? In the meantime I’m happy to dismiss it altogether as AI slop.

      • FishFace@lemmy.world · 2 days ago

        How is that any different from “stealing” art in a collage, though? While courts have disagreed on the subject (in particular, there’s a big difference between visual collage and music sampling, with the latter being very restricted), there is a clear argument to be made that collage is a fair use of the original works, because the result is completely different.

        • pulsewidth@lemmy.world · 2 days ago

          Collage art retains the original components of the art, adding layers the viewer can explore and seek the source of, if desired.

          VLMs, on the other hand, intentionally obscure the original works by sending them through filters and computer-vision transformations to make the original work difficult to backtrace. This is no accident; it’s designed obfuscation.

          The difference is intent - VLMs literally steal copies of art to generate their work for cynical tech bros. Classical collages take existing art and show it in a new light, with no intent to pass off the original source materials as their own creations.

          • FishFace@lemmy.world · 2 days ago

            The original developers of Stable Diffusion and similar models made absolutely no secret about the source data they used. Where are you getting this idea that they “intentionally obscure the original works… to make [them] difficult to backtrace”? How would an image-generation model even work in a way that made the original works obvious?

            Literally steal

            Copying digital art wasn’t “literally stealing” when the MPAA was suing Napster and it isn’t today.

            For cynical tech bros

            Stable Diffusion was originally developed by academics working at a university.

            Your whole reply is pretending to know intent where none exists, so if that’s the only difference you can find between collage and AI art, it’s not good enough.

            • pulsewidth@lemmy.world · 21 hours ago

              Stable Diffusion? The same Stable Diffusion sued by Getty Images, which claims they used 12 million of their images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have had multiple examples of artists’ original art being spat out by their models simply by finessing the prompts - proving they used particular artists’ copyrighted art without those artists’ permission or knowledge.

              Stable Diffusion was also, from its inception, in the hands of tech bros: funded and built with the help of a $3 billion AI company (Runway AI), and itself owned by Stability AI, a for-profit company presently valued at $1 billion that now has James Cameron on its board. The students who worked on a prior model (Latent Diffusion) were hired for the Stable Diffusion project; that is all.

              I don’t care to drag the discussion into your opinion of whether artists have any ownership of their art the second after they post it on the internet - for me it’s good enough that artists themselves assign licences to their work (CC, CC BY-SA, ©, etc.) - and if a billion-dollar company is taking their work without permission (as in the © example) to profit off it, that’s stealing according to the artists’ own stated intent.

              If they’re taking CC BY-SA work and failing to attribute it, then they are also breaking the licence and abusing content for their profit. A VLM could easily add attribution to images identifying the source data used in the output - strange that none of them want to.
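
              For the mechanical half of that claim, here is a sketch of where such attribution could live, assuming plain PNG text metadata as the carrier (the generator name and source URL are made-up placeholders; deciding which sources to credit is the part no generator currently does):

              ```python
              # Sketch only: shows where attribution metadata could be carried,
              # not how a model would determine which training sources to credit.
              from PIL import Image
              from PIL.PngImagePlugin import PngInfo

              image = Image.open("generated.png")  # hypothetical model output
              meta = PngInfo()
              meta.add_text("generator", "example-model-v1")                           # placeholder
              meta.add_text("training-sources", "https://example.org/dataset-manifest")  # placeholder
              image.save("generated_attributed.png", pnginfo=meta)
              ```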

              In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion; I don’t really care if mine is ‘good enough’ for you.

              • FishFace@lemmy.world · 20 hours ago

                Stable Diffusion? The same Stable Diffusion sued by Getty Images, which claims they used 12 million of their images without permission? Ah yes, very non-secretive, very moral. And what of industry titans DALL-E and Midjourney? Both have had multiple examples of artists’ original art being spat out by their models simply by finessing the prompts - proving they used particular artists’ copyrighted art without those artists’ permission or knowledge.

                Getting sued means Getty Images disagrees that the use of the images was legal, not that it was secret, nor that it was moral. Getty Images’ photos are included in the LAION-5B dataset that Stability AI publicly stated they used to create Stable Diffusion. So it’s not “intentionally obscuring”, as you claimed.

                I don’t care to drag the discussion into your opinion of whether artists have any ownership of their art the second after they post it on the internet - for me it’s good enough that artists themselves assign licences to their work (CC, CC BY-SA, ©, etc.) - and if a billion-dollar company is taking their work without permission (as in the © example) to profit off it, that’s stealing according to the artists’ own stated intent.

                Copying is not theft, no matter how many words you want to write about it. You can steal a painting by taking it off the wall. You can’t steal a JPG by right-clicking it and selecting “Copy Image”. That’s fundamentally different.

                A VLM could easily add attribution to images identifying the source data used in the output

                Oh yeah? Easily? What attribution should a model trained purely on LAION-5B add to an output image if prompted with “photograph of a cat”?

                In other words, I’ll continue to treat AI art as the amoral slop it is. You are of course welcome to have a different opinion; I don’t really care if mine is ‘good enough’ for you.

                You can do whatever you want (within usual rules) in your personal life, but you chose to enter into a discussion.

                From that discussion it’s clear that your position is rooted in bias, not knowledge. That’s why you can’t point out substantial differences between AI-generated images and other techniques that re-use existing imagery, why you make up intentions and can’t back them up, and why you prefer to dismiss academics as “tech bros” instead of engaging on facts.

      • webghost0101@sopuli.xyz · 2 days ago

        If you download a checkpoint from untrustworthy sources, definitely - and that is the majority of people, but also the majority that neither uses the technical tools in any depth nor cares about actual art (mostly porn, if Civitai, the largest distributor of models, is any reference).

        The technical tool that allows actual creativity is called ComfyUI, and it is open source. I have yet to see anything even comparable. Other creative tools (like the Krita plugin) use it as a backend.

        I am willing to believe that someone with a soul for art and complex workflows would also train their own models, which naturally allows much more creativity and is not that hard to do.

    • TheRealKuni@piefed.social · 2 days ago

      I think there’s a stark difference between crafting your own ComfyUI workflow - getting the right nodes and ControlNets and checkpoints and whatever, tweaking it until you get what you want - and someone telling an AI “make me a picture/video of X.”
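
      As a rough sketch of that contrast (using the Hugging Face diffusers library as a stand-in for a ComfyUI node graph, so the model names, file paths, and parameter values here are just illustrative assumptions):

      ```python
      # Illustrative contrast: a bare "make me a picture of X" call vs. a (still tiny)
      # crafted setup where the artist supplies their own conditioning and tuned settings.
      import torch
      from PIL import Image
      from diffusers import (
          ControlNetModel,
          StableDiffusionControlNetPipeline,
          StableDiffusionPipeline,
      )

      # One-line prompting: no real choices beyond the text itself.
      plain = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      quick = plain("a lighthouse at dusk").images[0]

      # Crafted variant: a hand-made edge map steers composition via ControlNet,
      # and guidance/conditioning strength are tuned rather than left at defaults.
      controlnet = ControlNetModel.from_pretrained(
          "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
      )
      guided = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          controlnet=controlnet,
          torch_dtype=torch.float16,
      ).to("cuda")
      edges = Image.open("my_composition_edges.png")  # artist-made edge map, assumed to exist
      crafted = guided(
          "a lighthouse at dusk",
          image=edges,
          guidance_scale=7.0,
          controlnet_conditioning_scale=0.8,
      ).images[0]
      ```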

      The least AI-looking AI art is the kind that someone took effort to make their own. Just like any other tool.

      Unfortunately, gen AI is a tool that gives relatively good results without any skill at all. So most people won’t bother to do the work to make it their own.

      I think that, like nearly everything in life, there is nuance to this. But at the same time, we aren’t ready for the nuance because we’re being drowned by slop and it’s horrible.