• hypnicjerk@lemmy.world · 19 points · 1 year ago

    this seems interesting, but how does it actually work? “invisible changes to the pixels” is vague, and the article doesn’t go into detail on the actual method of manipulation or how an invisible change to the input can produce visible changes in the output.

    • nicetriangle@kbin.social · 9 points · 1 year ago

      If it works anything like the other supposed AI image protection tool I’m aware of (Glaze), then it’s not gonna look great and I would not call it a practical way to go. Everything I’ve seen run through Glaze looks objectively worse than the original.

      Also, in the long run this is just an arms race, and it’s only a matter of time before models learn to subvert these kinds of tools. If that’s the case, then every time someone figures out how to get past these hurdles, anyone looking to protect their images will have to go back and replace every online instance of those images once the protection tool ships a fix. Back and forth forever.

      And that’s just ridiculous and basically impossible when you realize that stuff gets reposted all over the net all the time and can’t be controlled.

      • FaceDeer@kbin.social · 13 points · edited · 1 year ago

        every time someone figures out how to get over these hurdles, anyone looking to protect their images will have to go back and replace every online instance of those images when the protection tool comes out with a fix.

        And if those older versions got downloaded and saved by a trainer there’s nothing at all they can do to replace those.

        This all feels a lot like the DRM treadmill, which has never done much to actually prevent piracy. Just made things annoying for everyone else.

      • hypnicjerk@lemmy.world · 6 points · 1 year ago

        Zhao’s team also developed Glaze,

        from the article, so it’s likely they run on similar principles.

    • BetaDoggo_@lemmy.world · 6 points · 1 year ago

      It’s far from invisible in most cases; we’ll have to wait for their code release to know how visible it is. It effectively embeds the shape of another image into an existing image in an attempt to confuse the model (sketched below). There have been quite a few attempts at this, including one from the authors of the same paper. The typical trade-off is image quality for protection/removal difficulty.

      https://arxiv.org/abs/2310.13828
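
      A rough sketch of that general idea, not Nightshade’s actual code (the article doesn’t publish the method): optimize a small, bounded pixel perturbation so a feature extractor sees the protected image as a different one, while the pixels barely change. The encoder, file names, and budget below are placeholders chosen for illustration.

      ```python
      # Sketch of a feature-space "cloaking" perturbation (illustrative only).
      # ResNet-18 stands in for whatever image encoder a generator's training
      # pipeline would use; file names and the pixel budget are made up.
      import torch
      import torchvision.models as models
      import torchvision.transforms.functional as TF
      from PIL import Image

      device = "cuda" if torch.cuda.is_available() else "cpu"

      encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      encoder.fc = torch.nn.Identity()          # use penultimate features
      encoder.eval().to(device)

      def load(path):
          img = Image.open(path).convert("RGB").resize((224, 224))
          return TF.to_tensor(img).unsqueeze(0).to(device)

      original = load("artwork.png")            # image the artist wants to protect
      target   = load("unrelated.png")          # image whose features we push toward

      eps = 4 / 255                             # per-pixel budget keeps the change subtle
      delta = torch.zeros_like(original, requires_grad=True)
      opt = torch.optim.Adam([delta], lr=1e-2)

      with torch.no_grad():
          target_feat = encoder(target)

      for step in range(300):
          poisoned = (original + delta).clamp(0, 1)
          loss = torch.nn.functional.mse_loss(encoder(poisoned), target_feat)
          opt.zero_grad()
          loss.backward()
          opt.step()
          with torch.no_grad():                 # stay inside the visibility budget
              delta.clamp_(-eps, eps)

      poisoned = (original + delta).clamp(0, 1) # near-identical to a human,
                                                # feature-wise closer to the target
      ```

      A real tool would presumably target the encoders actually used by generative pipelines and use a perceptual constraint instead of a plain per-pixel clamp, which is exactly the image-quality vs. robustness trade-off mentioned above.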

    • atx_aquarian@lemmy.world · 3 points · 1 year ago

      From my understanding of the article, it’s more about associating misleading terms with images to confuse the associations learned by the model. I didn’t see anything in the article about some sneaky way of tainting the images themselves, unless it means a server is serving bogus images when a client fails the “are you a robot” test.

      Curious to learn if anyone knows more about what it’s actually doing.

      • hypnicjerk@lemmy.world · 3 points · 1 year ago

        yes, to me it read like it was manipulating metadata somehow, not the images themselves, but the article directly contradicts that. and that would be useless as soon as someone saves it as a flat image file or screenshots it and crops it out. i’m assuming that for this tool to work it needs to be changing the image directly through some sort of watermark-like system.

  • AutoTL;DR@lemmings.world (bot) · 5 points · 1 year ago

    This is the best summary I could come up with:


    A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

    The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.

    Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth.

    Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet.
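
    A toy numpy sketch (made-up numbers, nothing from the paper) of why training on scraped data is the attack surface: if enough images captioned “dog” actually carry cat-like features, whatever the model distils out of that caption drifts toward cats. The point of an optimized attack is to need far fewer poisoned samples than this crude averaging would.

    ```python
    # Toy illustration: "what a model learns for a caption" modelled as the
    # mean feature vector of everything in the training set carrying that caption.
    import numpy as np

    rng = np.random.default_rng(0)
    dog_centre, cat_centre = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    clean_dogs = rng.normal(dog_centre, 0.1, size=(200, 2))  # honest "dog" images
    poisoned   = rng.normal(cat_centre, 0.1, size=(300, 2))  # cat-like, captioned "dog"

    concept_clean    = clean_dogs.mean(axis=0)
    concept_poisoned = np.concatenate([clean_dogs, poisoned]).mean(axis=0)

    def nearest(vec):
        return ("dog" if np.linalg.norm(vec - dog_centre)
                        < np.linalg.norm(vec - cat_centre) else "cat")

    print(nearest(concept_clean))     # "dog"
    print(nearest(concept_poisoned))  # "cat": the learned concept has drifted
    ```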

    Gautam Kamath, an assistant professor at the University of Waterloo who researches data privacy and robustness in AI models and wasn’t involved in the study, says the work is “fantastic.”

    Junfeng Yang, a computer science professor at Columbia University, who has studied the security of deep-learning systems and wasn’t involved in the work, says Nightshade could have a big impact if it makes AI companies respect artists’ rights more—for example, by being more willing to pay out royalties.


    The original article contains 1,108 words, the summary contains 217 words. Saved 80%. I’m a bot and I’m open source!