In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

  • FauxLiving@lemmy.world · 13 hours ago

    That’s quite a claim, I’d like to see that. Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.

    I doubt that this is the case, as one of the features of chatbots is the randomization of the next token: the model’s output vector is treated as a probability distribution (via softmax) and sampled from. That means every single token has a chance to deviate from the source material, because each one is selected randomly. Getting a complete reproduction would be on the order of winning 250,000 dice rolls in a row.
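
    To make the dice-roll analogy concrete, here is a toy sketch of softmax sampling (the function names `softmax` and `sample_token` are mine, not from any particular model’s API). It also shows that even with a heavily biased “die”, the probability of a verbatim 250,000-token reproduction collapses to zero:

    ```python
    import math
    import random

    def softmax(logits):
        # Convert raw model scores (logits) into a probability distribution.
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def sample_token(logits, rng):
        # Sample the next-token index from the softmaxed distribution.
        probs = softmax(logits)
        r = rng.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    # Even if the "correct" next token had a 90% chance at every single
    # step, the odds of reproducing a 250,000-token book verbatim are
    # 0.9 ** 250_000 -- so small it underflows to 0.0 in double precision.
    p_exact = 0.9 ** 250_000
    ```

    (Greedy decoding, which always takes the most likely token, is deterministic, but that is not how chat products are typically configured.)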


    In any case, the ‘highly transformative’ standard was set in Authors Guild v. Google, Inc., No. 13-4829 (2d Cir. 2015). In that case, Google made digital copies of tens of millions of books and used their covers and text to build Google Books.

    As you can see here: https://www.google.com/books/edition/The_Sunlit_Man/uomkEAAAQBAJ — Google completely reproduces the cover, and you can search the text of the book (so it could, in theory, return the entire book in searches). It could likewise return a copy of a Harry Potter novel, along with a high-resolution scan, or even an exact digital copy, of the cover image.

    The judge ruled:

    Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google’s commercial nature and profit motivation do not justify denial of fair use.

    In cases where people attempt to claim copyright damages against entities that are training AI, the finding has essentially been ‘if they paid for a copy of the book, then it is legal’. This is why the authors’ training claim against Meta failed: Meta was sued for 1.) pirating the books and 2.) using them to train a model for commercial purposes, and the judge struck 2.) after citing the ‘highly transformative’ nature of language models relative to books.