To help train its AI models, Meta (and others) has been using pirated versions of copyrighted books without the consent of authors or publishers. The company behind Facebook and Instagram faces an ongoing class-action lawsuit brought by authors including Richard Kadrey, Sarah Silverman, and Christopher Golden, and one in which it has already scored a major (and surprising) victory: the California court concluded last year that using pirated books to train its Llama LLM did qualify as fair use.
You’d think this case would be as open-and-shut as it gets, but never underestimate an army of high-priced lawyers. Meta has now come up with the striking defense that uploading pirated books to strangers via BitTorrent qualifies as fair use. It goes on to claim that this is doubly good, because it has helped establish the United States’ leading position in the AI field.
Meta further argues that every author involved in the class action has admitted they are unaware of any Llama LLM output that directly reproduces content from their books. It says that if the authors cannot provide evidence of such infringing output or damage to sales, then this lawsuit is not about protecting their books but about challenging the training process itself (which the court has already ruled is fair use).
Judge Vince Chhabria now has to decide whether to allow this defense, a decision that will have consequences not only for this case but for many other AI lawsuits involving things like shadow libraries. The BitTorrent uploading and distribution claims are the last element of this particular lawsuit, which has been rumbling on for three years now, still to be settled.
I absolutely love the fact that all these companies are laying the legal groundwork to destroy intellectual property rights altogether. If they win enough of these cases, then every pirate on the open seas sails under a flag of amnesty.
No, I expect they’ll be more like “rules for thee but not for me.”
So we can pirate books too, as long as we aren’t able to reproduce them verbatim from memory?
Judge Vince Chhabria either accepts whatever bribes and offers he’s probably being offered and sides with Meta, or it will eventually go on to the Supreme Court, where they most definitely will. That’s the part of this that will work the most under an administration of no accountability.
Tell the judge you are training a neural network… it just happens to also be you.
Looking forward to Jellyfin getting a LLM to train locally on movie preferences so everyone’s library is fair use. Wait, is this why LLMs are being shoehorned into everything? 🤔
did they have a library card? if so, then fuck off.
Classic “the end justifies the means” (bad) defense. If ISPs can send letters for torrenting, and Facebook torrented a lot, then Facebook deserves a fair punishment.
lol it would be hilarious if they could order Facebook disconnected from the Internet like a pleb hit with a copyright complaint
Not deserves, needs.
sure. thanks meta, anna’s archive will help me with my reading list, thanks.
We can train our NI (Natural Intelligence) models.
To demand shrubberies?
Just spitballing…
If you were to train a model on just one book, then as long as you don’t prompt it to create an exact copy (maybe one with some indiscernible differences), presumably that’s fair use.
Then, since we know AI generated work can’t be copyrighted, does that essentially create a copyright-free version of the text which can be freely distributed?
We’re going to end up in a situation where whatever is necessary to train AI is permitted, and the main question is whether that will be through (re)interpretation of existing law or the passage of a new law.
Good thing I have a local model running that’s constantly learning, for precisely this reason
I’m still collecting media before I can start the training process.
If anything, this is proof you should be next in line for a large venture capital infusion!
As long as they cannot copyright what they generate from using the pirated materials
Arguing that training models isn’t fair use is going to be a massive uphill battle; it’s basically reading the book, but with a computer. It’s not actually a big deal to people, unless you hold the copyright to a ton of works and want a percentage of all the AI income these companies have made.
Torrenting the books is almost certainly copyright infringement, but that has a relatively low payout compared to the money these companies are getting for their models. The training being fair use means that rights holders can’t try to take any money from the model’s use. The statutory limits for infringement, even at per-work levels, aren’t significant compared to the legal cost of proving it happened.
There’s an argument to be made that it is, in fact, not ‘reading’. The training of the model could be considered a lossy compression of the data. And streaming movies in a lossy compression format is not fair use, is it?
The model doesn’t stream out anyone’s content though. The article mentions that the plaintiffs have provided no examples of a prompt that creates anything substantial.
Streaming a lossy compression would generally be infringement, but there is definitely a point where it becomes not infringement if it’s lossy enough.
What a model generally stores is factual information that isn’t copyrightable in the first place. It’s storing word counts, sentence lengths, sentiment analysis, and so on.
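To make the point concrete, here is a minimal toy sketch of the kind of aggregate, factual statistics described above (word counts, sentence lengths, most frequent words). This is purely an illustration of the idea that such statistics don’t preserve the original wording; it is not how any real LLM training pipeline actually works.

```python
import re
from collections import Counter

def text_stats(text: str) -> dict:
    """Reduce a text to aggregate statistics.

    The original wording is not recoverable from the result,
    which is the commenter's point about non-copyrightable facts.
    """
    # Split into rough sentences on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens.
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "top_words": Counter(words).most_common(3),
    }

sample = "It was the best of times. It was the worst of times."
stats = text_stats(sample)
print(stats["word_count"])      # 12
print(stats["sentence_count"])  # 2
```

Whether what a trained model retains is really only this kind of fact, or something closer to a lossy copy, is exactly what the thread above is arguing about.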
It’s not the storage of the information that matters as much as the presentation. Google’s search index stores a huge amount of copyrighted material, even losslessly. But they only present small snippets at a time which is not considered copyright infringement. The question really is whether or not the information being presented by the models is in a format which is considered copyright infringement. So far, courts have not found that they are.
They didn’t say seeding is fair use, just that it’s inherently part of torrenting. Good thing Sarah Silverman has PC Gamer there to pander for her.