Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.
This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
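To make the “vector space” point a bit more concrete, here is a deliberately toy sketch (this is not how a transformer actually represents text, and the function is purely illustrative): the text is reduced to a handful of numbers, and the original wording is not kept alongside them.

```python
def toy_embedding(text: str, dim: int = 8) -> list[float]:
    """Hash each word into a fixed-size vector of counts (illustration only)."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

print(toy_embedding("It was the best of times, it was the worst of times"))
# Only these few numbers are retained; the sentence cannot be read back out of them.
```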
This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.
Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.
While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.
For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744
If they can base their business on stealing, then we can steal their AI services, right?
How do you feel about Meta and Microsoft who do the same thing but publish their models open source for anyone to use?
Well, how long do you think that’s going to last? They are for-profit companies, after all.
I mean, we’re having a discussion about what’s fair; the implicit question is whether or not that would be a fair regulation to impose.
Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.
The important part for the latter is a published listing of all the training data. (Trainers don’t have to provide the data, but they must at least provide a way to recreate the model given the same inputs.)
Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.
They are model-available if anything.
For the purposes of this conversation, that’s pretty much just a pedantic difference. They are paying to train those models and then providing them to the public to use completely freely in any way they want.
It would be like developing open source software and then not calling it open source because you didn’t publish the market research that guided your UX decisions.
You said open source. Open source is a type of licensure.
The entire point of licensure is legal pedantry.
And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.
You said open source. Open source is a type of licensure.
The entire point of licensure is legal pedantry.
No. Open source is a concept. That concept also has pedantic legal definitions, but the concept itself is not inherently pedantic.
And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.
No, they’re not. Which is why I didn’t use that metaphor.
A binary is explicitly a black box. There is nothing to learn from a binary, unless you explicitly decompile it back into source code.
In this case, literally all the source code is available. Any researcher can read through their model, learn from it, copy it, twist it, and build their own version of it wholesale. Not providing the training data is more similar to saying that Yuzu or another emulator isn’t open source because it doesn’t provide copyrighted games. The project provides literally every part of itself that it can open source, and then lets the user feed it whatever training data they are allowed access to.
I feel like it’s less meaningful because we don’t have access to the datasets.
Here’s an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what they give back, and paste it into a search engine. The results may surprise you.
And stop comparing AI to humans but then giving AI models more freedom. If I wrote a paper I’d need to cite my sources. Where the fuck are your sources ChatGPT? Oh right, we’re not allowed to see that but you can take whatever you want from us. Sounds fair.
Can you just give us the TLDR?
AI Chat bots copy/paste much of their “training data” verbatim.
Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really close to equivalent to me citing papers researched for a paper. That would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.
Now, if specific data were injected into the prompt, or maybe if the model was fine-tuned on a small subset of highly specific data, I would agree those should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that training needed to cross a threshold of data, combined with the attention mechanism, before the network was rather suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.
It’s not a breach of copyright or other IP law not to cite sources on your paper.
Getting your paper rejected for lacking sources is also not an infringement of your freedom. Being forced to pay damages and delete your paper from any public space would be an infringement of your freedom.
I mean, you’re not necessarily wrong. But that doesn’t change the fact that it’s still stealing, which was my point. Just because laws haven’t caught up to it yet doesn’t make it any less of a shitty thing to do.
When I analyze a melody I play on a piano, I see that it reflects the music I heard that day or sometimes, even music I heard and liked years ago.
Having parts similar or a part that is (coincidentally) identical to a part from another song is not stealing and does not infringe upon any law.
You guys are missing a fundamental point. Copyright was created to protect an author for a specific amount of time so somebody else doesn’t profit from their work, essentially stealing their deserved revenue.
LLM AI was created to do exactly that.
It’s not stealing; it’s not even ‘piracy’, which also is not stealing.
Copyright laws need to be scaled back, to not criminalize socially accepted behavior, not expand.
The original source material is still there. They just made a copy of it. If you think that’s stealing then online piracy is stealing as well.
Well they make a profit off of it, so yes. I have nothing against piracy, but if you’re reselling it that’s a different story.
But piracy saves you money, which is effectively the same as making a profit. Also, it’s not just that they’re selling other people’s work for profit. You’re also paying for the insane amount of computing power it takes to train and run the AI, plus the salaries of the workers, etc.
The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.
The idea of a “teensy” exception so that we can “advance” into a dark age of creative pointlessness and regurgitated slop, where humans doing the fun part has been made “unnecessary” by the unstoppable progress of “thinking” machines, would be hilarious, if it weren’t depressing as fuck.
The whole point of copyright in the first place, is to encourage creative expression
…within a capitalistic framework.
Humans are creative creatures and will express themselves regardless of economic incentives. We don’t have to transmute ideas into capital just because they have “value”.
Sorry buddy, but that capitalistic framework is where we all have to exist for the foreseeable future.
Giving corporations more power is not going to help us end that.
I don’t think they’re advocating for more capitalism.
You’re not wrong.
The kind of art humanity creates is skewed a lot by the need for it to be marketable, and then sold in order to be worth doing.
But copyright is better than nothing, and this exemption would straight up be even worse than nothing.
I’d agree, but here’s one issue with that: we live in reality, not in a post-capitalist dreamworld.
Creativity takes up a lot of time from the individual, while a lot of us are already working two or even three jobs, all on top of art. A lot of us have to heavily compromise on a lot of things, or even give up our dreams because we don’t have the time for that. Sure, you get the occasional “legendary metal guitarist practiced so much he even went to the toilet with a guitar”, but many are so tired from their main job, they instead just give up.
Developing a game while holding a full-time job feels like crunching 24/7, while only around 4 hours a day actually goes towards that goal, and that includes work done on my smartphone at my job. Others just outright give up. This shouldn’t be the norm for up-and-coming artists.
That’s why we should look for good solutions to societal problems, and not fall back on bad “solutions” just because that’s what we’re used to. I’m not against the idea of copyright existing. But copyright as it exists today is stifling and counterproductive for most creative endeavors. We do live in reality, but I don’t believe it is the only possible reality. We’re not getting to Star Trek Space Communism™ anytime soon and honestly I like the idea of owning stuff. That doesn’t mean that there aren’t concrete steps we can and should take right now in the present reality to make things better. And for that to happen we need to get our priorities and philosophies straight. Philosophies which for me include a robust public commons, the inability to own ideas outright, and the ability to take and transform art and culture. Otherwise, we’re just falling into the “temporarily embarrassed millionaires” mindset but for art and culture.
Honestly, that’s why open source AI is such a good thing for small creatives. Hate it or love it, anyone wielding AI with the intention to make new expression will have a much safer and more efficient path to success until they can grow big enough to hire a team with specialists. People often look at those at the top but ignore the things that can grow from the bottom and actually create more creative expression.
One issue is that many open source AI projects also try to ape whatever the big ones are doing at the moment, the most outrageous example being one that generates a timelapse for AI art.
There are also tools that were created especially with artists in mind, but they’re less popular because the average person can’t use them as easily as the prompter machines, and they don’t promise the end of “people with fake jobs” (boomers like generative AI for this reason).
Humans are indeed creative by nature, we like making things. What we don’t naturally do is publish, broadcast and preserve our work.
Society is iterative. What we build today, we build mostly out of what those who came before us built. We tell our versions of our forefathers’ stories, we build new and improved versions of our forefathers’ machines.
A purely capitalistic society would have infinite copyright and patent durations: this idea is mine, it belongs to me, no one can ever have it, my family and only my family will profit from it forever. Nothing ever improves, because improving on an old idea devalues the old idea, and the landed gentry can’t allow that.
A purely communist society immediately enters whatever anyone creates into the public domain. The guy who revolutionizes energy production making everyone’s lives better is paid the same as a janitor. So why go through all the effort? Just sweep the floors.
At least as designed, our idea of copyright is a compromise. If you have an idea, we will grant you a limited time to exclusively profit from your idea. You may allow others to also profit at your discretion; you can grant licenses, but that’s up to you. After the time is up, your idea enters the public domain, and becomes the property and heritage of humanity, just like the Epic of Gilgamesh. Others are free to reproduce and iterate upon your ideas.
I think you have your janitor example backwards. Spending my time revolutionizing energy production sounds much more enjoyable than sweeping floors. Same with designing an effective floor-sweeping robot.
The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.
I feel like that purpose has already been undermined by various changes to copyright law since its inception, such as DMCA and lengthening copyright term from 14 years to 95. Freedom to remix existing works is an important part of creative expression which current law stifles for any original work that releases in one person’s lifespan. (Even Disney knew this: the animated Pinocchio movie wouldn’t exist if copyright could last more than 56 years then)
Either way, giving bots the ‘right’ to remix things that were just made less than a year ago while depriving humans the right to release anything too similar to a 94 year old work seems ridiculous on both ends.
I’ll train my AI on just the bee movie. Then I’m going to ask it, “can you make me a movie about bees?” When it spits out the whole movie, I can just watch it or sell it or whatever; it was a creation of my AI, which learned just like any human would! Of course I didn’t even pay for the original copy to train my AI, it’s for learning purposes, and learning should be a basic human right!
That would be like you writing out the bee movie yourself after memorizing the whole movie and claiming it is your own idea or using it as proof that humans memorizing a movie is violating copyright. Just because an AI is violating copyright by outputting the whole bee movie, it doesn’t mean training the AI on copyright stuff is violating copyright.
Let’s just punish the AI companies for outputting copyright stuff instead of for training with them. Maybe that way they would actually go out of their way to make their LLM intelligent enough to not spit out copyrighted content.
Or, we can just make it so that any output made by an AI that is trained on copyrighted stuff cannot be copyrighted.
There is actually already a website where people just recreated the bee movie by hand so idk it might actually work as a legal argument.
I don’t think that’s a feasible dream in our current system. They’ll just lobby for it, some senators will say something akin to “art should have been always a hobby, not a profession”, then make adjustments for the current copyright laws so that they can be copyrighted.
If the solution is making the output non-copyrighted it fixes nothing. You can sell the pirating machine on a subscription. And it’s not like Netflix where the content ends when the subscription ends, you have already downloaded all the not-copyrighted content you wanted, and the internet would be full of non-copyrighted AI output.
Instead of selling the bee movie, you sell a bee movie maker, and a spiderman maker, and a titanic maker.
Sure, file a copyright infringement each time you manage to make an AI output copyrighted content. Just run it on a loop and it’s a money making machine. That’s fine by me.
Yeah, because running the AI also has some cost, so you are selling the subscription to run the AI on their server, not its output.
I’m not sure what the legality of selling a bee movie maker is, so you’d have to research that one yourself.
It’s not really a money making machine if you lose more money running the AI on your server farm, but whatever floats your boat. Also, there are already lawsuits based on outputs created from chatgpt, so it is exactly what is already happening.
Yeah, making sandwiches also costs money! I have to pay my sandwich making employees to keep the business profitable! How do they expect me to pay for the cheese?
EDIT: also, you completely missed my point. The money making machine is the AI because the copyright owners could just use them every time it produces copyright-protected material if we decided to take that route, which is what the parent comment suggested.
They should pay for the cheese, I’m not arguing against that, but they should be paying it the same amount as a normal human would if they want access to that cheese. No extra fees for access to copyrighted material if you want to use it to train AI vs wanting to consume it yourself.
And I didn’t miss your point. My point was that the reality is already occurring since people are already suing OpenAI for ChatGPT outputs that the people suing are generating themselves, so it’s no longer just a hypothetical. We’ll see if it is a money making machine for them or will they just waste their resources from doing that.
Media is not exactly like cheese though. With cheese, you buy it and it’s yours. Media, however, is protected by copyright. When you watch a movie, you are given a license to watch the movie.
When an AI watches a movie, it’s not really watching it, it’s doing a different action. If the license of the movie says “you can’t use this license to train AI, use the other (more expensive) license for such purposes”, then AIs have extra fees to access the content that humans don’t have to pay.
Both humans and AI consume the content, even if they do not do so in the exact same way. I don’t see the need to differentiate that. It’s not like we have any idea of the mechanism by which humans consume content to make the differentiation in the first place.
In the meantime I’ll introduce myself into the servers of large corporations and read their emails, codebase, teams and strategic analysis, it’s just learning!
learning should be a basic human right!
Education is a basic human right (except maybe in the USA; it should be one there too).
Yeah. A human right.
I am thrilled to see the output you get!
You drank the kool-aid.
The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works. This has been suppressed by OpenAI in a rather brute force kind of way, by prohibiting the prompts that have been found so far to do this (e.g. the infamous “poetry poetry poetry…” ad infinitum hack), but the possibility is still there, no matter how much they try to plaster over it. In fact there are some people, much smarter than me, who see technical similarities between compression technology and the process of training an LLM, calling it a “blurry JPEG of the Internet”… the point being, you wouldn’t allow distribution of a copyrighted book just because you compressed it in a ZIP file first.
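To make the ZIP analogy concrete, here is a minimal sketch (the Moby-Dick snippet is just a stand-in for a full copyrighted book): compression is fully reversible, so the original text is still being distributed, just in a different wrapper.

```python
import zlib

book_text = b"Call me Ishmael. Some years ago, never mind how long precisely..."
compressed = zlib.compress(book_text)

# Nothing is lost: the exact original bytes come back out.
assert zlib.decompress(compressed) == book_text
print(len(book_text), "->", len(compressed), "bytes")
```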
The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.
Exactly! This is the core of the argument The New York Times made against OpenAI. And I think they are right.
The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.
I mean, sure, it happens. But it’s not a generalizable problem. You’re not going to get it to regurgitate your Lemmy comment, even if they’ve trained on it. You can’t just go and ask it to write Harry Potter and the Goblet of Fire for you. It’s not the intended purpose of this technology. I expect it’ll largely be a solved problem in 5-10 years, if not sooner.
The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.
What method still works? I’d like to try it.
I have access to ChatGPT 4, and the latest Anthropic model.
Edit: hm… no answers but downvotes. I wonder why that is.
I agree. You can’t dismiss the problem on one hand by saying it’s “just data represented in vector space” and on the other hand be unable to properly censor the models and require AI safety research. If you don’t know exactly what’s going on inside, you also can’t claim that copyright is not being violated.
It honestly blows my mind that people look at a neural network that’s even capable of recreating short works it was trained on without having access to that text during generation… and choose to focus on IP law.
Right! Like if we could honestly further enhance that feature, it’s an incredible increase in compression tech!
ML techniques have been very useful in compression, yes, but it’s sort of nuts to say that a data structure that encodes only relationships between values (sometimes overly so for certain regions of its latent space/embedding space/semantics space/whatever you want to call it right now), rather than the value sequences themselves, is storing particularized creative works in a particularly identifiable manner.
This would be a good point, if this is what the explicit purpose of the AI was. Which it isn’t. It can quote certain information verbatim despite not containing that data verbatim, through the process of learning, for the same reason we can.
I can ask you to quote famous lines from books all day as well. That doesn’t mean that you knowing those lines means you infringed on copyright. Now, if you were to put those to paper and sell them, you might get a cease and desist or a lawsuit. Therein lies the difference. Your goal would be explicitly to infringe on the specific expression of those words. Any human that would explicitly try to get an AI to produce infringing material… would be infringing. And unknowing infringement… well there are countless court cases where both sides think they did nothing wrong.
You don’t even need AI for that, if you followed the Infinite Monkey Theorem and just happened to stumble upon a work falling under copyright, you still could not sell it even if it was produced by a purely random process.
Another great example is the Mona Lisa. Most people know what it looks like, and if they had sufficient talent they could mimic it 1:1. However, there are numerous adaptations of the Mona Lisa that are not infringing (by today’s standards), because they transform the work to the point where it’s no longer the original expression, but a re-expression of the same idea. Anything less than that is pretty much completely safe infringement-wise.
You’re right, though, that OpenAI tries to cover their ass by implementing safeguards. Which is to be expected, because it’s a legal argument in court that once they become aware of a situation they have to take steps to limit harm. They can indeed not prevent it completely, but it’s the effort that counts. Practically none of that kind of moderation is 100% effective. Otherwise we’d live in a pretty good world.
Y’all should really stop expecting people to buy into the analogy between human learning and machine learning i.e. “humans do it, so it’s okay if a computer does it too”. First of all there are vast differences between how humans learn and how machines “learn”, and second, it doesn’t matter anyway because there is lots of legal/moral precedent for not assigning the same rights to machines that are normally assigned to humans (for example, no intellectual property right has been granted to any synthetic media yet that I’m aware of).
That said, I agree that “the model contains a copy of the training data” is not a very good critique–a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.
a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.
Not really. First of all, Creative Commons licenses strictly loosen the copyright restrictions on a work. The strongest license is actually no explicit license, i.e. “All Rights Reserved.” No Derivatives is already included under full, default copyright.
Second, derivative has a pretty strict legal definition. It’s not enough to say that the derived work was created using a protected work, or even that the derived work couldn’t exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.
Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.
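As an illustration of the kind of statistical analysis listed above (the word-cloud example), here is a minimal sketch; the filename is hypothetical and stands in for a book you have lawful access to. The output is a set of facts about the text, not the protected expression itself.

```python
from collections import Counter

# Hypothetical local copy of a book; the path is purely illustrative.
with open("my_favorite_book.txt", encoding="utf-8") as f:
    words = f.read().lower().split()

print(Counter(words).most_common(20))  # word frequencies, not the text itself
```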
Equating LLMs with compression doesn’t make sense. Model sizes are larger than their training sets. If it requires “hacking” to extract text of sufficient length to break copyright, and the platform is doing everything it can to prevent it, that just makes them like every other platform. I can download © material from YouTube (or wherever) all day long.
Model sizes are larger than their training sets
Excuse me, what? You think Huggingface is hosting hundreds of checkpoints, each of which is a multiple of its training data, which is on the order of terabytes or petabytes in disk space? I don’t know if I agree with the compression argument myself, but for other reasons; your retort is objectively false.
Just taking GPT-3 as an example: its training set was 45 terabytes, yes, but that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.
“Table 2.2 shows the final mixture of datasets that we used in training. The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before filtering and 570GB after filtering, roughly equivalent to 400 billion byte-pair-encoded tokens.” (Language Models are Few-Shot Learners)
*Did some more looking, and that model size estimate assumes 32 bit float. It’s actually 16 bit, so the model size is 350GB… technically some compression after all!
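For what it’s worth, those numbers line up with a back-of-the-envelope check, assuming GPT-3’s commonly cited figure of roughly 175 billion parameters:

```python
params = 175e9  # approximate GPT-3 parameter count

fp32_gb = params * 4 / 1e9  # 4 bytes per parameter
fp16_gb = params * 2 / 1e9  # 2 bytes per parameter

print(f"fp32: ~{fp32_gb:.0f} GB, fp16: ~{fp16_gb:.0f} GB")
# fp32: ~700 GB, fp16: ~350 GB, versus ~570 GB of filtered training text
```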
The issue isn’t that you can coax AI into giving away unaltered copyrighted books out of their trunk, the issue is that if you were to open the hood, you’d see that the entire engine is made of unaltered copyrighted books.
All those “anti-hacking” measures are just there to obfuscate the fact that the unaltered works are in use and recallable at all times.
This is an inaccurate understanding of what’s going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and generate full sections of training data from its neural network is a fun parlor trick, but you’re not going to use it to pirate a book. People do that the old-fashioned way by just adding type:pdf to their common web search.
Again: nobody is complaining that you can make AI spit out their training data because AI is the only source of that training data. That is not the issue, and nobody cares about AI as a delivery source of pirated material. The issue is that, next to the transformed output, the untransformed input is in use in a commercial product.
The issue is that, next to the transformed output, the untransformed input is in use in a commercial product.
Are you only talking about the word repetition glitch?
Look… All I have to say is… Support the Internet Archive!
(please)
Heh. Funny that this comment is uncontroversial. The Internet Archive supports Fair Use because, of course, it does.
This is from a position paper explicitly endorsed by the IA:
Based on well-established precedent, the ingestion of copyrighted works to create large language models or other AI training databases generally is a fair use.
By
- Library Copyright Alliance
- American Library Association
- Association of Research Libraries
Disagree. These companies are exploiting an unfair power dynamic they created that people can’t say no to, to make an ungodly amount of money for themselves without compensating people whose data they took without telling them. They are not creating a cool creative project that collaboratively comments on or remixes what other people have made, they are seeking to gobble up and render irrelevant everything that they can, for short term greed. That’s not the scenario these laws were made for. AI hurts people who have already been exploited and industries that have already been decimated. Copyright laws were not written with this kind of thing in mind. There are potentially cool and ethical uses for AI models, but open ai and google are just greed machines.
Edited * THRICE because spelling. oof.
“but how are we supposed to keep making billions of dollars without unscrupulous intellectual property theft?! line must keep going up!!”
If ChatGPT was free I might see their point but it’s not so no. If you’re making money from someone’s work you should pay them.
Bullshit. AI are not human. We shouldn’t treat them as such. AI are not creative. They just regurgitate what they are trained on. We call what it does “learning”, but that doesn’t mean we should elevate what they do to be legally equal to human learning.
It’s this same kind of twisted logic that makes people think Corporations are People.
Ok, ignore this specific company and technology.
In the abstract, if you wanted to make artificial intelligence, how would you do it without using the training data that we humans use to train our own intelligence?
We learn by reading copyrighted material. Do we pay for it? Sometimes. Sometimes a teacher read it a while ago and then just regurgitated basically the same copyrighted information back to us in a slightly changed form.
We learn by reading copyrighted material.
We are human beings. The comparison is false on its face because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.
This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.
Moreover, human beings make their own choices, they aren’t actual tools.
They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.
There is no intelligence here except theirs. There is no intent here except theirs.
We are human beings. The comparison is false on its face because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.
If you fundamentally do not think that artificial intelligences can be created, the onus is on you to explain why it’s impossible to replicate the circuitry of our brains. Everything in science we’ve seen thus far has shown that we are merely physical beings that can be recreated physically.
Otherwise, I asked you to examine a thought experiment where you are trying to build an artificial intelligence, not necessarily an LLM.
This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.
Or you are overcomplicating yourself to seem more important and special. There’s definitely no way that most people would be biased towards that, is there?
Moreover, human beings make their own choices, they aren’t actual tools.
Oh please do go ahead and show us your proof that free will exists! Thank god you finally solved that one! I heard people were really stressing about it for a while!
They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.
“I don’t know how this works but it’s math and that scares me so I’ll minimize it!”
And that’s all paid for. Think how much has been invested in just the average high school graduate; AI companies want all that, but for free.
It’s not though.
A huge amount of what you learn, someone else paid for, then they taught that knowledge to the next person, and so on. By the time you learned it, it had effectively been pirated and copied by human brains several times before it got to you.
Literally anything you learned from a Reddit comment or a Stack Overflow post for instance.
If only there was a profession that exchanges knowledge for money. Someone who “teaches.” I wonder who would pay them.
The tweet is good; your body argument is completely wrong.
deleted by creator
“No, not like that!”
Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.
AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.
Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.
See, the tables are now turned, and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work, more specifically uses that may cause damage to the copyright owner or to society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing, and in my view it should be mandatory for the development of proprietary LLMs at least.
TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.
AI are people, my friend. /s
But, really, I think people should be able to run algorithms on whatever data they want. It’s whether the output is sufficiently different or “transformative” that matters (and other laws, like using people’s likeness). Otherwise, I think the laws will get complex and nonsensical once you start adding special cases for “AI.” And I’d bet if new laws are written, they’d be written by lobbyists to further erode the threat of competition (from free software, for instance).
What do you think “ingesting” means if not learning?
Bear in mind that training AI does not involve copying content into its database, so copyright is not an issue. AI is simply predicting the next token/word based on statistics.
You can train AI on a book and it will give you information from the book - information is not copyrightable. You can read a book and talk about its contents on TV - that’s not illegal if you’re a human; should it be illegal if you’re a machine?
There may be moral issues with training on someone’s hard-gathered knowledge, but there is no legislation against it. Reading books and using that knowledge to provide information is legal. If you try to outlaw automating this process with computers, there will be side effects, such as search engines no longer being able to index data.
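As a rough sketch of what “predicting the next token/word based on statistics” means, here is a toy bigram model; real LLMs use neural networks over vastly more data, but the point that counts are stored rather than the source text carries over.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training text.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("the"))  # e.g. "cat", "mat", "dog", or "rug"
```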
I absolutely would download a car.
Yes, but would you download more RAM? 😛
Depends on how big the car is.
Tried that once, now my browser has 20 search bars and lots of pop ups
Yes