The U.S. Supreme Court declined on Monday to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.
Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator.
Thaler, of St. Charles, Missouri, applied for a federal copyright registration in 2018 covering “A Recent Entrance to Paradise,” visual art he said his AI technology “DABUS” created. The image shows train tracks entering a portal, surrounded by what appears to be green and purple plant imagery.
The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright. U.S. President Donald Trump’s administration had urged the Supreme Court not to hear Thaler’s appeal.



Fair enough, I see what you’re saying.
I’ll go ahead and share the quote from the court’s decision for context:
I’m a bit uncertain based on this summary of the judgment by the Stanford library on copyright and fair use:
Why are they saying that “the work was never eligible for copyright in the first place”? Because Thaler claimed that the AI itself made the work? This all feels a bit like Schrödinger’s Copyrighted Work to me… the work exists, so who made it?
Generative AI fans would have you believe that they are the author and copyright holder, because they wrote a prompt.
AI companies might want to argue, like Thaler, that they made the AI, so they are the author and copyright holder.
My personal opinion is that the prompt and code are both relatively insignificant in comparison to the training data from which the probabilistic machine learning model is derived. The prompt would do nothing without the model, and OpenAI themselves said the quiet part out loud when they argued in court that the creation of a model such as theirs would be “impossible” to achieve without training on vast amounts of copyrighted works.
Clearly the training data itself is the most important piece of the system, which makes a lot of sense to those of us who understand how machine learning and “AI” training actually works on a technical level. They’ve admitted in plain English that their entire product and for-profit business model relies on the use of other people’s work as training data. Sounds to me like they have derived considerable value from other people’s work without any sort of license or compensation…
By that logic alone, I would argue that the real copyright holders of generative AI works ought to be, at least in part, the people who provided (wittingly or unwittingly) the training data. They are the ones who made this whole social experiment possible, after all. Data is the new code, so I’m not sure why people expect to be able to use it for free in an unrestricted way.
It’s simply not the court’s job to determine this in this particular case, which is why it’s so frustrating that this case keeps ending up under headlines claiming it established that “AI-generated art can’t be copyrighted.”
All the rest of this argument is out of scope for this case; you’d need to look to other cases. You can argue and opine however you like about what you think the outcomes should be, but that doesn’t change what the outcomes of those cases actually end up being.