• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.

    Not really. First of all, Creative Commons licenses strictly loosen the copyright restrictions on a work. The strongest license is actually no explicit license at all, i.e. “All Rights Reserved.” “No derivatives” is already included under full, default copyright.

    Second, “derivative” has a pretty strict legal definition. It’s not enough to say that the derived work was created using a protected work, or even that the derived work couldn’t exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.

    Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.



  • They do, though. They purchase data sets from people with licenses, use open source data sets, and/or scrape publicly available data themselves. Worst case, they could download pirated data sets, but that’s copyright infringement committed by the entity distributing the data without legal authority.

    Beyond that, copyright doesn’t protect the work from being used to create something else, as long as you’re not distributing significant portions of it. Movie and book reviewers won that legal battle long ago.


  • The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.

    I mean, sure, it happens. But it’s not a generalizable problem. You’re not going to get it to regurgitate your Lemmy comment, even if they’ve trained on it. You can’t just go and ask it to write Harry Potter and the Goblet of Fire for you. It’s not the intended purpose of this technology. I expect it’ll largely be a solved problem in 5–10 years, if not sooner.







  • I mean, it’s in the name: the right to make copies. Not to be glib, but it really is:

    A copyright is a type of intellectual property that gives its owner the exclusive legal right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time.

    You may notice a conspicuous absence of control over how a copied work is used, short of distributing it. You can reencode it, compress it, decompress it, make a word cloud, statistically analyze its tone, anything you want as long as you’re not redistributing the work or an adaptation (which has a pretty limited meaning as well). “Personal use” and “fair use” are stipulations that weaken a copyright owner’s control over the work, not giving them new rights above and beyond copyright. And that’s a great thing. You get to do whatever you want with the things you own.

    You don’t have a right to other people’s work; that’s the protection copyright provides. But that’s beside the point: once the owner has distributed a work to you, they don’t get to dictate what you use it for.






  • My understanding of the ruling is that, no, a law cannot do this. The ruling is mostly a separation-of-powers argument. Basically, the argument goes: if the president were not above the law, then Congress could override the Constitution by writing a law that, for example, makes the President’s constitutional duties illegal. Therefore, the president is allowed to officially do anything he wants, limited only by the Constitution.

    Obligatory: this is not an endorsement of the ruling, and IANAL. It’s an awful ruling and terrible for the present and future of our country. It’s a violation of the core ideals of democracy, and it needs to be overturned ASAP.




  • C is just a workaround for B and the fact that the technology has no way to identify and overcome harmful biases in its data set and model. This kind of behind-the-scenes prompt engineering isn’t even unique to diversifying image output, either. It’s a necessity for creating a product that is usable by the general consumer, at least until the technology evolves enough that it can incorporate those lessons directly into the model.

    And so my point is, there’s a boatload of problems that stem from the fact that this is early technology and the solutions to those problems haven’t been fully developed yet. But while we are rightfully not upset that the system doesn’t understand that lettuce doesn’t go on the bottom of a burger, we’re for some reason wildly upset that it tries to give our fantasy quasi-historical figures darker skin.