If OpenAI can get away with going through copyrighted material, then the answer to piracy is simple: round up a bunch of talented devs from the internet who are already writing and training AI models, and let's make a fantastic model trained on what the Internet Archive has. Tell you what, let Mistral's engineers lead that charge, and put an AGPL license on the project so that companies can't fuck us over.
I refuse to believe that nobody has thought of this yet.
Better yet! Train an AI to rewrite the books into brand-new books, and let us read them, review the content, add notes, etc., so that the AI can refresh the books if we find errors.
Kick the private collections to the curb! Teeth in, like in American History X.
We get it, y’all hate LLMs and the companies who make them.
This comparison is disingenuous, and I have to think you're smart enough to know that, which makes this disinformation.
If/when an LLM like ChatGPT spits out a full copy of its training text, that's considered a bug and is remediated fairly quickly. It's not a feature.
What IA was doing was sharing the full text as a feature.
As far as I know, there are court cases pending over whether companies like OpenAI are guilty of copyright infringement, but I haven't seen any rulings against them yet (happy to be corrected here).
All that said, I love IA and have a Warrior container scheduled to run nightly to help contribute.
An AI trained on old Internet material would be like a synthetic Grandpa Simpson:
“In my day we said ‘all your base’ and laughed all day long, because it took all day to download the video.”
This stupid thing just keeps saying "I Can Has Cheezburger." What the hell does that even mean?
What do you think Mistral trains its models on? Public domain stuff?
"AI, write Hamlet." AI writes Idiocracy.
Hmm, true. IA wouldn't be as well supported if we couldn't get the full text of the source.
Can you tell me more about the “warrior container”?
It's mentioned in the OP, but it's this:
https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
Basically, distributed collection.
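For anyone else curious: the wiki page linked above documents running the Warrior as a Docker container that fetches work items and uploads them to the archive. A minimal sketch of the usual invocation (the image name and port are my recollection of the wiki page, so double-check there before running):

```shell
# Start the ArchiveTeam Warrior in the background.
# Its web UI is then served on http://localhost:8001,
# where you pick a project (or "ArchiveTeam's Choice").
docker run --detach \
  --name archiveteam-warrior \
  --publish 8001:8001 \
  --restart=on-failure \
  atdr.meo.ws/archiveteam/warrior-dockerfile
```

Running it "nightly" like the commenter above can then be as simple as a cron job that calls `docker start archiveteam-warrior` in the evening and `docker stop archiveteam-warrior` in the morning.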