I’m pretty sure that generating placeholder art isn’t going to ruin my ability to research.
AIs need to be used TAKING THEIR FLAWS INTO ACCOUNT and for very specific things.
doesn’t have to be an ethical nightmare. Public domain datasets on local hardware using renewable electricity: who’s mad now, the artist you already can’t afford to pay because you have no fucking money anyway?
lol
Not all LLMs are the same. You can absolutely take a neural network model and train it yourself on your own dataset that doesn’t violate copyright.
I can almost guarantee that the hundred-billion-parameter LLMs are not trained on that, and are instead trained on the whole web, scraped to the furthest extent possible.
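To be fair, the “train it yourself on your own dataset” route is real. A minimal sketch of what that can look like, assuming a Python environment with Hugging Face `transformers` and `datasets` installed; the checkpoint name, file path, and hyperparameters below are placeholders, not recommendations:

```python
# Minimal sketch: fine-tune a small open-weights causal LM on text you actually have
# the rights to use. "distilgpt2" and "my_corpus/*.txt" are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # any small checkpoint your hardware can handle
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text files you wrote, licensed, or pulled from the public domain.
dataset = load_dataset("text", data_files={"train": "my_corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-local-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("my-local-model")
```

Whether the result is any good on a hobbyist-sized corpus is a separate question; the point is only that the mechanics are accessible.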
The only sane and ethical solution going forward is to force all LLMs to be open-sourced. They use the datasets generated by humanity; give back to humanity.
Besides, the article is about image gen AI, not LLMs.
That’s an LLM, buddy.
What do you think the letters LLM stand for, pal?
The article directly complains about AI artwork. Do you even know what LLM means?
Yes, I do. I also know that multimodal LLMs are what generate AI artwork.
Then you should probably know that image generation existed long before MLLMs and was already a menace to artists back then.
And that an MLLM is generally a layered combination of lots of preexisting tools, where the LLM is used as a medium that allows attaching OCR inputs and giving more accurate instructions to the image-generation part.
Jesus fucking christ. There are SO GODDAMN MANY open source LLMs, even from fucking scumbags like Facebook. I get that there are subtleties to the argument on the ProAI vs AntiAI side, but you guys just screech and scream.
https://github.com/eugeneyan/open-llms
Where are the sources? All I see are binary files.
Lol, of course Meta; they have the biggest pile of big data out there, full of private data.
Most of the open-source ones are recombinations of existing open-source LLMs.
And the page you’ve linked lists mostly <10B-parameter models, bar the LLMs with huge financing, which generally have either corporate or Chinese backers.
There are barely any. I can’t name a single one offhand. Open weights means absolutely nothing about the actual source of those weights.
Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really only the end user who’s capable of doing such a thing, by uploading the output of the AI somewhere.
Beyond the copyright issues and energy issues, AI does some serious damage to your ability to do actual hard research. And I’m not just talking about “AI brain.”
Let’s say you’re looking to solve a programming problem. If you use a search engine and look up the question or a string of keywords, what do you usually do? You look through each link that comes up and judge books by their covers (to an extent). “Do these look like reputable sites? Have I heard of any of them before?” You scroll, click a bunch of them, and read through them. Now you evaluate their contents. “Have I already tried this info? Oh, this answer is from 15 years ago, it might be outdated.” Then you pare your links down to a smaller number and try the solution each one provides, one at a time.
Now let’s say you use an AI to do the same thing. You pray to the Oracle, and the Oracle responds with a single answer. It’s a total soup of its training data. You can’t tell where specifically it got any of this info. You just have to trust it on faith. You try it; maybe it works, maybe it doesn’t. If it doesn’t, you have to write a new prayer and try again.
Even running a local model means you can’t discern the source material from the output. This isn’t Garbage In, Garbage Out, but Stew In, Soup Out. You can feed an AI a corpus of perfectly useful information, but it will churn everything into a single liquidy mass at the end. You can’t be critical about the output, because there’s nothing to critique but a homogeneous answer. And because the process is destructive, you can’t un-soup the output. You’ve robbed yourself of the ability to learn from the input, and put all your faith in the Oracle.
The topic is: using AIs for game dev.
I’m just going to be upfront: AI haters don’t know the actual way this shit works, except that by existing, LLMs drain oceans and create more global warming than the entire petrol industry, and AI bros are filling their codebases with junk code that’s going to explode in their faces anywhere between 6 months and 3 years from now.
There is a sane take: use AIs sparingly, taking their flaws into consideration, for placeholder work, or once you obtain a training set built from content you are allowed to use. Run it locally, and use renewable sources of electricity.
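For the “run it locally” part, here’s a minimal sketch, assuming Hugging Face `transformers` is installed; the model name is just an example of a small open-weights checkpoint, swap in whatever you can legally and practically run on your own box:

```python
# Minimal sketch: generate placeholder text with a small open-weights model on your
# own machine. "distilgpt2" is only an example checkpoint; device=-1 forces CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2", device=-1)

prompt = "Flavor text for a rusty iron shortsword:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

Nothing leaves your machine, and the power draw is whatever your own hardware pulls.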
Wild to see you call for a “sane take” when you strawman the actual water problem into “draining the oceans.”
Local residents near these data centers aren’t being told to take fewer showers because of salt water drawn from the ocean; it’s their fresh water supply being used.
Is that a problem with the existence of LLMs as a technology, or with shitty corporations working with corrupt governments to starve local people of resources to turn a quick buck?
If you are allowing a data center to be built, you need to make sure you have the power and so on to build it without negatively impacting the local people. It’s not the fault of an LLM that they fucked this shit up.
Are you really gonna use the “guns don’t kill people, people kill people” argument to defend LLMs?
Let’s not forget that the first ‘L’ stands for “large”. These things do not exist without massive, power- and resource-hungry data centers. You can’t just say “Blame government mismanagement! Blame corporate greed!” without acknowledging that LLMs cease to exist without those things.
And even with all of those resources behind it, the technology is still only marginally useful at best. LLMs still hallucinate, they still confidently distribute misinformation, they still contribute to mental health crises in vulnerable individuals, and no one really has any idea how to stop those things from happening.
What tangible benefit is there to LLMs that justifies their absurd cost? Honestly?
Making up for deficiencies in your own artistic and linguistic skills, and getting easy starting points for coding solutions.
Emergent behaviour can be useful for coming up with new ideas you weren’t expecting, and for finding areas to explore.
Yeah, that’s been a problem since language; or, if you want something closer to the topic at hand, since the printing press.
so does the fucking internet.
chad.jpg
You actually can, and you should be. And the process is not destructive, since you can always undo in tools like Cursor, or discard the changes in git.
Besides, you can steer a good coding LLM in the right direction. The better you understand what you are doing, the better.
You misunderstood: I wasn’t saying you can’t Ctrl+Z after using the output, but that the process of training an AI on a corpus yields a black box. That process can’t be reverse engineered to see how it came up with its answers.
It can’t tell you how much of one source it used over another. It can’t tell you what its priorities are in evaluating data… not without the risk of hallucinating on you when you ask it.
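For what it’s worth, this is easy to see for yourself. A minimal illustration, assuming `transformers` and an example checkpoint (`distilgpt2` here): everything a finished training run hands you is a pile of named weight tensors, with no record of which training document shaped which parameter.

```python
# Minimal illustration of the black-box point: a trained checkpoint is just named
# weight tensors. Nothing here attributes any parameter to any training source.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("distilgpt2")  # example checkpoint

for name, tensor in list(model.state_dict().items())[:5]:
    print(name, tuple(tensor.shape))
# Prints entries like "transformer.wte.weight (50257, 768)". Asking the model itself
# how much it leaned on one source versus another just produces more generated text,
# with the usual risk of hallucination.
```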
Out of legit curiosity, how many models do you know of that are trained exclusively on public domain data and are actually useful?