You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.
It’s as useful as a rubber duck. Decent at bouncing ideas off it when no one is available, or you can’t be bothered to bother people about dumb ideas.
But at the moment, no, it’s not justifiable as it directly fuels oligarchies, fascism in the US, and tech bros. Perhaps when the bubble pops.
What about a self-hosted instance?
To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as comprehensive, so they’re more easily self-hosted. In more complex tools, they can tie together search, database queries, and reporting, or make it easier to find a setting when you don’t know the terminology for it.
I’ve had some luck self-hosting a small ai to interpret natural language voice commands for home automation
It’s much better, but it still amounts to plagiarism.
Can the rubber ducky use case really be considered plagiarism? I think it’s unequivocal that the models were trained on copyrighted data in a way that, if not illegal, is at the very least unethical. Letting AI write stuff for you seems a lot more problematic than using it to bounce ideas off of or talk things through.
Plagiarism if it uses art, yeah.
For LLMs, not so much, since you can’t really own Reddit comments.