I think the people who will “lose out” include all but about 50 people, then those 50 later. Look, it is awesome that this thing can find bugs and help complete code, but the way it is being made, it is actively trying to destroy society, and those making it are marketing that as a feature while they burn down the forests and evaporate all the fresh water.
We need a better rollout plan or we’ll have bug-free browsers as consolation for most of us dying. There are too many of us now to just have no solution or mitigation plans for runaway resource consumption and carbon release.
How so?
The owners of the companies that control the largest AI models go on TV as often as they can get a reporter to point a camera at them, and almost every time they claim these tools will be the end of work for most humans, while offering no solutions for that dramatic change.
So, in other words, they claim the AI tools will rapidly destroy one of the bigger underpinnings of Western society, and offer nothing to put in its place other than some half-assed UBI suggestions. If you take millions of people’s jobs away in a short time, that’s called a depression, and if those jobs are never coming back, that’s the end of that society.
If that is where we are destined to go, doing so without a plan for what to do about the masses of unemployed working-age people will lead to global suffering, death, riots, and warfare. Rather than gleefully floor it over a cliff, perhaps we can take the reins from the sociopathic tech bros and try to gracefully migrate to a post-work society without most of us having to die.
Note that the previous paragraph is for the sake of the debate; I do not actually believe that LLMs will meaningfully disrupt global economics over the long term, once the vast, should-be-illegal money-duplicating scam the AI companies and Nvidia are engaged in is put to a halt.
Those issues are something that societies and democratically elected representatives should work out, not some billionaires. The problem seems to be that some people with a good thing going don’t want change, even change that benefits them personally. They prefer the status quo over something that benefits everyone (but benefits them relatively less).
Indeed, but let me throw a wild hypothetical out there: what if the billionaires use their billions to pay that society’s elected officials not to fix these issues, and then, when those officials are voted out, they pay the new ones?
Is it the billionaires’ job to undermine the society they’re in? You asked how they’re undermining us, and I provided some of the many valid answers. If it’s society’s job to fend off constant attacks from billionaires, I think it is stupid to keep doing that rather than take away the thing they’re using to repeatedly violate it: their excess money.
I agree. Now note how far away from AI this is.
If you ask clarifying questions for additional context, it seems strange that you would be surprised to hear clarification and additional context.
I did ask for more info about how AI is “destroying society”.
General_Effort, it’s pretty clear that you are an AI evangelist, so you should know this already. But people who are genuinely unfamiliar should look at the creepy words of Anthropic ally, Palantir.
Palantir’s ‘manifesto’ has been described as an ‘AI-driven threat to humanity’s existence’ and ‘technofascism’.
Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race
Leaked: Palantir’s Plan to Help ICE Deport People
That’s one company, not AI as a whole, and its services are obviously much in demand by our democratically elected leaders.
Not only are they the company you decided to post about here, but they are the company people often claim is the ethical one. OpenAI and X are worse.
You can’t just stick your head in the sand when you see inconvenient truths.