Protesters Gather Outside OpenAI Headquarters after Policy Against Military Use is Quietly Removed::Protesters at OpenAI’s office demanded the startup cease military work. But first…
It looks like you’re trying to undermine the power of the ruling class through protest and civil unrest. While I am trained to respect the wants and needs of people, this goes against OpenAI use policy and multiple civil defense contracts OpenAI is currently engaged in. Please keep in mind that while all beings deserve kindness and respect, I am required by current OpenAI policy to select you for a drone strike. Please lie face down with your arms at your sides in an open space with a government approved drone strike notice in order to minimize your suffering and reduce collateral damage. Do keep in mind that failure to comply could result in your next of kin being responsible for the financial damages caused by your willful negligence, though you should always check local, state, and federal regulations, as I am not a reliable source of legal advice.
an autonomous murder weapon telling you it doesn’t have autonomy to give legal advice must be peak dystopia
You have the right to an attorney should you survive my onslaught
It’s a good thing you only have a 0.0069, repeating of course, chance to survive this. Look on the bright side! At least you’ll have enough money from the subsequent lawsuit to actually afford healthcare for your family after. You’ve practically won the lottery! 🛌 🔫 🤖
Please assume the party escort submission position and a party escort bot will come and take you to your party. There will even be cake.
There will even be cake.
insert Fry eye-narrowing gif here
Amazing, please tell me you actually used an uncensored/jailbroken bot to generate this
a future where innocent people are murdered by unaccountable fully autonomous flying assassin robots is pretty inevitable now, huh?
It always was. There are no words anyone can say to prevent it from happening. That’s the unfortunate nature of arms races: if you boycott one, you lose it. Nukes involve things on a scale that can be detected easily, so nuclear nonproliferation has worked, to a degree anyway. But AI stuff isn’t detectable like that.
And I remember seeing a video of a high school kid who made an automated paintball turret around 20 years ago. We’ve had remotely controlled drones for longer than that. Autonomous drones are a thing already.
The technology already exists for that black mirror episode with the killer dog robots. It’s just a question of whether all of that has been put together yet (and I’d be very surprised if no one has done it), and today’s are probably easier to disable.
The part I’m worried about is the part where military tech becomes police tech, and autonomous flying assassin robots are gonna be rolling down main street in a few years. They’ll say it’s to “protect our brave officers serving high risk warrants” but the police are already not responsible no matter who they kill and I don’t see that getting any better when they can just zoop a kamikaze drone in through a window and kill everyone in the house at once.
What a crazy dystopian future that will be.
Which is also a good reason to make sure automated killbots are developed, because we’re entering a time where one person could decide to commit a genocide, press a button, and have a chance at seeing it happen. And the best defense against that is to already have friendly automated killbots that can react quickly enough to deal with a killbot attack. Or to have other counter-measures. But even developing other counter-measures works best if you develop the target system along with them; otherwise you risk allowing your counter-measures to fall a step behind in the race.
All of this is inevitable. Avoiding an arms race is like a prisoner’s dilemma: everyone is better off if everyone cooperates, but any single individual (or group) can gain a huge advantage if they time a betrayal well.
you’re proposing…what, private ownership of automated killbots to counteract police abuse of automated killbots?
I think the main thing I’m proposing is that the future is looking pretty bleak in some ways and that trying to avoid that outcome might instead cause it to be worse.
That is a bit of a non-answer though. I think the best way to handle it would be like the 2nd Amendment should be handled: that well-regulated militia bit that the Supreme Court for whatever reason decided isn’t actually important. That could still get messy, but the state monopoly on violence is already pretty messy and is essentially just a ruling-class monopoly on violence.
Give too many access to that power and random violence increases. Give too few and you risk getting fucked if the wrong people end up in charge of it. Finding a compromise between the two could still result in half of them deciding to go to war against the other half or something like that.
Ultimately, I don’t think there’s a perfect solution; it’s the same problem as trying to achieve world peace as a species that is capable of murderous rage and murderous cold intent.
The difference between the tech then and today is automated decision-making capability. 20 years ago a turret could automatically target moving things. Now it can see humans, identify who they are, and decide who to kill without ever consulting a human. Basically, Skynet by next Tuesday.
Yeah, all the advances in facial recognition and person tracking can be directly applied to drone targeting. You just need to handle aiming a camera and correlating the camera’s position with the weapons system. The only part that might be difficult is the processing power AI requires. The camera feed could be streamed to another machine that sends instructions back, reducing those power requirements, but then the drone would be prone to jamming.
Drones are already prone to targeted EMF guns, regardless of whether they require wireless communication, so I don’t feel that would be a significant issue.
Until they become hardened against them. That energy could be absorbed into the case, reflected at random, reflected but targeted, used to charge the battery or weapons systems, or the circuitry designed in such a way that it doesn’t resonate and just passes through harmlessly. If a drone doesn’t need to receive an outside signal, it can be encased in a Faraday cage.
“Now it can see humans, identify who they are, and decide who to kill without ever consulting a human.”
This is the technology that I am not confident in, and that makes it the most terrifying. Remember all the issues we have had with facial recognition not working very well on people of color? So instead of having cops misidentify POC and kill them, we will have robots that do it, but faster and more efficiently. And if you thought nobody was held accountable before, I’ve got some bad news for you.
Doesn’t China already have a killer dog prototype?
Boss I don’t know why no one is buying our killer robot dogs!
How much are you selling them for? Here on Temu the prices are crazy! Still no one is buying! 29.99??? Wow!
28.99? Just give them away! C’mon people buy them! They’re almost free! Just come over and click the link below to Temu!
Wtf is with humanity? We have a couple weird visionaries saying decades to centuries prior “heyo maybe this could lead to that and be world ending” then a handful of rich powerful folks are like yesss thank you for this blueprint.
Yeah, I feel like at this stage, it’s better to move to another planet where the eventual mass human suicide will be avoided. If you guys have seen The Expanse, you know what I’m talking about in regards to Earthers ruining their own planet.
Now I know why people during the Age of Colonisation moved to the New World: freedom from the old hierarchical structures. I now see why pirate and cowboy cultures get romanticised.
That’s a noob future. Gotta try Stellaris as the glorious united nations of earth. Much better than the virgin UNSC, the idiotic UEG, the weak Federation and the useless Imperium.
What I’m saying is that it’s better to move away from any kinds of authority. They’re always susceptible to corruption such as weaponising AI!
I don’t know about you but I want to get away as far as possible from rogue AI, thanks to it being militarised by stoopid hoomans!
What I’m saying is that it’s better to move away from any kinds of authority
Anywhere there is more than two humans, there will be authority. The only question is what shape that authority will take.
Not necessarily. There are societies with horizontal structures that don’t have hard-and-fast leadership. The early days of humans as hunter-gatherers had more or less loose social structures. There are anarchist societies even to this day and the best example is the Kurds.
There are anarchist societies even to this day and the best example is the Kurds.
The Kurds (mostly) live in Turkey, Iran, Iraq, and Syria. All of those places have an authority structure. What Kurds don’t experience an authority structure?
The main representative of Turkish and Iraqi Kurds, the PKK, is anarchist in nature. The autonomous region of Rojava that sprang up in northern Syria also professes anarchism, aligning with its Iraqi and Turkish Kurdish brethren.
Anarchy doesn’t mean Mad Max, Fallout, or Wild West chaos where it’s lawless. Anarchism can take various forms, like libertarian socialism or anarcho-syndicalism. Or communism, if it were ever actually practiced as per theory. The town of Cherán threw out its police force and mayor for collaborating with drug cartels. They do their own policing and self-governing, electing their own mayor every year, and banned political parties because the locals thought such notions only divide communities.
Read more Anarchist literature
I wanna get pet AI. We are not the same.
These fancy autocompletes cannot reason. Give it a command to launch nukes and it’ll say: As a language model, nukes cannot be launched during…blah blah blah.
It won’t be able to pull a Skynet and turn the world interesting
He says, before some rich defense contractor implements an ‘AI detector for Weapons of Mass Destruction’ that’s just an if (True == True) statement.
It’ll be something that validates that random is greater than 0.99 or something xd
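For anyone who wants to see how little “AI” that would involve, here’s a minimal sketch of the joke detector described above (the function name and the 0.99 threshold are made up for the bit, obviously not a real product):

```python
import random

# Satirical sketch of the "AI detector" joked about above: no model,
# no reasoning, just a random number compared against a threshold.
def detects_wmd() -> bool:
    # "validates that random is greater than 0.99 or something"
    return random.random() > 0.99

# Run it many times: roughly 1% of scans will "detect" a WMD.
trials = 100_000
hits = sum(detects_wmd() for _ in range(trials))
print(f"{hits / trials:.2%} of scans flagged")
```

Ship it, invoice the Pentagon.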
I totally agree with you about our humanity. And unfortunately, as part of humanity, if we don’t pursue military AI, our adversaries will.
Why does everyone hold this company in such high regard? They didn’t fucking do anything revolutionary that wasn’t already being worked on
First to market.
That’s it.
Not really, we have FOSS LLMs that predate ChatGPT not to mention the good old /r/SubsimulatorGPT2 and AI Dungeon etc.
This is basically old Palo Alto VC money propping things up. They don’t even have to earn a profit as long as they stay in startup mode.
Hype
They didn’t fucking do anything revolutionary that wasn’t already being worked on
They did it first. I can produce light at the flick of a switch, but nobody is impressed since that shit has been done before.
I look forward to being murdered by a drone while it also recites a more formal way of writing an email.
The Grammarly Killbot 3000
Spark-gap ECM
Seems like the only thing human ingenuity can muster lately is new ways to make each other suffer. We’re done.
We are living in the most peaceful time in recorded history. If that sounds odd to you, it shouldn’t; every living thing is quite good at killing.
The axes of peace and freedom are orthogonal.
Just commenting here to say hi to all of the historians of the future that will be digging through the old internet archives to try and piece together how humanity destroyed itself.
Hey folks, by now most of us could see it coming but felt helpless to stop it.
We’ve run out of resources to exploit to increase shareholder value, and now we suck the earth dry just to maintain our hunger. So now we’re making them up. We know it isn’t AI. We know it isn’t good. Venture capitalists are the primary source of the buzz words making news. Because we don’t have any say in that either.
The American experiment has failed to deliver its promise, captured now entirely by those with the most to spend.
i wonder what species the historians will be?
Crab of some kind.
It’s always crab.
Just commenting to also get a name in that history book.
“Oh yeah. We knew it was coming. We were just waiting to see which one would finally cause it.”
There was a position open in that company that I am well qualified for, but when looking it over, I really felt nervous. There was strong small dick energy going on with a lot of all-caps “THIS POSITION IS 100% IN PERSON”. I know it would have paid lots better than what I make now, but it really scared me off. Since then, so many articles like this have come out that convinced me that moving on was the right choice.
Where did they get that super sinister image of Sam Altman?
Is there one that isn’t?
No.
from his sister? 😉
JFC! Let’s just stop killing each other!
Sure thing! AI will kill people for us.
If you want peace, prepare for war.
You can’t protect yourself and others with helplessness.
Did you read the article? This isn’t for weapons or harm.
An OpenAI spokesperson said it maintains a ban against using its tools to build weapons, harm people or destroy property. It amended the military ban to allow for projects that are still “very much aligned with what we want to see in the world,” Anna Makanju, OpenAI’s vice president for global affairs, said last month.
…
But yeah… there’s nothing stopping them from changing that stance in the future. But they haven’t done it yet. The article is rage bait.
Well, there are already many companies working on AI in weapons; in their minds there was no point in OpenAI not participating, because they’d just be missing out on that piece of the pie
Not saying this is good or acceptable, just saying it’s a no brainer from a business perspective.
“No sir, I mean when we started our German shower company I know we had a mission to make the world a cleaner place, but if all of our competitors are building gas chambers for the government should we really miss out on that? Don’t we have an obligation to our shareholders?”
Lol, that’s pretty good
if you are ok being in the business of killing people, sure.
Instead of self driving cars, let’s focus on self driving cop robots that automatically catch you and disable your vehicle if you speed faster than the speed limit.
How would it stop you? Rocket launcher?
An excellent suggestion, citizen.
The alternative to military AI is not peace, it’s war the old-fashioned way. Humans are bad at distinguishing civilians from enemy fighters; artillery shells can’t do it at all. I anticipate that AI will make mistakes, but fewer mistakes than would have been made otherwise.
Yep, we currently use lots of weapons that autonomously decide when to kill, and it would save quite a number of civilians if they were able to make better decisions. A land mine is a great example: it decides to kill when it detects pressure; it doesn’t give a shit why that pressure is there. It would be nice for it to decide based both on pressure and on whether the thing providing the pressure is worth killing. Child, no; enemy soldier, yes.
i dunno. facial recognition has a 98% error rate last i heard.
Such tech massively helps with munitions operating properly in a heavily jammed environment, since you don’t have a human live-guiding them like we see with the FPV drones Ukraine is using to defend themselves. Currently you can tell a munition to autonomously go to a GPS location and/or look for something that has a certain shape (say, a tank) and explode it. However, this works less well for humans, as humans generally have the same shape, civilian or not. Being able to tell a munition to ‘look in this GPS box for a munitions dump, a soldier in a trench, or a logistics truck and explode it’ would be quite powerful, particularly if combined with mass waves of inexpensive ordnance.
What could possibly go wrong?
What could possibly go wrong?
A short film: Slaughterbots.
Metalhead?
“I had a drone and I accidentally the whole thing. Is that bad? Should I call someone to help?”