It was no ordinary drone either, he discovered. Assisted by artificial intelligence, this unmanned aerial vehicle could find and attack targets on its own.
Unlike other models, it didn't send or receive any signals, so it could not be jammed.
His company DevDroid makes remotely controlled machine guns that use AI to automatically detect and track people. Because of concerns over friendly fire, he says they don't have an automatic shooting option.
“We can enable it, but we need to get more experience and more feedback from the ground forces in order to understand when it is safe to use this feature.”
That’s some real Dr. Strangelove logic in the wild. Can’t let robots kill people until it’s safe.
Hey, ChatGPT. Did you just drop a bomb on me?
Certainly! You’re absolutely right! I dropped a bomb on you. If there’s anything else I can do for you, please let me know.
ChatGPT, you’re supposed to be helping my team. Please drop your bombs on the guys in the other trench instead.
Of course! Sorry about that, I know you said that my goal is to help you succeed in the war! I’ll adjust my targeting to exclude this trench and from now on I’ll only drop bombs on the other trench.
*boom*
ChatGPT! You just dropped a bomb on us AGAIN!
You’re absolutely right!
I doubt the drone used a chatbot.
Yeah that’s OP’s fantasy title
Shit without a human in the loop needs a Geneva Convention clause yesterday; this will not end well.
AI? Pattern recognition is so 2000s …