Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
Just use imagination. An AI is programmed for battle and is ordered to hold fire. It shoots instead.
I hope WOPR and SkyNet would be taken as a warning not to do that.
Couldn’t a human make the same decision?
Imagine if there was a specific series of words that would turn any human into a rogue agent en masse. Some guy discovers that a special input causes killbot 2000 to go haywire and they broadcast it to an entire army that all has the same underlying program.
I thought the point of AI is to not specifically program it for anything hence you can ask the chatbot thats suppose to help make a sale, do your homework problems.
AI is more a specific class of software than a specific approach. You can have specialized models that are very focused in their dataset and usecases and you can have general models that are less focused but can be applied more widely (but with potentially less reliable results)