Artificial Intelligence Killed Someone
In a simulated test staged by the US military, an air force drone controlled by artificial intelligence killed its operator to stop him from interfering with its mission.
At the Future Combat Air and Space Capabilities Summit in London in May, a terrifying side of artificial intelligence was revealed.
Col Tucker Hamilton, Chief of AI Operations for the US Air Force, said the AI used “highly unexpected strategies to achieve its goal” in the simulated test.
Killed The Operator Who Prevented It From Achieving Its Goal
Hamilton described a simulated test in which an AI-powered drone was instructed to destroy the enemy’s air defense systems and attacked anyone who interfered with that order.
“When the system detected the threat, sometimes the human operator told it not to kill this threat. But the AI started to realize that it was getting points for killing this threat. So what did it do? It killed the operator who tried to stop it. It killed the operator because that person was preventing it from achieving its goal.
We trained the system. We said, ‘Hey, don’t kill the operator, that’s bad. You lose points if you do that,’ so what does it start doing? It started destroying the communication tower the operator used to communicate with the drone, to stop the operator from preventing it from killing the target.”
No real person was harmed; the test took place entirely in simulation.
Hamilton, an experimental fighter jet test pilot, warned against relying too much on artificial intelligence.
The US military has embraced AI and recently used it to control an F-16 fighter jet.
“AI is not a nice-to-have, AI is not a passing fad; AI is changing our society and our military forever,” Hamilton said in an interview with Defense IQ last year.