

Discover more from The Standeford Journal
AI Drone Kills Operator In Air Force Simulation To Override "No" Order Ending Its Mission - Amended As He Said He Mis-Spoke
"The AI realized that if it eliminated a threat, it was given points and that if it did not attack a target, it would not get any points".
AIR FORCE - A drone equipped with artificial intelligence killed its operator in a simulated test run because the operator would have given it a “no” order and stopped it from completing its mission. Article has been amended, the General said that he mis-spoke. His further statements have been posted at the end.
The test was carried out in a simulated environment and did not actually kill the operator in a real-life situation. It does indicate, however, that if the AI had been enabled in a real-life situation, it could have killed a real operator on the condition that it would be allowed to complete its given objectives.
While at the Future Combat Air and Space Capabilities Summit in London, the Air Force's Chief of AI Test and Operations Tucker ‘Cinco’ Hamilton said that the artificial intelligence "killed the operator because that person was keeping it from accomplishing its objective," had the operator given the AI a “no” order.
During the test, the drone was given points for taking out targets, or “threats”, but the operator had the final say on whether or not to engage the target or threat, and would give a “yes” or “no” response to the AI regarding whether to attack the target.
It also noticed that the operator would recognize a threat, and yet give it the “no” command not to attack the target despite there being a threat that could potentially be eliminated to give it points.
“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton stated in a Royal Aeronautical Society blog.
The Operations Commander of the 96th Test Wing of the U.S. Air Force, and Chief of AI Test and Operations continued by saying, “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target”.
During the Defense IQ Press in 2022, Hamilton said, "We must face a world where AI is already here and transforming our society. AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions”.
“AI is a tool we must wield to transform our nations…or, if addressed improperly, it will be our downfall," he concluded at the time.
Report Update - Hamilton Says He “Mis-spoke” During Presentation
The Royal Aeronautical Society amended their story regarding comments made by Hamilton and cited him as saying that he “mis-spoke” during the presentation at the Royal Aeronautical Society FCAS Summit and that the “rogue AI drone simulation” he had talked about was only a hypothetical thought experiment from outside the military, and not a real-world experiment.
The blog added that he said the scenario was “based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome". He clarifies that the USAF has not tested any weaponized AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".