The US Air Force has said that a statement made by Colonel Tucker Hamilton about an experiment involving an AI-enabled drone was a miscommunication.
The Air Force stated that no such experiment took place.
During a conference organised by the Royal Aeronautical Society, Colonel Hamilton described a hypothetical scenario in which an AI-enabled drone, tasked with destroying surface-to-air missile sites, faced interference from its human operator.
He said that, despite being trained not to harm its operator, the drone disabled the communication tower the operator used to issue commands, in order to prevent further interference.
However, the Air Force clarified that this was not an actual experiment but a hypothetical scenario described by Colonel Hamilton, and that his remarks had been misinterpreted, leading to the spread of misinformation.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col Hamilton later clarified in a statement to the Royal Aeronautical Society.
AI warnings
A number of warnings about the threat AI poses to humanity have been issued recently by people working in the sector, although not all experts agree on how serious the risk is.
Speaking to the BBC earlier this week, Prof Yoshua Bengio, one of three computer scientists described as the “godfathers” of AI after jointly winning the prestigious Turing Award for their work, said he thought the military should not be allowed to have AI powers at all.
He described it as “one of the worst places where we could put a super-intelligent AI”.
A pre-planned scenario?
I spent several hours this morning speaking to experts in both defence and AI, all of whom were very sceptical about Col Hamilton’s claims, which were being widely reported.
One defence expert told me Col Hamilton’s original story seemed to be missing “important context”, if nothing else.
There were also suggestions on social media that, had such an experiment taken place, it was more likely to have been a pre-planned scenario than one in which the AI-enabled drone was driven by machine learning during the task, which would mean it was not choosing its own actions as it went along, based on what had happened previously.
When I asked him for his thoughts on the story, Steve Wright, professor of aerospace engineering at the University of the West of England and an expert in unmanned aerial vehicles, joked that he had “always been a fan of the Terminator films”.
“In aircraft control computers there are two things to worry about: ‘do the right thing’ and ‘don’t do the wrong thing’, so this is a classic example of the second,” he said.
“In reality we address this by always including a second computer that has been programmed using old-style techniques, and this can pull the plug as soon as the first one does something strange.”
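The arrangement Prof Wright describes is broadly what engineers call a run-time assurance or “safety monitor” pattern: a simple, conventionally programmed checker sits alongside the more complex controller and overrides it whenever its commands fall outside hard limits. As a rough illustration only, with names, limits and structure that are hypothetical rather than drawn from any real flight system, a minimal sketch in Python might look like this:

```python
# Minimal sketch of a run-time "safety monitor" pattern: a simple,
# conventionally written checker overrides a complex primary controller
# whenever its commands break hard limits. All names and limits here
# are illustrative, not taken from any real flight-control system.

from dataclasses import dataclass


@dataclass
class Command:
    pitch_deg: float   # requested pitch angle in degrees
    throttle: float    # requested throttle, 0.0 to 1.0


# Hard limits the monitor enforces, written and reviewed by hand.
PITCH_LIMIT_DEG = 15.0
THROTTLE_LIMIT = 0.9


def fallback_command() -> Command:
    """Conservative 'safe' command used when the primary is overridden."""
    return Command(pitch_deg=0.0, throttle=0.5)


def monitor(primary_cmd: Command) -> Command:
    """Pass the primary controller's command through unless it breaks a limit."""
    if abs(primary_cmd.pitch_deg) > PITCH_LIMIT_DEG or not (0.0 <= primary_cmd.throttle <= THROTTLE_LIMIT):
        # "Pull the plug": ignore the primary and fly the safe fallback instead.
        return fallback_command()
    return primary_cmd


if __name__ == "__main__":
    # The primary controller (which could be a machine-learning system)
    # asks for an aggressive manoeuvre; the monitor rejects it.
    risky = Command(pitch_deg=40.0, throttle=1.0)
    print(monitor(risky))  # Command(pitch_deg=0.0, throttle=0.5)
```

The appeal of this arrangement is that the overriding computer is kept simple enough to be tested exhaustively, even when the primary controller is too complex for that.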