# AI Drone That Killed Its Operator: Truth or Fiction

Illustrative photo: MQ-9 Reaper. The exact type of the drone involved in the "experiment" wasn't disclosed / Photo credit: US Department of Defense

Or: should humanity be concerned about an AI takeover in the near future?

The story of a drone that decided to kill its operator during artificial intelligence (AI) tests in the United States has gone viral in the media: it illustrates the threat posed by rapid technological advancement and reads as if taken straight from The Terminator.

Here's how the story was born: at the FCAS 23 summit held by the Royal Aeronautical Society in London this May, Colonel Tucker Hamilton, chief of AI test and operations in the U.S. Air Force, said that during an experiment an AI-powered drone tried to destroy its remote pilot because the latter was interfering with its mission. When forbidden to do so, the drone attempted to attack the communication station to cut off the operator's control.

Read more: In russia, the Sirius UAV Took to the Air for the First Time (Video)
Kratos XQ-58A Valkyrie drone during a test of unmanned aerial vehicle capabilities for the USAF / Illustrative photo credit: U.S. Air Force

After the report spread through the media, the Royal Aeronautical Society hastened to clarify that it described not a real situation but a "thought experiment." Moreover, the story did not appear in the official report: it was a hypothetical scenario mentioned in a talk on the sidelines of the conference.

The U.S. Air Force also stated it never intended to conduct such experiments, and that no simulations were run either, reminding that it would be unethical and irresponsible to use artificial intelligence that way.

On the part of Defense Express, we should highlight that this is indeed a matter of ethics and responsibility: the technological level allowing a machine to make its own decisions about using weapons was reached decades ago, and it doesn't even require AI. In fact, the first mass-produced defense solutions of this kind have long been in active service.

This is how the US Air Force Research Laboratory envisions the Skyborg – an AI-capable future drone / Render image by the AFRL

In particular, the well-known Phalanx CIWS can operate in manual, semi-automatic, or fully automatic mode. The last one means the weapon system will destroy any aerial target that enters its engagement envelope without asking a human operator for permission, because that is the only condition under which it can do its job: intercepting anti-ship missiles. With an enemy missile incoming, there may be no time to wait for the operator to authorize fire. That is why, when such an attack is expected, the operator switches the system to automatic mode, and from then onward everything is up to the algorithms.
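To make the distinction between the modes concrete, here is a minimal sketch in Python of how such a fire-control mode gate might look. The names and the logic are illustrative assumptions, not real Phalanx software:

```python
from enum import Enum, auto

class FireMode(Enum):
    MANUAL = auto()      # operator aims and fires
    SEMI_AUTO = auto()   # system tracks, operator confirms each engagement
    AUTO = auto()        # system engages on its own, no confirmation asked

def should_engage(mode: FireMode, target_in_envelope: bool,
                  operator_consent: bool) -> bool:
    """Decide whether the gun opens fire on a tracked aerial target."""
    if not target_in_envelope:
        return False
    if mode is FireMode.AUTO:
        # In automatic mode the algorithm alone decides: a sea-skimming
        # missile leaves no time to wait for a human.
        return True
    # Manual and semi-automatic modes keep a human in the loop.
    return operator_consent

# Automatic mode fires regardless of operator input; semi-auto does not:
print(should_engage(FireMode.AUTO, target_in_envelope=True, operator_consent=False))       # True
print(should_engage(FireMode.SEMI_AUTO, target_in_envelope=True, operator_consent=False))  # False
```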

The same kind of solution is implemented in some modern air defense missile systems, including the Patriot PAC-3, which can also operate in a fully autonomous mode. Once again, this is the only way it can intercept ballistic missiles approaching at a speed of several kilometers per second: human reaction is simply too slow to make decisions for this sort of task.
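A rough back-of-the-envelope calculation shows why. The range and speed below are illustrative assumptions, not real Patriot parameters:

```python
# Illustrative numbers only: a ballistic target closing at "several km/s"
# leaves mere seconds between radar detection and impact.
detection_range_km = 20.0  # assumed distance at which the target is detected
target_speed_kms = 2.0     # assumed closing speed of the ballistic target

window_s = detection_range_km / target_speed_kms
print(f"Time from detection to impact: {window_s:.0f} s")  # -> 10 s
```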

In other words, in terms of technology, machines have long been capable of using weapons on their own. The issue is whether the same algorithms should be applied to other functions as well. The key condition for all such systems is the Human-in-the-Loop (HITL) principle, which means no weapon can function entirely without a human. Artificial intelligence can take on target acquisition, calculations, and so on, but the final decision to use the weapon is still up to a human to make, even if it's an all-in-one permit that looks like nothing more than an "OK" button on the screen.
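In code, the HITL principle might look like the following hypothetical sketch: the automation acquires, classifies, and prioritizes, but weapon release is gated on an explicit human decision. All names and the threshold are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str  # automated classifier output, e.g. "ballistic missile"
    threat_score: float  # 0..1, computed by the automation

def request_weapon_release(track: Track) -> bool:
    """Automation filters and proposes; only a human releases the weapon."""
    if track.threat_score < 0.9:
        return False  # automation discards low-threat tracks on its own
    # The final decision stays with the operator: the all-in-one "OK" button.
    answer = input(f'Engage track {track.track_id} ({track.classification})? [y/N] ')
    return answer.strip().lower() == "y"
```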

MQ-9 Reaper control station / Illustrative photo credit: US Department of Defense

Overall, the story of a drone attempting to destroy its operator looks more like a warning about the consequences of removing the human factor from the decision-making loop. Will this ever actually happen? Almost certainly yes: the pace of AI development is breakneck, and the number of not only terrorist groups but also whole terrorist states is an objectively threatening factor.

Read more: How Patriot Works When Intercepting Ballistic Targets