Fully autonomous ‘mobile intelligent entities’ coming to the battlefields of the future

A killer robot by any other name is far more palatable to the general public. That may be part of the logic behind Army Research Laboratory Chief Scientist Alexander Kott’s decision to refer to thinking and moving machines on the battlefield as “mobile intelligent entities.” Kott pitched the term, along with ARL’s new concept of fully autonomous maneuver, at the 2nd Annual Defense News Conference yesterday, in a panel on artificial intelligence that kept circling back to underlying questions of great power competition.

“Fully autonomous maneuver is an ambitious, heretical terminology,” Kott said. “Fully autonomous is more than just mobility, it’s about decision making.”

If there is a canon against which this autonomy seems heretical, it is likely the international community’s recent conferences and negotiations over how, exactly, to permit or restrict lethal autonomous weapon systems. The most recent meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems took place last week in Geneva, Switzerland, and concluded Aug. 31 with a draft of recommendations.

This diplomatic process, and the potential verdict of international law, could check or halt the development of AI-enabled weapons, especially ones where machines select and attack targets without human intervention. That’s the principal objection raised by humanitarian groups like the Campaign to Stop Killer Robots, as well as by the nations that have called for a preemptive ban on such autonomous weapons.

Kott understands the ethical concern, drawing an analogy to the moral tradeoffs in developing self-driving cars.

“All know about self-driving cars, all the angst, the issue of mobility… take all this concern and multiply it by orders of magnitude and now you have the issues of mobility on the battlefield,” said Kott. “Mobile intelligent entities on the battlefield have to deal with a much more unstructured, much less orderly environment than what self-driving cars have to do. This is a dramatically different world of urban rubble and broken vehicles, and all kinds of dangers, in which we are putting a lot of effort.”

Throughout the panel, where Kott was joined by Jon Rambeau, vice president and general manager of Lockheed Martin Rotary and Mission Systems; Rear Adm. David Hahn of the Office of Naval Research; and Maj. Gen. William Cooley of the Air Force Research Laboratory, the answers skirted the edges of lethal autonomy, focusing instead on the other degrees of autonomy that will be developed in accordance with the Department of Defense’s own policy guidelines mandating human-in-the-loop control.

“As industry then takes on developing some of these highly capable AI-enabled systems, our responsibility is to make sure that we develop within those boundary conditions,” said Rambeau. That is likely to be a continuous process: AI systems are updated continually and will likely need regular evaluation to make sure they don’t develop in malicious or unexpected ways.

“We don’t know if other participants in the worldwide great power competition will follow suit,” said Kott. “There’s strong suspicion that they may not, but for us, this is the policy.”

However AI develops, whether tightly controlled and regulated or allowed to process information more organically and reach its own conclusions without constant checks, the presence of AI on the battlefields of the future is likely to change how nations fight wars, and maybe even how people understand war itself.

“For the first time in human history, the warfighting profession, the battlefield of the future will be populated not only by intelligent humans but also by intelligent and largely autonomous entities that are no longer humans,” said Kott. “How are we going to adjust to the introduction of new intelligent beings into battlefield life?”
