
Lethal Autonomous Weapons & RMA
Issue Courtesy: CLAWS | Date: 30 Sep, 2017


Recently, the UN's Conference of the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWs), headed by Amandeep Singh Gill, an Indian diplomat. A group of CEOs and founders of companies working in Artificial Intelligence and Robotics promptly circulated an Open Letter welcoming the UN move and calling for an outright ban on Lethal Autonomous Weapons.[1]

Background

A killer robot is not needed to disrupt a country's financial networks or damage its communication channels; soft AI technologies have already advanced enough to carry out such tactical non-kinetic offensives. Algorithmic intelligence has improved to the level of defeating humans at a game like Poker, bluffing, approximating and optimally randomising its play. Given the gigantic data available through the space-cyber network of cognitive sensors and satellites, such intelligence could also be trained to run Command & Control and tell the difference between a border scare and actual aggression or infiltration. Airspace, maritime and industrial security already rely on a significant level of intelligence in physical devices.

It is true that it sometimes takes a long time before a technology is adopted for combat use; the matchlock, for example, was invented in the 16th century but saw large-scale acceptance only in the 19th century, once geopolitics demanded a gun that could fire repeatedly and rapidly and did not require two men to operate it.[2]

There are many enhanced weapon systems which are yet to see significant adoption by combat forces; however, it seems that geopolitics will come into play again as armies deploy weapon systems based on two major game changers of military power: Artificial Intelligence and Robotics. An example is the US Predator drone: initially an unarmed UAV used for surveillance, it was weaponized after 9/11 and helped take out hundreds of targets, including a television propaganda transmission.[3]

The Issue

There seems to be a notion that killing by a machine is worse than killing by a human because the machine lacks the moral agency of a human being. What such arguments fail to take into account is that most of the killing that happens in war is not the result of careful moral deliberation by soldiers; it is so situational, mechanical and training-dependent that it resembles the working of a machine in itself. Beyond that, fluctuations in the moral agency of humans have caused many a fratricide and innumerable war crimes.

While weapon systems are reaching higher degrees of autonomy, a fully independent killing machine is far from realised at present. But researchers in AI are continuously pushing the frontiers, and an integration of present technologies adapted for the battlefield can already be swifter and deadlier than its human counterparts.

These advances in Artificial Intelligence and related technologies such as Brain-Computer Communications are unequivocally going to change how we perceive the world and communicate with each other and with machines, and most certainly how we fight our wars. As proposed in a paper on Autonomous Military Technology: "If states seek only to maximize their non-military resources, trends in automation do not alter the frequency of war. And if they expect to go to war and expect military losses… increasing the level of military automation reduces battlefield casualties without changing civilian losses and without changing the size of nations' war efforts."[4]

The NATO guide on Autonomous Systems states that the history of disruptive innovations in warfare suggests that understanding how best to use a new technology is more important than developing the technology first, or even having the best technology.[5] The issue therefore seems to lie in the development of a theatre-independent kill chain and in man-machine interfacing at a doctrinal level.

Automatic versus Autonomous 

There is a perceptual gap in the understanding of how autonomous systems work, caused by the legacy of the traditional idea of machines as "automatic" systems producing fixed mechanical responses. What differentiates an automatic machine from an autonomous one is the cognitive ability of the latter.

The attributes of Mobile Intelligent Agents, such as goal-orientation, mobility, planning, reflection and cooperation, are what distinguish them from other types of software and hardware.[6] Here it must be understood that technology-based deployments are interoperable, meaning, in systems-engineering terms, that their components may find many use-cases elsewhere too. In the case of the military, the learning and intelligence that an autonomous machine garners in a field environment would greatly enhance the command and control interface back at the base.
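By way of illustration only, these attributes can be read as an interface contract rather than a fixed stimulus-response table. The sketch below is hypothetical Python; every name in it is invented here, not drawn from the cited literature.

```python
# A hypothetical sketch (all names invented for illustration): the agent
# attributes cited above - goal-orientation, planning, reflection,
# cooperation and mobility - expressed as an interface contract rather
# than as fixed stimulus-response rules.
from abc import ABC, abstractmethod
from typing import List


class MobileIntelligentAgent(ABC):
    @abstractmethod
    def set_goal(self, goal: str) -> None:
        """Goal-orientation: accept an objective to pursue."""

    @abstractmethod
    def plan(self) -> List[str]:
        """Planning: derive a sequence of steps towards the goal."""

    @abstractmethod
    def reflect(self, outcome: str) -> None:
        """Reflection: revise behaviour based on observed outcomes."""

    @abstractmethod
    def cooperate(self, peer: "MobileIntelligentAgent") -> None:
        """Cooperation: coordinate with other agents."""

    @abstractmethod
    def migrate(self, host: str) -> None:
        """Mobility: move execution to another host or platform."""
```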

Therefore, the deployment of Lethal Autonomous Weapons can be better understood as the cognification of already deployed weapon systems. Any and all cognitive systems face two broad classes of constraints on action:[7]

  • Law-driven constraints
  • Intent-driven constraints

While the first kind of constraint accounts for natural laws as well as architectural and design limitations, the second category points towards the skill and knowledge of the system. Therefore, before a fully independent LAW system takes any action to bring a change in its world-model, the predicted world-model can first be cross-checked against a world-model that is in accordance with the Commander's intent and approval.
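A minimal sketch of this cross-check, with every name invented here for illustration: an action proceeds only if its predicted world-model satisfies both the law-driven and the intent-driven constraints.

```python
# A minimal, hypothetical sketch of the intent cross-check described
# above; all names are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List


@dataclass(frozen=True)
class WorldModel:
    facts: frozenset  # propositions the system predicts will hold


Constraint = Callable[[WorldModel], bool]


def action_permitted(predicted: WorldModel,
                     law_constraints: List[Constraint],
                     intent_constraints: List[Constraint]) -> bool:
    """Permit an action only if its predicted world-model satisfies
    every law-driven AND every intent-driven constraint."""
    return (all(c(predicted) for c in law_constraints)
            and all(c(predicted) for c in intent_constraints))


# Commander's intent: no strike outside the designated engagement zone.
within_zone: Constraint = lambda wm: "strike_outside_zone" not in wm.facts

predicted = WorldModel(frozenset({"target_neutralised"}))
print(action_permitted(predicted, [], [within_zone]))  # True: may proceed
```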

As per US Army research, there are currently several levels of autonomy for unmanned systems, which should be adjustable as per mission needs (a schematic sketch in code follows the list):[8]

  • Remotely Operated: Least autonomous
  • Assisted Systems: Work parallel to and based on human inputs
  • Delegated: Some controls with human inputs, but without direct human supervision
  • Supervised: Extensive autonomy, but goals and mission directed and supervised by a human
  • Fully Autonomous: Self-driven; humans can intervene only in emergency situations
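The adjustability requirement can be made concrete with a short, hypothetical sketch: the same platform is dialled up or down the scale, and mission rules clamp whatever autonomy is requested. All names below are invented for illustration.

```python
# A hypothetical sketch of mission-adjustable autonomy levels, ordered
# from least to most autonomous per the list above.
from enum import IntEnum


class AutonomyLevel(IntEnum):
    REMOTELY_OPERATED = 1  # least autonomous
    ASSISTED = 2           # works in parallel with human inputs
    DELEGATED = 3          # some controls handed over, no direct supervision
    SUPERVISED = 4         # extensive autonomy, human-directed goals
    FULLY_AUTONOMOUS = 5   # humans intervene only in emergencies


def configure_for_mission(max_permitted: AutonomyLevel,
                          requested: AutonomyLevel) -> AutonomyLevel:
    """Clamp the requested autonomy to what the mission rules permit."""
    return min(requested, max_permitted)


# A tasking that permits at most SUPERVISED operation:
level = configure_for_mission(AutonomyLevel.SUPERVISED,
                              AutonomyLevel.FULLY_AUTONOMOUS)
print(level.name)  # SUPERVISED
```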

The matter of debate should be whether giving a machine a greater degree of autonomy in the use of force and violence would bring down the overall casualties of war and project stronger military power over the enemy. People often make the mistake of turning this into a man-versus-machine scenario, overlooking the fact that these LAWs will be deployed by men to achieve dangerous military objectives against other men whom they would otherwise have had to fight and kill themselves.

Man-Machine Combat Teaming

In the 18th century, Pierre-Joseph Bourcet conceptualized the war-machine as something which flows. This fluidity, he remarked, was essential and directly proportional to the machine's manoeuvrability. Fluidity in warfare comes from approximation that borders on randomisation, and this is where an army's intelligence amplification through man-machine teaming can give its combat effectiveness a shot in the arm.

Some important points are being missed in an all-out opposition to LAWs:

  • Robots need to be trained with humans in game-based tactical environments that fit modern-day warfighting needs.
  • No AI-based weapon system is built "ready-to-deploy"; each is built for a specific operating environment and, most importantly, with definite goals that it is trained to achieve.
  • Machines would produce faster decision loops and far better reaction time and aim than human soldiers, while also providing a 360-degree view of their environment, bringing in significant operational intelligence that could save many lives.
  • Meaningful human control still has to be worked out, especially in the context of hybrid networks, and better coordination strategies for Intelligent Agents need to be developed.[9]

Paul Scharre, who led the US Department of Defense working group that drafted the policies on autonomy in weapon systems, writes that "How militaries incorporate autonomous systems into their forces will be shaped in part by strategic need and available technology, but also in large part by military bureaucracy and culture". In the end, the grand notion of RMA boils down to bureaucracy, and unfortunately major overhauls are preceded by major failures. If the technology is not developed indigenously, technology acquisition generally turns into a time-consuming process, sometimes taking years and leaving combat forces at the disadvantage of not keeping pace with the latest technology.

Be it a machine, a platform or a programme, there is no doubt that well-integrated autonomous weapon systems can create a very high level of situational awareness at all levels of the chain of command. But whether they raise or lower the threshold of war, and whether they make war-fighting and conflict resolution more streamlined, are matters requiring more attention.

The Accountability Gap

The traditional legal framework is not suited to dealing with the bad actions of AI. It is suggested in many places that the responsibility for a mishap by an autonomous system could also be collective: charging states under international law, manufacturers under corporate law, and programmers and designers under criminal law for committing war crimes.[10] But such a strategy makes as much sense as punishing parents for a crime their child commits; of course they were responsible for raising the child to be a good human being, but if, away from home "in field conditions", the child learned some tricks and did the unspeakable, it is certainly not the parents who are to be punished.

The manufacturers and designers of autonomous systems are not responsible for the crimes of the machine or of its attackers; they are, however, responsible only for their own mistakes, such as the following (a minimal illustration of the first item follows the list):[11]

  • Hardcoded credentials in the system
  • Backdoors
  • Undocumented and insecure protocols
  • Weak access control
  • Poor authentication mechanisms
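To make the first item concrete, here is an illustrative sketch of a hardcoded credential against a safer alternative; the names, keys and environment variable are all hypothetical, not taken from any real system.

```python
# Illustrative only: the kind of manufacturer mistake the list above
# refers to. All names and keys are hypothetical.
import hmac
import os

# BAD: a credential baked into the firmware image can be recovered by
# anyone who extracts and inspects the binary.
HARDCODED_KEY = b"maintenance-backdoor-1234"  # flaw: hardcoded credential


def authenticate_bad(supplied: bytes) -> bool:
    return hmac.compare_digest(supplied, HARDCODED_KEY)


# BETTER: load a per-device secret provisioned at manufacture time from
# a protected store, and fail closed if it is absent.
def authenticate(supplied: bytes) -> bool:
    key = os.environ.get("DEVICE_AUTH_KEY")  # provisioned per device
    if key is None:
        return False  # fail closed rather than falling back to a default
    return hmac.compare_digest(supplied, key.encode())
```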

A fully autonomous machine will, however, be free from most of the above troubles, as the absence of human control also reduces the need for a high-speed secure communication channel. Accountability will then fall on the commander who deployed the machine; but since the actions of Intelligent Agents can be unpredictable, the matter would inevitably find itself in a court debating the military advantage and the operational details of the conflict.

It is nevertheless best that, even in the case of independent LAWs, there is a human in the loop, as machines are not the addressees of human law. The weapon system can execute its non-kinetic EW capabilities and other non-lethal kinetic capabilities on its own, but, as with today's UAVs, it should require a human operator to take any lives. One constraint that would arise here is whether or not the LAWs can act self-preservatively.
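A minimal sketch of such a human-in-the-loop gate, under the assumption (invented here) that actions are classed as non-kinetic EW, non-lethal kinetic, or lethal; only the last class blocks on human authorisation.

```python
# A hypothetical human-in-the-loop gate: non-lethal actions execute
# autonomously, lethal ones block until a human operator authorises them.
from enum import Enum


class ActionClass(Enum):
    NON_KINETIC_EW = "non-kinetic electronic warfare"
    NON_LETHAL_KINETIC = "non-lethal kinetic"
    LETHAL = "lethal"


def request_human_authorisation(action: str) -> bool:
    # Placeholder for a real operator-console / datalink round-trip.
    return input(f"Authorise '{action}'? [y/N] ").strip().lower() == "y"


def execute(action: str, action_class: ActionClass) -> None:
    if action_class is ActionClass.LETHAL:
        if not request_human_authorisation(action):
            print(f"HOLD: '{action}' not authorised, aborting.")
            return
    print(f"EXECUTE: {action} ({action_class.value})")


execute("jam hostile uplink", ActionClass.NON_KINETIC_EW)  # runs autonomously
execute("engage target", ActionClass.LETHAL)               # waits for a human
```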

Concluding Remarks

The scope of current research and development in autonomous systems spans a variety of fields such as ICT, Machine Learning, AI, Neuroscience, Cognitive Psychology, Signal Processing, and Systems & Control Theory, all very active fields in their own right, commercially and academically. What isn't clear, however, is whether an emergent system is yet dependable enough for high-stakes independent decision-making involving the prioritization, selection and attacking of human targets. But looking at the pace of research outputs, it certainly will be.

One way to begin the deployment of Lethal Autonomous Weapons is to limit them to non-human theatres such as space and the seas. LAWs can play a very effective role in Anti-Access/Area-Denial systems as well as in protecting strategic assets. However, more research needs to be done on information security, to keep interference from biasing these systems, as well as on drafting open integration guidelines and due-diligence requirements for manufacturers and commanders alike.

Efforts to assess the strategic implications of Lethal Autonomous Weapons are still ongoing, and governments have to watch carefully the degree of autonomy and independent manoeuvrability that weapon systems develop. But one thing is certain: these will keep adding to the historical increase in "the impersonalization of battle".

References 


1) An Open Letter to the United Nations Convention on Certain Conventional Weapons, https://futureoflife.org/autonomous-weapons-open-letter-2017/

2) "Nothing Wrong with AI Weapons", http://effective-altruism.com/ea/1dz/nothing_wrong_with_ai_weapons/

3) Peter W. Singer, Military Robots & the Laws of War, Brookings, February 11, 2009

4) Kyle Bogosian, Autonomous Military Technology and the Frequency of Interstate Conflict, August 2017

5) Autonomous Systems’ Issues for Defence Policymakers, Capability Engineering and Innovation Division, NATO

6) A. Karmouch, Mobile Software Agents for Telecommunications, IEEE Communications Magazine, July 1998

7) Kevin B. Bennett, "Ecological Interface Design for Military Command and Control", Journal of Cognitive Engineering and Decision Making, Winter 2008

8) Unmanned Systems Integrated Roadmap FY2011-2036, DoD United States

9) Katrine Nørgaard, Autonomous Weapon Systems and Risk Management in Hybrid Networks, Royal Danish Defense College

10) Autonomous weapon systems: Technical, military, legal and humanitarian aspects. Expert Meeting, Geneva, Switzerland, 26 – 28 March 2014

11) Ruben Santamarta, SATCOM Terminals: Hacking by Air, Sea, and Land, Black Hat 2014

Courtesy: Reproduced with permission from www.claws.in

The views expressed are of the author and do not necessarily represent the opinions or policies of the Indian Defence Review.

About the Author

Shashank Yadav

Contact at: shashank.inbox@gmail.com

