
In Defence of Combat Robots

Human weaknesses in combat are well-documented and largely consistent. Technology, by contrast, improves with every passing week.


Next week in Geneva, diplomats will assemble to discuss Lethal Autonomous Weapons Systems (LAWS). The precise definition of LAWS is contested, but the Pentagon speaks of “weapons that once activated can select and engage targets without further human intervention.” Activists striving for a ban on LAWS call them ‘killer robots,’ language some find emotive but which is nevertheless useful for grabbing headlines.

To illustrate, Disney/Lucasfilm’s K2SO is a ‘killer robot,’ as is R2D2. In the Star Wars spin-off Rogue One, K2SO—a reprogrammed Imperial droid fighting for the Rebel Alliance—kills about a dozen Imperial Stormtroopers with a grenade, a blaster, and his bare robot hands. More pertinently, existing systems like Aegis and Patriot running in ‘auto-fire’ mode also qualify as LAWS under the Pentagon’s definition. Unsurprisingly, nations fielding Aegis and Patriot think that banning LAWS would be premature.

Some analysts have suggested that we drop the word ‘lethal’ and speak instead of non-lethal as well as lethal Autonomous Weapons Systems (AWS). In what follows I will discuss AWS or ‘combat robots’ that can deter, capture, wound, or kill. A narrow focus on killing distorts the true mission of democratic militaries, which is not killing per se, but defence and security—and these goals can also be met by deterrence, capture, and wounding.

Activists campaigning for a ban obviously believe the arguments against lethal AWS are compelling and those in favour are not. However, I have never found the case against AWS to be as decisive as those who make it seem to think. That said, many in the military make the case for AWS with some reluctance. As Thomas K. Adams wrote in his prescient 2001 article, “Future Warfare and the Decline of Human Decision Making,” the development of “robotic platforms” is “taking us to a place where we may not want to go, but probably are unable to avoid.”

I. Arguments Against AWS

The arguments against AWS fall into four groups, which I will address in turn.

(i) International Humanitarian Law (IHL) Compliance

Five years ago, it was routinely claimed that AWS could not be used in accordance with the IHL principles of distinction, proportionality, and command responsibility. But these claims are more subdued today.

Under IHL, distinction is the ability to distinguish combatant from civilian. Recent advances in vision systems have been dramatic. It is now possible to point a smartphone running Microsoft’s Seeing AI at the world and it will recognize people, their age, their gender, their skin pigment, and their clothes. It can also identify currency and read menus. Seeing AI was designed as an aid for the visually impaired, but no one with technical expertise seriously doubts that a military-grade version of software with similar functionality could be used to identify tanks supported by infantry carrying rifles. For example, a military vision system could report: “Looking north, I see an Armata T-14, two (autonomous) Nerehkta-2 tanks, and 27 Caucasian troops wearing the insignia of the Russian 2nd Guards Motor Rifle division.” It is just a matter of finding some non-Googlers (e.g. Microsofties or IBMers) willing to train the AI in the required object recognition.

The IHL principle of proportionality requires a belligerent to avoid collateral damage that is excessive compared to the military advantage gained. A proportionality calculation typically follows from a target classification. If the AI reports an enemy ship, it can engage it with an anti-ship missile. If the AI reports a tank, it can engage it with an anti-tank missile. If the AI reports an infantryman, it can engage him with a bullet. This is a problem solvable with a look-up table that matches a target to the appropriate weapon. There are more complex proportionality problems, of course (for instance, if the tank is parked next to a kindergarten), but nothing that strikes me as impossible to solve.
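
To make the look-up-table idea concrete, here is a minimal sketch in Python. The class labels, weapon names, and the `select_weapon` helper are hypothetical illustrations rather than any fielded system, and a real proportionality assessment would weigh far more context than a table alone can hold.

```python
from typing import Optional

# Hypothetical illustration only: a proportionality look-up table mapping a
# target classification (as reported by a vision system) to a weapon whose
# effects are not excessive relative to the military advantage gained.
WEAPON_TABLE = {
    "warship": "anti-ship missile",
    "tank": "anti-tank missile",
    "infantryman": "rifle round",
}

def select_weapon(target_class: str, civilians_nearby: bool) -> Optional[str]:
    """Return an appropriate weapon for the target, or None to withhold fire."""
    if civilians_nearby:
        # Harder cases (e.g. a tank parked next to a kindergarten) cannot be
        # settled by the table alone; withhold and refer the decision.
        return None
    return WEAPON_TABLE.get(target_class)  # None for unrecognized targets

# Example: a tank in the open maps to an anti-tank missile.
print(select_weapon("tank", civilians_nearby=False))
```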

The point on responsibility, however, can be partly conceded. It is pointless to hold a robot responsible for its actions and punish it for wrongdoing if it feels nothing and follows rules mechanically. Nevertheless, responsibility under IHL can be assigned to the person who signs off on the configuration of the AWS at activation, and/or to the commander of the unit fielding the AWS. Commanders can be deemed liable for the unlawful actions of their robotic subordinates just as they are for those of their human personnel. We will revisit responsibility in a moment when we consider what it means for a machine to ‘make a decision.’

(ii) Political Claims

The claim that AWS will cause an undesirable arms race has some intuitive plausibility. However, an ‘arms race’ is just a term used to describe technological competition in weapons systems. Today, the vast bulk of AI innovation is civilian. According to the World Bank, just over two percent of global GDP is spent on defence. Thus we would expect about two percent of AI to be defence-related. Indeed, it was recently reported that only two of the top 100 AI firms in the US were engaged in defence contracts.

It used to be the case that the cutting edge in AI was military. This is no longer true. The bulk of published papers are civilian. The ‘unicorns’ of Silicon Valley, and even the non-profits, lure the top AI talent with huge pay packets. Almost all AI is dual use. Today, the military are largely applying civilian innovations like object and event recognition and simultaneous localization and mapping to military purposes such as targeting and navigating hostile, unmapped spaces.

Many factors contribute to arms races: a lack of trust, a lack of shared values, grievances over past conflicts, and strategic rivalry between antagonistic powers. Banning AWS will not remove these underlying causes, so the causal claim is weak.

As for politicians being tempted into reckless military adventures—one of the most wretched examples being the War of the Triple Alliance between 1864 and 1870, which killed 70 percent of adult males in Paraguay—this is an old problem that predates AWS. Banning AWS will not stop rulers like Putin, Xi, Trump, and the hapless Paraguayan dictator Francisco López from pursuing national greatness, military glory, and world domination. On the other hand, mandatory psychological testing of politicians for sociopathic tendencies might.

It is undoubtedly true that AWS could be used to carry out unattributable missions, such as the Stuxnet attack on Iranian nuclear centrifuges. Even so, ‘false flag’ operations are not unique to AWS.


(iii) Intuitive Arguments

In his UN report on lethal robots (§94), Christof Heyns argued that “machines lack morality and mortality and should not as a result have life and death powers over humans.” Arguments like these carry a strong intuitive appeal. Many in AI and robotics want nothing to do with military projects and have a visceral loathing of turning robots into killing machines.

The problem with this argument is that reframing the question as “friendly robot vs enemy human attacking friendly human” produces different intuitions. What, after all, is wrong with good robots killing bad people? Polling published by Michael Horowitz in 2016 found that public opposition to AWS in America is contextual—“fear of other countries or non-state actors developing these weapons makes the public significantly more supportive of developing them” and “the public also becomes much more willing to actually use autonomous weapons when their use would protect US forces.”

So, context matters. If a robot is protecting friendly troops from salafi-jihadis, it is a much easier sell to voters than a ‘slaughterbot’ that massacres students in lecture halls. According to Paul Scharre, author of a recent book on AWS entitled Army of None, the ‘slaughterbot’ video made by the Elon Musk-backed Future of Life Institute was propaganda, not argument. I agree.

Closely related to the ‘power of life and death over humans’ argument is the ‘dignitarian’ argument. At its simplest, this claims that “death by algorithm” is the “ultimate indignity.” In its more complex forms, the argument holds that there is a fundamental human right not to be killed by a machine—that the right to human dignity, which is even more fundamental than the right to life, demands that a decision to take human life requires a specific consideration of the circumstances by a human being. A related claim is that meaningful human control of an autonomous weapon requires that a human must approve the target and be engaged at the moment of combat.

The great problem with this argument from a military perspective is that it puts the slow human brain at the centre of battlespace cognition. It requires that a person has enough information, in the right format, to make a decision. To achieve “meaningful human control,” an individual needs time to understand the user interface and time to hit the button confirming the engage decision. For this to work, the war has to be paced so as not to throw too many decisions at the authorizing person in the same second. No one in Defence seriously thinks future war will be slower than contemporary war. On the contrary, most accept that future war will increasingly become too fast for human brains. There is a grave danger, then, that countries which insist on relying upon human cognitive architecture will lose their next war.

A further intuitive argument is based on the claim that AWS have no skin in the game. People feel that a ‘soulless’ machine can have no grasp of what it truly means to take human life. They therefore think it unfair and obscene that machines should be tasked with the decision to do so. This argument can be blunted by challenging the claim that machines are really able to make a decision. They only have delegated agency, not ‘real’ or ‘human-level’ agency.

In classic Turing computation, a machine is programmed to follow explicit rules keyed in by humans. For example, a rule might stipulate (in a programming language, not English) that “if you see a human wearing an enemy uniform and carrying a weapon, shoot.” Suppose the machine’s sensors detect an object that is human shaped, wearing a uniform, and carrying a rifle. It therefore ‘makes a decision’ by following the human-defined rule, triggered by what it senses in the environment. In this case, does it make sense to say that the machine actually made the decision? Or would it make more sense to say that the human who put the rule in the machine made the decision at installation and that it was then mechanically executed by the machine? The latter interpretation surely makes more sense. The machine is simply following a programmed rule without any feelings of doubt, conscience, or guilt, just as it would had it been programmed to record a television broadcast. It is possible to claim that a machine ‘decides’ insofar as cognition is installed in the machine. But the human who inputs the rules that determine a particular action under a particular set of circumstances is the one who really decides. The install-time decision by the human involves an authentic, evaluative, and deliberative choice. The execution by the machine does not.
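
A minimal sketch of such a rule, written in Python rather than English, may make the point vivid. The sensor fields and the `engage` stub here are hypothetical; what matters is that the evaluative choice is fixed by the human at install time, while the machine merely checks conditions and executes.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical sensor report for a single detected object."""
    is_human_shaped: bool
    wears_enemy_uniform: bool
    carries_weapon: bool

def engage(detection: Detection) -> None:
    # Stub standing in for the firing mechanism.
    print("Engaging target.")

# The install-time decision: a human chose, in advance, exactly which
# combination of sensed features warrants firing.
def rule(d: Detection) -> bool:
    return d.is_human_shaped and d.wears_enemy_uniform and d.carries_weapon

# At run time the machine only checks the conditions and executes the rule;
# no doubt, conscience, or guilt enters into it.
def run(detections: list[Detection]) -> None:
    for d in detections:
        if rule(d):
            engage(d)

run([Detection(is_human_shaped=True, wears_enemy_uniform=True, carries_weapon=True)])
```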

Even if the ‘rules’ (or ‘action selections’ or ‘behaviours’) are not explicitly programmed but instead emerge from a neural network trained on curated data, those who install the decision procedures and choose the training data are morally responsible for the resulting decisions. This is especially true if the AI trainers do not understand and cannot explain the decisions made by their inscrutable ‘deep learning’ machines. Such ignorance would make them reckless and negligent.

Delegating targeting decisions to machines carries great risks. The machine might miss a morally relevant detail that a human might pick up. It might make a classification error. However, comparable risks are entailed by leaving firing decisions in human hands. Malaysia Airlines Flight MH17 was shot down by weapons under ‘meaningful human control,’ causing the deaths of hundreds of innocents. A state-of-the-art AWS could have identified MH17 as a civilian airliner in 50 milliseconds via an identification-friend-or-foe (IFF) check. People, on the other hand, can be over-enthusiastic, hasty, and prone to classification errors and emotions like panic and rage. Human weaknesses in combat are well-documented and largely consistent. Technology, by contrast, improves with every passing week.

(iv) Risk Arguments

There are downside risks to AWS. AWS may be vulnerable to cyberhacking and could thus ‘defect’ in mid-battle, turning on friendly forces and killing them in a fratricidal attack. AWS might also tempt commanders to undertake missions deeper into enemy territory that expose civilians to greater risk. Thus AWS could result in increased risk transfer to civilians. Indeed, the most substantial arguments about AWS come down to risk.


II. Arguments For AWS

The arguments in favour of AWS come in two groups based on claims regarding military necessity and claims of risk reduction.

(i) Military Necessity

If a country does not adopt AWS but its enemy does, then it will lose the next war. It will suffer the fate of the Polish cavalry against Mark III Panzers in World War II. Surrender or death will be the only options. Those without AWS will have to kiss their freedom and independence goodbye. Militaries find this a compelling argument, but it can be countered. Activists stress the importance of a ban treaty similar to those prohibiting chemical and biological weapons. However, verifying treaty compliance presents a problem. Since an autonomous weapon can look, and generate network traffic, much like a telepiloted one, proving that a weapon is autonomous is hard.

(ii) Risk Reduction Claims

It is claimed that the increased precision made possible by AI vision systems in weapons will reduce the risk of collateral damage to civilians, reduce the risk of casualties to friendly forces, and even reduce the risk of harm to foes. Combat robots performing infantry missions could plausibly be designed to stun and capture rather than wound and kill. As robots, they can be expected to assume more risk than humans in order to ensure that what they are shooting at is a lawful target.

Reducing collateral damage to civilians is appealing. Democratic Western militaries take their obligations under International Humanitarian Law seriously. Military lawyers are consulted on targeting decisions. While they are far from perfect, they do make an effort and seek to improve constantly. Other militaries (notably the Russians and jihadist irregulars like ISIS in Syria) have been lax in their IHL compliance, to say the least.

Given a choice between a Hellfire missile that vaporizes a target from a mile up and a robot on the ground that can capture said target, a fair-minded human rights lawyer might concede that there is a case for fielding combat-capable robots. However, such robots will need onboard autonomy to survive close-quarters fighting unless militaries are willing to accept the network lag of a satellite connection. Furthermore, the enemy will certainly hide in locations that block radio and Wi-Fi signals, making telepiloting impossible.

Given the foregoing, who do you really want to send into the subterranean bunker to rescue Nigerian schoolgirls or kill a sociopathic drug lord: someone’s irreplaceable child or a replaceable machine? I know what the Dallas Police Chief will choose.

III. Looking Forward

Regarding AWS regulation, the moral question is whether the upside risks and benefits outweigh the downside risks and the costs. There are strong arguments on both sides. However, it is far from clear to me why we cannot employ combat robots in some circumstances (capturing or wounding rather than killing enemy combatants) while nonetheless banning their use in others (killing innocents), the most egregious of which (massacre, genocide) are already forbidden. By the same token, banning Skynet—a machine that becomes self-aware and decides its own targeting policy (kill all humans) without any human review or approval—strikes me as a fairly straightforward, worthwhile, and attainable diplomatic goal.

Getting support for a ban on all kinds of AWS will be much harder. Machines with autonomy in the select-and-engage functions of targeting that fight in compliance with IHL, reduce risk to friendlies, civilians, and foes, and defend good people from bad ones are not necessarily going to be perceived by voters as evil, or even as undesirable.
