
In Defence of Combat Robots

Next week in Geneva, diplomats will assemble to discuss Lethal Autonomous Weapons Systems (LAWS). The precise definition of LAWS is contested, but the Pentagon speaks of “weapons that once activated can select and engage targets without further human intervention.” Activists striving for a ban on LAWS call them ‘killer robots,’ language some find emotive but which is nevertheless useful for grabbing headlines.

To illustrate, Disney/Lucasfilm’s K-2SO is a ‘killer robot,’ as is R2-D2. In the Star Wars spin-off Rogue One, K-2SO—a reprogrammed Imperial droid fighting for the Rebel Alliance—kills about a dozen Imperial Stormtroopers with a grenade, a blaster, and his bare robot hands. More pertinently, existing systems like Aegis and Patriot running in ‘auto-fire’ mode also qualify as LAWS under the Pentagon’s definition. Unsurprisingly, nations fielding Aegis and Patriot think that banning LAWS would be premature.

Some analysts have suggested that we drop the word ‘lethal’ and speak instead of non-lethal as well as lethal Autonomous Weapons Systems (AWS). In what follows I will discuss AWS or ‘combat robots’ that can deter, capture, wound, or kill. A narrow focus on killing distorts the true mission of democratic militaries, which is not killing per se, but defence and security—and these goals can also be met by deterrence, capture, and wounding.

Activists campaigning for a ban obviously believe the arguments against lethal AWS are compelling and those in favour are not. However, I have never found the case against AWS to be as decisive as those making it seem to think. That said, many in the military make the case for AWS with some reluctance. As Thomas K. Adams wrote in his prescient 2001 article, “Future Warfare and the Decline of Human Decision Making,” the development of “robotic platforms” is “taking us to a place where we may not want to go, but probably are unable to avoid.”

I. Arguments Against AWS

The arguments against AWS fall into four groups, which I will address in turn.

(i) International Humanitarian Law (IHL) Compliance

Five years ago, it was routinely claimed that AWS could not be used in accordance with IHL principles of distinction, proportionality and command responsibility. But these claims are more subdued today.

Under IHL, distinction is the ability to distinguish combatant from civilian. Recent advances in vision systems have been dramatic. It is now possible to point a smartphone running Microsoft’s Seeing AI at the world and it will recognize people, their age, their gender, their skin pigment, and their clothes. It can also identify currency and read menus. Seeing AI was designed as an aid for the visually impaired, but no one with a technical background seriously doubts that a military-grade version of software with similar functionality could be used to identify tanks supported by infantry carrying rifles. For example, a military vision system could report: “Looking north, I see an Armata T-14, two (autonomous) Nerekhta-2 tanks, and 27 Caucasian troops wearing the insignia of the Russian 2nd Guards Motor Rifle Division.” It is just a matter of finding some non-Googlers (e.g. Microsofties or IBMers) willing to train the AI in the required object recognition.

The IHL principle of proportionality requires a belligerent to avoid excessive collateral damage compared to the military advantage gained. A proportionality calculation typically results from a target classification. If the AI reports an enemy ship, it can engage it with an anti-ship missile. If the AI reports a tank, it can engage it with an anti-tank missile. If the AI reports an infantryman, it can engage him with a bullet. This is a problem solvable with a look-up table that matches a target to the appropriate weapon. There are more complex proportionality problems, of course (for instance, if the tank is next to a kindergarten), but nothing that strikes me as impossible to solve.
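The look-up-table approach described above is simple enough to sketch. Here is a minimal Python illustration, with made-up target classes and weapon pairings; nothing in it reflects real doctrine or any fielded system:

```python
# Hypothetical sketch of the proportionality look-up table described above.
# Target classes and weapon choices are illustrative assumptions only.

WEAPON_FOR_TARGET = {
    "ship": "anti-ship missile",
    "tank": "anti-tank missile",
    "infantryman": "bullet",
}

def select_weapon(target_class, civilians_nearby=False):
    """Match a classified target to a proportionate weapon.

    Refuses to engage when the classification is unknown or when a
    collateral-damage flag is raised -- the 'kindergarten' case must be
    escalated to a richer judgement, not solved by the table.
    """
    if civilians_nearby:
        return None  # escalate: table logic alone cannot settle this
    return WEAPON_FOR_TARGET.get(target_class)  # None if unrecognized

print(select_weapon("tank"))        # anti-tank missile
print(select_weapon("tank", True))  # None -> escalate
```

The point of the sketch is that the routine cases are trivial table lookups; the hard cases are exactly the ones the function declines to decide.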

However, the point on responsibility can be partly conceded. It is pointless to hold a robot responsible for its actions and punish it for wrongdoing if it feels nothing and follows rules mechanically. However, responsibility under IHL can be assigned to the person who signs off on the configuration of the AWS at activation, and/or to the commander of the unit fielding the AWS. Commanders can be deemed liable for the unlawful actions of their robotic subordinates as well as their human personnel. We will revisit responsibility in a moment when we consider what it means for a machine to ‘make a decision.’

(ii) Political Claims

The claim that AWS will cause an undesirable arms race has some intuitive plausibility. However, an ‘arms race’ is just a term used to describe technological competition in weapons systems. Today, the vast bulk of AI innovation is civilian. According to the World Bank, just over two percent of global GDP is spent on defence. Thus we would expect about two percent of AI to be defence-related. Indeed, it was recently reported that only two of the top 100 AI firms in the US were engaged in defence contracts.

It used to be the case that the cutting-edge in AI was military. This is no longer true. The bulk of published papers are civilian. The ‘unicorns’ of Silicon Valley and even the non-profits lure the top AI talent with huge pay packets. Almost all AI is dual use. Today, the military are largely applying civilian innovations like object and event recognition and simultaneous location and mapping to military purposes such as targeting and navigating hostile unmapped spaces.

Many factors contribute to arms races: a lack of trust, a lack of shared values, grievances over past conflicts, and strategic rivalry between antagonistic powers. Banning AWS will not remove any of these, and so the causal claim is weak.

As for politicians being tempted into reckless military adventures—one of the most wretched examples being the War of the Triple Alliance between 1864 and 1870, which killed 70 percent of adult males in Paraguay—this is an old problem that predates AWS. Banning AWS will not stop rulers like Putin, Xi, Trump, and the hapless Paraguayan dictator Francisco Solano López from pursuing national greatness, military glory, and world domination. On the other hand, mandatory psychological testing of politicians for sociopathic tendencies might.

It is undoubtedly true that AWS could be used to carry out unattributable missions such as the Stuxnet attack on Iranian nuclear centrifuges. Even so, ‘false flag’ operations are not unique to AWS.

(iii) Intuitive Arguments

In his UN report on lethal robots (§94), Christof Heyns argued that “machines lack morality and mortality and should not as a result have life and death powers over humans.” Arguments like these carry a strong intuitive appeal. Many in AI and robotics want nothing to do with military projects and have a visceral loathing of turning robots into killing machines.

The problem with this argument is that reframing the question as “friendly robot vs enemy human attacking friendly human” produces different intuitions. What, after all, is wrong with good robots killing bad people? Polling published by Michael Horowitz in 2016 found that public opposition to AWS in America is contextual—“fear of other countries or non-state actors developing these weapons makes the public significantly more supportive of developing them” and “the public also becomes much more willing to actually use autonomous weapons when their use would protect US forces.”

So, context matters. If a robot is protecting friendly troops from salafi-jihadis, it is a much easier sell to voters than a ‘slaughterbot’ that massacres students in lecture halls. According to Paul Scharre, author of a recent book on AWS entitled Army of None, the ‘slaughterbot’ video made by the Future of Life Institute (which counts Elon Musk among its backers) was propaganda, not argument. I agree.

Closely related to the ‘power of life and death over humans’ argument is the ‘dignitarian’ argument. At its simplest, this claims that “death by algorithm” is the “ultimate indignity.” In its more complex forms, the argument holds that there is a fundamental human right not to be killed by a machine—that the right to human dignity, which is even more fundamental than the right to life, demands that a decision to take human life requires a specific consideration of the circumstances by a human being. A related claim is that meaningful human control of an autonomous weapon requires that a human must approve the target and be engaged at the moment of combat.

The great problem with this argument from a military perspective is that it puts the slow human brain at the centre of battlespace cognition. It requires that a person have enough information, in the right format, to make a decision. To achieve “meaningful human control,” an individual needs time to understand the user interface and time to hit the button confirming the engage decision. For this to work, the war has to be paced so as not to throw too many decisions at the authorizing person in the same second. No one in Defence seriously thinks future war will be slower than contemporary war. On the contrary, most accept that future war will increasingly be too fast for human brains. Countries that insist on relying upon human cognitive architecture therefore run a grave risk of losing their next war.

A further intuitive argument is based on the claim that AWS have no skin in the game. People feel that a ‘soulless’ machine can have no grasp of what it truly means to take a human life. They therefore think it unfair and obscene that machines should be tasked with the decision to do so. This argument can be blunted by challenging the claim that machines really make decisions at all: they have only delegated agency, not ‘real’ or ‘human-level’ agency.

In classic Turing computation, a machine is programmed to follow explicit rules keyed in by humans. For example, a rule might stipulate (in a programming language, not English) that “if you see a human wearing an enemy uniform and carrying a weapon, shoot.” Suppose the machine’s sensors detect an object that is human-shaped, wearing a uniform, and carrying a rifle. It then ‘makes a decision’ by following the human-defined rule, triggered by what it senses in the environment. In this case, does it make sense to say that the machine actually made the decision? Or would it make more sense to say that the human who put the rule in the machine made the decision at installation, and that it was then mechanically executed by the machine? The latter interpretation surely makes more sense. The machine is simply following a programmed rule without any feelings of doubt, conscience, or guilt, just as it would had it been programmed to record a television broadcast. It is possible to claim that a machine ‘decides’ insofar as cognition is installed in the machine. But the human who inputs the rules that determine a particular action under a particular set of circumstances is the one who really decides. The install-time decision by the human involves an authentic, evaluative, and deliberative choice. The execution by the machine does not.
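The gap between the install-time choice and the run-time execution can be made vivid with a toy sketch. All the predicate names below are hypothetical, invented purely for illustration:

```python
# Toy illustration of 'delegated agency': the human encodes the rule at
# install time; at run time the machine merely evaluates predicates.

def installed_rule(percept):
    # The evaluative, deliberative choice was made HERE, by the human
    # who wrote this line -- not by the machine that later runs it.
    return (percept["human_shaped"]
            and percept["enemy_uniform"]
            and percept["armed"])

def machine_step(percept):
    # Mechanical execution: no doubt, conscience, or guilt involved.
    # The same control flow could just as well start a TV recording.
    return "ENGAGE" if installed_rule(percept) else "HOLD"

print(machine_step({"human_shaped": True, "enemy_uniform": True, "armed": True}))
print(machine_step({"human_shaped": True, "enemy_uniform": False, "armed": True}))
```

Everything morally interesting happens in the body of `installed_rule`; `machine_step` is just a mechanism for applying it.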

Even if the ‘rules’ (or ‘action selections’ or ‘behaviours’) result from a human setting up machine learning with training data and are not programmed but emerge from curation of training data applied to a neural network, those installing the decision procedures and exposing them to training data are morally responsible for the decisions. This is especially true if the AI trainers do not understand and cannot explain the decisions made by their inscrutable ‘deep learning’ machines. Such ignorance would make them reckless and negligent.

Delegating targeting decisions to machines carries great risks. The machine might miss a morally relevant detail that a human would pick up. It might make a classification error. However, comparable risks are entailed by leaving firing decisions in human hands. Malaysia Airlines flight MH17 was shot down by weapons under ‘meaningful human control,’ causing the deaths of hundreds of innocents. A state-of-the-art AWS could have identified MH17 as a civilian airliner in 50 milliseconds via an IFF poll. People, on the other hand, can be over-enthusiastic, hasty, prone to classification errors, and subject to panic and rage. Human weaknesses in combat are well-documented and largely constant. Technology, by contrast, improves with every passing week.
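The IFF point amounts to a hard gate applied before any engagement logic runs. A minimal sketch, with invented transponder codes and return strings (no real IFF protocol details are implied):

```python
# Hypothetical IFF gate illustrating the MH17 point: a machine can be
# made to apply this check every time, in milliseconds, before any
# engagement logic runs. Codes and names are illustrative assumptions.

CIVILIAN_MODES = {"mode_3A", "mode_C", "mode_S"}  # civil transponder replies

def iff_gate(transponder_reply):
    """Return an engagement permission based on a transponder poll.

    A civil-aviation reply is an unconditional veto; silence is NOT
    treated as hostile -- it merely fails to grant permission.
    """
    if transponder_reply in CIVILIAN_MODES:
        return "VETO: civilian aircraft"
    if transponder_reply == "friendly_crypto":
        return "VETO: friendly aircraft"
    return "NO VETO: escalate for classification"

print(iff_gate("mode_S"))  # a civil airliner reply vetoes engagement
```

The design point is that the veto is unconditional and cheap: a stressed human crew may skip the check; a machine cannot.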

(iv) Risk Arguments

There are downside risks of AWS. AWS may be vulnerable to cyberhacking and thus ‘defect’ in mid-battle, turning on friendly forces and killing them in a fratricidal attack. AWS might also tempt commanders to undertake missions deeper into enemy territory that expose civilians to greater risk. Thus AWS could result in increased risk transfer to civilians. Indeed, the most substantial arguments about AWS come down to risk.

II. Arguments For AWS

The arguments in favour of AWS come in two groups based on claims regarding military necessity and claims of risk reduction.

(i) Military Necessity

If a country does not adopt AWS but its enemy does, then it will lose the next war. It will suffer the fate of the Polish cavalry against Mark III Panzers in World War II. Surrender or death will be the only options. Those without AWS will have to kiss their freedom and independence goodbye. Militaries find this a compelling argument, but it can be countered. Activists stress the importance of a ban treaty similar to those prohibiting chemical and biological weapons. However, verifying treaty compliance presents a problem: since an autonomous weapon can look, and generate network traffic, much like a telepiloted one, proving autonomy in weapons is hard.

(ii) Risk Reduction Claims

It is claimed that increased precision made possible by AI vision systems in weapons will reduce the risk of collateral damage to civilians, reduce the risk of casualties to friendly forces, and even reduce the risk of harm to foes. Combat robots performing infantry missions could plausibly be designed to stun and capture rather than wound and kill. As robots, they can be expected to assume more risk than humans to ensure what they are shooting is a lawful target.

Reducing collateral damage to civilians is appealing. Democratic Western militaries take their obligations under International Humanitarian Law seriously. Military lawyers are consulted on targeting decisions. While they are far from perfect, they do make an effort and seek to improve constantly. Other militaries (notably the Russians and jihadist irregulars like ISIS in Syria) have been lax in their IHL compliance, to say the least.

Given a choice between a Hellfire missile that vaporizes a target from a mile up and a robot on the ground that can capture said target, a fair-minded human rights lawyer might concede that there is a case for fielding combat-capable robots. However, such robots will need onboard autonomy to survive close-quarters fighting, unless militaries are willing to accept the network lag of a satellite connection. Furthermore, the enemy will certainly hide in wi-fi resistant locations, making telepiloting impossible.

Given the foregoing, who do you really want to send into the subterranean bunker to rescue Nigerian schoolgirls or kill a sociopathic drug lord: someone’s irreplaceable child or a replaceable machine? I know what the Dallas Police Chief will choose.

III. Looking Forward

Regarding AWS regulation, the moral question is whether the upside risks and benefits outweigh the downside risks and the costs. There are strong arguments on both sides. However, it is far from clear to me why we cannot employ combat robots in some circumstances (capturing or wounding rather than killing enemy combatants) while nonetheless banning their use in others (killing innocents), the most egregious of which (massacre, genocide) are already forbidden. By the same token, banning Skynet—a machine that becomes self-aware and decides its own targeting policy (kill all humans) without any human review or approval—strikes me as a fairly straightforward, worthwhile, and attainable diplomatic goal.

Getting support for a ban on all kinds of AWS will be much harder. Machines with autonomy in the select-and-engage functions of targeting that fight in compliance with IHL, reduce risk to friendlies, civilians, and foes, and defend good people from bad ones, are not necessarily going to be perceived by the voters as either evil or even undesirable.


Sean Welsh is a PhD candidate in the Department of Philosophy at the University of Canterbury in New Zealand, and the author of Ethics and Security Automata, a research monograph on machine ethics. Prior to embarking on his PhD he worked as a software developer for 17 years. You can follow him on Twitter @sean_welsh77


  1. WHAT? NO!

    The writers of TERMINATOR did not intend it as an instruction manual.

    The idea of autonomous AI-powered killing machines roaming the landscape, killing anything its programming deems an enemy, is quite rightly the stuff of nightmares.

    • @ SFbookclub

      I don’t know… when you put it like you have… it does sound exciting, terrifying but exciting.

      Or at least it could liven up paintball!

      • I agree. But by the same token tactical thermonuclear war with INTER-CONTINENTAL BALLISTIC MISSILES, which would lead to the total and utter destruction of all life on earth, would also be **cough** “exciting”.

        Not so much paintball then.

        • Alex says

          I just wanted to point out that tactical nuclear war is probably the most preferable kind of nuclear war, as tactical nukes are designed to be low payload bombs that limit collateral damage to a single battlefield. I’m still not hoping for it, but if I see on the news that a nuclear war just started then I’m praying it’s of the tactical variety.

    • markbul says

      So much better to walk up to a man and jab a bayonet into his abdomen and turn it hard. Yeah – that’s a dream.

  2. Jesse says

    Excellent piece. It is hard for people so far removed from evil to understand the necessity and application of AWS. The machines will allow evil to be eliminated, captured, and brought to justice in a manner that is more precise and discriminate than drone warfare, which contrary to popular opinion is already quite precise in comparison to other alternatives. And it will be used instead of or in conjunction with the bright, moral, and brave minds (British and Australian SAS, American Special Operations, Canadian JTF2) that voluntarily hunt those monsters in the dark, hopefully keeping more of them alive.

    My only issue with this piece is Trump’s name being thrown into a pot with cunning, intelligent, and dangerous super villains like Lopez and more importantly Putin. Now that is propaganda.


  3. Thoth Hermes says

    You left out the primary underlying argument against combat robots: anti-Americanism. Battlefield robots would apply our technological and industrial superiority to defense, saving American lives and making it harder for our enemies to win by attrition and sad stories of flag-draped coffins coming home. A lot of people both within and outside America don’t like the idea of America winning wars, and this is another way to hamstring us.

    • Word limits are a bitch that way… 🙂

      The focus of the piece was on the moral arguments rather than the definitional questions and the strategic ones but I agree with your point. A lot of the dislike of ‘killer robots’ is fuelled by hatred of drone warfare which to date has been mostly used by the US. Taking drones down to zero feet could improve the human rights outcomes of drone warfare but only if the powers developing them care about human rights. There’s the rub – a ban on AWS will not cure bad government.

    • It’s not that people don’t like Americans winning wars (which, it turns out, they are not very good at), but that people really don’t like sending their sons and daughters off to be killed or maimed. This is the strongest social and political force against war. What ended the Vietnam War? A love for communism or a hatred of seeing so many young Americans coming home with missing limbs, or not coming home at all?

      If we as nations can send off battledroids to do our fighting for us, will the people fight that hard to end it? This, I think is the strongest argument against LAWS and it was never addressed.

  4. Caligula says

    One is reminded of the quote attributed to General Patton, that the object in war is not to die for one’s country but to get the other poor b*stard to die for his country.

    The primary objection to autonomous weapons seems to be a sort of chivalrous/fair combat PoV, that to harm another one should at least have to place oneself at risk. Yet arguably this concept died long ago, as weapons were developed that could kill from great distances (such as ballistic missiles).

    The need to spend one’s own “blood and treasure” in combat has always been a deterrent to starting a war, and autonomous weapons would seem to lower that restraint, thus making war more likely. And yet, the same objection might be made against a near-perfect defense against ballistic missiles in that the possessor of such a defense might then be more likely to launch a first strike.

    And then there’s the (inevitable) argument of inevitability: if it’s possible to develop such weapons then someone will surely do so, and if anyone does so then all who can will also do so.

    Finally, we presently live in a world in which technologically advanced countries seem to have less advantage against less technologically sophisticated enemies than was once the case. Thus, a crude IED may be used to maim and kill infantry armed with the most advanced weapons money and skill can build.

    And there are those who champion this development, but I am not one of them for such a world leads mostly to an endless bloody war of all against all as an alternative to a world in which a relatively peaceful balance of power among the stronger nations prevails.

    • It seems to me that while the development of AWS is troubling in that it might lead to an endless bloody war of all against all as you say, the moral cognition AWS require to conform with IHL will lead to advances in moral cognition in all fields – in peace as well as war.

      Formalizing moral decisions in robots requires one to solve ethics in part. Solving ethics in whole is where the real development action is. When we can explain right and wrong to a machine, we may finally understand it thoroughly ourselves and create a world where ethical principles are no more controversial than the laws of physics.

  5. TarsTarkas says

    I agree with the first comment here. Moreover, what science and technology can create, science and technology can duplicate. Ask the Chinese, they’re experts at it. The technology will spread and grow cheaper in cost, and can and will be utilized by those who will have no compunction about using it to kill their enemies and seize power and wealth. Frankly, I’m much more worried about mini, micro, and nano-warbots than the tracked device depicted. True ‘assassin’ bugs, flying insect-sized killing machines equipped with poisons or doped with deadly diseases, are on the horizon.

    • This is very true. It is easy to knock up ‘killer robots’ for a few hundred dollars. A case in point is the Cacophony Project, which makes robots that kill invasive fauna (i.e. species that prey on native New Zealand birds). The components cost next to nothing: TensorFlow machine learning, Raspberry Pi boards. The weapon is a poison paintball. This splats the stoat (or whatever), which gets groomed by its fellow stoats, and they die. The machine learning is trained to recognize the heat signatures of the invasive species.

  6. Bill says

    Was an excellent piece diminished by an unnecessary and off-topic political jibe in the blurb about Putin, Xi, “and Trump” and suggesting psychological testing for election to leadership positions. How Jim Crow of you considering things like homosexuality, and transgenderism are described by those psychological tests with names like gender dysmorphia. Should we ban based upon corrective pharmaceutical use? I mean, that would eliminate a large swath of the younger generation and females who are often on anti-depressants or ADHD medicines.

    Otherwise, a well thought out position.

    • ga gamba says

      I mean, that would eliminate a large swath of the younger generation and females who are often on anti-depressants or ADHD medicines.

      Many of them are already eliminated for these very reasons. You ought to check out what medical conditions are rejected, or require a waiver be granted, presently.

      Though the US military is downsizing, which will eliminate a lot of the special soldiers who could never find their chemlight batteries no matter how hard they tried, it’s found that several elements, for example the Army Ranger’s Operational Detachment-A, are unable to fully man to 12 personnel because the number of those in the general population who don’t qualify keeps increasing. It’s harder to get into the military than to get into college – there’s a college for everyone.

      • flyfishingnow says

        A slight quibble-
        No such thing as “Army Ranger’s Operational Detachment-A.” I assume you’re referring to the Army Special Forces ODA’s which are in SF Groups 1,3,5,7,10 and in 19,20 (NatGd SF). ODA’s are 12 man teams within those SF Groups. USA Spec Forces trains at Fort Bragg, NC.
        The Army Rangers are the 75th Ranger Regiment, an elite light infantry unit, out of Fort Benning, GA.
        The 75th and the Green Berets frequently support each other but are separate outfits with different training and functions. Both are Airborne of course (as are several larger infantry divisions).

    • Bill, the reason I put “and Trump” in the piece was to avoid being accused of some left-right political bias in terms of who needs to pass a sociopathy test. I would not ban a political candidate on the basis of anti-depressants – but for severe cases of schizophrenia and bipolar disorder, one might consider such a measure. Do you want someone with hypomania having control of the nuclear football? Though in practice, those with severe mental illness do not get elected in the first place.

      • Bill says

        Devil…meet details. You have people with opposing political views actively declaring that the other side have those types of mental illnesses, or are Nazis or are Fascists, etc.

        But back to the subject of autonomous warfighting devices. I guess what boggles me is, how are these not viewed as already existing — aka, “fire and forget missiles” like the anti-radiation series that’s been around for decades. Is it simply the notion that they are explicitly killing / targeting personnel versus implicitly killing/targeting personnel by “killing” the equipment they are manning?

        I think it’s all a factor of one segment of society having Utopian views sold to them by Hollywood fantasy like Star Trek, and meshing that with other Hollywood fantasy like Terminator. Logically, the arguments against them fail in my view because they’re just more refined/more lethal/more present versions of things we’ve had for a while. Homing torpedoes?

  7. ga gamba says

    People feel that a ‘soulless’ machine can have no grasp of what it truly means to take human life.

    On the flipside, a soulless machine may have no grasp of what it means to save its own life, and is therefore less likely to panic and go on a willy-nilly firing spree.

    I think the issue behind opposition is more based on the fear governments will more readily resort to force when AWS is at hand because the political blowback that comes from flag-draped coffins is almost entirely eliminated.

    That said, there are many excellent applications for AWS in the battlespace that don’t involve the most kinetic actions of hunting and killing the enemy. From recovering the wounded to sustainment (ammo, batteries, water, food, and heavy weapons), as well as providing an integrated communications suite for command and control and IED detection and disposal, AWS will be a force multiplier to the human warfighter. We’ll see integrated human-AWS units for quite a while, and over time AWS will take over tasks such as reconnaissance, defence of supply lines, holding seized areas, guarding and perhaps interrogating POWs, and even the drudgery of sentry duty – AWS don’t tire.

  8. What can go wrong in a world in which we rely on satellites that can easily be destroyed, and wars will be fought by robots, where a space force is declared as necessary to “dominate” space, as we suffer global climate change and the decimation of ocean life, and most nations are proudly claiming nationalism over global cooperation?

  9. craiglgood says

    Minor correction: Pixar has no involvement in the Star Wars films. That would be Disney and Lucasfilm, not Disney/Pixar.

  10. anon says

    The challenge of course is – who gets to decide what is ‘evil?’

  11. Very unconvincing.

    This piece fails the first test, in addressing compliance with International Humanitarian Law and, just as importantly, the ethical principles that underpin it. To be clear, for anyone reading not familiar with IHL, it’s the body of international law that sprang out of the Geneva and Hague conventions. This is not touchy-feely progressive bullshit, but an important set of rules designed to protect human beings from the worst excesses of man’s capacity for violence.

    Frankly, the argument that autonomous “combat robots” have the capacity to meet the requirement for “distinction” is simply unfounded sci-fi nonsense. Granted, we live in an era of over-hyped technophilia, but current “artificial intelligence”, so-called, is simply NOT capable of distinguishing reliably, in situations of extreme combat stress, between people, e.g. between “combatant” and “non-combatant” categories of humans. The tech just isn’t there and to argue otherwise is delusional fantasy. It’s possible that the technology MAY get there one day, so this is not an insurmountable barrier to the use of combat robots, but the onus remains on the proponents of combat robots to demonstrate that “AI” meets the test better than the alternative, i.e. the Mk 1 eyeball.

    The response to the principle of proportionality is similarly jejune. The issue is not tailoring the right weapon to the target. That is not the point of proportionality. The point of proportionality is to limit the potential for the use of weapons to cause incidental injury to CIVILIANS. This requires BOTH the ability to understand and apply the civilian/combatant distinction (see above) AND the capacity to make a judgement call about the use of force in potentially highly complex situations. This goes far beyond simple pattern recognition and machine-learning. It requires moral judgement, and that level of AI is well beyond us for the foreseeable future, if ever.

    The inability of machines to take responsibility for violence is conceded, so that point doesn’t need elaboration. In my view, glossing over the point is a critical weakness in the piece, as IMO it’s insurmountable as a barrier to the adoption of combat robots.

    The second major failing in this piece is the response to what’s termed “intuitive arguments”. The piece asks, “What, after all, is wrong with good robots killing bad people?” Well, a fuckload, actually. There is no such thing as a “good” robot, no more than there are “good” hammers or “good” staplers. A robot can be programmed to do good or bad things but has no intrinsic moral worth or capacity to make moral judgements. More importantly, robots do not have the capacity to distinguish between “good” or “bad” people. Once you allow combat robots you allow people, good and bad, to be killed by autonomous machines. Understandably, all societies have rules and laws around the taking of life and who is authorised to do so legally and under what circumstance. The “dignitarian” argument presented is a strawman – death in war is frequently undignified. The point is that deaths in combat should follow the accepted rules and principles of warfare. A machine cannot provide the judgement required to follow those rules and principles and so should not be permitted the autonomy and capacity to kill people.

    The third failing is that the argument FOR combat robots is unsupported, breathless boosterism. “If a county [sic] does not adopt AWS but its enemy does, then it will lose the next war.” O rly? After 17 years the most powerful and technologically advanced army the world has ever seen can’t win a war in Afghanistan against tribesmen armed with 20th (and even 19th) century technology. Again, I understand our era’s propensity to imagine tech solutions to all our problems, but technology’s role in military success is frequently exaggerated. That is not to say technology is unimportant, but we are a very long way from combat robots supplanting the skilled and motivated rifleman at the pointy end. There is no evidence that armed forces need fully-autonomous combat robots to be effective.

    Finally, as an aside, the Aegis and Patriot systems are surface-to-air anti-missile defence systems, supervised by human operators. Their “auto-fire” mode, as you call it, is designed to respond at high speed to saturation missile attacks – i.e. they are defensive systems. The objects they target are flying objects – missiles and aircraft – not humans.

    • Bill says

      Uhm, aircraft contain humans. So if Aegis/Patriot in autofire mode targets and shoots down an airliner à la the Vincennes incident…how is that different from the systems discussed here? Aegis cruiser, shot down airliner…is it simply that it wasn’t in autofire mode and so it doesn’t count? What if it was in autofire mode and misclassified the threat? What if it correctly classified the threat but the missile “missed” and hit the airliner?

      The focus on combatant vs non-combatant is also flawed. Non-combatants die in war all the time due to human action both intended and unintended. Plenty of stories out of Syria lately. Innocents killed by drone strikes during the Obama administration. Firebombing of cities in WW2. Russia shooting down that KAL flight. Perhaps i’m just a cynic who views all those IHL conventions as really being a means for victors to prosecute losers after the conclusion of a war for “Justice.”

      • “So if Aegis/Patriot in autofire mode targets and shoots down an airliner à la the Vincennes incident…how is that different from the systems discussed here? Aegis cruiser, shot down airliner…is it simply that it wasn’t in autofire mode and so it doesn’t count? What if it was in autofire mode and misclassified the threat? What if it correctly classified the threat but the missile “missed” and hit the airliner?”

        The shooting down of Iran Air 655 by the USS Vincennes was not due to “autofire”. The USS Vincennes shot down the airliner after attempting multiple times to contact it on both military and civilian frequencies. As it turns out, the airliner’s flight crew wasn’t monitoring the relevant frequencies. The civilians’ deaths were the result of human error on both sides, but the ultimate fault lay with Captain Rogers of the USS Vincennes, who made the decision to fire on an airliner he incorrectly concluded to be a threat. That is, your analogy is irrelevant: at no time was a machine allowed to make the call to fire on that airliner.

        “The focus on combatant vs non-combatant is also flawed.”

        How so? You’d prefer armed forces ignored the distinction? How would that improve matters?

        “Perhaps i’m just a cynic who views all those IHL conventions as really being a means for victors to prosecute losers after the conclusion of a war for “Justice.””

        It’s not about “justice”, and it’s near-impossible to prevent ALL harm to civilians in most conflicts. That doesn’t mean that we shouldn’t TRY to minimise such harm. THAT is the point of the IHL.

        • Bill says

          @Fyodor, you missed my point re: Vincennes. If the Vincennes had been under active attack and switched into autofire mode AND a non-IFF aircraft on a track toward the ship appeared, I am pretty darned sure Aegis would have fired off SM-1s (back then) at everything on a threatening track. It doesn’t go “oh wait, that’s a plane, not a missile.” Hell, if they wrote the software that way they’d be stupid, since we’ve seen kamikaze attacks in 1944, 1945, 2001, …. Plus you’d have to be able to discriminate between an inbound aircraft and an inbound cruise missile, which have similar speeds and can even follow the same flight profiles. “Oh, but the CM is smaller!” Maybe a US or Russian CM has a smaller radar profile, but a belligerent would rapidly see the flaw and send in waves of planes and “appear large” cruise missiles as a very effective and “cheap” way to defeat it. The Russian S-300 works the same way.

          Therefore, the earlier commentary distinguishing Aegis/Patriot in autofire as NOT being the same as an AWS is flawed. It hinged on Aegis/Patriot being anti-missile/anti-aircraft systems versus those intended to target people – and as I said, targeting manned aircraft = targeting people. Otherwise, why the outcry about Vincennes or KAL?

          • Nope, I got your point and dismissed it as irrelevant. You argued that there was no difference between an Aegis/Patriot in “autofire” mode and the shooting down of an airliner by the USS Vincennes. I pointed out to you that the USS Vincennes’ Aegis system was not in autofire – it was supervised by humans, for the obvious reason that it couldn’t be trusted to act prudently in a complicated battlespace used by civilian, friendly and potentially hostile military aircraft. As I pointed out in my first comment, Aegis and Patriot both have human supervision and overrides, for this reason amongst others.

      • ga gamba says

        Perhaps i’m just a cynic who views all those IHL conventions as really being a means for victors to prosecute losers after the conclusion of a war for “Justice.”

        I’m a bit cynical about these conventions too. Firstly, the civilian population has often cheered on the war makers. Further, they are in the factories and fields providing the means to sustain the fighting force. Why should they be exempted? “But they are forced!” some may protest. OK, so too are many of the soldiers, who are conscripted, drilled to obey orders, and court-martialled if they fail to do so. “But the civilians aren’t carrying weapons!” True, but they’re making them, feeding those who carry them, and their participation in the war economy allows more of those deemed suitably fit to be put in uniform and wage war. And Nuremberg already addressed the just-following-orders defence. I think in some ways the restraints applied to protect civilians make it easier for leaders to persuade their own people to go to war: “We’ll be careful to only strike bona fide combatants”, with the underlying thought, “The established conventions protect you too (from the consequences of supporting war).”

        Of course war is a very messy business. Tactics evolve; what once would have been deemed unacceptable becomes tolerable over time. Further, groups such as the Tamil Tigers used human shields in the attempt to save their own hides, and insurgents use hospitals and places of worship as weapons caches and even as positions from which to launch attacks; they’re hoping a counterstrike outrages many and erodes public support.

        There are also arguments about proportionality and existential threat. Frankly, a disproportionately massive response against me scares me much more than an equivalent one. And should I wait around until the adversary attains the weapons to be an existential threat? It’s too late then.

        I think that if everyone knows they’ll be targeted too, and without apology, this may do much more to prevent war. Of course, when war is waged the death toll will be significantly higher.

        • @ga gamba You can be as cynical as you like about it, but if you think wars would be less likely or less horrible in the absence of agreed rules then I’d suggest you’re not being cynical enough. We’ve had a good look at total war several times over the past century and generally there’s far more suffering when the rules aren’t followed.

          • ga gamba says

            You misread me if you think I expect it to be less horrible. I wrote: “Of course, when war is waged the death toll will be significantly higher.” I consider this to be more horrible.

            I think it’s the idea that limited war may be fought that allows people to accept it. However, these constraints, if adhered to, make prosecuting the war to its successful end more difficult; it just drags on, stop and start. For example, it was the Sri Lankans’ decision to no longer play the Tamil Tigers’ game of repetitive ceasefires that freed them to drive the Tigers into the sea. At the end the Tigers were using human shields in the expectation that the international community would coerce Colombo into ceasing the offensive. The Tigers even declared a unilateral ceasefire, and that was ignored. Though many NGOs wailed about war crimes (which many of the actions were, imo), very little pressure was put on Colombo, and what little there was came from governments that don’t matter much, so the Sri Lankans persisted and won.

            After 26 years of war, assassinations, terrorism, ethnic cleansing, and many ceasefires during which the Tigers would rearm and reconstitute their depleted forces through forced conscription and even the abduction of children to serve as war fighters, we have now had nine years of peace. The political party most closely aligned with the Tamil Tigers has dropped its demand for an independent state. Tamil politicians no longer have to fear being assassinated by the Tigers, so they can continue the work of national reconciliation. Mines are being cleared. People are resuming their normal lives. But it was Colombo’s resolve to prosecute the war to the very end, irrespective of conventions, that created this outcome. Had another ceasefire been accepted, they’d still be at each other’s throats.

  12. Thanks for your considered comments.

    With respect to air and naval war, the AI only needs to distinguish between friendly and enemy craft, not people. Even on land, many targets are not human but phone exchanges, bridges, barracks, refineries etc. Even with respect to human targets out of uniform, AI object and event recognition in five or ten years’ time will probably get there; even today, such targets will be carrying military objects (rifles). I would say facial recognition of “hostility” is not far from being a standard inclusion in Watson and Azure. Iris scans have been around for years.
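    To make the craft-level friend/foe point concrete, the distinction can be sketched in a few lines of Python. This is a toy illustration only – the `Contact` type, the codes and the `classify` function are invented for the example, and real IFF interrogation (e.g. Mode 4/5) is cryptographic and far more involved:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Classification(Enum):
    FRIEND = "friend"
    UNKNOWN = "unknown"  # a silent contact is unknown, never automatically hostile


@dataclass
class Contact:
    iff_reply: Optional[str]  # transponder reply, or None if the contact is silent
    closing: bool             # on a closing track toward the defended asset


# Hypothetical 'codes of the day'; real IFF replies are cryptographically verified.
FRIENDLY_CODES = {"F-001", "F-002"}


def classify(contact: Contact) -> Classification:
    """Craft-level friend/foe discrimination: a valid IFF reply means friend;
    everything else stays UNKNOWN pending further judgement."""
    if contact.iff_reply in FRIENDLY_CODES:
        return Classification.FRIEND
    return Classification.UNKNOWN
```

    The design point worth noting, echoed in the Vincennes discussion above, is that a silent, closing contact is still only “unknown”, not “hostile” – the escalation from unknown to engageable is precisely where human judgement remains contested.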

    Robots cannot “be” good as they have no phenomenology or “Being” in the existential sense. They can “act” good, and that is all that is required. IHL compliance is pretty easy today in many areas of targeting (air and sea). Certainly policy control of the robot by people is essential, which is why I support a ban on Skynet (and have done for years).

    I disagree absolutely with your claim that “moral judgement” is beyond the reach of AI. Machine ethics – the subject of my book – is a relatively lightly explored field, as few researchers have expertise in both the normative and technical areas. However, there is an increasing focus on AI ethics today. In due course – while this seems ambitious now – with further R&D, I expect the ethical principles embedded in robots to be no more controversial than the laws of physics.

    In the 1720s the “problem of the longitude” was famously insoluble, and yet forty years later it was solved. I expect the “problem of the morality” to be cracked this century.

  13. As before, your arguments boil down to speculation about what “AI” might be able to do in the future, not what it can (or, more importantly, can’t) do now. That is science fiction, not a basis for policy or law.

    As for machines and goodness, I’m not the one who posited the existence of “good robots”, let alone machines “acting” good, whatever the fuck that’s supposed to mean. A key problem with fully-autonomous weapon systems is that they may perform exactly as they’re programmed to perform and still produce catastrophically bad outcomes. For our purposes that’s not “acting” good, however you want to slice it.

    On the “problem of the morality”, I would contend that replicating human judgement is a damn sight more difficult than calculating longitude. As I noted before, we’re in one of the periodic manias for tech – and “AI” in particular – right now, so your enthusiasm is understandable, but it’s not necessarily any more prescient or reliable than Asimov’s; he thought we’d have sapient robots decades ago. Alongside those jet-packs we were promised, I guess. Until the technology catches up with your hype, you are defending a fantasy, not practical tools.

    • I think you are unduly pessimistic about what AI can do now. The key thing is to insist on IHL compliance in whatever AWS you field – which is already the law. Replicating human judgement is far harder than calculating longitude, but it is 2018, not 1720. According to Asimov, we were supposed to be mining on Mercury in 2015…

      The technology is being built. You’ll see “ethical advisors” on smartphones within 5-10 years – maybe sooner. They’ll most likely have an IBM Watson backend.

      • It’s not a matter of pessimism; the technology doesn’t work. If the technology can’t meet the IHL rules, it doesn’t work. You don’t bend or break ethical rules to enable faulty technology. That’s totally arse-about. As for “within 5-10 years”, I’ve heard that one before, too.

      • There are all sorts of problems with your article and reasoning, but just to home in on this idea of ethical machines: we can’t program a machine to be ethical before we decide what ethical is. Please don’t pretend this is something we haven’t given any thought to; philosophers have been trying to figure it out for quite a while now. Ultimately, the problem is that ethics is, firstly, subjective, and it also involves understanding questions in the context of being human, and alive. Throwing in “it’ll have an IBM Watson backend” doesn’t solve these problems.

  14. OldFan says

    When Russia and China finish deploying robotic combat units; when they ignore the flimsy construct of “international law;” when they fight and win battles; and when they wipe out thousands of opposing-force soldiers [and anybody else unlucky enough to be present], the very same voices that were heard in these comments will eloquently start making excuses for them – or just say nothing at all.

    Don’t believe me? Current “international law” has caused western powers to abandon cluster munitions and not to deploy thermobaric warheads in significant quantities. Meanwhile, current combat operations in the Ukraine are notable for the Russians’ use of virtually no other types of munitions. A force of 1,000 Ukrainian troops [2 motorized infantry battalions] was wiped out by exactly such weapons. Did anybody voice any outrage about that? Do you even know about it?

    Restrictions on a subset of potential combatants advantage the remainder – possibly decisively.

    • @OldFan No, I don’t believe you. In fact, I think you’re parroting hyperbolic nonsense. The rocket attack in Ukraine that you refer to, at Zelenopillya, inflicted around 130-150 casualties. That’s equivalent to a reinforced company, not a battalion, let alone two. That didn’t stop Western observers with an axe to grind – and consultancy fees to earn – from waxing hysterical about the ROOSHAN! bogeyman in action, spouting the bullshit you’re repeating here uncritically.

  15. Nicholas Conrad says

    Arguing that military agents bear ultimate responsibility for the moral choices of AWS isn’t reassuring, given the history of the moral calculations of military weapons-systems designers. For goodness’ sake, they set off the Trinity test believing there was a nontrivial chance it would ignite the atmosphere, killing every person on the planet!

    I also think you are overestimating human control of AI systems generally. As you point out, we often don’t know what machine-learning systems are actually doing when they train on datasets; we just know they give us the answer we want an acceptable percentage of the time. What you’re missing is that soon we’ll be using machine-learning algorithms to write software – specifically, AI systems. Our lack of understanding of the inner workings of those systems introduces huge black-swan risks, but, as noted above, nothing in the history of weapons development indicates that anyone in a position of power would care about the tail risks.

    • I don’t support machine learning of norms at all. The system has to have auditable reasoning, and a connectionist “black box” can’t deliver an explanation of why it selected the acts it did.
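      What “auditable reasoning” might look like can be sketched as an explicit rule base that records which rules fired – a toy illustration with invented rule names and placeholder conditions, not a real targeting system:

```python
from typing import Callable, Dict, List, Tuple

# Each rule is a (name, predicate) pair; the names and conditions are
# invented placeholders, not actual doctrine.
RULES: List[Tuple[str, Callable[[Dict], bool]]] = [
    ("target_is_military_object", lambda s: s.get("object_type") == "military"),
    ("no_civilians_in_blast_radius", lambda s: s.get("civilians_nearby", 1) == 0),
    ("human_authorisation_on_file", lambda s: s.get("authorised", False)),
]


def decide(situation: Dict) -> Tuple[bool, List[str]]:
    """Return (engage, audit_trail). Engagement requires every rule to pass,
    and the trail records exactly which conditions held and which failed."""
    engage = True
    trail: List[str] = []
    for name, predicate in RULES:
        ok = predicate(situation)
        trail.append(f"{name}: {'PASS' if ok else 'FAIL'}")
        engage = engage and ok
    return engage, trail
```

      Unlike the weights of a trained network, the returned trail is a human-readable record of exactly which conditions held, so any decision can be audited after the fact.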

  16. Bill says

    And let’s not forget hostile actors that couldn’t care less what IHL conventions exist. They haven’t stopped any of the numerous genocides in Africa over the past couple of decades. @OldFan points out examples where it isn’t a rogue dictator but a world superpower in the Ukraine. (I think the reason that was kept quiet in the US was simply that it would have created another “red line” problem for the US President at the time, who was rarely evaluated critically by the press. That, and “oh, we don’t want to make Russia angry, since that would mean NATO actually having to deploy their underfunded militaries.”)

  17. John says

    A choice between defense with horses or tanks is not a choice.
    Our technology will increasingly demonstrate to us that our future is determined.

  18. John says

    In order to ban LAWS you will need to counter them by building your own LAWS and fielding your own troops and firing your own missiles. The first requirement of a legal system is the possession of an overwhelming force to enforce compliance through the application of violence if required. The whole subject of international law is rubbish until such time as there is a centralised world government with a centralised police and judicial system backed up by the brute force of a well trained and well armed military establishment willing to inflict death and destruction when ordered. All the rest of it is childish nonsense arising from the delusion that the current international bodies possess real power. They do not. In the face of a determined nation state all they can do is beg support from other nation states and, if it is not forthcoming, talk a lot and do nothing.

  19. Given the long-standing relationship between democracy and conscript armies, and between oligarchy and mercenary (“volunteer”) armies, what would a political order based on an automated military look like? Police states have always been dependent on loyalists in the intelligence services, police, and the military. If you could automate the forces of surveillance, warfare, terrorism and social control with robot armies, you might pave the way for a form of concentrated despotism so total and complete it is scarcely imaginable.

    Plus the question remains what to do with all the billions of skin bags that are no longer needed for economic production or defense. Guaranteed income, or better to recycle the organic matter?

  20. AI Engineer says

    I have no problem with killer robots as long as the decision to take life is not made automatically. Algorithms can’t make these decisions; at best we can hope they get it right on average. I would support it if the law held executives and military officials personally responsible for the deaths caused by the algorithms they sell and deploy.
