RAIN+ Ethics Primer #4
Laws and Liabilities: Autonomous Weapon Systems + the Law
About the primer
This primer aims to provide guidance on the complex debate surrounding the legality of autonomous weapon systems and their use in armed conflict. It is the fourth in a series of RAIN+ Ethics primers. It builds on the second and third primers, about the definition and ethics of autonomous weapon systems, by focusing on the legal aspects of the debate. Other primers can be found here:
https://www.raindefense.ai/policy/primers
This primer will be updated regularly. Feedback or suggestions are welcome and can be sent to: primers@raindefense.ai.
Headlines
- Autonomous weapon systems are not inherently illegal under Article 36 of Additional Protocol I to the Geneva Conventions or other international conventions.
- The potential future deployment of autonomous weapon systems has disadvantages and advantages when it comes to compliance with the principles of international humanitarian law (IHL).
- Without greater technological maturity, more testing and experience of real-life situations, it is impossible to predict whether the balance of advantages and disadvantages will improve compliance compared to traditional systems.
- The use of fully autonomous weapon systems will make it challenging to establish individual criminal responsibility for potential mistakes and violations of IHL. State responsibility, on the other hand, will be easier to establish.
- The most important political, legal and ethical question to be resolved is whether to allow or ban the development of AWS in the first place – a question that needs to consider both the weapon systems themselves and their (AI-enhanced) components and capabilities.
1. Introduction
Autonomous weapon systems are systems that can search for, identify, track and engage targets independently, without the need for direct human control. They have sparked fierce debate about the potential operational, ethical and legal issues that may arise with their use. Fully autonomous weapon systems do not yet exist, but their potential future use in armed conflicts raises a number of legal questions that will be addressed in this primer:
- Are autonomous weapon systems by nature legal under international law?
- Is the use of autonomous weapon systems legal under international humanitarian law?
- Who can be held accountable when autonomous weapon systems violate international humanitarian law?
2. Key questions of the legal debate
Are autonomous weapon systems legal under international law?
A. Is the weapon system prohibited by specific international conventions, such as the Biological Weapons Convention (1972), the Chemical Weapons Convention (1993) or the Convention on Certain Conventional Weapons (1981)?
No: autonomous weapon systems as such are not prohibited by any of these conventions.
B. Would the use of the weapon system cause superfluous injury or unnecessary suffering (Art 35 Additional Protocol I to the Geneva Conventions (API))?
As with the first criterion, this does not prohibit autonomous weapon systems as long as they do not carry munitions that would cause superfluous injury or unnecessary suffering.
C. Would the use of the weapon system likely result in indiscriminate attacks (Art 51 API)?
This criterion is less straightforward because it requires a technical understanding of the autonomous weapon system and its likely use in a certain environment. For example, the use of an autonomous weapon system in a densely populated area would raise more questions about its ability to conduct discriminate attacks than its use at sea. Nevertheless, there is no evidence suggesting that autonomous weapon systems are inherently indiscriminate.
Anti-personnel landmines
The use and development of anti-personnel landmines is prohibited under the 1997 Mine Ban Treaty because they cannot discriminate between civilians and combatants. Anti-personnel landmines are sometimes referred to as a very basic type of autonomous weapon system because, once activated, they are no longer under human control. What sets anti-personnel landmines apart from AWS is their inability to find or select targets.
D. Will the weapon systems meet the principles of humanity and dictates of public conscience as defined in the Martens Clause (Art 1(2) API)?
This clause is generally understood to mean that something not explicitly prohibited by international (humanitarian) law is not automatically permissible. In other words, it confirms that there are limits to the methods and means of warfare. One of the key arguments against the use and development of autonomous weapon systems is that their lack of human control would run contrary to the principles of humanity and the dictates of public conscience. The argument consists of two parts. First, a machine cannot justifiably take a human life because it lacks human judgment – that is, the ability to value individual life and the significance of its loss. Second, human dignity would be denied to someone killed by a machine because the victim cannot appeal to the machine’s humanity.
Both arguments have been contested. First, the legality of a targeting decision is determined by its compliance with the objective requirements of international humanitarian law. There is no legal requirement about who should take that decision; a machine would not act illegally so long as it complies with IHL. Second, the notion that people can appeal to a commander’s compassion or their humanity is more a theoretical matter than reflective of any real-life situation. Finally, and perhaps most importantly, the Martens Clause is not a formal source of law, so it alone cannot be used to enforce a pre-emptive ban of autonomous weapon systems.
The fact that a prohibition of autonomous weapon systems based on the Martens Clause is unlikely does not mean that the arguments against their use are not influential. Thirty countries around the world have endorsed the call of the Campaign to Stop Killer Robots, which is committed to preventing the development and use of autonomous weapon systems without a human in the loop.
Is the use of autonomous weapon systems legal under international humanitarian law?
Even if a weapon system is not illegal by nature or based on its expected use, it must still be reviewed under the principles laid out in the laws of armed conflict, also referred to as international humanitarian law (IHL). IHL governs the deployment of weapon systems, which is why it is also referred to as targeting law. IHL dictates that the deployment of autonomous weapon systems can only be legal when they comply with the principles of distinction, military necessity, proportionality and precaution.
How might autonomous weapon systems comply with the principle of distinction?
On today’s complex battlefield it is increasingly difficult for soldiers and commanders to distinguish between enemy combatants and civilians. Enemy combatants can no longer be identified based on easily perceivable signs, such as military uniforms. Rather, they are identified based on certain behaviour and actions, or ‘signatures’, on the battlefield. An IHL-compliant AWS therefore needs to possess advanced perceptual and cognitive capabilities, including bodily posture and gesture recognition and an understanding of emotional expressions. The AWS must also be able to exercise these capabilities in complex, chaotic and ever-changing battlefield situations. Proponents of a ban on autonomous weapon systems contend that machines will not be able to possess such distinguishing capabilities to the extent that humans do. Whether this will be the case remains unresolved.
STM’s KARGU UAV
The KARGU UAV is an interesting example of a system with autonomous capabilities that is already in use. Developed by the Turkish defense contractor STM, KARGU is a small loitering munition (a hybrid between a drone and a missile) used by the Turkish defense forces for close air support and counter-insurgency operations. Computer vision, deep-learning algorithms and facial recognition software enable KARGU to track, detect and classify static or moving targets (such as vehicles or persons) without the need for direct human control. The legal question that may arise is how facial recognition and other AI software will be able to distinguish effectively between combatants and civilians. On the other hand, the KARGU will almost certainly be more discriminating in its use than conventional missiles, which are not able to abort attacks or loiter over a designated area to identify a target.
How might autonomous weapon systems comply with the principle of military necessity?
An AWS must be able to apply only the force necessary to achieve a legitimate military objective in order to comply with the principle of military necessity. The principle of military necessity requires a subjective analysis of a situation and the ability to make a value judgment of the expected military gains for a specific action or range of actions. Programming autonomous weapon systems to comply with the principle of military necessity is certainly a challenge. The system should not only be able to distinguish between actors on the battlefield (see discussion of distinction), it should also be able to apply the commander’s Rules of Engagement (ROE). This means that the system would need to be coded with rules governing the escalation steps a soldier is obliged to abide by in order to limit death and destruction to what is deemed necessary in specific combat situations.
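To make the idea of ‘coding’ ROE more concrete, the sketch below encodes a hypothetical, purely illustrative escalation-of-force ladder in Python. The threat indicators, force levels and decision rules are assumptions invented for this primer and do not reflect any real rules of engagement or fielded system.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical, illustrative force ladder; not drawn from any real ROE.
class Force(IntEnum):
    NONE = 0       # observe and report only
    WARNING = 1    # warning signal or warning shot
    DISABLING = 2  # disabling fire against equipment
    LETHAL = 3     # lethal force

@dataclass
class Observation:
    target_is_combatant: bool      # output of the distinction step
    hostile_act_observed: bool     # e.g. weapon fired at friendly forces
    hostile_intent_observed: bool  # e.g. weapon raised, targeting behaviour

def max_permissible_force(obs: Observation) -> Force:
    """Return the highest force level the hypothetical ROE permit,
    escalating only as far as the observed behaviour justifies."""
    if not obs.target_is_combatant:
        return Force.NONE           # civilians may never be attacked
    if obs.hostile_act_observed:
        return Force.LETHAL
    if obs.hostile_intent_observed:
        return Force.DISABLING
    return Force.WARNING

# Example: a combatant shows hostile intent but has not yet attacked.
print(max_permissible_force(Observation(True, False, True)))  # Force.DISABLING
```

Even such a toy encoding shows why the value judgments discussed in this primer remain with humans: the categories and thresholds themselves still have to be supplied by a commander.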
Hors de combat
Consider, for example, a situation where an AWS attack leaves an enemy combatant incapacitated (hors de combat). In this case, the combatant is protected under IHL and can no longer be the object of an attack. Will the AWS be able to recognise this? Or will it execute its mission and fire a second time to eliminate the target? These situations could, at least partially, be avoided if an AWS were coded more restrictively. Another possibility would be to keep the AWS under human supervision after activation. Finally, it is important to note that humans are also limited in their ability to comply with the principle of necessity.
How might autonomous weapon systems comply with the principle of proportionality?
In order to comply with the principle of proportionality, an AWS needs to be able to weigh the expected military gains of a decision against the potential harm suffered by civilians as a consequence of that decision. Proportionality analysis also encapsulates the principles of distinction and necessity and is therefore considered to be the most difficult principle to comply with. Similar to the principle of necessity, it has many qualitative and subjective elements that require reason and common sense. According to opponents of AWS, these are typical human faculties that AWS are unlikely to possess. In this view, an AWS cannot comply with the principle of proportionality unless its function is to support a human decision maker.
One of the main difficulties with applying proportionality analysis is that the expected military gains of an attack against a legitimate target constantly change in response to operational plans and developments on the battlefield. Hence, a machine cannot be left alone to assess the proportionality of an attack but must be constantly updated about operational plans and developments on the battlefield. Nevertheless, parts of proportionality analysis could be particularly well suited to an AWS. For example, an AWS can process large amounts of data very quickly to calculate the weapon’s blast and fragmentation radius as well as the expected collateral damage.
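The quantitative part of that task can be illustrated with a minimal sketch. Assuming a very crude circular-blast model with uniform civilian density (all names and numbers here are invented for illustration; real collateral damage estimation methodologies are far more elaborate), an AWS could compute an expected-harm figure and compare it against a threshold set by a human commander:

```python
import math

def expected_civilian_harm(blast_radius_m: float, civilian_density_per_km2: float) -> float:
    """Expected number of civilians inside a circular blast area.
    Illustrative model only: uniform density, no shielding or structures."""
    area_km2 = math.pi * (blast_radius_m / 1000.0) ** 2
    return civilian_density_per_km2 * area_km2

def within_commander_threshold(blast_radius_m: float,
                               civilian_density_per_km2: float,
                               max_acceptable_harm: float) -> bool:
    """Compare the estimate against a threshold supplied by a human commander;
    the value judgment of what is 'excessive' stays with the human."""
    return expected_civilian_harm(blast_radius_m, civilian_density_per_km2) <= max_acceptable_harm

# Invented numbers: 50 m blast radius in an area with 300 civilians per km^2.
print(round(expected_civilian_harm(50, 300), 2))                     # ~2.36
print(within_commander_threshold(50, 300, max_acceptable_harm=1.0))  # False: do not engage
```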
How might autonomous weapon systems comply with the principle of feasible precaution?
States are obliged to take all feasible precautions in an attack to avoid or minimise civilian casualties or injuries as well as damage to civilian property. The principle of precaution applies to the deployment and development of autonomous weapon systems. During the research and development of autonomous weapon systems or systems with autonomous capabilities, states will have to make sure that the systems are predictable, reliable and accurate. Machines are generally held to a higher standard than humans. However, it is also virtually impossible to reduce the probability of mistakes to zero. The question is: how much precaution is required and how many mistakes are permitted?
Autonomous weapon systems have two key advantages over systems under direct human control that could result in better compliance with the principle of precaution. First, autonomous systems can take greater risks and even self-sacrifice. For example, an AWS can put itself in harm’s way to better assess a situation and it can shoot conservatively to avoid civilian casualties. Second, machine-learning algorithms can be trained and continuously updated with effective precautionary steps based on real-life combat.
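One way to read ‘shooting conservatively’ in software terms is to require a high classification confidence before any engagement and to default to holding fire or referring the case to a human. The sketch below is a hypothetical illustration; the labels and the threshold value are assumptions, not taken from any fielded system.

```python
def engagement_decision(target_class: str, confidence: float,
                        min_confidence: float = 0.99) -> str:
    """Default to holding fire; escalate uncertainty to a human instead of firing."""
    if target_class != "combatant":
        return "hold"                      # never engage a non-combatant classification
    if confidence < min_confidence:
        return "refer_to_human_operator"   # uncertainty is escalated, not resolved by firing
    return "engagement_permitted"

print(engagement_decision("combatant", 0.92))  # refer_to_human_operator
print(engagement_decision("civilian", 0.99))   # hold
```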
Who is responsible when an AWS violates international humanitarian law?
There are two types of responsibility under international law: individual criminal responsibility and state responsibility. The potential difficulty of establishing individual responsibility for the unlawful use of autonomous weapon systems is often brought forward as a major ethical and legal concern (see primer #2 for a more detailed discussion). States are required under Additional Protocol I to the Geneva Conventions to prosecute those responsible for grave breaches of IHL. There is no issue with establishing responsibility when an army commander instructs an AWS to violate IHL, or when the commander is aware that an AWS may violate IHL. However, establishing legal responsibility becomes more complicated when an operator or commander is unaware of how the AWS really functions. This raises a number of questions:
- Who holds the legal responsibility in case an AWS unexpectedly malfunctions and then violates IHL? Can the commander still be held accountable or should accountability be assigned to the manufacturer or programmer?
- What if the process of finding, tracing and engaging targets becomes so complex that an operator or commander cannot reasonably be expected to know why an AWS identifies or engages a certain target?
Accountability and decentralised drone swarms
The future military use of drone swarms poses a challenge to establishing legal and ethical responsibility. Artificial swarming intelligence enables a single operator to control a swarm of dozens, and potentially even hundreds, of unmanned aerial vehicles. Decentralised swarms that do not rely on centralised control and exhibit a higher degree of autonomy are particularly complex to control and interact with. This raises the question of whether it would be reasonable to hold an operator responsible when the swarm violates IHL.
State responsibility, on the other hand, is probably less difficult to establish, especially when an autonomous weapon system is used by a state’s armed forces. In contrast to individuals, states cannot evade international responsibility based on unexpected system failures or malfunctions. In the cases mentioned above, the state using the AWS can be held accountable for negligent and/or reckless conduct. Under international law, a state can seek compensation or reparation for the wrongdoings of another state; however, cooperation from both states is required. In practice, this means that states with considerable economic or military weight will be able to evade responsibility, while less powerful states cannot. It should be noted that this issue is not exclusive to autonomous weapon systems and is more indicative of the international legal order as a whole.
How might AWS make compliance with international humanitarian law more difficult?
There are several ways in which this might occur, described below.
Predictability and reliability
Knowing how an autonomous weapon system will function in any given circumstance of its use is crucial to ensuring its compliance with international humanitarian law. However, ensuring the predictability and reliability of autonomous systems is difficult – particularly if these systems are given multiple tasks and increasing freedom to operate over time and space. The reliability and predictability of autonomous systems depend on the quality and quantity of the available training and testing data. Such data is relatively easy to obtain in areas such as medicine or the stock market. Active combat, on the other hand, is far less common, making it more difficult to gather enough quality data for the training and testing of machine-learning models.
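As a minimal sketch of why testing data matters, the snippet below measures reliability simply as held-out accuracy of a generic classifier. The toy ‘model’, inputs and labels are invented; the point is that such a figure is only meaningful if the test set actually resembles the conditions of use, which is precisely what is hard to obtain for combat.

```python
from typing import Callable, Sequence

def held_out_accuracy(model: Callable[[dict], str],
                      test_inputs: Sequence[dict],
                      test_labels: Sequence[str]) -> float:
    """Fraction of held-out examples the model classifies correctly:
    a crude proxy for reliability under the conditions the test set represents."""
    correct = sum(model(x) == y for x, y in zip(test_inputs, test_labels))
    return correct / len(test_labels)

# Toy, invented rule-based 'model' and a tiny test set.
toy_model = lambda x: "combatant" if x.get("carrying_weapon") else "civilian"
inputs = [{"carrying_weapon": True}, {"carrying_weapon": False}, {"carrying_weapon": False}]
labels = ["combatant", "civilian", "combatant"]  # third case: a misleading 'signature'
print(held_out_accuracy(toy_model, inputs, labels))  # ~0.67
```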
Bias
Data is almost always human-made – it reflects human behaviour, or it is collected, identified, structured and/or coded by humans. Consequently, there will always be biases in data; humans themselves are also subject to biases. However, human bias coded into autonomous weapon systems may be more difficult to detect.
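A crude but concrete way to look for bias coded into a system is to compare its error rates across groups in the evaluation data, as in the hypothetical sketch below. The groups, predictions and labels are invented for illustration; a large gap between groups is one warning sign that the training data has skewed the system.

```python
from collections import defaultdict
from typing import Dict, Sequence

def error_rate_by_group(predictions: Sequence[str],
                        labels: Sequence[str],
                        groups: Sequence[str]) -> Dict[str, float]:
    """Misclassification rate per group; large gaps suggest inherited bias."""
    errors, totals = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        errors[group] += int(pred != label)
    return {g: errors[g] / totals[g] for g in totals}

# Invented example: the system errs far more often on group B than on group A.
preds  = ["combatant", "civilian", "combatant", "civilian", "combatant", "civilian"]
labels = ["combatant", "civilian", "civilian",  "civilian", "civilian",  "civilian"]
groups = ["A", "A", "B", "B", "B", "B"]
print(error_rate_by_group(preds, labels, groups))  # {'A': 0.0, 'B': 0.5}
```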
Automation bias
Referring to the tendency of humans to defer to machines even in cases where human judgment is required. For example, human judgment may be required to determine the proportionality and military necessity of an attack but, due to this tendency, a human commander or operator may leave this decision to the autonomous system.
Lack of emotions
AWS lack basic human emotions such as compassion and empathy, which could complicate their ability to show restraint.
Black boxes
Some autonomous systems are so complex and opaque that nobody knows exactly how their algorithms process data and how they reach certain outcomes. As a result, it becomes more difficult for operators and commanders to understand and predict how the system will operate, which makes it harder to comply with IHL. The lack of transparency also makes it more difficult to establish who is responsible for mistakes.
How might AWS enhance compliance with international humanitarian law?
Although it is important to identify and address the challenges of developing IHL-compliant autonomous systems, these challenges should also be viewed in comparison to existing weapon systems. AWS offer some key advantages over human-operated systems, which could enable better compliance with the principles of distinction, proportionality and military necessity. These advantages are described below.
Persistent surveillance
AWS allow for a more persistent surveillance of the battlefield, which in turn results in enhanced situational awareness.
Data processing
AWS enhanced with AI and machine-learning algorithms are much more efficient and accurate than humans in processing large amounts of surveillance data, identifying behavioural patterns and detecting small abnormalities.
Physical limitations
AWS are not subject to physical limitations, such as fatigue, limited senses or an inability to process large amounts of surveillance data to come to a decision.
Lack of emotions
AWS are not affected by emotions, such as fear or anger, and can therefore maintain higher standards of objectivity than humans and refrain from retribution against civilians.
Human error
AWS are also less prone to human error because they are not subject to physical limitations and emotions. Nevertheless, human error in the development and deployment of AWS can result in mistakes.
Targeting accuracy
AWS allow for more accurate targeting. For example, loitering munitions enhanced with AI can use image recognition software to strike a specific target.
Risk and self-sacrifice
AWS can take greater risks and even self-sacrifice in order to assess the military necessity and proportionality of an attack and to identify whether a potential target is legitimate. This also gives AWS a wider range of feasible precautionary measures.
Training
AI and machine-learning algorithms can be trained and continuously improved through battle simulations and real war scenarios.
Linking legal responsibilities and decisions to use AWS
There are two ways to link legal responsibilities to operational decisions to use AWS on the battlefield. First, the commanders responsible for operations will normally have the legal implications in mind at all times when taking decisions to use these weapons. Second, some argue that there should be a more direct link between the legality of tactical decision-making and the operations on the ground. The latter can be seen as a variation of the observe-orient-decide-act loop (OODA loop). The OODA loop is a constantly repeating cycle that forms the basis of military decision-making. The idea behind this classic military concept is that the individual or force that can go through the cycle more rapidly than its opponent has a tactical or strategic advantage. The variation adds an ‘L’ for lawyer (the OODLA loop).

The OODLA model stresses the evolving nature of the legal part of decision-making. In some cases, such as targeted killings, there is normally already a lawyer in the loop who provides the legal go-ahead for a specific action. As new technologies such as AI and machine learning empower lower-level or dispersed operators to decide in tactical situations, the speed to act could be hampered by the need to wait for legal advice or confirmation before an operator can act. This is why General Mike Murray doubts whether the fast pace of the future battlefield will allow commanders to consult with lawyers about the legality and morality of military actions: ‘I fully believe that there will be nuanced cases where humans just can’t keep up with the speed of engagement.’
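The OODLA idea can be summarised schematically: a legal review sits between the decision and the act, and can send the cycle back to observation instead of approving the action. The sketch below is only a schematic illustration of that loop; everything apart from the stage names is invented.

```python
def oodla_cycle(observe, orient, decide, legal_review, act, max_cycles=3):
    """Schematic OODLA loop: the legal check gates the action and can force
    the cycle to restart with fresh observations instead of approving it."""
    for _ in range(max_cycles):
        situation = observe()
        assessment = orient(situation)
        proposed_action = decide(assessment)
        if legal_review(proposed_action, assessment):
            return act(proposed_action)
        # Legal clearance withheld: re-enter the loop and observe again.
    return "no_action"

# Toy example: the legal review never clears the proposed action.
result = oodla_cycle(
    observe=lambda: {"target": "vehicle"},
    orient=lambda s: {"target": s["target"], "civilians_nearby": True},
    decide=lambda a: "strike" if not a["civilians_nearby"] else "strike_anyway",
    legal_review=lambda action, a: action == "strike",
    act=lambda action: f"executed: {action}",
)
print(result)  # no_action
```

The point of the sketch is simply that every additional gate adds latency, which is exactly the tension General Murray describes.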
4. Key concepts of the legal debate
Autonomous weapon systems (AWS)
AWS are systems that use sensory data and computer algorithms to independently search for, identify and engage targets based on programmed constraints and descriptions. A lethal autonomous weapon system (LAWS) is an autonomous weapon system that can be used to kill people.
Systems with autonomous capabilities (SAC)
Systems with autonomous capabilities can perform certain tasks autonomously. For example, a UAV with autopilot can fly autonomously and a system equipped with image recognition software can be used to autonomously identify pre-defined targets. In contrast to fully autonomous weapon systems, these systems already exist. The examples that feature in this primer are all systems with autonomous capabilities.
International humanitarian law (IHL)
IHL, also known as the laws of war or the Laws of Armed Conflict (LOAC), is a set of rules governing the use of force in armed conflict. Its purpose is to restrict the means and methods of warfare and to protect persons who are not, or are no longer, taking part in fighting. IHL is codified in the Geneva Conventions of 1949 and the Additional Protocols of 1977. IHL contains several principles restricting the means and methods of warfare:
Distinction
Referring to the capability of a weapon or system to distinguish between a legitimate target and an unlawful one (e.g. civilians or civilian objects).
Proportionality
If a weapon or system meets the requirement of distinction, the effects of its use should also be proportionate, in the sense that any collateral damage (e.g. to civilians or infrastructure), or risk thereof, is not excessive when compared to the expected military gains.
Military Necessity
The use of a weapon system is permitted provided it is necessary to accomplish a legitimate military objective and is not otherwise prohibited by international humanitarian law.
Feasible precaution
States are obliged to take all feasible precautions in an attack to avoid or minimise civilian casualties or injuries, as well as damage to civilian property.
Difference between ethics and lawfulness or legality
Lawfulness or legality refers to legal standards: behaviour that is in accordance with the law. It is about our basic rights and obligations as laid down in laws and regulations. Ethics, as a system of moral principles, is not limited by the law. Ethical standards are about right and wrong, and do not necessarily have a legal basis. In fact, laws can be at odds with ethical behaviour, and vice versa. Nevertheless, there is a strong relationship between ethics and international law. The two key legal principles of international humanitarian law, distinction and proportionality, have a strong influence on the ethical debate.
Automation bias
The human tendency to put too much confidence in automated decision-making systems, including in contexts where machines are less suited to take decisions. Systems with complex algorithmic processes, such as autonomous weapon systems, intensify this tendency because their outputs are often difficult to explain. In other words, an operator or commander cannot easily establish why a system is giving particular suggestions.
Predictability
The predictability of an autonomous weapon system is reflected in the understanding of how it will function in any given circumstance of its use and the effects it will produce.
Reliability
The reliability of an autonomous weapon system is reflected in how consistently the system will function as intended – that is, without system malfunctions or unintended effects.