RAIN+ ETHICS Primer #3

Controlled Autonomy: Ethics of Autonomous Weapon Systems

About the primer

This primer provides an introduction to the complex concepts and debates surrounding the ethics of the design, development and deployment of (AI-enhanced) systems for defense purposes. It is part of a series of RAIN+ ETHICS primers. Other primers can be found here: https://www.raindefense.ai/ethics/primers.

This document will be updated regularly. Feedback or suggestions are welcome and can be sent to primers@raindefense.ai

Executive summary

  • There are no universal rules or guidelines on the ethical dimensions of the design, development and deployment of autonomous weapon systems or systems with autonomous capabilities.
  • Many discussions boil down to the amount of human control and oversight – to what extent there should be a ‘human in the loop’, as it is often called.
  • There is a perception that the amount of human control will automatically decrease as the autonomy of vehicles and systems is enhanced by the use of AI, which would in turn affect responsibility and accountability.
  • At the moment, however, virtually all autonomous weapon systems are still under human control, which means that much of the ethical debate about these systems is actually more about future, rather than current, capabilities.

1. Introduction

The ethical debate on autonomous weapon systems is concentrated on the potential risks and benefits of weapon systems capable of finding, tracing and engaging targets independently of human control. While such systems do not yet exist, there are systems currently in operation and development that possess varying degrees of autonomy. That is why this primer discusses not only the ethical questions related to the potential, future use of autonomous weapon systems but also the questions that can be raised about existing systems with autonomous capabilities.

2. Key questions of the ethical debate

Are autonomous weapon systems ethical?

Even if autonomous weapons are lawful, this does not mean that they are ethical. There is no consensus on this question. There are roughly two sides to the debate about autonomous weapon systems:

  1. Unethical: Autonomous weapons are not ethical because they go against human dignity and other principles of humanity. In this view, it would, for example, be unethical to delegate to autonomous weapon systems the decision to take human lives, even if these systems would comply with International Humanitarian Law;
  2. Ethical: Autonomous weapons are ethical (or even more ethical than human-controlled weapons) because they are more accurate and reliable. As such, their use will result in fewer mistakes and less collateral damage.

What is an autonomous weapon system allowed to do on its own? 

The increasing automation and autonomy of munitions, platforms and operational systems raises the legal and ethical question of what these systems are allowed to do on their own. This can be referred to as the question of machine permissibility. Some important issues related to this are:

  • Do we allow an autonomous weapon system to make decisions in combat situations when there is no human in the loop or when the autonomous system is beyond-line-of-sight in a contested or denied area?
  • If an autonomous system enters enemy (air) space, will it make decisions based on pre-planned instructions or will it make decisions using machine learning in real time? What are the consequences? And how does it affect legal accountability and moral responsibility?
  • Do we allow autonomous systems to make decisions based on pre-programmed military objectives, while the parameters, specifics, nuance and context of military objectives might be changing all the time?
  • How do we ensure that an autonomous system acts in accordance with international humanitarian law?
  • How do we ensure that the use of autonomous systems does not widen the responsibility gap – the difficulty of determining moral and legal responsibility in cases of collateral damage or serious wrongdoing?

Taranis: Although the technology for flight and targeting automation already exists, few unmanned systems are equipped with these capabilities. The Taranis unmanned combat aerial vehicle (UCAV), currently being developed by the British company BAE Systems, provides an interesting example. This large combat UAV is reportedly capable of autonomous flight in predefined areas, where it can identify, trace and strike targets on its own. Yet the Taranis only strikes after it receives the go-ahead from a human operator, who receives real-time information about the target through a satellite link. In other words, the Taranis does not find and kill targets without human involvement. In this case, the ethical questions relate mainly to its function of autonomously identifying targets through machine-learning algorithms.

Who takes responsibility for what the autonomous weapon system does on its own?

Another key concern with autonomous weapon systems, closely related to that of machine permissibility, is the risk of erosion or diffusion of responsibility for decisions made by an autonomous system. This can be referred to as the question of machine accountability: who takes responsibility for what the machine does on its own?

  • When and how should responsibility be assigned to the operator of an autonomous weapon system and the commander of the mission? 
  • When and how should responsibility be assigned to the programmer and manufacturers?

Who is responsible? Consider, for example, a situation where an autonomous unmanned aerial system wrongly identifies a group of civilians as enemy combatants. A commander decides to authorise a strike based on this information, killing the civilians. Should responsibility for mistakes always be assigned to the operator or commander? Or could responsibility also be assigned to civilian actors involved in the development chain of these systems, such as programmers and manufacturers?

Will the use of autonomous systems weaken or reduce transparency in the military kill chain? 

Responsibility is likely to be dispersed with the application of AI in weapon systems, not only within the military chain of command but also across the civil-military divide. In this situation, ensuring transparency is key to establishing an effective system of accountability. Transparency in the programming and operation of autonomous systems is a necessary condition for a clear overview of what is happening and who is responsible. A complicating factor, however, is that the AI in these systems is often a ‘black box’, which means that not even the developers understand exactly how the system combines variables and data to make predictions and take decisions.

How can humans maintain effective control over autonomous weapon systems?

The question of human control is at the core of the ethical debate on autonomous weapon systems. Human control is generally understood as the ability of a human to influence the outcome of a mission; it is a mechanism for attaining the intent of commanders and operators of weapon systems. In the debate on autonomous weapon systems, the most common formulations are “appropriate”, “effective” and “meaningful” human control. However, there is no agreement on what these terms actually mean or what exactly needs to be controlled. Nevertheless, there is international consensus about the legal, operational and ethical need for human control over weapon systems.

There are three key factors that determine the ability for humans to exercise effective control over autonomous systems: 1) the predictability and reliability of the system; 2) human intervention in how the system functions during its development and deployment; 3) the information and knowledge about the system’s functioning and its use in a particular environment.

Firestorm: The technologies that are tested as part of the US Army’s Project Convergence seek to significantly accelerate the military kill chain from identifying a target to destroying it. The AI-enhanced FIRESTORM system autonomously identifies targets and couples them to shooters, thereby accelerating the entire cycle of observe-orient-decide-act (OODA loop) from about ten minutes to a matter of seconds. The ultimate decision to pull the trigger is still left to a human being. But can an operator make a good judgement in a matter of seconds? Or will they simply follow up on the machine’s recommendation, effectively delegating decision-making power to the system?  
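
To make the time-pressure problem concrete, the following is a minimal, purely hypothetical sketch in Python. The function names (machine_recommendation, human_review) and the time figures are illustrative assumptions, not drawn from FIRESTORM or Project Convergence; the point is only to show how a shrinking decision window can turn human approval into a default acceptance of the machine's recommendation.

```python
def machine_recommendation(track_id: str) -> dict:
    """Stand-in for an AI targeting aid: returns a recommendation and a confidence score.
    Purely illustrative; no real sensor data or model is involved."""
    return {"track": track_id, "action": "engage", "confidence": 0.93}


def human_review(recommendation: dict, decision_window_s: float,
                 review_time_needed_s: float) -> str:
    """Toy model of a human approval gate with a fixed decision window.

    If the operator has enough time, they can make an independent judgement;
    if not, the realistic outcome is default acceptance of the machine's
    recommendation -- the ethical concern raised in the box above.
    """
    if review_time_needed_s <= decision_window_s:
        return "independent human judgement"
    return f"default acceptance of machine recommendation ({recommendation['action']})"


if __name__ == "__main__":
    rec = machine_recommendation("track-042")
    # A ten-minute cycle leaves ample review time; a few seconds does not.
    print(human_review(rec, decision_window_s=600.0, review_time_needed_s=90.0))
    print(human_review(rec, decision_window_s=5.0, review_time_needed_s=90.0))
```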

 

1. Predictability and reliability: In order to control an autonomous weapon system effectively, it must operate predictably and reliably. This requires verification and validation through testing of the system in realistic environments (a simplified sketch of what such an evaluation might look like follows after the third factor below). There are two main challenges when it comes to ensuring predictability and reliability. First, it becomes increasingly difficult to predict how weapon systems will operate as they are given more operational freedom in tasks, time and space. Second, predictability depends heavily on the quantity and quality of the testing and training data available during the development of machine-learning algorithms. Such data is relatively easy to obtain in areas such as medicine and the stock market. Active combat, on the other hand, occurs far less frequently, making it more difficult to gather sufficient high-quality data for training and testing machine-learning models.

2. Human supervision and intervention: The ability to oversee and intervene in the functioning of an autonomous weapon system is an important condition for maintaining effective human control. A commander or operator should have the ability to overrule or deactivate an autonomous weapon system after it has been activated. But what happens when a human simply cannot keep up with the speed of decision making in war? Or when enemy systems jam your communication network so that human control becomes impossible? Will that human need to delegate critical decision-making power to the machine? Or should a human remain in or on the loop at any cost, even if that comes at the expense of tactical advantages vis-à-vis opponents?

3. Information and knowledge: In order to maintain effective control over an autonomous weapon system, a developer, commander or operator must understand how the system will function under a given set of operational parameters (a specific task, type of target, operational timeframe and environment). Understanding how an autonomous weapon system functions becomes increasingly difficult as it is given more freedom in its tasks and across time and space. Another complicating factor is the tendency to bring AI-enhanced technologies to soldiers on the battlefield (the so-called tactical edge). Lower-ranked soldiers may not know in detail how an autonomous platform identifies, selects and engages targets. This lack of knowledge may affect soldiers’ ability to exercise effective human control. Furthermore, it makes it more difficult to hold them accountable for mistakes, which they may regard as unforeseen circumstances.
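
As a purely illustrative sketch of what testing for predictability and reliability (factor 1) might look like, the Python snippet below runs a stand-in target-recognition model against simulated scenarios and checks its error rate against a hypothetical threshold. The scenario structure, error rates and threshold are assumptions made for illustration, not an actual verification protocol.

```python
import random


def simulated_classifier(scenario: dict) -> str:
    """Stand-in for a target-recognition model under test.
    It mostly labels scenarios correctly but degrades in cluttered environments --
    a toy way of mimicking the data-scarcity problem described above."""
    error_rate = 0.02 if scenario["clutter"] == "low" else 0.15
    if random.random() < error_rate:
        return "combatant" if scenario["truth"] == "civilian" else "civilian"
    return scenario["truth"]


def evaluate_reliability(n_runs: int = 10_000, max_error_rate: float = 0.05) -> None:
    """Run the model against simulated scenarios and report whether it stays
    within a (hypothetical) reliability threshold for each environment type."""
    for clutter in ("low", "high"):
        errors = 0
        for _ in range(n_runs):
            truth = random.choice(["combatant", "civilian"])
            if simulated_classifier({"truth": truth, "clutter": clutter}) != truth:
                errors += 1
        rate = errors / n_runs
        verdict = "within threshold" if rate <= max_error_rate else "NOT reliable enough"
        print(f"clutter={clutter}: error rate {rate:.3f} -> {verdict}")


if __name__ == "__main__":
    evaluate_reliability()
```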

Reducing the human in the loop? Consider a situation where a swarm of drones is attacking a military base. An autonomous weapon system detects the swarm but requires human intervention before engaging the targets. This slows down targeting, especially when the swarm contains a very large number of drones, and may prevent the system from taking down the swarm in time. Is it ethically permissible, in this instance, to reduce the level of human supervision and the ability to intervene? And if exceptions were made for such defensive scenarios, what kind of precedent would this set for the offensive use of autonomous weapon systems?

Human-machine interfaces: The complexity that soldiers have to deal with on the battlefield is also referred to as the “cognitive load”. In order to reduce the cognitive load that comes with controlling AI-enhanced platforms, military contractors develop intuitive human-machine interfaces. This can be as simple as an app on a smartphone that presents soldiers with options such as ‘track target’ or ‘strike target’. The ethical question that can be raised here is whether control through the limited interface of a smartphone oversimplifies the complexity of the battlefield. The automation of control functions also increases the likelihood of automation bias, which can be described as the human tendency to put too much confidence in automated decision-making systems.

Will the development of autonomous weapon systems lower the threshold to war?

The development and use of autonomous weapon systems could lower the threshold to war because these systems may be less sensitive to political restraints or escalation thresholds. Escalation risks would be particularly significant if autonomous systems were used more regularly and in close proximity to adversaries with similar capabilities. If something goes wrong, it will happen at a speed that may prevent humans from intervening. Hence, small miscalculations by the system, or even minor misunderstandings, could have enormous consequences. A related concern is that autonomous platforms, such as UAVs, could lower the financial and human cost of war, which could encourage commanders to take greater risks and show less restraint in military operations. This could, in turn, further fuel escalation.

Hyperwar: Hyperwar is a new and still largely theoretical type of conflict, described by thought leaders such as John Allen, Darrell West and Amir Hussain. The concept boils down to highly complex battlefield environments in which advanced AI-driven autonomous weapon systems (AWS) dominate and need to act or react at an ever faster pace. The consequence is that it becomes virtually impossible for commanders or soldiers to follow the course of battle and play a meaningful role in the decision-making process. There is simply no time for that as the speed of battle increases beyond the point of human comprehension. Hyperwars are not only about AI-enhanced AWS, but also involve an unprecedented, highly complex combination of threats and capabilities, including, for example, the use of cyber attacks on offensive or defensive weapon systems or infrastructure.

Table 1 below summarises the strengths and vulnerabilities we have identified when it comes to the ethical considerations and implications of autonomous weapon systems.

 

 

Table 1: Ethical strengths and vulnerabilities of AWS

As the table shows, in some cases, an ethical parameter can be both a strength and a weakness. This often depends on how well developed an autonomous weapon system is. It can also depend on the extent to which important ethical parameters, such as the principles of international humanitarian law, can be safeguarded effectively through programming or through the use of effective human-machine interfaces.

3. Key concepts of the ethical debate

Human in, on and out of the loop: These concepts are interpreted in different ways, but all relate to human-machine interaction. Whether it is for operating, delegating or supervising, the idea is that there always is, or needs to be, a human being involved for purposes of control, responsibility, accountability or ethical assessment. The problem is that many different ‘loops’ may be running at the same time: loops related to broader legal issues (e.g. obtaining permission to strike a target), to political decision-making, or to military decision-making at various levels. One could even consider the different functionalities of unmanned vehicles and systems as separate loops – activating automated flight is one example, a mapping function another. It is therefore doubtful that a human being could be in all the loops at the same time, especially as AI further enhances such vehicles and systems and algorithms calculate and process data ever faster.

For ethics, however, this is a central part of the debate: taking humans out of the loop, or reducing their role in it, during military or security operations could obscure legal liabilities and moral responsibilities.

OODA loop: A constantly repeating cycle of observe-orient-decide-act that forms the basis of military decision making. The idea behind this classic military concept is that the individual or military force that can go through this cycle more rapidly than its opponent has a tactical or strategic advantage. The key limiting factor in the OODA loop has always been the human being, who needs time to process information and make decisions. Artificial intelligence has unprecedented potential to speed up the OODA loop, but the extent to which it will revolutionise the loop depends on how much human control is transferred to the AI system. In an extreme case, AI systems could be allowed to take decisions themselves within the OODA loop, but there are many other possibilities. For example, AI-enhanced systems could quickly analyse vast quantities of data and present the human controller with a suggested course of action or a set of options to choose from. In other words, an AI system could act itself or guide human beings into action. However, as the OODA loop concept is all about tactical and strategic advantages vis-à-vis an enemy, a faster pace of enemy systems may put additional pressure behind the argument to remove human beings as much as possible from the loop.
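
As a schematic illustration of how the level of human involvement changes the ‘decide’ step of the OODA loop, the sketch below models the in/on/out-of-the-loop distinction discussed above. All names and behaviour are simplifying assumptions for illustration, not a description of any real system.

```python
from enum import Enum


class HumanRole(Enum):
    IN_THE_LOOP = "human makes the decision"        # machine only suggests
    ON_THE_LOOP = "human supervises and can veto"   # machine decides unless overruled
    OUT_OF_THE_LOOP = "no human involvement"        # machine decides and acts alone


def decide(options: list[str], machine_choice: str, role: HumanRole,
           human_choice: str | None = None, human_veto: bool = False) -> str:
    """One 'decide' step of the observe-orient-decide-act cycle under a given
    level of human involvement. Purely illustrative."""
    assert machine_choice in options, "machine must pick from the presented options"
    if role is HumanRole.IN_THE_LOOP:
        # The system only presents options; the human selects (or declines to act).
        return human_choice if human_choice in options else "no action"
    if role is HumanRole.ON_THE_LOOP:
        # The system acts on its own choice unless the supervisor intervenes in time.
        return "no action" if human_veto else machine_choice
    # OUT_OF_THE_LOOP: the system both decides and acts.
    return machine_choice


if __name__ == "__main__":
    options = ["track target", "engage target", "no action"]
    for role in HumanRole:
        outcome = decide(options, machine_choice="engage target", role=role,
                         human_choice="track target", human_veto=False)
        print(f"{role.name}: {outcome}")
```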

Black Box: Some AI systems are so complex and opaque that nobody knows exactly how their algorithms process data and how they reach certain outcomes. The black box of AI makes it difficult, for example, to abide by the US Department of Defense's ethical principle of ‘traceability’, which prescribes that ‘relevant personnel possess an appropriate understanding of the technology’ involved and work with ‘transparent methodologies’.

Predictability: The predictability of an autonomous weapon system is reflected in the understanding of how it will function in any given circumstances of its use, and the effects it will produce. 

Reliability: The reliability of an autonomous weapon system is reflected in how consistently the system will function as intended – that is, without system malfunctions or unintended effects.

Automation bias: The human tendency to put too much confidence in automated decision-making systems, including in contexts where machines are less suited to take decisions. Systems with complex algorithmic processes, such as autonomous weapon systems, intensify this tendency because their outputs are often difficult to explain. In other words, an operator or commander cannot easily establish why a system is giving particular suggestions. 

Moral responsibility: Until UAVs become their own moral agents, it is human beings (developers, operators, commanders) that have the ultimate moral responsibility for the design, development and deployment of UAVs. However, the more autonomous systems get (through AI/machine learning), the more difficult it will be to assign this moral responsibility.

Intelligence versus autonomy: Intelligence and autonomy are often used interchangeably but they are not the same. Simply put, intelligence refers to a system’s ability to perform complex tasks and decide on the best course of action to achieve its goals (e.g. adapting to new situations and information). The autonomy of a system refers to the level of freedom it has to perform its tasks and accomplish its objectives.

Hyperwar: A new form of warfare in which autonomous systems and artificial intelligence play an important role. Technological advances revolutionise the speed and scope of war, which means that human decision making is either less important or entirely absent from the classic observe-orient-decide-act (OODA) loop of traditional military operations.

Automation: The performance of tasks or functions by a machine or system with little or no human input. While such functions are generally still bound by pre-programmed, pre-defined rules and options, AI offers unprecedented potential for increased automation.

Automation versus AI: While often used interchangeably, automation generally refers to technology and systems that follow pre-programmed rules to handle tasks and cannot analyze and apply new information in the face of new situations. AI, on the other hand, allows systems to do just that: it allows them to learn and adapt. This is related to a subset of AI called machine learning.
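
The contrast can be illustrated with a toy example: a fixed, pre-programmed rule on the one hand, and a tiny model whose behaviour is derived from training data on the other. The speed-based scenario and all function names below are hypothetical and chosen only to make the distinction concrete.

```python
def automated_rule(speed_kmh: float) -> str:
    """Automation: a fixed, pre-programmed rule. It cannot adapt to new information."""
    return "flag" if speed_kmh > 400 else "ignore"


def train_centroid_model(examples: list[tuple[float, str]]) -> dict[str, float]:
    """A deliberately tiny 'learned' model: the average speed per label.
    This stands in for machine learning -- behaviour is derived from data, not hand-coded."""
    grouped: dict[str, list[float]] = {}
    for speed, label in examples:
        grouped.setdefault(label, []).append(speed)
    return {label: sum(vals) / len(vals) for label, vals in grouped.items()}


def learned_rule(model: dict[str, float], speed_kmh: float) -> str:
    """Classify by the nearest learned centroid; retraining with new data changes
    the behaviour without rewriting any rules."""
    return min(model, key=lambda label: abs(model[label] - speed_kmh))


if __name__ == "__main__":
    training_data = [(120.0, "ignore"), (90.0, "ignore"), (650.0, "flag"), (800.0, "flag")]
    model = train_centroid_model(training_data)
    for speed in (100.0, 500.0, 700.0):
        print(f"{speed} km/h -> rule: {automated_rule(speed)}, learned: {learned_rule(model, speed)}")
```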

Ethics: Ethics is a system of moral principles about what is wrong or right. In the context of AI and security, ethics especially deals with the moral dilemmas and decisions taken surrounding the operations, policies and practices of security actors: defense, intelligence and law enforcement personnel and institutions. 

In the field of security, ethics and AI are generally about two connected sets of relations:

  1. The relations and interaction between human beings and machines/systems;
  2. The effects and impact of these interactions.

Within this field, there are two broad perspectives often found in debates: 

  1. people-centred: issues related to how human beings design, develop, use or interact with machines; and
  2. machine-centred: the ethical behaviour of the machines themselves, often referred to as machine ethics.

Are ethical standards universal? This is a complex philosophical debate. In the practical context of AI and security, we see different ethical standards, principles and viewpoints about what is considered wrong and right. In that sense, there are currently no universal standards about the ethical use of unmanned vehicles and systems. Nevertheless, there is an increasing call for standard ethical guidelines that would provide a level ethical playing field for designers, developers, suppliers and users of AI. UNESCO has, for example, called this “a common global foundation of ethical principles.”

Machine ethics: Relates to machines developing ethical principles or ways of dealing with ethical dilemmas they may encounter. Machines can be programmed to function in an ethically responsible manner, but they could, potentially, also learn how to do this themselves through machine learning. In the latter case, however, the question becomes whether it is possible to control the machine’s interpretation of, or response to, ethical standards.

Difference between ethics and morality: Although often used interchangeably, ethics can be related to the standards of a certain community, while morality can be considered more a personal attribute. To give a simple example: a drone pilot may have his or her own ideas and norms about ‘good and bad’ or ‘right and wrong’ (morality), which may differ from the moral principles of the defense department he or she serves with (ethics).

Difference between ethics and lawfulness or legality: Lawfulness or legality refers to legal standards – to behaviour that is in accordance with the law. It is about our basic rights and obligations as laid down in laws and regulations. Ethics, as a system of moral principles, is not limited by the law. Ethical standards are about right and wrong and do not necessarily have a legal basis. In fact, laws can be at odds with ethical behaviour, and vice versa. Nevertheless, there is a strong relationship between ethics and international law: two key legal principles of International Humanitarian Law (IHL) – also known as the Law of Armed Conflict (LOAC) – have a strong influence on the ethical debate: distinction and proportionality.
