RAIN+ ETHICS Primer #1
Humanising AI: Ethics and Artificial Intelligence in the Defense Sector
About the primer
This primer aims to provide an introduction to the complex concepts and debates surrounding the ethics of the design, development and deployment of (AI-enhanced) systems for defense purposes. It is the first in a series of RAIN+ ETHICS primers. Other primers can be found here: https://www.raindefense.ai/ethics/primers.
This document will be updated regularly. Feedback or suggestions are welcome and can be sent to primers@raindefense.ai
Executive summary
- The debate about ethics and artificial intelligence essentially boils down to human-machine interaction and how it is changed by AI and machine learning.
- By potentially increasing autonomy and machine agency, AI raises three main ethical concerns:
- What will happen to human control?
- What will happen to moral and legal responsibility?
- What will happen to legal accountability?
- Although humans are still generally ‘in the loop’, much of the ethical debate centres on future scenarios involving increased autonomy and fully autonomous systems and how they relate to human control;
- While there is abundant rhetoric about always maintaining some human control, there are fears that this might not always be realistic when military tactics or strategic advantages outweigh ethical concerns;
- Virtually all AI applications in the defense sector are part of what is called ‘narrow AI’: AI that simulates intelligence or behaviour when performing specific tasks but that is generally bound by pre-determined, pre-defined rules and options;
- There are no universally accepted ethical principles but rather a wide range of guidelines and principles that are often interpreted in different ways.
Index:
- Introduction
- Key positions in the ethical debate
- Key questions of the ethical debate
- Key concepts of the ethical debate
- General concepts
1. Introduction
Artificial Intelligence (AI): Artificial intelligence is the science of replicating or simulating in technology the intelligence or behaviour of human beings, animals or insects. It allows technology, machines and systems to have certain cognitive or behavioural capabilities. AI can be divided into two categories: general and narrow AI. In the most extreme sense, general AI refers to machines and systems that are capable of human-like intelligence. More generally, it allows machines and systems to apply knowledge and skills autonomously to problem-solving and other complex tasks.
Narrow AI is generally limited to specific tasks that simulate intelligence or behaviour which are generally bound by pre-determined rules and options. Narrow AI is the category most commonly used in the defense sector. For example, AI can be used to improve precision targeting for weapons systems such as missiles or drones. Narrow does not mean that the tasks are simple or unimpressive; it means only that they come nowhere close to general AI. Even the most advanced autonomous AI-driven battle tank is still considered narrow AI.
Ethics: The more AI is applied in the defense sector, the more debate there will be on the ethical implications of military applications. This is, of course, not a debate that is limited to the defense sector. In fact, more and more military applications of AI will in the future be driven by developments in the commercial sector. The huge research and development budgets of big tech companies are difficult to divide into AI and non-AI parts but they dwarf spending by governments. So, while some military developments may still spur commercial innovations, the potentially huge markets for commercial applications of AI-enhanced systems, such as delivery robots and self-driving cars, will mean that the spill-over effect will likely be in the opposite direction. That trend is very important from an ethical point of view because the ethical considerations going into the research and development of these commercial applications may not be the same as those needed in a military context.
The debate about ethics and artificial intelligence essentially boils down to human-machine interaction and how it is changed by artificial intelligence and machine learning. The application of artificial intelligence has the potential to alter drastically the human-machine relationship by changing: 1. levels of autonomy; and 2. the consequences and outcomes of using AI-enhanced systems:
- Levels of autonomy: More or less human agency, versus more or less machine agency;
- Consequences and outcomes: The changing autonomy and associated potential to develop new capabilities will have consequences for three main ethical concerns that partly overlap with legal and political considerations:
- What will happen to human control?
- Control over the systems enhanced by AI;
- Control over the environment in which these systems are deployed;
- Control through human-machine interaction in general.
- What will happen to moral and legal responsibility? A change in autonomy could have consequences for the question of who is morally and legally responsible for a certain task or operation.
- What will happen to legal accountability? How does a change in autonomy affect the question of who is ultimately accountable for the consequences of a task or operation?
2. Key positions in the ethical debate
Positions in the ethical debate can be divided in several ways.
First, they can be classified by the amount of human control deemed necessary for the deployment of AI-enhanced systems. An example is given for each position:
- Human-in-the-loop / human agency / human control advocates: Human control should be at the basis of all design, development and deployment of AI-enhanced systems and their respective ethical and legal consequences.
- International Committee of the Red Cross: Links the need for human agency to the moral responsibility and accountability for the decisions to use force, stating that ethical and legal responsibilities of human beings cannot be transferred to machines or algorithms;
- Human-on-the-loop: Systems can be more autonomous in their functions as long as human beings set the operational boundaries and monitor the behaviour of AI-enhanced systems.
- Gen. Terrence J. O’Shaughnessy: Defense systems need to be able to jump into action and move “at the speed of relevance” to react to incoming threats, such as hypersonic missiles;
- Human-out-of-the-loop / tactical or strategic advantage argument: It is unrealistic to expect that human control should be at the basis of all deployments of AI-enhanced systems because doing so would reduce one’s competitive advantage vis-a-vis enemies that do not apply that principle.
- To be able to win future (hyper)wars, you need to remove, as much as possible, the human-in-the-loop or the human agency from the classic ‘observe-orient-decide-act’ process (OODA-loop), otherwise you are bound to lose.
Second, the positions can be divided into three main ethical concerns of the military application of AI-enhanced systems:
- Moral reliability: Machines will not be able to make the moral considerations needed to abide by the laws of armed conflict; their moral agency is inadequate. This position is often countered by others who say that machines using AI could actually be less biased, and therefore more reliable, than human beings because moral considerations can be built into the algorithms of the system;
- Human moral primacy: It does not matter whether machines will ever be able to make such considerations; it is simply morally wrong to allow a machine to be in control of decisions about life and death;
- Responsibility gap: The application of AI-enhanced weapon systems in warfare will further complicate determining the moral and legal responsibility in cases of collateral damage or serious wrongdoing.
Third, international institutions, governments and companies have different positions, with various sets or combinations of guiding ethical principles. Although many different principles exist, research by Jobin et al. (2019) into 84 documents containing ethical principles or guidelines for AI revealed convergence around five core principles. However, these principles are often interpreted very differently by various actors who drafted the guidelines:
- Transparency: Positions related to the importance of, for example, openness, knowledge and understanding about AI systems, how they work and how they are used. Transparency is always a challenging issue in the context of defense, where secrecy can be a strategic advantage, but AI has transformed the debate about transparency because the technology involved is difficult to understand and the consequences of its behaviour are difficult to predict.
- Justice and fairness: A series of discussions often related to the question of the extent to which AI-enhanced systems can respect the principles of non-bias or non-discrimination. There is a paradox at play in the defense sector because one of the key principles in ethical warfare is, in fact, bias: the distinction that needs to be made between civilians and combatants.
- Non-maleficence, safety and security: Various positions that generally agree that systems using AI should not have or be used with bad intentions and should not cause harm to the users or people affected by them. In the context of defense, a discussion about harm necessarily includes concepts such as ‘unintentional’ or ‘disproportionate’ harm.
- Responsibility and accountability: Generally relates to how AI and machine learning affect autonomy and what it means for responsibility and accountability. For example, some argue that greater autonomy must come with greater responsibility, but the question is: whose responsibility? And who is ultimately accountable for the consequences of increased autonomy?
- Privacy: Many AI applications involve surveillance, impactful technologies such as facial recognition and the potential use of personal data. This opens up ethical debates about the extent to which AI infringes on people’s privacy.
The fact that so many entities adopt their own sets of guiding ethical principles already shows that there are currently no sectoral standards, let alone universal standards, when it comes to the ethics of AI applications. This puts serious limits on the governability and enforcement of ethical guidelines.
Fourth, there are the specific positions of governments and defense departments. Given their importance in this debate, the five principles adopted by the US Department of Defense are:
- Responsible: Defense personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed in such a way that personnel have an adequate understanding of the technology, development processes, and operational methods applicable to AI capabilities.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses and the safety, security, and effectiveness of such capabilities will be subject to continuous testing during their life-cycles to ensure reliability and safety.
- Governable: The Department will design and engineer AI capabilities to fulfil their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour.
3. Key questions of the ethical debate
Can machines become moral agents?
This question is essentially about whether a machine could distinguish between right and wrong, and whether it could take calculated moral decisions based on this distinction. In this debate, there is a difference between a) machines (e.g. robots) having or developing their own moral code; and b) machines that are programmed to make certain ethical decisions. In the former case, machines could be considered moral agents because they could develop a moral code independently from their programmers or designers; in the latter case, the machine would not be responsible for its actions because it simply follows the rules and instructions with which it has been programmed. Advances in machine learning could make the distinction between both cases increasingly blurred.
Who has the moral responsibility?
Until machines become their own moral agents, human beings (developers, operators, commanders or politicians) will have the ultimate moral responsibility for the design, development and deployment of machines enhanced with AI. However, the more autonomous systems get through AI/machine learning, the more difficult it will be to assign this moral responsibility.
Does that also mean AI-enhanced autonomous systems cannot be held accountable?
Yes, indeed. Holding autonomous systems accountable is impossible. The deployment of AI-enhanced autonomous systems can currently only influence the moral and legal responsibility and accountability of human beings. For example, mistakes made by a machine or system could either increase the legal accountability (e.g. “you should not have trusted the system to make decisions”) or decrease it (e.g. “you were unaware that this could happen when you decided to trust the system to make decisions”).
How can we ensure that military AI is predictable and reliable?
It is critical to ensure that algorithms are predictable and reliable when AI is used to replace humans in executing tasks and making battle decisions. In machine-learning algorithms, predictability is largely dependent on the quantity and quality of data available. This data is readily available in areas such as medicine or the stock market. Active battle, on the other hand, occurs far less often, making it more difficult to gather sufficient quality data for the training and testing of machine-learning models. The inherent complexity of the battlefield makes this even more difficult. ➤ See primer #2 for more on this discussion.
Will the development of military AI lower the threshold to war?
Some argue that the development and use of AI-enhanced autonomous weapon systems could lower the threshold to war because these systems may be less sensitive to political restraints or escalation thresholds. In general, escalation risks would be particularly significant if AI-enhanced autonomous systems were used more regularly and in close proximity to adversaries with similar capabilities. If something goes wrong, it will happen at a speed that may prevent humans from intervening. Hence, small miscalculations on the part of the system, or even minor misunderstandings, could have enormous consequences. A related concern is that autonomous platforms, such as Unmanned Aerial Vehicles, could replace highly expensive platforms such as fighter jets, which are currently still crucial for air support. This could lower the financial and human cost of a country’s involvement in war or when supporting proxies. Lower costs could encourage politicians and commanders to take greater risks and show less restraint in military operations. This in turn could further fuel escalation. ➤ See primer #2 for more on autonomous weapon systems.
4. Key Concepts of the Ethical Debate
Humans and machines: The debate about artificial intelligence generally refers to various relationships or interactions between human beings (A) and machines (B).
Both A and B can stand for many things in this relationship. For example, A could be a designer of a machine or system using AI, a drone pilot, a politician, or a subject that is using an application or affected by the machine. Similarly, B could refer to a phone application, a delivery robot, an internet browser, or a drone.
Human Machine Interface: What connects the human beings and machines is called the Human Machine Interface, which can be broadly defined as the intermediary that allows operating personnel to interact and communicate with machines or systems. This often takes the form of a user interface or dashboard.
Ethics and non-AI machines: There are many different ethical questions involved even if machines lack artificial intelligence. The most common ones relate to how machines affect human beings and their work. A standard question is, for example, whether assembly robots will put people out of work.
Ethics and AI machines: The difference with non-AI machines is that artificial intelligence has the potential to change the relationship and interaction between human beings and machines in an entirely new way: it might shift the autonomy of machines in ways that are more difficult to predict or control.
Autonomy and weapons: This refers to the increased autonomy of weapons or of individual functions of weapons. Weapon systems could be classified according to four different levels of autonomy (a minimal, purely illustrative code sketch follows the list below):
- Human operated: A human being makes all decisions about what an AI-enhanced system does and is allowed to do. Its behaviour depends on the input of a controller or operator;
- Human delegated (human-in-the-loop): The system has functions it can perform independently of human control, but these have been pre-programmed and are activated/deactivated by human beings. The range of behaviour is built into the system and/or some human input is required;
- Human supervised (human-on-the-loop): Within certain operational boundaries, the system can perform a wide range of activities independently based on external information it receives, while human beings monitor its behaviour. The behaviour is more flexible but still within the boundaries of pre-programmed goals or rules. A human being will oversee the process and can normally intervene if needed;
- Fully autonomous systems (human-out-of-the-loop): Once objectives are set by human beings, the system can translate these into specific tasks based on the external information it receives and operate without any human interaction. For example, a fully autonomous weapon system would be able to select, target and fire without any human intervention. Although there may still be overarching rules that the system cannot violate, it can adapt its behaviour and assess how best to meet the objectives set for it. There is no possibility for human intervention.
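To make the differences in human intervention explicit, here is a minimal Python sketch of the four levels as a simple control gate. It is illustrative only: the level names mirror the list above, while the function may_execute, its inputs and its logic are hypothetical and not drawn from any real system.

from enum import Enum, auto

class AutonomyLevel(Enum):
    HUMAN_OPERATED = auto()    # human makes every decision
    HUMAN_DELEGATED = auto()   # human-in-the-loop: pre-programmed functions, activated by humans
    HUMAN_SUPERVISED = auto()  # human-on-the-loop: system acts, human monitors and can intervene
    FULLY_AUTONOMOUS = auto()  # human-out-of-the-loop: no possibility of intervention

def may_execute(level: AutonomyLevel, human_approved: bool, human_vetoed: bool) -> bool:
    """Hypothetical control gate: may a proposed action proceed under this level of autonomy?"""
    if level in (AutonomyLevel.HUMAN_OPERATED, AutonomyLevel.HUMAN_DELEGATED):
        # Every action, or activation of a pre-programmed function, needs explicit approval.
        return human_approved
    if level == AutonomyLevel.HUMAN_SUPERVISED:
        # The system acts on its own unless the supervising human intervenes.
        return not human_vetoed
    # Fully autonomous: the system acts within pre-set objectives; there is no intervention point.
    return True

The sketch shows why the ethical weight shifts between levels: the same proposed action requires an explicit human decision at the first two levels, only a missing veto at the third, and no human input at all at the fourth.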
Human in, on and out of the loop: These concepts are interpreted in different ways, but all relate to human-machine interaction. Whether it is for operating, delegating or supervising, the idea is that there always is, or needs to be, a human being involved for a range of purposes: control, responsibility, accountability or ethical assessments. The problem is that there might be many different ‘loops’ happening at the same time related to broader legal issues (e.g. getting permission to strike a target), or to political decision-making or military decision-making at various levels. You could even consider the use of different functionalities of autonomous systems as separate loops (e.g. activating automated flight is one; activating a mapping function is another). This means it is doubtful that a human being could be in all loops at the same time, especially as AI further enhances these systems and algorithms calculate and process data ever faster.
Often, AI is also developed and integrated into systems precisely with the purpose of decreasing the burden on operators, for example, by reducing their cognitive load. That means that the resulting reduction of human-machine interaction can actually be desirable and is not always simply a consequence of machine learning or increased autonomy. Any changes to human-machine interaction – intended or unintended – are, nevertheless, a central part of the ethical debate because taking out or reducing the humans in and on the loop when engaging in military operations could obscure legal liabilities and moral responsibilities.
OODA loop: A constantly repeating cycle of observe-orient-decide-act that forms the basis of military decision making. The idea behind this classic military concept is that the individual or military force that can go through this cycle more rapidly than its opponent has a tactical or strategic advantage. The key limiting factor in the OODA loop has always been the human being, who needs time to process information and make decisions. Artificial intelligence has unprecedented potential to speed up the OODA loop, but the extent to which it will revolutionise the OODA loop will depend on how much human control is transferred to the AI system. In an extreme case, AI systems could be allowed to take decisions themselves in the OODA loop, but there are many other possibilities. For example, AI-enhanced systems could also quickly analyse vast quantities of data and present the human controller with a suggested course of action or a set of options to choose from. In other words, an AI system could act itself or guide human beings into action. However, as the OODA loop concept is all about tactical and strategic advantages vis-a-vis an enemy, a faster pace of enemy systems may put additional pressure behind the argument to remove human beings as much as possible from the loop.
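To make the position of the human in this cycle concrete, below is a minimal, purely hypothetical Python sketch of one pass through a stylised OODA loop. The function names observe, orient, decide and act mirror the concept; the data, threat logic and approval parameter are invented for illustration and do not describe any real system.

from typing import Optional
import random

def observe() -> dict:
    # Stand-in for sensor input; a real system would fuse many data sources.
    return {"threat_detected": random.random() > 0.5}

def orient(observation: dict) -> str:
    # Stand-in for AI analysis that condenses raw data into an assessment.
    return "hostile" if observation["threat_detected"] else "clear"

def decide(assessment: str, human_approval: Optional[bool]) -> str:
    # With a human in the loop the system only proposes; without one it decides itself.
    proposed = "engage" if assessment == "hostile" else "hold"
    if human_approval is None:        # human-out-of-the-loop
        return proposed
    return proposed if human_approval else "hold"   # human gate

def act(decision: str) -> None:
    print(f"action taken: {decision}")

# One pass through the cycle with a human gate; a real loop repeats continuously,
# and the speed argument is precisely about removing the slow human step in 'decide'.
act(decide(orient(observe()), human_approval=True))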
OODLA loop: A variation of the OODA loop which includes the ‘L’ for lawyer. This model stresses the evolving nature of the legal part of decision-making. As new technologies such as AI and machine learning empower lower-level or dispersed operators to decide in tactical situations, this potential strategic advantage could be hampered by the need to wait for legal advice or confirmation before an operator can act.
Distinction and proportionality of weapons systems: Distinction and proportionality are two key principles governing the design, development and deployment of weapons. Distinction refers to the capability of a weapon system to distinguish between a legitimate target and an unlawful target (e.g. civilians or their possessions). Proportionality means that, if a weapon meets the requirement of distinction, the effects of its use should also be proportionate in the sense that any collateral damage (e.g. to civilians or infrastructure), or risk thereof, is not excessive when compared to expected military gains. ➤ See primer #3 for more on the principles of distinction and proportionality.
5. General concepts
Big data: In the field of AI and security, big data refers to the accumulation of huge amounts of data (e.g. from sensors placed on vehicles and systems) that are too complex and large for traditional data analysis tools to process. This is where artificial intelligence comes into play: through machine learning, it can help sift through all this data and come up with possible patterns and solutions by itself.
Algorithms: The instructions or rules a machine is programmed to follow when solving certain problems or performing other tasks. Algorithms are the basis of AI. They are a central part of the ethical discussion on AI because they are complex and may lack transparency when it comes to how decisions are made. Former US Secretary of Defense Ash Carter said of this: “AI algorithms are obscure. It is very hard to deconvolve how an inference was made in many cases.”
Machine learning: Part of AI, machine learning refers to algorithms that can identify patterns and learn how to make predictions and decisions when faced with new circumstances, without being explicitly programmed to do so. In unmanned vehicles and systems, the ethical question is not so much about what a machine can learn, but about the possible consequences: how will the machine act on the basis of new learning? And who would be responsible for this behaviour?
Deep learning: Part of machine learning, deep learning is done by advanced algorithms that can, by imitating neural networks of the human brain, extract higher-level features (e.g. patterns, analysis, options) from raw data.
Deep neural networks: Part of deep learning, deep neural networks consist of many different layers of analysis, in which each layer represents a mathematical calculation process. This allows these networks to model more complex processes or relationships.
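As a rough illustration of the idea that each layer is a mathematical calculation, the short Python sketch below passes raw input through two layers to produce higher-level features. It uses NumPy; the layer sizes and random weights are arbitrary assumptions for demonstration, not a real or trained model.

import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # One layer = one calculation step: a weighted sum followed by a non-linearity (ReLU).
    return np.maximum(0.0, weights @ x)

x = rng.random(8)            # raw input features (e.g. sensor readings)
w1 = rng.random((16, 8))     # first layer: 8 inputs -> 16 intermediate features
w2 = rng.random((4, 16))     # second layer: 16 features -> 4 higher-level features

hidden = layer(x, w1)        # lower-level features extracted from the raw data
output = layer(hidden, w2)   # higher-level features built on the layer below
print(output.shape)          # (4,)

In a trained network the weights are learned from data rather than drawn at random, and there are typically many more layers, which is what makes the resulting calculations difficult to trace (see ‘Black Box’ below).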
Ethics: Ethics is a system of moral principles about what is wrong or right. In the context of AI and security, ethics especially deals with the moral dilemmas and decisions taken surrounding the operations, policies and practices of security actors – defense, intelligence and law enforcement personnel and institutions.
Ethics and AI are generally about two connected sets of relations:
- The relations and interaction between human beings and machines/systems;
- The effects and impact of these interactions.
Within this field, there are two broad perspectives that one often finds in debates:
- people-centred: issues related to how human beings design, develop, use or interact with machines; and
- machine-centred: the ethical behaviour of the machines themselves, often referred to as machine ethics.
Machine ethics: Relates to machines developing ethical principles or ways to deal with ethical dilemmas they may encounter. Machines can be programmed to function in an ethically responsible manner but they could, potentially, also learn how to do this themselves through machine learning. In the latter case, however, the question becomes whether it is possible to control the machine’s interpretation of or response to ethical standards.
Ethical AI: The application or functioning of AI according to certain ethical principles and values. There is no consensus about what ‘ethical’ AI entails.
Black Box: The phenomenon that nobody knows exactly how AI algorithms process data and how they reach certain outcomes because the systems are so complex and opaque. The black box of AI, for example, makes it difficult to abide by the US Department of Defense’s ethical principle of ‘traceability’, which prescribes that ‘relevant personnel possess an appropriate understanding of the technology’ involved and work with ‘transparent methodologies’. Some argue that testing a new AI system might help prevent undesired outcomes of the ‘black box’, but would users then do this in real-life situations (so that the system can learn by doing) or keep testing it in a laboratory environment to make sure it abides by ethical and legal standards? The importance of strategic and tactical competitive advantage might urge actors to field AI systems before they are properly tested.
Difference between ethics and morality: Although often used interchangeably, ethics can be related to the standards or rules of a certain community, while morality can be considered more a personal attribute. What an individual holds to be wrong or right (a person’s moral principles or values) can be in line with or contradict the ethics of a social group, institution, profession or other type of community. To give a simple example: A drone pilot may have his or her own ideas and norms about ‘good and bad’ or ‘right and wrong’ (morality), which may be different from the ethical principles of the defense department with which he or she serves (ethics).
Difference between ethics and lawfulness or legality: Lawfulness or legality refer to legal standards, to behaviour that is in accordance with the law. It is about our basic rights and obligations as laid down in laws and regulations. Ethics, as a system of moral principles, is not limited by the law. Ethical standards concern what is right and wrong and do not necessarily have a legal basis. In fact, laws can be at odds with ethical behaviour, and vice versa. Nevertheless, there is also a strong relationship between ethics and international law: the two key legal principles of International Humanitarian Law (IHL, also known as the Law(s) of Armed Conflict or LOAC), distinction and proportionality, have a strong influence on the ethical debate.