A-Z GLOSSARY OF AI ETHICS IN DEFENSE

About the glossary

This glossary contains key terms and concepts of AI ethics in defense. It includes general terms such as AI and machine learning and also ethics-specific terms such as ‘Responsible AI’ and ‘Explainable AI’. It will be updated over time.

Suggestions? Please email us at ethics@raindefense.ai.

Accountability:

Accountability refers to the need to be able to identify who is responsible when something goes wrong in the design, development or deployment of an AI system. Accountability ensures legal compliance (legal accountability) and designates moral responsibility. The complexity of autonomous AI-enhanced systems (also see black box) can result in a lack of accountability, also known as an accountability gap.

Adversarial Robustness:

Adversarial robustness refers to the ability of an AI system or model to maintain reliable performance when facing deliberate attempts to deceive or manipulate it, for example through adversarial inputs crafted to induce errors or through hacking by outside actors.
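
A minimal sketch of the kind of manipulation robustness is meant to resist, using an invented toy linear classifier (the weights, inputs and step size below are all hypothetical): a small, deliberately crafted perturbation of the input flips the model’s output.

```python
import numpy as np

w = np.array([1.5, -2.0, 0.5])    # hypothetical model weights
b = 0.1

def predict(x):
    """Return the model's score (probability-like) for input features x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, 0.5, 0.1])     # a benign input, scored below 0.5
print("clean score:", predict(x))

# Perturb the input in the direction that most increases the score
# (for this linear model that direction is simply sign(w)).
epsilon = 0.3
x_adv = x + epsilon * np.sign(w)
print("perturbed score:", predict(x_adv))   # now pushed above 0.5
```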

Algorithms:

The instructions or rules a machine is programmed to follow when solving certain problems or performing other tasks. Algorithms are the basis of AI. They are a central part of the ethical discussion on AI because they are complex and as a result may lack transparency when it comes to how “decisions” are made by the system.
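
As a purely illustrative sketch, the toy function below shows what ‘instructions or rules a machine is programmed to follow’ can look like in practice; the thresholds, labels and field names are invented and not drawn from any real system.

```python
# A deliberately simple, hypothetical algorithm: a fixed set of rules applied
# to sensor readings. All thresholds and names are invented for illustration.
def classify_track(speed_kmh: float, altitude_m: float) -> str:
    """Apply pre-programmed rules to a detected track and return a label."""
    if speed_kmh > 900 and altitude_m > 10_000:
        return "fast high-altitude track: flag for review"
    if speed_kmh < 50:
        return "slow track: likely not of interest"
    return "unclassified track: needs human assessment"

print(classify_track(speed_kmh=950, altitude_m=12_000))
print(classify_track(speed_kmh=30, altitude_m=100))
```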

Artificial General Intelligence (AGI):

Artificial general intelligence (AGI) is the ability of software to understand or learn any human cognitive task. To put it simply, AGI is a system with general intelligence similar to humans and capable of finding solutions for unfamiliar tasks. All current use of AI can still be described as narrow AI. The feasibility and desirability of AGI are debated by AI experts, technology leaders and ethicists.

Artificial Intelligence (AI):

Artificial intelligence is the science of replicating or simulating in technology the intelligence or behaviour of human beings, animals or insects. It allows technology, machines and systems to have certain cognitive or behavioural capabilities. AI can be divided into two categories: Narrow AI and Artificial General Intelligence (AGI).

Automation: 

Automation generally refers to technology and systems that follow pre-programmed rules to carry out tasks with minimal human input. In the context of defense, automation is used for munitions, platforms and operating systems with the purpose of improving efficiency, speed and reliability. Automated systems differ from AI-enhanced autonomous systems as they cannot analyse and apply new information in the face of new situations.

Automation Bias:

This refers to a human tendency to put too much confidence in automated decision-making systems, including in contexts where machines are less suited to take decisions. Systems with complex algorithmic processes, such as autonomous weapon systems, intensify this tendency because their outputs are often difficult to explain. In other words, an operator or commander cannot easily establish why a system is giving particular suggestions.

Autonomous Weapons Systems (AWS):

Autonomous weapon systems (AWS) are systems that use sensory data and computer algorithms to independently search for, identify and engage targets based on programmed constraints and descriptions. AWS can include munitions, platforms and operating systems. There is no consensus about the definition of AWS.

Autonomy:

Autonomy, or mission autonomy, refers to an autonomous system’s ability to independently formulate and choose from various courses of action in pursuit of specific objectives. In contrast to automation, which concerns technology or systems that follow pre-programmed rules to carry out specific tasks using well-defined criteria, autonomy involves machines and systems that can learn and adjust to changing environments. The key to autonomy is AI, which allows these systems, through machine learning, to become (increasingly) autonomous. Autonomy lies at the heart of the ethical debate as it raises questions about accountability and human agency.

Big Data:

In the field of AI and security, big data refers to the accumulation of huge amounts of data (e.g. from sensors and social media) that are too complex and large for traditional data analysis tools to process. Artificial intelligence and machine learning make it possible to sift through all this data and recognise, identify and classify objects and patterns. Vice versa, AI systems also need big data to train, test and improve machine learning algorithms.

Black Box:

The ‘black box problem’ refers to the phenomenon of complex AI systems, such as deep neural networks, reaching conclusions and producing outputs that are not evident or easily explainable to humans. The nature of complex algorithms makes it difficult to decipher the system’s processing and understand why it produced a certain output. The black box problem poses challenges to the ethical principles of predictability, traceability, transparency and trustworthiness.

Data Governance:

Data governance is the process of managing the availability, integrity, quality, security and usability of data based on protocols and policies that control data use. Effective data governance ensures consistency, confidentiality, privacy, security and trustworthiness of data. In defense, governments, industry and academia are key stakeholders involved in data governance.

Deep Learning:

Deep learning is a subset of machine learning where advanced algorithms imitate neural networks of the human brain, extracting higher-level features (e.g. patterns, analysis, options) from raw data.

Deep Neural Networks (DNN):

Part of deep learning, deep neural networks consist of many different layers of analysis, in which each layer represents a mathematical calculation process. This allows these networks to model more complex processes or relationships.
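
As an illustrative sketch (all weights below are random and hypothetical, not taken from any real system), a tiny network shows the layered structure described above: each layer is a matrix calculation followed by a non-linear function, and deeper networks simply stack many such layers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # raw input features (hypothetical)

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1 parameters
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # layer 2 (output) parameters

h = np.maximum(0, W1 @ x + b1)                  # hidden layer: linear map + ReLU
logits = W2 @ h + b2                            # output layer: two class scores
probs = np.exp(logits) / np.exp(logits).sum()   # softmax turns scores into probabilities
print(probs)
```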

Distinction and Proportionality:

Distinction and proportionality are two key principles governing the design, development and deployment of weapons. Distinction refers to the capability of a weapon system to distinguish between a legitimate target and an unlawful target (e.g. civilians or their possessions). Proportionality means that, if a weapon meets the requirement of distinction, the effects of its use should also be proportionate in the sense that any collateral damage (e.g. to civilians or infrastructure), or risk thereof, is not excessive when compared to expected military gains.

Ethical AI:

Ethical AI is the application or functioning of AI according to certain ethical principles and values. There is no consensus about what ‘ethical’ AI entails.

Ethics:

Ethics is a system of moral principles about what is wrong or right. In the context of AI and security, ethics especially deals with the moral dilemmas and decisions taken surrounding the operations, policies and practices of security actors: defense, intelligence and law enforcement personnel and institutions.

Explainable AI:

Explainable AI (sometimes called XAI, Interpretable AI or Explainable Machine Learning – XML) refers to techniques and methods that help human operators understand and interpret predictions, outputs or results produced by machine learning models and systems. In a way, explainable AI would solve the (ethical) challenge of the black box problem that is often related to complex AI and machine learning algorithms and systems.
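
One simple, model-agnostic idea behind such techniques can be sketched as follows: perturb each input feature in turn and observe how much the model’s output changes. The model, weights and feature names below are invented for illustration only.

```python
import numpy as np

w = np.array([2.0, -0.5, 1.2])            # hypothetical model weights
feature_names = ["speed", "heading_change", "signal_strength"]

def model(x):
    return float(w @ x)                   # stand-in for any black-box model

x = np.array([0.8, 0.1, 0.5])
baseline = model(x)

for i, name in enumerate(feature_names):
    x_perturbed = x.copy()
    x_perturbed[i] = 0.0                  # "remove" one feature
    delta = baseline - model(x_perturbed) # how much the output depended on it
    print(f"{name}: contribution = {delta:+.2f}")
```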

Human in the Loop:

Human-in-the-loop (or human delegated) refers to the principle that human agency or control should, at least to some extent, be part of the design, development and deployment of AI-enhanced systems and their respective ethical and legal consequences. For example, the International Committee of the Red Cross links the need for human agency to the moral responsibility and accountability for the decisions to use force, stating that ethical and legal responsibilities of human beings cannot be transferred to machines or algorithms. The challenge, however, is that ‘the loop’ can mean many things, from the decision-making process or legal accountability chain to the sensors, data or machine learning processes used.

Human on the Loop:

Human-on-the-loop (or human supervised) refers to the situation where human beings are no longer directly involved, but still monitor the AI system and can override it if necessary. For example, a UAV is able to perform a wide range of activities independently based on external information it receives within certain operational boundaries, while a human operator monitors its behaviour. The behaviour is more flexible than under human delegated systems, but is still within the boundaries of pre-programmed goals or rules.

Human out of the Loop:

Human-out-of-the-loop (or fully autonomous system) refers to AI systems that perform specific tasks based on external information they receive and operate without any human interaction. For example, a fully autonomous weapon system would be able to select, target and fire without the need for human intervention. Although there may still be overarching rules that the system cannot violate, it can adapt its behaviour and assess how to best meet set objectives. In the current deployment of AI-enhanced autonomous systems, there is always still a human in the loop, and no actors advocate taking the human fully out of the loop, as doing so raises obvious concerns about accountability and human control.

Human-Machine Interface (HMI):

The human-machine interface connects human beings with machines. In defense, the HMI is the intermediary that allows operating personnel to interact and communicate with machines or systems through a user interface or a dashboard. The HMI is relevant for the ethical debate as its functioning determines the human operator’s ability to effectively control, monitor and understand the machine’s actions.

Human-Machine Interaction:

Human-machine interaction refers to the relation between human beings and AI-enhanced machines. More specifically, it refers to the communication and interaction between a human being and a machine through a certain type of user interface (see human-machine interface). The debate about ethics and AI essentially boils down to the question of how AI and machine learning affect human-machine interaction. The application of AI has the potential to drastically alter the human-machine relationship by changing two variables: the levels of autonomy and the consequences of using AI-enhanced systems.

Hyperwar:

Hyperwar is a new form of warfare in which autonomous systems, AI and other emerging technologies play an essential role. Technological advances revolutionise the speed and scope of war, which means that human decision making is either less important or entirely absent from the classic observe-orient-decide-act (OODA) loop of traditional military operations.

Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR):

ISTAR is the activity of equipping armed forces with information and intelligence to assist in combat roles and other operational duties. Uncrewed vehicles and systems use a variety of sensors and functions to collect data, which is then shared with personnel to improve situational awareness.

International Humanitarian Law (IHL):

IHL, also known as the laws of war or the laws of armed conflict, is a set of rules governing the use of force in armed conflict. Its purpose is to restrict the means and methods of warfare and to protect persons who are not, or are no longer, taking part in fighting. IHL is codified in the Geneva Conventions of 1949 and the Additional Protocols of 1977. IHL contains several principles restricting the means and methods of warfare. Distinction, proportionality, military necessity and feasible precaution are the most important principles governing the use of force.

International Humanitarian Law (IHL) Programming:

The idea that (parts of) IHL could be coded and integrated into AI-enhanced (weapon) systems. There is no consensus about whether AI can be designed to comply with legal rules such as the principles of distinction and proportionality. The core questions are to what extent a machine could effectively perform legal assessments and what this would mean for legal accountability. If a machine had such capabilities, it would also raise various ethical concerns.
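
Purely as an illustration of what ‘coding’ such rules could mean in the simplest possible terms, the toy sketch below hard-codes stand-in checks for distinction and proportionality. Every threshold and input here is invented, and the sketch takes no position on whether such checks could ever adequately capture IHL.

```python
# Illustrative toy only: simplistic stand-ins for distinction and
# proportionality checks. All thresholds and inputs are invented.
def engagement_permitted(target_confidence: float,
                         expected_collateral: float,
                         expected_military_advantage: float) -> bool:
    """Return True only if both simplistic rule checks are satisfied."""
    distinction_ok = target_confidence >= 0.99       # invented threshold
    proportionality_ok = expected_collateral <= 0.1 * expected_military_advantage
    return distinction_ok and proportionality_ok

print(engagement_permitted(0.95, 2.0, 10.0))  # False: fails the toy distinction check
```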

Legal Accountability:

Legal accountability relates to the legal processes and norms that hold people legally responsible for their behaviour and that prescribe sanctions in case of violations. The key ethical questions are to what extent AI systems could eventually be held legally accountable in the future and how this would affect the legal accountability of the human (operator) involved.

Lethal Autonomous Weapon Systems (LAWS):

Lethal autonomous weapon systems (LAWS) are a type of weapon system that uses sensors and computer algorithms to independently search for, identify and engage targets based on programmed constraints and descriptions. They are different from autonomous weapon systems in that they are specifically designed to target people. The existence and desirability of LAWS are heavily debated among countries, advocacy groups and scientists.

Machine Ethics:

Machine ethics relates to machines developing ethical principles, or ways of dealing with ethical dilemmas they may encounter. Machines can be programmed to function in an ethically responsible manner, but they could, potentially, also learn how to do this themselves through machine learning. In the latter case, however, the question becomes whether it is possible to control the machine’s interpretation of, or response to, ethical standards.

Machine Learning:

Machine learning enables machines and systems to learn from the data they receive, to analyse and apply new information in new situations, and to recommend or take decisions that go beyond pre-programmed options.
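
A minimal sketch of what ‘learning from data’ means in practice (the data here is synthetic and the model deliberately trivial): rather than hand-coding a rule, the parameters of a simple model are fitted to examples.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 2))                          # synthetic input examples
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.05, 100)  # noisy targets

w = np.zeros(2)                               # parameters to be learned from the data
for _ in range(2000):                         # plain gradient descent
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of the mean squared error
    w -= 0.1 * grad

print("learned weights:", w)   # close to the underlying [3.0, -1.0] relationship
```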

Manned-Unmanned Teaming (MUM-T):

MUM-T refers to a collaborative network of crewed and uncrewed systems as well as soldiers that allows a single operator to team with one or multiple uncrewed systems. MUM-T is a key strategic priority for militaries as it enhances decision-making and the level of interoperability between ground forces, crewed aircraft and uncrewed aircraft. One example of MUM-T technology is the US Air Force’s Skyborg programme in which uncrewed combat aerial vehicles (UCAVs) are developed to function as a ‘loyal wingman’ to crewed aircraft.

Meaningful Human Control:

Meaningful human control is a concept that states the need for human agency or control over AI-enhanced autonomous systems, the environment in which they are deployed, or control over the human-machine interaction in general. There is no consensus about what ‘meaningful’ control means or which human it relates to (e.g. a programmer, operator, commander or politician). A key ethical question is also how ‘meaningful’ human control can be when autonomous systems can or have to make decisions in milliseconds.

Moral Responsibility:

Moral responsibility refers to the responsibility of human beings (as moral agents) for their actions and their consequences. There is no consensus about whether machines are or could become moral agents. For the ethical debate, the key questions are whether human beings (developers, operators, commanders or politicians) would always have the ultimate moral responsibility for the design, development and deployment of AI systems and whether, especially in the face of increased autonomy, it will become more difficult to assign the moral responsibility in the future.

Munitions:

Munitions are guided weapons used to destroy a specific target while minimising collateral damage. Munitions with varying degrees of autonomy have existed for quite some time. For example, the Advanced Medium Range Air-to-Air Missile (AMRAAM) used by countries around the world is equipped with a radar system to provide a guidance signal. The AMRAAM is a so-called fire-and-forget missile, which means that it does not require further guidance after launch. Missiles are becoming more advanced with the evolution of artificial intelligence. Image recognition software enables missiles to autonomously engage pre-defined targets with greater accuracy.

Narrow AI:

Narrow AI is limited to specific tasks that simulate intelligence or behaviour and are generally bound by predetermined rules and options. Narrow AI is the category most commonly used in the defense sector. For example, AI can be used to improve precision targeting for weapons systems such as missiles or drones. Narrow does not mean that the tasks are simple or unimpressive; it means only that they are not close to a general AI capability. Even the most advanced autonomous AI-driven battle tank is still considered narrow AI.

OODA Loop:

The OODA loop is the constantly repeating cycle of observe–orient–decide–act that forms the basis of military decision making. The idea behind this classic military concept is that the individual or military that can go through this cycle more rapidly than its opponent has a tactical or strategic advantage. The key limiting factor in the OODA loop has always been the human being, who needs time to process information and make decisions. AI has unprecedented potential to speed up the OODA loop, but the extent to which it will revolutionise the OODA loop will depend on how much human control is transferred to the AI system.

OODLA Loop:

A variation of the OODA loop which includes the ‘L’ for lawyer. This model stresses the evolving nature of the legal part of decision-making. As new technologies such as AI and machine learning empower lower-level or dispersed operators to decide in tactical situations, this potential strategic advantage could be hampered by the need to wait for legal advice or confirmation before an operator can act.

Operational Systems:

An operational system represents the level of command that brings together the details of tactics with the goals of strategy. At the operational level, commanders use their skills, knowledge, experience and judgement to strategise, plan and organise for the deployment of military force. Operational decision-making requires strong situational awareness and an understanding of the full context.

Platforms:

Platforms are vehicles or facilities that carry and use equipment with particular military purposes required in the field. Tanks, ships, satellites and uncrewed aerial vehicles (UAVs) are all examples of platforms. Platforms are used for a variety of military purposes, such as surveillance, electronic warfare and launching missiles or other weaponry.

Privacy:

Privacy in the context of military AI refers to the design, development and deployment of AI that respects the autonomy and dignity of individuals whose data is collected, stored and analysed. AI applications such as surveillance and facial recognition risk infringing upon the privacy of individuals.

Proportionality and Distinction:

See distinction and proportionality.

Responsible AI:

Responsible military AI refers to the design, development and deployment of AI that upholds principles such as accountability, explainability, fairness, privacy, safety, security or transparency. It is a broad concept with different interpretations around the world.

Safety:

Safety is often used together with (or as a synonym for) robustness, security and reliability. In general, it relates to AI systems not producing unsafe outcomes for the users of AI systems and those affected by their operation. More specifically, it relates to AI system failure, whether as a result of the system itself or caused by adversaries. In the case of the latter, the term adversarial robustness is also used to describe an AI system that, for example, is hack-proof and cannot be manipulated by others.

Swarming:

A swarm is a multitude of systems that interact and operate as a collective capable of accomplishing a shared objective. This capability is underpinned by distributed AI that models the collaborative behaviour exhibited by insects and birds. Swarms are usually composed of small multirotor UAVs, mini-helicopters or tube-launched loitering munitions with heterogeneous capabilities, such as surveillance, electronic warfare and precision targeting.
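
As an illustrative sketch of the kind of distributed rules (inspired by flocking birds and insects) that can underpin such collective behaviour, the toy simulation below lets each agent steer using only the positions of the other agents; all parameters are invented and it does not represent any real swarming system.

```python
import numpy as np

rng = np.random.default_rng(2)
positions = rng.uniform(0, 100, size=(10, 2))     # ten agents on a 2-D plane
velocities = rng.normal(0, 1, size=(10, 2))

for _ in range(50):                               # simulate a few time steps
    centre = positions.mean(axis=0)
    cohesion = 0.01 * (centre - positions)        # drift towards the group centre
    separation = np.zeros_like(positions)
    for i in range(len(positions)):
        diff = positions[i] - positions           # vectors away from every other agent
        dist = np.linalg.norm(diff, axis=1) + 1e-9
        close = dist < 5.0                        # only repel very close neighbours
        separation[i] = (diff[close] / dist[close, None]).sum(axis=0)
    velocities += cohesion + 0.05 * separation
    positions += 0.1 * velocities

print("spread of the swarm after simulation:", positions.std(axis=0))
```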

Systems with Autonomous Capabilities (SAC):

Systems with autonomous capabilities can perform certain tasks autonomously. For example, a UAV with autopilot can fly autonomously and a system equipped with image recognition software can be used to autonomously identify pre-defined targets. In contrast to fully autonomous weapon systems, these systems already exist.

Traceability:

The traceability principle states that military AI algorithms should be developed together with mechanisms for documenting and monitoring the development processes and operational functioning of the algorithms. This will ensure increased transparency surrounding the dataset used to train and evaluate algorithms, the variables used in the AI models and potential biases. Traceability is considered a key requirement for trustworthy AI.
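
A minimal sketch of one such documentation mechanism, with invented file names and fields: recording a fingerprint of the training data, the model version and the settings used, so that a deployed model can later be traced back to how it was built.

```python
import hashlib, json, datetime

def dataset_fingerprint(path: str) -> str:
    """Hash the raw training data so the exact dataset can be re-identified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

record = {
    "model_version": "0.3.1",                     # hypothetical version tag
    "training_data_sha256": dataset_fingerprint("training_data.csv"),
    "hyperparameters": {"learning_rate": 0.1, "epochs": 20},
    "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

with open("model_record.json", "w") as f:
    json.dump(record, f, indent=2)                # audit trail for later review
```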

Transparency:

Transparency relates to the explainability, interpretability, traceability and understandability of AI systems throughout their lifecycle. To counter the opacity (see black box) and complexity of AI algorithms, transparency expresses the need to understand, explain and communicate how AI models and systems have reached certain outcomes. Transparency is considered a key requirement for trustworthy AI by militaries.

Trustworthy AI:

Trustworthy AI is related to the concept that AI should be lawful, traceable, ethical and robust (or safe in the sense of not causing any unintentional harm). More practically, the term is also used for the trust the operator (e.g. a soldier or commander) needs to have in the AI system to do the right thing, such as producing the right options to guide decision-making.

Uncrewed Combat Aerial Vehicles (UCAV):

Uncrewed combat aerial vehicles are advanced uncrewed systems used predominantly for covert ISR missions and strategic bombing. Three attributes define the UCAV segment: stealth (to gain access to enemy territory undetected), hypersonic speed and agility, and serving as an “attritable aerial asset”, meaning a relatively low-cost asset whose loss during a mission can be expected and afforded.

Uncrewed Systems:

Uncrewed systems are vehicles (aerial, space, terrestrial or marine) and their corresponding components, such as sensors or software, that can perform tasks without a human operator on board. They are central to the ethical debate, especially if their level of autonomy is relatively high on the continuum that runs from remotely human operated to fully autonomous.

Unsupervised Learning:

Unsupervised learning is a type of machine learning in which the machine is trained on unlabelled data without any guidance. Unsupervised learning is used to discover patterns and structure in data. The ethical discussion focuses, for example, on possible bias in the training data as well as on the lack of human oversight during the process.
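
A minimal sketch of one common unsupervised learning technique, k-means clustering, applied to synthetic data: the points carry no labels, yet the algorithm groups them into clusters. Everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0, 1, (50, 2)),       # two unlabelled groups of points
                  rng.normal(5, 1, (50, 2))])

k = 2
centres = data[rng.choice(len(data), k, replace=False)]   # random starting centres
for _ in range(20):                                        # standard k-means iterations
    # assign each point to its nearest centre
    labels = np.argmin(np.linalg.norm(data[:, None] - centres[None], axis=2), axis=1)
    # move each centre to the mean of the points assigned to it
    centres = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print("cluster centres found:", centres)
```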