Scientific and technological innovation has always sparked debate. Today, as artificial intelligence looks set to change forever the ways humans and machines interact, increased autonomy is one of the biggest prizes of this revolution – most people can see how transformative self-driving cars could be for our big cities. In the defense sector, however, this increased autonomy is at the heart of an ongoing ethical debate. The crux of this dilemma can be summarised in three questions:
First, human control. How much human control are we willing to give up? Do we allow a machine to make life-and-death decisions?
Second, responsibility. What happens to the moral responsibility of the human beings developing or using such systems? Could part of that responsibility even be transferred to machines in the future?
And third, accountability. When machines start making more decisions, is it still possible to establish who is accountable in case something goes wrong?
In the next three videos, RAIN ETHICS will explore each of these three principles in more detail.