As AI systems become increasingly autonomous, a recurring ethical question arises: could machines become moral agents?

This question is essentially about whether a machine could distinguish between right and wrong, and whether it could make deliberate moral decisions on the basis of that distinction. At RAIN ETHICS we like to think about this in two ways:

First, machines could be programmed to make certain ethical decisions. In this case, moral agency remains with the programmers, or with those responsible for deploying the machine. The machine simply follows the rules and instructions it has been given.
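To make this first case concrete, here is a minimal sketch in Python, with entirely hypothetical rule names, of what such hard-coded ethics might look like. Every verdict, including the fallback for unknown actions, is chosen in advance by the humans who wrote the table, which is exactly why the moral agency stays with them:

```python
# Hypothetical, programmer-defined verdicts. The machine never reasons
# about right and wrong; it only looks up what its designers decided.
RULES = {
    "harm_human": "forbidden",
    "deceive_user": "forbidden",
    "assist_user": "permitted",
}

def evaluate_action(action: str) -> str:
    """Return the verdict the programmers encoded for this action.

    Even the default for actions nobody anticipated is a human choice,
    not a moral judgment made by the machine itself.
    """
    return RULES.get(action, "requires_human_review")

print(evaluate_action("assist_user"))   # permitted
print(evaluate_action("harm_human"))    # forbidden
print(evaluate_action("write_poetry"))  # requires_human_review
```

However sophisticated the rule table becomes, a machine of this kind is an instrument of its designers' ethics rather than a moral agent in its own right.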

Second, machines could theoretically develop their own moral code. In that case, machines could be considered moral agents, because they would be making moral decisions independently of their programmers or designers.

Advances in machine learning may increasingly blur the distinction between these two cases, but for the moment we are still talking mostly about future scenarios.
