Newsletter of Phenomenology

Keeping phenomenologists informed since May 2002



(2015). Beyond artificial intelligence. Dordrecht: Springer.

Moral enhancement and artificial intelligence: moral AI?

Julian Savulescu, Hannah Maslen

pp. 79-95

This chapter explores the possibility of moral artificial intelligence – what it might look like and what it might achieve. Against the backdrop of the enduring limitations of human moral psychology and the pressing challenges inherent in a globalised world, we argue that an AI that could monitor, prompt and advise on moral behaviour could help human agents overcome some of their inherent limitations. Such an AI could monitor physical and environmental factors that affect moral decision-making, could identify and make agents aware of their biases, and could advise agents on the right course of action, based on the agent's moral values. A common objection to the concept of moral enhancement is that, since a single account of right action cannot be agreed upon, the project of moral enhancement is doomed to failure. We argue that insofar as this is a problem, it is a problem for some biomedical interventions, but an agent-tailored moral AI would not only preserve pluralism of moral values but would also enhance the agent's autonomy by helping him to overcome his natural psychological limitations. In this way, moral AI has an advantage over other forms of biomedical moral enhancement.

Publication details

DOI: 10.1007/978-3-319-09668-1_6

Full citation:

Savulescu, J., Maslen, H. (2015). Moral enhancement and artificial intelligence: moral AI?, in J. Romportl, E. Zackova & J. Kelemen (eds.), Beyond artificial intelligence, Dordrecht: Springer, pp. 79-95.
