How to deal with risks of AI suffering

Dung L (2023)


Publication Type: Journal article

Publication year: 2023

Journal: Inquiry: An Interdisciplinary Journal of Philosophy

DOI: 10.1080/0020174X.2023.2238287

Abstract

We might create artificial systems which can suffer. Since AI suffering could be astronomical in scale, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and of AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict with either the desideratum of consistency with our epistemic situation or with feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach, based partly on the maximization of expected value and partly on a deliberative approach to decision-making.

How to cite

APA:

Dung, L. (2023). How to deal with risks of AI suffering. Inquiry-An Interdisciplinary Journal of Philosophy. https://dx.doi.org/10.1080/0020174X.2023.2238287

MLA:

Dung, Leonard. "How to deal with risks of AI suffering." Inquiry-An Interdisciplinary Journal of Philosophy (2023).
