Short-Term or Long-Term AI Ethics? A Dilemma for Ethical Singularity Only

Müller VC (2025)


Publication Type: Book chapter

Publication year: 2025

Publisher: Wiley

ISBN: 9781394258840

DOI: 10.1002/9781394258840.ch21

Abstract

There seems to be a dilemma about whether we should direct our efforts in AI ethics towards problems that are clearly visible on the horizon today (short-term) or towards problems that have a significant probability of occurring at some point in the future (long-term), provided they are significant enough. Some authors have argued that we should put a heavy focus on one or the other. I will argue that this is a false dilemma: any rational agent will consider both short- and long-term consequences (as well as other factors). My analysis is that the supposed dilemma rests on the assumption that we are in a very unusual situation that forces us to favor one option, what I call an “ethical singularity”. The only serious argument for this view is that the longer term involves (a) an ethically “significantly different” situation and (b) a demand for a version of “ethical fanaticism”. Both premises turn out to have little in their favor. We should thus return to the “normal balance” of expected utility in AI ethics and address problems across all relevant time horizons concurrently.

How to cite

APA:

Müller, V.C. (2025). Short-Term or Long-Term AI Ethics? A Dilemma for Ethical Singularity Only. Wiley.

MLA:

Müller, Vincent C. Short-Term or Long-Term AI Ethics? A Dilemma for Ethical Singularity Only. Wiley, 2025.
