Current cases of AI misalignment and their implications for future risks

Dung L (2023)


Publication Type: Journal article

Publication year: 2023

Journal: Synthese

Journal Volume: 202

Article Number: 138

Journal Issue: 5

DOI: 10.1007/s11229-023-04367-0

Abstract

How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models and game-playing agents are sometimes misaligned. These cases suggest that misalignment tends to have a variety of features: misalignment can be hard to detect, predict, and remedy; it does not depend on a specific architecture or training paradigm; it tends to diminish a system’s usefulness; and it is the default outcome of creating AI via machine learning. Subsequently, based on these features, I show that the risk of AI misalignment magnifies with respect to more capable systems. Not only might more capable systems cause more harm when misaligned, but aligning them should also be expected to be more difficult than aligning current AI.

How to cite

APA:

Dung, L. (2023). Current cases of AI misalignment and their implications for future risks. Synthese, 202(5), Article 138. https://doi.org/10.1007/s11229-023-04367-0

MLA:

Dung, Leonard. "Current cases of AI misalignment and their implications for future risks." Synthese, vol. 202, no. 5, 2023, article 138, doi:10.1007/s11229-023-04367-0.
