Risks of general artificial intelligence

Müller VC (2014)


Publication Type: Journal article

Publication year: 2014

Journal: Journal of Experimental and Theoretical Artificial Intelligence

Volume: 26

Issue: 3

Pages: 297-301

DOI: 10.1080/0952813X.2014.895110

Abstract

The papers in this special volume of the Journal of Experimental and Theoretical Artificial Intelligence are the outcome of a conference on the 'Impacts and Risks of Artificial General Intelligence' (AGI-Impacts) that took place at the University of Oxford on 10 and 11 December 2012. Schneier presented the thesis of his book Liars and Outliers: Enabling the Trust that Society Needs to Thrive, arguing that security needs to mature into a wider discipline that crucially relies on establishing and maintaining trust, a trust that is undermined by many actors, including state agents. The paper by Omohundro introduces the problem of risk and presses the point that even an innocuous artificial agent, such as one programmed to win chess games, can very easily turn into a serious threat to humans. Goertzel explains how his 'Goal-Oriented Learning Meta-Architecture' may be capable of preserving its initial caring goals while learning and improving its general intelligence.

How to cite

APA:

Müller, V.C. (2014). Risks of general artificial intelligence. Journal of Experimental and Theoretical Artificial Intelligence, 26(3), 297-301. https://dx.doi.org/10.1080/0952813X.2014.895110

MLA:

Müller, Vincent C. "Risks of general artificial intelligence." Journal of Experimental and Theoretical Artificial Intelligence 26.3 (2014): 297-301.
