Security Solutions for Intelligent and Complex Systems

Stuart Armstrong, Roman V. Yampolskiy
DOI: 10.4018/978-1-7998-0951-7.ch060

Abstract

Superintelligent systems are likely to present serious safety issues, since such entities would have great power to control the future according to their possibly misaligned goals or motivation systems. Oracle AIs (OAIs), confined AIs that can only answer questions and do not act in the world, represent one particular solution to this problem. However, even Oracles are not particularly safe: humans are still vulnerable to traps, social engineering, or simply becoming dependent on the OAI. OAIs are nevertheless strictly safer than general AIs, and many additional layers of precaution can be added on top of them. This chapter begins with a definition of the OAI Confinement Problem. After an analysis of existing solutions and their shortcomings, a protocol is proposed for creating a more secure confinement environment, which might delay negative effects from a potentially unfriendly superintelligence while allowing for future research and development of superintelligent systems.

Introduction

With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology (Bostrom, 2006; Chalmers, 2010; Hall, 2000; Hibbard, 2005; Yampolskiy, 2011a, 2011b; Yampolskiy & Fox, 2012a, 2012b; Yudkowsky, 2008). A common theme in Artificial Intelligence (AI) safety research is the possibility of keeping a superintelligent agent in sealed hardware so as to prevent it from doing any harm to humankind. Such ideas originate with scientific visionaries such as Eric Drexler, who suggested confining transhuman machines so that their outputs could be studied and used safely (Drexler, 1986). Similarly, in 2010 David Chalmers proposed the idea of a “leakproof” singularity (Chalmers, 2010). He suggested that, for safety reasons, AI systems first be restricted to simulated virtual worlds until their behavioral tendencies could be fully understood under controlled conditions.

This chapter is based on combined and extended information from three previously published papers (Armstrong, 2011; Armstrong, Sandberg, & Bostrom, 2012; Yampolskiy, 2012a). We evaluate the feasibility of previously presented proposals and suggest a protocol aimed at enhancing the safety and security of such methodologies. While it is unlikely that long-term and secure confinement of AI is possible, we are hopeful that the proposed protocol will give researchers a little more time to find a permanent and satisfactory solution for addressing the existential risks associated with the appearance of superintelligent machines.

In this chapter we will review specific proposals aimed at creating restricted environments for safely interacting with artificial minds. The key question is: are there strategies that reduce the potential existential risk from a superintelligent AI so much that, while implementing it as a free AI would be impermissible, a confined implementation would be permissible? The chapter will start by laying out the general design assumptions for the confined AI and formalizing the notion of confinement. It will then touch upon some of the risks and dangers arising from the humans running and interacting with the confined AI. The final section looks at some of the other problematic issues concerning the confined AI, such as its ability to simulate human beings within itself and its own status as a moral agent.
