Wired for Warmth: Robotics as Moral Philosophy

Alan E. Singer
DOI: 10.4018/ijsodit.2012070102

Abstract

An aspect of the relationship between philosophy and computer engineering is considered, with particular emphasis upon the design of artificial moral agents. Top-down vs. bottom-up approaches to ethical behavior are discussed, followed by an overview of some of the ways in which traditional ethics has informed robotics. Two macro-trends are then identified, one involving the evolution of moral consciousness in man and machine, the other involving the fading away of the boundary between the real and the virtual.

2. The Engineering of Philosophy

According to Dennett (1997), “you don’t really know how something works if you can’t build it,” so that “roboticists are doing philosophy, whether or not they think this is so.” A decade later this seems increasingly to be the case. Yet this is a distinctive “experimental and constructive computational philosophy” (Wallach & Allen, 2009), or a “philosophy plugged” (e.g., Singer, 2010), that also fits well with independently developed epistemological and ontological notions such as:

  i. Knowledge as coordination-of-action (Zeleny, 2005);

  ii. Information as “in-formation”; that is, codes co-creating physical form, as in robotic manufacturing contexts (Zeleny, 2005); and

  iii. The convergence and unity of the physical and mental worlds, somewhat in line with Spinoza’s 17th-century writings, discussed subsequently.

The task of constructing AMAs has repeatedly spun off sharply framed questions that are both philosophical and technological in nature, but that also carry significant implications for policy (i.e., macro-ethics, to use terminology from business ethics). Indeed, when ethics is plugged in, so to speak, it looks and feels quite different from the penned works of Kant, Mill, Bentham, or the Bible. In part this is because, as correctly predicted by Alvin Toffler (e.g., Toffler & Toffler, 1990), the development of robots and AMAs is almost entirely a project of the military-industrial complex, “done” outside the public gaze and far from the desk of the traditional philosopher. For example, one military project involves installing (or instilling) a “functional morality” in a robot machine gun. The design objective in this case was to re-program the robot guns with a form of ethics so that they would stop killing friendlies or “innocent” civilians and concentrate all their firepower on the bad guys1. Eventually, as Singer has noted, AMAs “might be endowed with a conscience that would…make them more humane (as) soldiers than humans” (2009, p. 425).
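The “functional morality” described above is an instance of the top-down approach mentioned in the abstract: explicitly programmed rules that constrain behavior, rather than ethics learned bottom-up. A minimal sketch of the idea, in Python, might look like the following; all names, fields, and the confidence threshold are hypothetical illustrations, not details of any actual military system.

```python
from dataclasses import dataclass


@dataclass
class Contact:
    """A toy sensor contact; every field here is a hypothetical illustration."""
    classified_hostile: bool   # positive identification as a combatant
    confidence: float          # classifier confidence, in [0, 1]
    civilians_nearby: bool     # noncombatants detected in the vicinity


def may_engage(contact: Contact, min_confidence: float = 0.99) -> bool:
    """Top-down 'functional morality': engagement is permitted only when
    every explicit rule is satisfied; any doubt defaults to 'hold fire'."""
    if not contact.classified_hostile:
        return False               # never fire on non-hostiles
    if contact.confidence < min_confidence:
        return False               # uncertainty defaults to restraint
    if contact.civilians_nearby:
        return False               # protect "innocent" civilians
    return True
```

The design choice worth noting is that the permissive outcome requires all rules to pass, while any single failed check vetoes action; a bottom-up approach would instead try to induce such restraint from training rather than encode it as fixed rules.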
