This chapter will argue that artificial agents created or synthesized by technologies such as artificial life (ALife), artificial intelligence (AI), and robotics present unique challenges to the traditional notion of moral agency, and that any successful technoethics must seriously consider that these artificial agents may indeed be artificial moral agents (AMAs), worthy of moral concern. This purpose will be realized by briefly describing a taxonomy of the artificial agents that these technologies are capable of producing. I will then describe how these artificial entities conflict with our standard notions of moral agency. I argue that traditional notions of moral agency are too strict even in the case of recognizably human agents, and I then expand the notion of moral agency so that it can sensibly include artificial agents.
It is not an obvious move to grant moral concern to the nonhuman objects around us. It is common to hold the view that the things we come into contact with have at best instrumental value, and that only humans have moral rights and responsibilities. On this view, if some nonhuman thing elicits moral concern, it does so only because it is the property of some human through whom these rights extend. This all seems very straightforward and beyond question. But here is my worry: we have been mistaken in the past about our definition of what it takes to be a human moral agent. Historically, women, low-caste men, and children have been denied this status. We have come to regret these past exclusions, and it is possible that our beliefs about moral agency are still misguided.
Some people may be willing to grant moral rights to animals, ecosystems, perhaps even plants. If machines were shown to be relevantly similar to these things, might they not also be reasonable candidates for moral rights? And if these entities were to acquire agency similar to that of a human, must they then also bear moral responsibilities similar to those of a human agent? The answer to the latter question is simple: of course, anything that displays human-level agency sufficient to satisfy even harsh critics would be a candidate for moral rights and responsibilities, because it would have undeniable personhood, and all persons have moral worth. The possibility of this happening any time soon, though, is fairly low. But these technologies have made some progress toward attaining interesting levels of agency, so what we need to inquire into is whether or not these more meager qualities are enough to grant moral agency and worth to artificial agents.
Key Terms in this Chapter
Synthetic Biological Constructs: Artificial life entities created by basic chemical processes; an example would be synthetic living cells built entirely through such chemical processes rather than via genetic engineering.
Moral Rationalism: The philosophical position that moral agents must be fully and completely rational in order to maintain their status as moral agents. Typically, those who hold this position deny that human agents can meet this requirement, which would mean that only AI or ALife agents could be true moral agents.
Malware: Software or software agents that are deliberately programmed and set loose to create harmful effects in the operation of information technology. A computer virus is an example of malware.
Artificial Autonomous Agent: An autonomous agent whose ontology is removed significantly from the natural world but who nevertheless resembles natural autonomous agents in its ability to initiate events and processes.
Artificial Moral Agent (AMA): A moral agent whose ontology is removed significantly from the natural world.
Level of Abstraction (LoA): The level of complexity from which the observer views the system under consideration. Higher levels of abstraction provide the observer with fewer details, while lower levels of abstraction provide much more detail about the operations of the system.