1. Introduction
Technological improvements give the illusion that – once a specific technology is available – a problem or a class of problems can quickly and easily be solved. While in certain cases that may be true, in others we would do better to slow down in order to avoid, or at least minimize, side effects and unexpected consequences. Technologies that solve one problem may cause another to appear.
For one reason or another, the violent and detrimental effects of a technology are hidden or simply removed from sight because of the benefits that the technology brings about with respect to specific problems. For example, GPS devices are extremely useful in helping us interact with places and travel between them. Especially when navigating through unfamiliar places, GPS is just what one needs to avoid losing time, missing an important appointment, and so on. However, GPS devices might create new problems. For instance, driving is not just driving: it is driving and navigating. Without GPS, these two acts are not separated and cannot be. While driving, one should pay attention to other cars, landmarks, barriers, pedestrians, and curves. All these activities involve driving as well as navigating skills. However, if navigating is completely delegated to GPS, one is very likely to adopt dangerous, antisocial, and narcissistic behaviors, such as changing direction suddenly or failing to notice the presence of pedestrians. Who is to blame?
Generally speaking, technology may create moral dilemmas. Moral dilemmas emerge as designers and engineers face the challenge of accommodating different, heterogeneous, and sometimes conflicting values. When successfully identified, those moral dilemmas may lead to design trade-offs (Shelley, 2011, in press). With such trade-offs we face what Kuran called “moral overload” (Kuran, 1998). Moral overload emerges when an agent is overloaded by different obligations that cannot all be fulfilled at the same time. The outcome of moral overload is what is called “moral residue” (Van den Hoven, Lokhorst, & Van de Poel, in press). Moral residue is the feeling we have when a duty has not been fulfilled or a value commitment has not been met.
Van den Hoven et al. (in press) posit that moral residue is not necessarily a bad thing, because it gives us an incentive to avoid moral overload in the future by using technology itself as part of the solution. More precisely, it brings up a second-order principle that helps us drive technological innovation. Accordingly, ethics would no longer be a source of constraints, but an active partner in finding innovative solutions.
I am quite sympathetic to Van den Hoven and colleagues’ approach: ethics is not necessarily a source of constraints. What I disagree with is the meaning of the moral residue Van den Hoven and colleagues talk about. To me, a moral residue is also the residue of something intrinsic to us as moral beings that cannot be designed away: violence. Magnani (2011), in his Understanding Violence, offers a deep and insightful account of the relationship between violence and morality, which may help us clarify the point. In a nutshell, he argues that our morality is what often gives us license to be violent by providing us with what we perceive as overwhelming reasons and/or emotions for doing or not doing something. Such overwhelming reasons and emotions conceal our violence. Conversely, what we commonly call violence is any other violent action or behavior that our morality cannot justify as good or right. This philosophical stance leads Magnani to contend that the problem of violence in technology, on the one hand, and the problems related to the so-called “ethics of technology” (or design ethics), on the other, are two different domains, and that the former has priority over the latter (Magnani, in press).