Giving Robots a Voice: Testimony, Intentionality, and the Law

Billy Wheeler
Copyright © 2018 | Pages: 34
DOI: 10.4018/978-1-5225-2973-6.ch001

Abstract

Humans are becoming increasingly dependent on the ‘say-so’ of machines such as computers, smartphones, and robots. In epistemology, knowledge based on what one has been told is called ‘testimony’, and being able to give and receive testimony is a prerequisite for engaging in many social roles. Should robots and other autonomous intelligent machines be considered epistemic testifiers akin to humans? This chapter attempts to answer this question and to explore the implications of robot testimony for the criminal justice system. There is little agreement over which ‘types’ of agent can provide testimony. The chapter surveys three well-known approaches and shows that on two of them, being able to provide testimony is bound up with the possession of intentional mental states. Through a discussion of computational and folk-psychological approaches to intentionality, it is argued that a good case can be made for robots fulfilling all three definitions.

Introduction

The great goal of social robotics is to have fully autonomous machines working alongside humans, machines capable of establishing relationships of trust, dependence, and cooperation in something like normal everyday social environments. Sectors thought likely to benefit most from this new technology include healthcare, the service sector, and education (European Commission, 2016). Closer ties between humans and robots in society bring with them implications for existing social structures, which, for the most part, evolved without the need to involve such machines. One area where this is particularly true is the law. Although the driverless car is not a social robot, it illustrates the mismatch between existing social institutions and emerging technologies. In May 2016, a driverless car was involved in the death of its passenger when its autopilot feature failed to detect a lorry crossing its path (Lee, 2016). The incident created something of a media frenzy as news reporters and commentators speculated on who was to blame. Was it the passenger? Was it the car manufacturer? Or was it the car itself? To date, no criminal charges have been brought against either the manufacturer or the artificially intelligent car.

Whether robots can in the future be subject to prosecution is not the only issue raised by this emerging technology’s relationship with the law. Another, much less discussed, issue concerns the role robots might play as a source of evidence within legal proceedings. Indeed, as ties between humans and robots grow closer, a robot is more likely to be a witness to a crime than to be directly involved in one, whether as victim or perpetrator. But how should the information gathered by a robot witness be used in a trial? What rights should robots be granted during legal proceedings? And can a robot be ‘cross-examined’ by a lawyer in the same way a human witness can? These are not easy questions to answer. This chapter explores the philosophical underpinnings of different answers to them.

Traditionally, formal or legal testimony has been seen as a species of epistemic testimony more generally (Gelfert, 2014). Epistemic testimony is the knowledge one gains through ‘being told so’ by somebody else. If robots are to serve as witnesses in criminal proceedings, then it needs to be established whether robots can be testifiers in this more general sense. Part of this chapter therefore investigates current definitions of testimony to explore whether, and to what extent, some machines can be testimonial sources of knowledge.

It turns out that on some definitions, whether a source of belief is testimonial depends upon whether that source possesses intentional states about what it is saying. On certain ‘exclusive’ definitions, being a testifier requires the speaker to intend to ‘offer up’ their statement as true and to intend it to be believed by a potential hearer (Coady, 1992). Such an understanding is clearly presupposed in the legal case, where the authenticity of witness testimony can be questioned if it emerges that the witness is reciting a pre-prepared script or has been pressured into offering a statement dictated by a third party (Calo, 2016). This takes us to the issue of whether robots can have intentional mental states, and whether the kind of intentionality required in the legal case squares with what robots are capable of achieving.

To answer this question, the chapter explores two pieces of evidence in favor of robot intentionality, each with important implications for testimony in court. The first comes from theoretical considerations about the nature of human intentionality and whether it can be implemented in machine architecture. Well-known criticisms of these approaches are discussed, along with some recent responses that may be less familiar. However, theoretical considerations can only take us so far, and many hold that whether robots should be treated as testifiers depends on whether the average person in fact treats them as such. The second piece of evidence therefore comes from studies in experimental psychology showing that folk-psychological intuitions can, in some circumstances, lead people to attribute intentionality to robots. If one argues that courts should attribute intentional states on the basis of folk intuitions rather than theoretical considerations, as some have indeed argued (cf. Malle & Nelson, 2003), then this provides a strong case for including robots as witnesses in legal proceedings.
