Trust and Decision Making in Turing's Imitation Game

Huma Shah (Cogent Computing, Futures Institute, Innovation Village, Coventry University, UK) and Kevin Warwick (Coventry University, UK)
Copyright: © 2018 | Pages: 14
DOI: 10.4018/978-1-5225-2255-3.ch023

Trust is an expectation of certainty that allows parties to transact confidently. But how accurate is our decision-making in human-machine interaction? In this chapter we present evidence from experimental conditions in which human interrogators, applying their own judgement of what constitutes a satisfactory response, trusted that a hidden interlocutor was human when it was actually a machine. A simultaneous comparison Turing test is presented, featuring conversation between a human judge and two hidden entities, from Turing100 at Bletchley Park, UK. Results of post-test conversational analysis by the audience at Turing Education Day show that more than 30% made the same identification errors as the Turing test judge. Trust, we find, is misplaced in subjective certainty, which could lead to susceptibility to deception in cyberspace.
Chapter Preview


Analyses and opinions on the imitation game’s salience have varied (see Shah & Warwick, 2015; Shah, 2013; Shah, 2011; Shah & Warwick, 2010). Turing evolved his ideas on an imitation game into an interview in which a human interrogator questions a hidden entity to determine whether it is human or machine (Turing, 1950; Turing, 1952). This was Turing’s viva voce test (Shah, 2010; Turing, 1950). The ‘standard Turing test’ is accepted as involving a human interrogator simultaneously questioning two hidden entities (Stanford Encyclopedia of Philosophy, 2011). Designing an experiment to implement both of Turing’s tests requires setting parameters that interpret Turing’s description. These include:

  • Adequate duration for a test;

  • Number of interrogators; and

  • Style of interrogation.

An evaluation is necessary of what exactly it means for a machine to pass as human: what are the implications of any pass beyond the test itself? Can the test be used to raise awareness of human susceptibility to deception and to safeguard trust in cyberspace interactions?

In the next section the authors present Turing’s scholarship on the imitation game.

Key Terms in this Chapter

Imitation Game: A means to examine whether a machine, hidden from sight and hearing, can think based on its answers to any questions put by a human interrogator.

Economics of Trust: Humans rely on their subjective sense of certainty and confidence, risking dependence on another party to provide goods, services, or timely and relevant information in cyberspace transactions.

Eliza Effect: The attribution of intelligence to an artefact. From Sherry Turkle: “our more general tendency to treat responsive computer programmes as more intelligent than they really are”, evinced from “very small amounts of interactivity” and the tendency to “project own complexity onto the undeserving object” (1997: p. 101).

Simultaneous Comparison Turing Test: A 3-participant set-up in which a human interrogator questions two hidden entities in parallel to identify which is the human and which is the machine.

Viva Voce Turing Test: A 2-participant set-up in which a human interrogator questions one hidden entity and decides whether it is human or machine based on received responses.

Trust: The belief in the honesty of other individuals.

Confederate Effect: A human is misclassified as a machine on the basis of their responses to questions in computer-mediated interaction.
