Emotion in the Turing Test: A Downward Trend for Machines in Recent Loebner Prizes
Huma Shah, Kevin Warwick
DOI: 10.4018/978-1-60566-354-8.ch017

Abstract

The Turing Test, originally configured as a game in which a human must distinguish between an unseen and unheard man and woman through a text-based conversational measure of gender, is the ultimate test for deception and, hence, thinking. So Alan Turing conceived it when he introduced a machine into the game. His idea was that once a machine deceives a human judge into believing that it is the human, that machine should be attributed with intelligence. What Turing missed is the presence of emotion in human dialogue; without expressing it, an entity can appear non-human. Indeed, humans have been mistaken for machines (the confederate effect) during instantiations of the Turing Test staged in the Loebner Prizes for Artificial Intelligence. We present results from recent Loebner Prizes and two parallel conversations from the 2006 contest, in which two human judges, both native English speakers, each concomitantly interacted with a non-native-English-speaking hidden-human and Jabberwacky, the 2005 and 2006 Loebner Prize bronze winner for most human-like machine. We find that the machines in those contests appear conversationally worse than the non-native hidden-humans and, as a consequence, attract a downward trend in the highest scores awarded to them by human judges across the 2004, 2005 and 2006 Loebner Prizes. Analysing the Loebner 2006 conversations, we see that a parallel could be drawn with autism: the machine was able to broadcast but it did not inform; it talked but it did not emote. The hidden-humans were easily identified through their emotional intelligence, their ability to discern the emotional state of others and to contribute their own ‘balloons of textual emotion’.
Chapter Preview

Introduction

Humans steep their ideas in emotion (Pinker, 2008). Emotional states, Minsky writes, “are usually simpler than most of our other ways to think” (2007). Daily conversation is “glued together through exchange of emotion” (Tomasello et al., 2005). Be it an expression of joy or displeasure, “sharing emotions with people other than our intimates is a useful tool to bond and to strengthen social relationships” (Derks, Fischer & Bos, 2008). We consider the conversations between unknowns in the 2006 Loebner Prize for Artificial Intelligence, hereafter the LPAI, comparing and contrasting the human-human and human-machine dialogues to find any display of emotion, be it happiness or annoyance, in the participants. The LPAI is an annual science contest that provides a platform for Alan Turing’s imitation game (Turing, 1950), which, some would argue, should be killed1 because it offers nothing that furthers the science of understanding emotions, intelligence or human consciousness. The game, originally configured so that a human interrogator must distinguish between an unseen and unheard man and woman through a text-based conversational measure of gender, is the ultimate test for deception and, hence, thinking. So Turing conceived it when he altered the interrogator’s task to one of distinguishing a machine from a hidden-human. Turing believed that once a machine deceived a human judge into believing that it was the human, that machine should be attributed with intelligence. But is the Turing Test nothing more than an emotionless game?

Key Terms in this Chapter

Emotional Intelligence: Involves emotion perception, expression, understanding and regulation.

Jabberwacky: An ACE, twice winner of the Loebner Prize for Artificial Intelligence (2005 & 2006).

Emotion: A ‘state’ that can convey information to others, such as delight, happiness, surprise, anger and disappointment.

Imitation Game: A thought experiment devised by 20th century British mathematician Alan Turing in which a human interrogator must distinguish between two unseen and unheard entities during text-based conversation. In the man/woman scenario, the interrogator must identify both correctly; in the machine/human game, the interrogator must decide which entity is artificial from natural dialogue.

Loebner Prize for Artificial Intelligence: An annual science contest providing a platform for Turing’s imitation game.

Parallel-Paired Comparison: The 1950 version of the imitation game, in which an ACE is compared against a ‘hidden-human’ for conversational intelligence.

Turing Test: See imitation game.

ACE: An artificial conversational entity.

Jury Service, One-to-One Imitation Game: Modified form of the imitation game in which the interrogator speaks directly to an unseen and unheard ACE to determine whether it is human or machine.
