A Comparison of Human and Computer Information Processing

Brian Whitworth (Massey University, New Zealand) and Hokyoung Ryu (Massey University, New Zealand)
DOI: 10.4018/978-1-60566-014-1.ch032

Abstract

Over 30 years ago, TV shows from The Jetsons to Star Trek suggested that by the millennium’s end computers would read, talk, recognize, walk, converse, think, and maybe even feel. People do these things easily, so how hard could it be? Yet in general we still don’t talk to our computers, cars, or houses, and they still don’t talk to us. The Roomba, a successful household robot, is a functional flat round machine that neither talks to nor recognizes its owner. Its “smart” programming mainly tries to stop it getting “stuck,” which it still frequently does, either by jamming somewhere or tangling in things like carpet tassels. The idea that computers are incredibly clever is changing: when computers enter human specialties like conversation, many people find them more stupid than smart, as any “conversation” with a computer help system illustrates.

Computers easily do calculation tasks that people find hard, but the opposite also applies: people quickly recognize familiar faces, yet computers still cannot recognize known terrorist faces at airport check-ins. Apparently minor variations, like lighting, facial angle, or expression, or accessories like glasses or a hat, upset them. Figure 1 shows a Letraset page that any small child would easily recognize as the letter “A,” but computers find this extremely difficult. People find such visual tasks easy, so at first few in artificial intelligence (AI) appreciated the difficulties of computer vision. Initial advances were rapid, but AI has struck a 99% barrier. For example, computer voice recognition is 99% accurate, but one error per 100 words is unacceptable. There are no computer-controlled “auto-drive” cars because 99% accuracy would mean an accident every month or so, which is also unacceptable. In contrast, the “mean time between accidents” of competent human drivers is years, not months, and good drivers go 10+ years without an accident. Other problems easy for most people but hard for computers include language translation, speech recognition, problem solving, social interaction, and spatial coordination.
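
To see why 99% accuracy falls so far short of human reliability, a quick back-of-the-envelope calculation helps. The figures below (words per document, decisions per hour of driving) are illustrative assumptions, not values from the chapter:

```python
# Illustrative arithmetic for the "99% barrier" (assumed figures, not from the chapter).

def expected_errors(accuracy: float, events: int) -> float:
    """Expected number of errors over a run of independent events."""
    return (1.0 - accuracy) * events

# A 99%-accurate speech recognizer on a 500-word dictation:
print(round(expected_errors(0.99, 500)))  # about 5 errors per document

# A human driver with, say, one accident per 10 years of daily driving
# (~2 safety-critical decisions per minute, 1 hour per day) has a
# per-decision accuracy far beyond 99%:
decisions = 2 * 60 * 365 * 10        # decisions over 10 years
human_accuracy = 1 - 1 / decisions
print(round(human_accuracy, 7))      # well beyond 0.99
```

Under these assumed rates, matching a competent driver requires roughly 99.9998% per-decision accuracy, which is why a "99% accurate" controller crashes monthly while humans go years between accidents.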
Chapter Preview
Introduction

Figure 1.

Letraset page for letter “A”

Advanced computers struggle with skills most 5-year-olds have already mastered, like speaking, reading, conversing, and running:

As yet, no computer-controlled robot could begin to compete with even a young child in performing some of the simplest of everyday activities: such as recognizing that a colored crayon lying on the floor at the other end of the room is what is needed to complete a drawing, walking across to collect that crayon, and then putting it to use. For that matter, even the capabilities of an ant, in performing its everyday activities, would far surpass what can be achieved by the most sophisticated of today’s computer control systems. (Penrose, 1994, p. 45)

That computers even today cannot compete with an ant, with its minute sliver of a brain, is surprising. We suggest this stems from processing design, not processing incapacity. Computer pixel-by-pixel processing has not led to face recognition because, as David Marr (1982) observed, trying to understand perception by studying neuronal (pixel-level) choices is “like trying to understand bird flight by studying only feathers. It just cannot be done.” Processing power alone is insufficient for real-world problems (Copeland, 1993); for example, processing power alone cannot deduce a three-dimensional world from two-dimensional retina data, as the brain does.

Enthusiastic claims that computers are overtaking people in processing power (Kurzweil, 1999) repeat the mistake AI made 40 years ago of underestimating life’s complexity. If computers still struggle with the skills of a 5-year-old, what about what children learn after five, while “growing up”? The Robot World Cup aims to transform current clumsy robot shuffles into soccer brilliance by 2050 (http://www.robocup.org). If computing is going in the wrong direction, the question is not whether 50 years will suffice, but whether 1,000 years will. In contrast, we suggest that:

Key Terms in this Chapter

Neural Processor: Similar to Hebb’s neural assembly, a set of neurons that acts as a system with a defined input and output function, for example, visual cortex processors that fire when presented with lines at specific orientations.
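
The orientation-selective units this definition describes can be caricatured in a few lines of code: a unit computes a weighted sum over an image patch and “fires” when that sum crosses a threshold. The kernels and threshold below are illustrative choices, not a model from the chapter:

```python
# Toy "neural processor": a unit that fires for lines at one orientation.
# The kernel weights and threshold are illustrative, not from the chapter.

VERTICAL = [(-1, 2, -1),
            (-1, 2, -1),
            (-1, 2, -1)]   # responds to a vertical line of bright pixels

HORIZONTAL = [(-1, -1, -1),
              ( 2,  2,  2),
              (-1, -1, -1)]  # responds to a horizontal line

def response(patch, kernel):
    """Weighted sum over a 3x3 patch: the unit's input-output function."""
    return sum(patch[r][c] * kernel[r][c] for r in range(3) for c in range(3))

def fires(patch, kernel, threshold=3):
    """The unit fires only when its response exceeds the threshold."""
    return response(patch, kernel) > threshold

vertical_line = [(0, 1, 0),
                 (0, 1, 0),
                 (0, 1, 0)]

print(fires(vertical_line, VERTICAL))     # True: preferred orientation
print(fires(vertical_line, HORIZONTAL))   # False: wrong orientation
```

The point of the sketch is the defined input-output function: the same stimulus drives one processor above threshold and leaves the other silent.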

System: A system exists within a world whose nature defines it; for example, a physical world, a world of ideas, and a social world may contain physical systems, idea systems, and social systems, respectively. The point separating system from not-system is the system boundary, and effects across it imply system input and system output.

Channel: A single, connected stream of signals of similar type; for example, stereo sound has two channels. Different channels need not involve different media: just as a single communication wire can carry many channels, so can a single medium like vision. Different channels, however, have different processing destinations, that is, different neural processors.

Polite Computing: Any unrequired support for situating the locus of choice control of a social interaction with another party to it, given that control is desired, rightful, and optional (Whitworth, 2005, p. 355). Its opposite is selfish software, which runs at every chance, usually loading at start-up, and slows down computer performance.

System Levels: The term information system suggests that physical systems are not the only possible systems. Philosophers propose idea systems, sociologists social systems, and psychologists mental models. While software requires hardware, the world of data is a different system level from hardware. An information system can be conceived of on four levels: hardware, software, personal, and social, each emerging from the previous. For example, the Internet is on one level hardware, on another software, on another an interpersonal system, and finally an online social environment.

Process-Driven Interaction: When a feedback loop is initiated by processing rather than by input. This allows the system to develop expectations and goals.
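
One way to picture the distinction is a loop whose each step starts from an internally generated expectation rather than from whatever input happens to arrive. The code below is a hypothetical sketch of that contrast, not an implementation from the chapter:

```python
# Hypothetical sketch: input-driven vs. process-driven interaction.

def input_driven(stream):
    """React to each input as it arrives; no internal goal."""
    return [f"react:{signal}" for signal in stream]

def process_driven(stream, goal="A"):
    """Processing first generates an expectation, then checks input against it."""
    log = []
    for signal in stream:
        expected = goal                 # the expectation comes from within
        if signal == expected:
            log.append(f"confirmed:{signal}")
        else:
            log.append(f"surprise:{signal}, sought {expected}")
    return log

print(input_driven(["A", "B"]))    # reacts identically to anything
print(process_driven(["A", "B"]))  # confirms or is surprised, relative to its goal
```

The input-driven loop merely mirrors its environment; the process-driven loop can be surprised, which is what having an expectation means.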

Autonomy: The degree to which a subsystem can act from within itself rather than react based on its input. For a system to advance, its parts must specialize, which means only they know what they do and when to act. If a master control mechanism directed subsystem actions, their specialty knowledge would have to be duplicated in the controller, since it must know when to invoke each subsystem; this defeats the point of specialization. For specialization to succeed, each part needs the autonomy to act from its own nature.
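
The point about duplicated knowledge can be sketched by contrasting a central dispatcher with parts that decide for themselves. In the hypothetical design below, each subsystem keeps its trigger knowledge private, and the system merely broadcasts stimuli; all names are illustrative, not from the chapter:

```python
# Hypothetical sketch: autonomous subsystems instead of a master controller.

class Subsystem:
    """A specialist: it alone knows which stimuli concern it."""
    def __init__(self, name, trigger):
        self.name, self.trigger = name, trigger

    def wants(self, stimulus):
        # Specialty knowledge stays inside the part; no central
        # dispatch table needs to duplicate it.
        return self.trigger in stimulus

    def act(self, stimulus):
        return f"{self.name} handles {stimulus!r}"

def broadcast(stimulus, parts):
    """The system just broadcasts; each part acts from its own nature."""
    return [p.act(stimulus) for p in parts if p.wants(stimulus)]

parts = [Subsystem("vision", "light"), Subsystem("hearing", "sound")]
print(broadcast("flash of light", parts))   # only the vision part responds
```

A master-controller version would need its own copy of every trigger condition to know when to invoke each part, which is exactly the duplication the definition warns against.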
