Abstract
Artificial intelligence is the science of creating intelligent machines. Human intelligence comprises numerous pieces of knowledge as well as processes for applying that knowledge to solve problems. Artificial intelligence seeks to emulate, and ultimately surpass, human intelligence in problem solving. Current research tends to focus on narrow, well-defined domains, but newer research looks to expand this scope to create global intelligence. This chapter defines the various fields that comprise artificial intelligence, reviews the history of AI, and suggests future research directions.
Introduction
The ENIAC computer, unveiled in February 1946, is considered by many to be the first programmable digital computer. Soon after this first computer came online, researchers began to dream of intelligent machines. Alan Turing (1950) proposed a means for computers to demonstrate intelligence, and thus the science of artificial intelligence (AI) was born. The term artificial intelligence was first coined in August 1955 (McCarthy et al., 2006). The goal of AI is to create computer programs, possibly embedded in sophisticated hardware such as robots, that are capable of intelligent thought and, ultimately, of creating new knowledge. Intelligent systems should be able to perform any task normally associated with human cognition and intelligence, such as goal setting, goal seeking, path planning, and problem solving.
There are two schools of thought within AI. The first seeks to create machine intelligence by emulating the way humans think. An example of this approach is the MYCIN expert system, which modeled the problem-solving processes of human experts in order to diagnose bacterial infections (Shortliffe et al., 1975). The second seeks to create intelligence through whatever mechanisms are available, regardless of their similarity to human thought processes. An example of this second approach is the Deep Thought chess-playing program (renamed Deep Blue® after the development team moved from Carnegie Mellon to IBM®), which used a massively parallel computer architecture to achieve chess-playing expertise by searching game trees to much greater depth (Newborn & Kopec, 1989).
The Turing test (Turing, 1950), depicted in Figure 1, essentially states that if a knowledgeable human judge, who cannot see the respondents, cannot distinguish the responses of an AI program from those of another human, then the machine is intelligent. The Turing test has been criticized on the grounds that it does not really evaluate intelligence, only a computer's ability to imitate human conversational responses to questions. The Eliza program is an early version of what has become known as a chatbot (Weizenbaum, 1966). Eliza conversationally performed psychotherapy, but in actuality it simply returned canned responses triggered by pre-identified keywords in the human user's replies to Eliza's statements and questions. A modern version of the Turing test is the annual Loebner Prize competition (AISB, 2016).
Figure 1. The Turing test: a judge in the main room tries to determine which room contains the computer AI and which contains the human by asking questions and evaluating the responses from Room A and Room B
An interesting outcome of the Turing test is the realization that certain abilities humans may take for granted, such as speech, vision, and hearing, are extremely complex for computers to perform, while complex mathematical operations are extremely easy for them. Additionally, Turing's ideal was for AI-based computers to have global knowledge, similar to humans, so that they could converse intelligently on a wide variety of topics. However, as shown by the Eliza project, early AI research focused on solving much more narrowly defined problems within single domains. This trend of building highly specialized intelligence to outperform human experts within a specific domain, or on a specific problem within a domain, continues today. A minimal Eliza-style sketch of the keyword-matching mechanism appears below.
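To make that keyword-matching mechanism concrete, here is a minimal Eliza-style sketch in Python. The keyword table and canned responses are invented for illustration; they are not Weizenbaum's original script, which also used ranked keywords and sentence-transformation rules.

```python
import random

# Hypothetical keyword table in the spirit of Eliza's script (illustrative only).
CANNED_RESPONSES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "always": ["Can you think of a specific example?"],
    "sad": ["I am sorry to hear you are feeling sad.",
            "What do you think is making you sad?"],
}
DEFAULT_RESPONSES = ["Please go on.", "How does that make you feel?"]

def reply(user_input: str) -> str:
    """Return a canned response triggered by the first matching keyword."""
    words = user_input.lower().split()
    for keyword, responses in CANNED_RESPONSES.items():
        if keyword in words:
            return random.choice(responses)
    return random.choice(DEFAULT_RESPONSES)

print(reply("I am sad because my mother never listens"))
```

No understanding is involved: the program never models the meaning of the user's words, which is precisely the criticism leveled at conversational imitations of intelligence.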
The field of AI comprises numerous disciplines, several of which are defined in the key terms that follow.
Key Terms in this Chapter
Artificial Neural Network: A collection of interconnected processing elements, known as neurodes, that mimic the electrical connectivity of neurons in the human brain to produce intelligence.
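As a concrete, drastically simplified illustration, the sketch below implements a single neurode and a tiny two-layer network in Python. All weights are arbitrary illustrative values, not a trained network.

```python
import math

def neurode(inputs, weights, bias):
    """One processing element: a weighted sum of inputs passed through
    a sigmoid activation, loosely mimicking a neuron firing."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # squashes output to (0, 1)

def tiny_network(x1, x2):
    """Two hidden neurodes feeding one output neurode."""
    h1 = neurode([x1, x2], [0.5, -0.6], 0.1)
    h2 = neurode([x1, x2], [-0.3, 0.8], 0.0)
    return neurode([h1, h2], [1.2, -1.1], 0.2)

print(tiny_network(0.9, 0.1))  # prints a value between 0 and 1
```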
Game Tree: A knowledge representation structure shaped like a decision tree. Nodes of the tree represent game positions reachable through legal moves (from the position in the parent node). The layers of the game tree expand or contract based on the number of legal moves available at that point in the game. It has been estimated that the game of chess has 38^84 possible positions in its game tree (Marsland & Campbell, 1982).
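To show how such a tree is searched, here is a minimal minimax sketch in Python over a toy Nim-style game (take 1 to 3 stones; taking the last stone wins). The toy game is an illustrative stand-in for chess, whose tree is far too large to search exhaustively.

```python
def moves(stones):
    """Legal moves in the toy game: take 1, 2, or 3 stones."""
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing):
    """Exhaustively search the game tree, assuming the opponent plays
    optimally; +1 means the maximizing player wins, -1 means they lose."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing) for m in moves(stones)]
    return max(scores) if maximizing else min(scores)

# With 4 stones, the first player loses against perfect play.
print(minimax(4, maximizing=True))  # -1
```

Chess programs such as Deep Thought apply the same idea but cut the search off at a fixed depth and score the resulting positions heuristically.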
Three Laws of Robotics: Sometimes referred to as Asimov's Laws of Robotics, or simply the Three Laws, these are a set of science-fiction principles intended to govern intelligent/thinking robot behavior so that no harm comes to mankind. The laws are: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Expert System: Also known as a knowledge-based system, a computer program composed of a knowledge base (often represented as if-then production rules), an inference engine, and a user interface, which typically includes an explanation facility to educate users about the system's decision-making process.
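A minimal sketch of these pieces follows, assuming a toy rule base invented for illustration (not MYCIN's actual rules): the if-then rules form the knowledge base, and the loop implements a simple forward-chaining inference engine.

```python
# Toy knowledge base: if-then production rules invented for illustration.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "recommend_doctor_visit"),
]

def infer(facts):
    """A simple forward-chaining inference engine: keep firing rules
    whose conditions are satisfied until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires; assert the new fact
                changed = True
    return facts

print(infer({"fever", "cough", "chest_pain"}))
```

A full expert system would add an explanation facility, for example by recording which rules fired so the chain of reasoning can be replayed for the user.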
Heuristic: A rule of thumb for reaching a conclusion using only partial knowledge. While not guaranteed to be perfect, a heuristic should provide a reasonable approximation of a correct solution.
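For example, Manhattan distance is a classic pathfinding heuristic on a grid: it ignores obstacles, so it relies on only partial knowledge, yet it reasonably approximates the true path cost. A minimal sketch:

```python
def manhattan_distance(a, b):
    """Estimate the number of steps between two grid cells, ignoring
    obstacles; cheap partial knowledge that never overestimates on a
    4-connected grid, which makes it useful for searches such as A*."""
    (ax, ay), (bx, by) = a, b
    return abs(ax - bx) + abs(ay - by)

# Estimates 7 steps; the true path may be longer if walls intervene.
print(manhattan_distance((0, 0), (3, 4)))
```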