The history of artificial intelligence (AI) is commonly supposed to begin with Turing’s (1950) discussions of machine intelligence, and to have been defined as a field at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. However, the ideas on which AI is based, and in particular those on which symbolic AI (see below) is based, have a very long history in the Western intellectual tradition, dating back to ancient Greece (see also McCorduck, 2004). It is important for modern researchers to understand this history, for it reflects problematic assumptions about the nature of knowledge and cognition: assumptions that can impede the progress of AI if accepted uncritically.
Symbolic AI is the approach to artificial intelligence that has dominated the field throughout most of its history and remains important. It is based on the physical symbol system hypothesis, enunciated by Newell and Simon (1976), which asserts, “A physical symbol system has the necessary and sufficient means for general intelligent action.” In effect, it implies that knowledge is represented in the brain by language-like structures, and that thinking is a computational process that rearranges these structures according to formal rules. This view has also dominated cognitive science, which applies computational concepts to understanding human cognition (Gardner, 1985).
Many symbolic AI systems are based on formal logic, which represents propositions by symbolic structures, in which all meaning is conveyed in the structure’s form, and which implements inference by the mechanical manipulation of those structures. Therefore, we will discuss the origins of formal logic and of the idea that knowledge and inference can be represented in this way. We will also consider the combinatorial methods used before the invention of computers as well as in modern AI for generating possible solutions to a problem, which leads to combinatorial explosion, a fundamental limitation of symbolic AI. Then we describe early modern attempts to design comprehensive knowledge representation languages (predecessors of those used in symbolic AI) and mechanical inference machines. We conclude with a mention of alternative views of knowledge and cognition.
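To make concrete the idea that inference can be carried out by the mechanical manipulation of symbolic structures, the following is a minimal sketch (not from the chapter; the proposition and rule names are invented for illustration) of forward-chaining inference: the program operates only on the form of the symbols, never on their meaning.

```python
# Illustrative sketch of purely formal inference, in the spirit of a
# logical calculus.  Propositions are plain strings; rules are
# (premises, conclusion) pairs; the machine manipulates symbols only.
RULES = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),   # if man, then mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose premises are all established,
    until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain({"socrates_is_a_man"}, RULES)
# result now contains the original fact plus both derived conclusions.
```

Note that nothing in the program "knows" who Socrates is; all relevant semantics must be encoded in the syntax of the rules, which is exactly the assumption of the physical symbol system hypothesis.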
Key Terms in this Chapter
Calculus: A calculus is a system of physical symbols and mechanical rules for their manipulation intended to accomplish some purpose, such as calculation, differentiation, integration, or formal inference. In principle, any process that can be accomplished by a calculus can be programmed on a digital computer.
Semantics: Refers to the meanings of expressions in a natural or artificial language and to the study of these meanings and their relation to the expressions. It is often contrasted with syntax. Since formal systems, calculi, and symbolic AI systems deal only with the forms of expressions, they can be sensitive to semantics only to the extent that the semantics is encoded in the system’s syntax.
Knowledge Representation Language: Is a formal language, implementable in the data structures of a digital computer, intended to be capable of representing all knowledge or at least all knowledge in some AI application domain. It is intended as a medium for storing knowledge and for mechanized inference in its domain. A knowledge representation language is the analogue in AI of the language of thought in cognitive science.
Symbolic AI: Is an approach to AI based on the manipulation of knowledge represented in language-like (symbolic) structures in which all relevant semantics (meaning) is explicit in the syntax (formal structure). The language-of-thought hypothesis provides part of the justification of the sufficiency of the symbolic approach to AI.
Syntax: Refers primarily to the grammar rules of a language (natural or artificial), that is, to the allowable forms of expressions without reference to their meaning (semantics). In the context of AI, syntax refers to the rules of knowledge representation in terms of data structures and to the computational processes that operate on these structures.
Language of Thought (“Mentalese”): Is a hypothesized language-like system in whose terms all human cognition is supposed to take place. Advocates of this hypothesis acknowledge that not all of our thinking is discursive (by means of an inner dialogue), but they argue that the systematic structure of ideas and thinking implies that there must be a language of thought, albeit below the level of conscious access. The language-of-thought hypothesis partly justifies symbolic AI as a sufficient basis for AI.
Generate-and-Test Procedure: Is a common method of search, used in AI and other applications, in which possible solutions are generated systematically and evaluated until a suitable solution is found. For example, a game-playing program might generate possible moves and evaluate each in terms of its likelihood of leading to a win. The greatest weakness of generate-and-test procedures is combinatorial explosion: the exponential growth in the number of candidate solutions as their complexity increases (e.g., as a game-playing program looks more moves ahead).
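As a hedged illustration (the toy problem, move values, and target are invented, not from the chapter), the generate-and-test pattern and its combinatorial cost can be sketched as follows:

```python
from itertools import product

# Toy problem: find a sequence of moves whose values sum to a target.
MOVES = [1, 2, 3]  # branching factor b = 3 at each step

def generate_and_test(target, depth):
    """Generate every move sequence of length `depth` (the "generate"
    step) and evaluate it against the goal (the "test" step); return
    the first success and the number of candidates tested."""
    tested = 0
    for seq in product(MOVES, repeat=depth):   # generate
        tested += 1
        if sum(seq) == target:                 # test
            return seq, tested
    return None, tested

solution, tested = generate_and_test(target=7, depth=3)

# Combinatorial explosion: the search space grows as b**depth.
space_sizes = [len(MOVES) ** d for d in range(1, 6)]  # [3, 9, 27, 81, 243]
```

The last line shows the fundamental limitation: each additional step of look-ahead multiplies the number of candidates by the branching factor, so exhaustive generation quickly becomes infeasible.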