Early attempts to implement systems that understand commonsense knowledge did so for very restricted domains. For example, the Planes system (Waltz, 1978) knew real-world facts about a fleet of airplanes and could answer questions about them put to it in English. It had, however, no behaviors; it could not interpret the facts, draw inferences from them, or solve problems beyond those involved in understanding the questions. At the other extreme, SHRDLU (Winograd, 1973) understood situations in its domain of discourse (which it perceived visually), accepted natural-language commands to perform behaviors in that domain, and solved problems arising in the execution of those commands; all of these capabilities, however, were restricted to SHRDLU's artificial world of colored toy blocks. Thus, in implemented systems there appears to be a trade-off between the degree of realism of the domain and the number of capabilities that can be implemented.

In the frames-versus-logic debate (see Commonsense Knowledge Representation I - Formalisms in this Encyclopedia), the real problem, in Israel's (1983) opinion, is not the representation formalism itself, but rather that the facts of the commonsense world have not been formulated; this is more critical than the choice of a particular formalism. A notable attempt to formulate the "facts of the commonsense world" is that of Hayes (1978a, 1978b, 1979) under the heading of naïve physics. This work employs first-order predicate calculus to represent commonsense knowledge of the everyday physical world. The author of this survey has undertaken a similar effort with respect to commonsense business knowledge (Ein-Dor and Ginzberg, 1989). Some broader attempts to formulate commonsense knowledge bases are cited in the section Commonsense Knowledge Bases.
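The flavor of such first-order axiomatizations can be suggested by a hypothetical example (illustrative only, not drawn from Hayes's papers). An axiom stating that a liquid in an open, tilted container flows out might be written as:

```latex
\forall x\,\forall y\;\bigl[\,\mathit{Liquid}(x) \wedge \mathit{Contains}(y,x)
  \wedge \mathit{Open}(y) \wedge \mathit{Tilted}(y)
  \rightarrow \mathit{FlowsOut}(x,y)\,\bigr]
```

Naïve physics proposed formulating large clusters of such axioms for everyday concepts (liquids, surfaces, containment) independently of any particular application.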
Commonsense and Expert Systems
The perception that expert systems are currently insufficient for commonsense representation is strengthened by that field's conscious avoidance of commonsense problems. An excellent example is the following maxim for expert system construction:
Focus on a narrow specialty area that does not involve a lot of commonsense knowledge. ... to build a system with expertise in several domains is extremely difficult, since this is likely to involve different paradigms and formalisms. (Buchanan et al., 1983)
In this sense, much of the practical work on expert systems has deviated from the tradition in Artificial Intelligence research of striving for generality, an effort well exemplified by the General Problem Solver (Ernst and Newell, 1969) and by work in natural language processing. Commonsense research, on the other hand, seems to fit squarely into the AI tradition: to the attributes of common sense (see Commonsense Knowledge Representation I) it is necessary to add one more, implicit attribute, namely the ability to apply any commonsense knowledge in ANY relevant domain. This need for generality appears to be one of the greatest difficulties in representing common sense.
Consider, for example, commonsense information about measurement; knowledge of appropriate measures, the conversions between them, and the duration of their applicability is necessary in fields as diverse as medicine, business, and physics. However, each expert system represents knowledge, including the necessary knowledge about measuring scales, in the manner most convenient for its specific purposes. Such a representation is unlikely to be very useful in any other system in the same domain, and certainly not in systems in other domains. Thus, the inability of expert systems, as currently developed, to represent general-purpose common sense appears to stem primarily from the tension between the generality of common sense and the specificity of expert systems.
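A minimal sketch may clarify what a domain-neutral representation of measurement knowledge could look like. The sketch below is illustrative, not drawn from the survey or from any cited system; the table structure and the `convert` function are hypothetical, though the conversion factors themselves are standard.

```python
# Illustrative sketch: measurement knowledge stored once, independently of
# any domain that uses it. Each unit maps to (dimension, factor relative to
# a base unit for that dimension). The schema is hypothetical; the
# conversion factors are standard.
BASE_FACTORS = {
    # length, base unit: metre
    "m":  ("length", 1.0),
    "km": ("length", 1000.0),
    "ft": ("length", 0.3048),
    # mass, base unit: kilogram
    "kg": ("mass", 1.0),
    "lb": ("mass", 0.45359237),
}

def convert(value, src, dst):
    """Convert a magnitude between units of the same dimension."""
    src_dim, src_factor = BASE_FACTORS[src]
    dst_dim, dst_factor = BASE_FACTORS[dst]
    if src_dim != dst_dim:
        # Commonsense constraint: lengths cannot be converted to masses.
        raise ValueError(f"incompatible dimensions: {src_dim} vs {dst_dim}")
    return value * src_factor / dst_factor

# Any domain -- medicine, business, physics -- could reuse the same table:
distance_ft = convert(5, "km", "ft")   # ≈ 16404.2
weight_lb = convert(1, "kg", "lb")     # ≈ 2.2046
```

The point of the sketch is the separation: the conversion knowledge lives in one place with no commitment to any application's purposes, which is precisely what a single expert system, optimizing its representation for its own task, has no incentive to provide.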
From a positive point of view, one of the major aims of commonsense systems must be to represent knowledge in such a way that it is useful in any domain, i.e., even when storage strategies cannot be based on prior information about the uses to which the knowledge will be put.