Finite State Machines (FSMs) are a formalism that has been used for decades to describe the behavior of systems. They can also provide an intelligent agent with a suitable formalism for describing its beliefs about the behavior of the surrounding world. In fact, FSMs are the acceptors of right linear languages, the simplest class in Chomsky's classification of languages. Since Chomsky proposes that the generation of language (and, indirectly, any mental process) can be expressed through a kind of formal language, it can be assumed that cognitive processes can be formulated by means of the formalisms that express those languages. Hence, we use FSMs as a suitable formalism for representing (simple) cognitive models. We present an algorithm that, given an observation of the environment, produces an FSM describing an environment behavior capable of producing that observation. Since infinitely many different FSMs could have produced that observation, we must choose the most plausible one. When a phenomenon can be explained by several theories, Occam's razor, a basic principle in science, encourages choosing the simplest explanation. Applying this criterion to our problem, we choose the simplest (smallest) FSM that could have produced the observation, and we present an algorithm to solve this problem. In conclusion, our framework provides a cognitive model that is the most preferable theory for the observer according to Occam's razor.
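As a minimal sketch of the idea described above (not the paper's actual algorithm), consider the simplified setting of an autonomous, input-free Moore machine: its run is eventually periodic, so an observed output sequence is consistent with a machine of t + p states whenever the sequence becomes periodic with period p after a preperiod of t steps. Applying Occam's razor then amounts to finding the minimal such t + p. The function name and setting are illustrative assumptions.

```python
def smallest_fsm_size(obs):
    """Return the number of states of the smallest autonomous (input-free)
    Moore machine whose run could have produced the observed output
    sequence `obs`.

    The run of such a machine is eventually periodic: after a preperiod
    of t states it cycles through p states. A sequence is consistent with
    t + p states iff obs[i] == obs[i + p] for every i >= t that fits in
    the observation. We search n = t + p in increasing order, so the
    first hit is the smallest (Occam's razor) machine size.
    """
    L = len(obs)
    for n in range(1, L):
        for p in range(1, n + 1):          # candidate period
            t = n - p                      # candidate preperiod
            if all(obs[i] == obs[i + p] for i in range(t, L - p)):
                return n
    # Fallback: a chain of L states (last one self-looping) always works.
    return L
```

For example, the observation "ababab" is explained by a two-state machine alternating its output, while "aab" needs three states (two emitting "a", then a self-looping state emitting "b").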
Cognitive Informatics [Kinsner 2005, Wang 2002, 2003] provides Computer Science with a remarkable source of inspiration for solving computational problems whose objectives resemble tasks performed by the human mind. Although computational environments have specific requirements and constraints that must not be ignored, understanding our mind is often the key to building successful (specialized) intelligent systems. This cross-fertilization has yielded successful machine intelligence techniques such as neural networks [Lau 1991] and case-based reasoning [Schank and Abelson 1977].
It is particularly relevant to note that the relationship between Computer Science and other mind-related sciences is bidirectional. In particular, the development of formal language theories (oriented to computational languages) has led to a better understanding of our mind. Due to the close relationship between language generation and mental processes, some mathematical formalisms proposed for dealing with formal computational languages turned out to be good approximations for modeling human reasoning. Along this line, the language theory developed by Noam Chomsky is especially relevant. He proposed that natural languages can be represented as formal languages [Chomsky 1957, 1965]. Chomsky considered four categories of languages (from simpler to more complex: right linear, context-free, context-sensitive, and recursively enumerable), and he argued that natural languages are context-sensitive. Each of these categories is recognized by a suitable kind of formal machine, or acceptor (finite state automata, push-down automata, linear bounded automata, and Turing machines, respectively). Thus, the generation of natural languages can be represented in terms of formal automata, specifically linear bounded automata. This statement is especially relevant: since language is a projection of our cognitive processes, we can say that our own reasoning can be represented using context-sensitive languages. Similarly, less expressive languages (such as right linear or context-free) may provide approximate, simpler models of human mental processes.
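To make the acceptor correspondence concrete, the sketch below implements a finite state automaton for a small right linear language: the grammar S → aB | ε, B → bS generates (ab)*, and the automaton's states mirror the grammar's nonterminals. The grammar and function name are illustrative choices, not taken from the text.

```python
def accepts_ab_star(word):
    """Finite state acceptor for the right linear grammar
    S -> a B | epsilon, B -> b S, i.e. the language (ab)*.
    States mirror the nonterminals; 'dead' is a rejecting sink."""
    delta = {("S", "a"): "B", ("B", "b"): "S"}
    state = "S"  # start state = start symbol of the grammar
    for symbol in word:
        state = delta.get((state, symbol), "dead")
    return state == "S"  # accept iff we can derive epsilon here
```

Running the acceptor on "abab" yields acceptance, while "aba" or "ba" are rejected, matching the derivations available in the grammar.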
The difficulty of using a formal language to represent reasoning in a computational environment has discouraged most researchers from exploring this direction. Paradoxically, the great expressivity of formal languages is their main handicap. For example, the beliefs/knowledge of an intelligent system cannot be internally represented by a recursively enumerable language (or its acceptor, a Turing machine), because there is no method to automatically construct the Turing machine that produces a given behavior. Hence, such an internal representation would be impossible to create and maintain. Nevertheless, in some domains, using the simplest languages in Chomsky's classification can provide formalisms endowed with suitable structure and expressivity while remaining efficient to handle. In particular, right linear languages are a suitable formalism for representing the behavior of a wide range of entities and systems. Their acceptors, that is, Finite State Machines, have been used for decades to model the behavior of sequential digital circuits and communication protocols. Similarly, an intelligent entity can use an FSM to represent its beliefs about the behavior of the world that surrounds it. Like any other knowledge representation, this model should be updated and maintained so that it provides, at any time, a feasible explanation of the events the agent has observed so far. If the model is accurate, the agent can use it to predict future situations. Hence, FSMs may serve as a basic formalism for knowledge representation in a learning system.
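As a hedged illustration of an FSM used as an agent's world model, the sketch below stores the agent's belief as a Moore-style machine (transitions over actions, an expected observation per state) and uses it both to predict the next observation and to check the prediction against reality. The class name, the toy light-switch environment, and all identifiers are hypothetical, introduced only for this example.

```python
class WorldModel:
    """An agent's belief about its environment, represented as a
    Moore-style FSM: transitions map (state, action) to a next state,
    and each state is associated with an expected observation."""

    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, action) -> next state
        self.outputs = outputs          # state -> expected observation
        self.state = start

    def predict(self, action):
        """Observation the model expects if `action` is taken now."""
        return self.outputs[self.transitions[(self.state, action)]]

    def step(self, action, observation):
        """Advance the belief state; report whether it matched reality."""
        self.state = self.transitions[(self.state, action)]
        return self.outputs[self.state] == observation

# Toy belief: a light toggled by pressing a switch.
model = WorldModel(
    transitions={("off", "press"): "on", ("on", "press"): "off",
                 ("off", "wait"): "off", ("on", "wait"): "on"},
    outputs={"off": "dark", "on": "bright"},
    start="off",
)
```

A mismatch reported by `step` would signal that the belief no longer explains the observed events, i.e., the point at which the model should be revised.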