Towards an Integrative Model of Deductive-Inductive Commonsense Reasoning

Xenia Naidenova (Military Medical Academy, Russia)
DOI: 10.4018/978-1-60566-810-9.ch009


This chapter takes the most important steps toward an integrative model of deductive-inductive commonsense reasoning. The task of inferring good classification tests is decomposed into two kinds of subtasks that correspond to human mental acts. This decomposition allows incremental inductive-deductive inferences to be modeled. We give two basic recursive procedures, ASTRA and DIAGaRa, based on these two kinds of subtasks, for inferring all good maximally redundant classification tests (GMRTs). An incremental algorithm, INGOMAR, for inferring all GMRTs is also presented. The problems of creating an integrative inductive-deductive model of commonsense reasoning are discussed in the last section of the chapter.
Chapter Preview


The incremental approach to developing machine learning algorithms is one of the most promising directions in creating intelligent computer systems. Two main considerations motivate researchers' interest in incrementality as an instrument for solving learning problems. The first consideration is related to the nature of the tasks to be solved. In a wide range of problems, a computer system must be able to learn incrementally in order to adapt to changes in the environment or in user behavior. An example of incremental learning can be found in (Maloof, & Michalski, 1995), where a dynamic knowledge-based system for computer intrusion detection is described. Incremental clustering for mining in a data-warehousing environment is another interesting example of incremental learning (Ester, et al., 1998).

The second consideration is related to the intention of researchers to create data mining algorithms that are more effective and efficient than non-incremental ones. This goal raises the following questions: how should the next training example be selected so as to minimize the number of steps in the learning process? How should the relevant part of the hypotheses already induced be selected in order to bring them into agreement with a given training example? The problem of how best to modify an induced Boolean function when the classification of a new example reveals that this function is inaccurate is considered in (Nieto et al., 2002). In that paper, the problem is solved by minimizing the number of clauses that must be repaired in order to correctly classify all available training examples. An efficient algorithm for discovering frequent sets in incremental databases is given in (Feldman et al., 1997).

The distinction between an incremental learning task and an incremental learning algorithm is clarified in (Giraud-Carrier, 2000). A learning task is incremental if the training examples used to solve it become available over time, usually one at a time. A learning algorithm is incremental if, for given training examples e1, e2,…, ei, ei+1,…, en, it produces a sequence of hypotheses h1, h2,…, hi, hi+1,…, hn such that hi+1 depends only on hi and the current example ei+1. As shown in (Giraud-Carrier, 2000), it is possible to use an incremental algorithm for both non-incremental and incremental tasks.
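The definition above can be sketched as follows. This is a minimal illustration, not an algorithm from the chapter: the hypothesis here is a set of per-class running means over labelled scalar examples, and the `update` and `classify` functions are hypothetical names chosen for the sketch. The essential point is that each new hypothesis is computed from the previous hypothesis and the newly arrived example alone, without revisiting earlier examples.

```python
def update(hypothesis, example):
    """Produce h_{i+1} from h_i and the new example e_{i+1} only."""
    value, label = example
    count, means = hypothesis["count"], hypothesis["means"]
    # Incrementally update the running mean of the example's class.
    n = count.get(label, 0) + 1
    m = means.get(label, 0.0) + (value - means.get(label, 0.0)) / n
    count[label], means[label] = n, m
    return {"count": count, "means": means}

def classify(hypothesis, value):
    """Assign the class whose running mean is closest to the value."""
    means = hypothesis["means"]
    return min(means, key=lambda label: abs(means[label] - value))

# Training examples arrive one at a time, as in an incremental task.
h = {"count": {}, "means": {}}
for e in [(1.0, "a"), (1.2, "a"), (5.0, "b"), (4.8, "b")]:
    h = update(h, e)  # h_{i+1} depends only on h_i and e_{i+1}

print(classify(h, 1.1))  # -> a
print(classify(h, 4.9))  # -> b
```

Because `update` never consults earlier examples, the same procedure serves equally for a non-incremental task: one simply feeds the whole training set through the loop in any order, which is the observation attributed to (Giraud-Carrier, 2000) above.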

The analysis of existing learning algorithms shows that non-incremental data processing can be part of an incremental algorithm (see the example in (Nieto, et al., 2002)), while incremental data processing can be embodied in a non-incremental algorithm. From a more general point of view, incrementality is a mode of inductive reasoning for creating learning algorithms.

Induction allows the solution of a sub-problem of lesser dimension to be extended to the solution of the same problem of greater dimension (forward induction), and vice versa (backward induction). There is not just one way of applying induction to a given problem; there are many different ways, which lead to different methods of constructing algorithms.
