Cognition is an instinctive learning ability of human beings, and the cognition process is perhaps one of the most complex human behaviors: a highly efficient and intelligent form of information processing. When cognizing the natural world, humans transfer feature information to the brain through perception; the brain then processes this feature information and memorizes it. Since the invention of the computer, scientists have worked to improve its artificial intelligence, hoping that one day the computer could possess an intelligent “brain” as humans do. However, there is still a long way to go before a computer can truly “think” by itself. Artificial intelligence is currently an important and active research topic. It imitates the human brain using the idea of functional equivalence. Traditionally, neural computing and the family of neural networks have formed the major part of this direction (Haykin, 1994). By imitating the working mechanism of neurons in the human brain, scientists have built neural network theory on experimental research into models such as perceptron neurons and spiking neurons (Gerstner & Kistler, 2002). Neural-computing and neural network (NN) families (Bishop, 1995) have made great achievements in many areas. More recently, statistical learning and support vector machines (SVM) (Vapnik, 1995) have drawn extensive attention and have shown better performance than NN in various areas (Li, Wei & Liu, 2004), which implies that artificial intelligence can also be achieved through advanced statistical computing theory. Nowadays, these two methods tend to merge under the framework of statistical learning theory.
It should be noted that, for NN and SVM, the function imitation occurs at the microscopic level: both utilize a mathematical model of the working mechanism of the neuron. However, the whole cognition process can also be summarized by two basic principles from the macroscopic point of view (Li, Wei & Liu, 2004): the first is that humans always cognize things of the same kind, and the second is that humans recognize and accept things of a new kind easily.
In order to clarify this idea, the function-imitation explanation of NN and SVM is analyzed first. The function imitation of the human cognition process for pattern classification (Li & Wei, 2005) via NN and SVM can be explained as follows:
The training process of these machines imitates the learning process of human beings, which is called the “cognizing process,” while the testing process on an unknown sample imitates the recognizing process of human beings, which is called the “recognizing process.”
From a mathematical point of view, the feature space is divided into many partitions according to the selected training principles (Haykin, 1994). Each feature-space partition is then linked with a corresponding class. Given an unknown sample, NN or SVM detects its position in the feature space and assigns the corresponding class indicator. For more details, the reader may refer to Haykin (1994) and Vapnik (1995).
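This partition-based decision rule can be sketched with a minimal, hypothetical example. The sketch below uses a simple nearest-centroid rule rather than an actual NN or SVM, but it illustrates the same principle: the learned classes induce a partition of the whole feature space, and any query point is labeled according to the region it falls into.

```python
import math

# Hypothetical training data: two learned classes in a 2-D feature space.
training = {
    "class_A": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "class_B": [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)],
}

# Each class is summarized by its centroid; the centroids induce a
# partition of the entire feature space (here, a Voronoi partition).
centroids = {
    label: (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
    for label, pts in training.items()
}

def classify(point):
    """Detect which partition of the feature space the point falls
    into, i.e. return the label of the nearest class centroid."""
    return min(centroids,
               key=lambda label: math.dist(point, centroids[label]))

print(classify((0.5, 0.5)))   # falls in class_A's region
print(classify((5.5, 5.5)))   # falls in class_B's region
```

Note that `classify` always returns one of the learned labels: by construction, every point of the feature space belongs to some partition.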
Now suppose a sample database is given. If a totally unknown new sample arrives, neither SVM nor NN will naturally recognize it as new; instead, each will assign it to the closest of the learned classes (Li & Wei, 2005).
The root cause of this phenomenon lies in the fact that the learning principle of NN and SVM is based on feature-space partition. This kind of learning principle may enlarge each class’s distribution region, especially when the samples of each class are few and incomplete. Thus it is impossible for NN or SVM to detect unknown new samples successfully.
However, this situation is quite easy for humans to handle. Suppose that we have learned some things of the same kind before; if given similar things, we can easily recognize them. And if we have never encountered them before, we can just as easily tell that they are new things, and then memorize their features. Moreover, this process does not affect other things already learned. This point makes our new learning paradigm different from NN and SVM.
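The contrast can be made concrete with a small, hypothetical sketch. It is not the authors’ actual method: it reuses a simple nearest-centroid rule, and the rejection threshold is an assumed value for illustration only. A closed-set classifier (like plain NN or SVM) forces a genuinely new sample into one of the learned classes, while a distance threshold lets the machine flag it as a new kind, as humans do.

```python
import math

# Learned classes, each summarized by a centroid in feature space.
centroids = {"cats": (0.0, 0.0), "dogs": (10.0, 0.0)}

def closed_set_classify(point):
    """NN/SVM-style closed-set decision: every point in feature space
    belongs to some learned class, no matter how far away it lies."""
    return min(centroids, key=lambda c: math.dist(point, centroids[c]))

def open_set_classify(point, threshold=3.0):
    """Human-like decision: if the point is far from every learned
    class, report it as a new kind instead of forcing a label.
    (The threshold value is an assumption for illustration.)"""
    label = closed_set_classify(point)
    if math.dist(point, centroids[label]) > threshold:
        return "new kind"
    return label

novel = (2.0, 40.0)  # a sample unlike anything learned
print(closed_set_classify(novel))      # forced into a learned class
print(open_set_classify(novel))        # reported as a new kind
print(open_set_classify((9.5, 0.5)))   # a familiar sample is still "dogs"
```

Recognizing the novel sample as “new kind” does not change the stored centroids, mirroring the observation above that learning something new need not affect things already learned.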