In this article we report experiments that study how an automatic system learns a set of rules from its interaction with an artificial environment. In particular, we are interested in comparing this capability with the skill shown by humans when learning the same rules under similar conditions. We perform this analysis through two experiments. On the one hand, we observe the evolution of the automatic learning system in terms of its performance over time. At the beginning, the system does not know the rules, but it can observe the positive/negative results of its decisions. As its knowledge about the environment becomes more precise, its performance improves. On the other hand, seventy students faced the same artificial environment under the same conditions, though this time the experiment was presented as a game. The objective of the game is to gain points, but the rules of the game are not known a priori, so there is a clear incentive to find them out. We use these experiments to compare the learning curves of humans and automatic systems, and we use this information to analyze the similarities and differences between the two learning processes. In particular, we are interested in assessing how close the automatic system is to passing the Turing test.
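The setting described above can be illustrated with a minimal sketch. The article does not specify the learning algorithm used by the automatic system, so the following is only a hypothetical stand-in: an epsilon-greedy agent that, like the system and the students, never sees the hidden rules and must infer them purely from the positive/negative reward of each decision, while we record its cumulative score as a learning curve. The environment (five states, four actions, one rewarded action per state) is invented for illustration.

```python
import random

def run_learner(n_steps=2000, epsilon=0.1, seed=0):
    """Toy agent that learns hidden rules from reward feedback alone.

    Hypothetical environment (not the one used in the article):
    in each of 5 states, exactly one of 4 actions yields +1; the
    rest yield -1. The agent never observes the rule itself, only
    the reward of each decision, and maintains a running value
    estimate for every (state, action) pair.
    """
    rng = random.Random(seed)
    hidden_rule = {s: rng.randrange(4) for s in range(5)}  # unknown to the agent
    values = {(s, a): 0.0 for s in range(5) for a in range(4)}
    counts = {(s, a): 0 for s in range(5) for a in range(4)}
    curve = []   # cumulative score over time: the "learning curve"
    total = 0
    for _ in range(n_steps):
        s = rng.randrange(5)
        if rng.random() < epsilon:          # occasional exploration
            a = rng.randrange(4)
        else:                               # exploit current estimates
            a = max(range(4), key=lambda act: values[(s, act)])
        r = 1 if a == hidden_rule[s] else -1
        counts[(s, a)] += 1
        # incremental mean update of the value estimate
        values[(s, a)] += (r - values[(s, a)]) / counts[(s, a)]
        total += r
        curve.append(total)
    return curve

curve = run_learner()
```

As the agent's knowledge of the environment becomes more precise, the slope of `curve` approaches the maximum reward per step; plotting it against time yields the kind of machine learning curve that the experiments compare with the human one.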
Comparing Learning Methods
It is well known that inspiration for scientific research is frequently found at the borders between scientific fields. At these borders, researchers from different fields find useful knowledge that belongs to the standard background of one community but is unknown to researchers of other communities. This is the case of Cognitive Informatics (Wang, 2002), which puts different areas in contact: on the one hand, Computer Science; on the other, Neurology, Psychology, and other sciences related to the human brain. Interesting results in this research area can be found in previous issues of this journal. Among them, we highlight (Kinsner, 2007; Flax, 2007; Rajlich & Xu, 2007; López, Núñez & Pelayo, 2007; Encina, Hidalgo-Herrero, Rabanal, Rodríguez & Rubio, 2008), as they cover several different aspects of the most recent research in the area.
One of the main concerns of Computer Science in general, and of Cognitive Informatics in particular, is the relation between artificial and human cognitive processes. This issue has attracted the attention of Artificial Intelligence (AI) researchers for decades. One of the first approaches proposed to study this relation is a criterion that has become classical: the Turing test (Turing, 1950; Saygin, Cicekli, & Akman, 2000). Alan Turing proposed that a machine should be considered intelligent if its behavior is indistinguishable from that of a human being. This claim implicitly assumes that the human mind is a kind of optimal form of intelligence. The reason is not that we assume human intelligence to be perfect, but that we have no other model against which to make the comparison: systems under assessment are compared with the only thing we know (almost by definition) to be intelligent.
Let us note that, during its evolution, the AI field has adapted itself to several practical problems. In its optimistic beginnings, researchers tried to imitate global human behavior in a broad sense. After this failed, AI dramatically narrowed its scope to its current one, which consists in producing intelligent behaviors in very limited and restricted knowledge domains. Since the goals of AI have changed over time, we think that the concepts formerly proposed to relate and compare human intelligence with artificial intelligence should be revisited and updated as well. Although achieving artificially intelligent behavior in the broad sense (that is, with no domain restriction) is a hard task, creating machines whose behavior could be considered intelligent is not difficult when a specific domain is chosen, especially when the set of rules governing that domain is relatively simple.