A Neural Dynamic Model Based on Activation Diffusion and a Micro-Explanation for Cognitive Operations

Hui Wei (Fudan University, China)
DOI: 10.4018/jcini.2012040101

The neural mechanism of memory is closely related to the problem of representation in artificial intelligence. This paper proposes a computational model that simulates networks of neurons in the brain and how they process information. The model draws on morphological and electrophysiological characteristics of neural information processing, and is based on the assumption that neurons encode information in their firing sequences. The paper presents the network structure, the functions for neural encoding at different stages, the representation of stimuli in memory, and an algorithm for forming a memory; it also analyzes the stability and recall rate of learning and the capacity of memory. Because successive neural dynamic processes provide a coherent, neuron-level form in which information is represented and processed, the model may facilitate examination of various branches of Artificial Intelligence (AI), such as inference, problem solving, pattern recognition, natural language processing, and learning. The cognitive operations occurring in intelligent behavior receive a consistent representation when all are modeled from the perspective of computational neuroscience. Thus, the dynamics of neurons make it possible to explain the inner mechanisms of different intelligent behaviors through a unified model of cognitive architecture at a micro-level.

1. Introduction: The Micro-Level Operations Of Cognition

A sequence of tasks is performed in the brain when a student answers a teacher’s question: speech analysis, question understanding, knowledge retrieval, reasoning or problem solving, and sentence production. On careful examination of these successive stages, a person is aware of the sub-tasks of the problem and of the procedure for decomposing the problem and inferring the final decision. However, there is no consciousness of how a sub-task is accomplished, and no knowledge of the details of its execution. As another example, consider the procedure for remembering the name of an acquaintance not seen for a long time. The face is in mind, yet one must think deeply, and often for a long time, to recall the name; it is not known in any detail how the brain associates a name with a face. As a person accomplishes each cognitive task, the brain knows what to do without being told how to do it: the implementation is veiled. This kind of “conscious blankness” is typically experienced in problem solving, perception, language understanding and production, memory and learning, and the sensorimotor arc. The blankness, i.e., exactly which neural activities take place, needs to be explained through physiological psychology.

The realization of an intelligent behavior is essentially composed of a series of successive neural activities. Dividing the execution of an intelligent behavior into many steps means that finite automata (FA) can be used to model these cognitive operations as a time sequence. Such an automaton is activated by either outer or inner stimuli and transitions from one state to another, finally terminating in a state that stands for a directive to start a behavior, or for a kind of inner perception. According to cognitive science and cognitive informatics (Wang, 2003, 2007, 2010; Wang et al., 2006, 2009), human cognitive behaviors are rich, and each cognitive operation corresponds to its own automaton. At a higher level, the start of one FA may be connected to the termination of another, i.e., different FA may activate one another. In this way, different automata collaboratively perform both low-level reactions such as movement control and high-level cognition such as inference, problem solving, perception, learning, memory, and speech. Each automaton is a particular routine for a certain sub-task. The aim of the neural dynamic model is to describe and implement this kind of abstract automaton. Related studies (Destexhe & Contreras, 2006; Fox et al., 2005; Herz, Gollisch, Machens, & Jaeger, 2006; Jirsa, 2004; Sandler & Tsitolovsky, 2001) are increasingly concerned with this aim: the spatio-temporal structure of neurons and their connections, the evolution of their states, and physical or mathematical models will together explain the brain’s mechanism of information processing.
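The chaining of sub-task automata described above can be sketched as a toy program. This is a minimal illustrative sketch, not the paper's implementation: the FA class, the chain driver, and the "perceive"/"retrieve" automata with their stimuli are all assumed names introduced here for illustration.

```python
# Minimal sketch of the abstract automaton view: each cognitive sub-task
# is a finite automaton (FA), and the termination of one FA can activate
# the start of another.

class FA:
    def __init__(self, start, accept, transitions):
        self.start = start              # initial state
        self.accept = accept            # terminating ("directive") state
        self.transitions = transitions  # maps (state, stimulus) -> next state

    def run(self, stimuli):
        """Feed a sequence of outer/inner stimuli; an unknown stimulus
        leaves the state unchanged. Returns the final state."""
        state = self.start
        for s in stimuli:
            state = self.transitions.get((state, s), state)
        return state

def chain(automata, stimuli_per_fa):
    """Different FA activating one another: each automaton starts only
    if the previous one terminated in its accepting state."""
    for fa, stimuli in zip(automata, stimuli_per_fa):
        if fa.run(stimuli) != fa.accept:
            return False  # the sub-task failed, so the chain halts
    return True

# Two toy sub-tasks: a "perception" FA whose termination activates a
# "memory retrieval" FA (names and stimuli are illustrative only).
perceive = FA("waiting", "recognized", {("waiting", "face"): "recognized"})
retrieve = FA("idle", "recalled", {("idle", "cue"): "recalled"})

print(chain([perceive, retrieve], [["face"], ["cue"]]))   # True
print(chain([perceive, retrieve], [["noise"], ["cue"]]))  # False
```

The design choice here mirrors the text: a terminating state is not an end in itself but a condition that licenses the next automaton, so high-level behavior emerges from sub-task automata handing control to one another.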

Therefore, new approaches have been tried that adopt neuron encoding (Kimoto & Okada, 2001), structure-oriented learning (Kimoto & Okada, 2004), sparse sub-networks (Bohland & Minai, 2001), or dynamic neuron processing (Ganguly, Maji, Sikdar, & Chaudhuri, 2004). Research on cell encoding (Ferster & Spruston, 1995; Sakurai, 1999), micro-circuits (Quinlan, 1998), functional columns (Tanaka, 1996; Yao & Li, 2002), and the structure and function of receptive fields (Sun, Chen, Huang, & Shou, 2004) has bridged the gap between the signal-processing mechanism at the molecular or cellular level and the information-processing mechanism in the functional portions of the brain. Accordingly, research into AI models fusing memory, perception, and representation is becoming more prominent (Sun, Chen, Huang, & Shou, 2004).
