Simulation of the Action Potential in the Neuron's Membrane in Artificial Neural Networks

Juan Ramón Rabuñal Dopico, Javier Pereira Loureiro, Mónica Miguélez Rico
DOI: 10.4018/978-1-59904-996-0.ch005


In this chapter, we present an evolution of the Recurrent ANN (RANN) that enforces the persistence of activations within the neurons, creating activation contexts that generate correct outputs through time. With this new focus, we aim to store more information in the neurons' connections. To do so, the representation of each connection goes from a single value to a function that generates the neuron's output. The training process for this type of ANN has to calculate the gradient that identifies the function. To train this RANN, we developed a GA-based system that finds the best gradient set for each problem.
Chapter Preview


Due to the limitations of classical ANN models (Freeman, 1993) in handling temporal problems, the development of recurrent models (Pearlmutter, 1990) capable of solving this kind of problem efficiently began around 1985. The situation did not really change, however, until the arrival of the Recurrent Backpropagation algorithm. Before that moment, the most widely used RANN were Hopfield networks and Boltzmann machines, which were not effective for dynamic problems. The power of this new type of RANN comes from the increased number of connections and the full recursivity of the network. These characteristics, however, increase the complexity of the training algorithms and the time needed for the convergence process. These problems have slowed down the use of RANN for solving static and dynamic problems.

However, the possibilities of RANN are very large compared to the power of feedforward ANN. For both dynamic and static pattern matching, the RANN developed so far offer better performance and better learning ability.

Most of the studies already carried out on RANN have centered on the development of new architectures (partially recurrent or with context layers, fully recurrent, etc.) and on optimizing the learning algorithms to achieve reasonable computing times. None of these studies reflect changes in the architecture of the processing elements (PE), or artificial neurons, which continue to have an input function, an activation function, and an output function.
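The classical PE structure mentioned above (input function, activation function, output function) can be sketched as follows; this is a minimal illustration, not code from the chapter, assuming the usual choices of a weighted sum for the input function, a sigmoid for the activation function, and the identity for the output function:

```python
import math

def process_element(inputs, weights, bias=0.0):
    """Classical PE: weighted-sum input function, sigmoid activation
    function, and identity output function."""
    # Input function: aggregate the incoming signals.
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function: squash the net input into (0, 1).
    activation = 1.0 / (1.0 + math.exp(-net))
    # Output function: identity (the activation is the output).
    return activation
```

Note that this PE is memoryless: its output depends only on the current inputs, which is precisely the limitation the chapter's modified PE aims to overcome.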

The PE architecture has been modified, basing our study on biological evidence, to increase the power of the RANN. These modifications try to emulate the biological neuron activation that is generated by the action potential.

The aim of this work is to develop a PE model whose activation output is much more similar to that of biological neurons.
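To make the idea concrete, one hypothetical way an action-potential-like output could behave is sketched below: once the net input crosses a firing threshold, the PE emits a pulse that decays over the following time steps, so the activation persists in time rather than vanishing immediately. The threshold and decay parameters are illustrative assumptions, not values from the chapter:

```python
def action_potential_output(net_input, threshold=0.5, decay=0.7, steps=5):
    """Hypothetical action-potential-shaped output: a sub-threshold
    input produces no activity; a supra-threshold input produces a
    pulse that decays exponentially over `steps` time steps."""
    if net_input < threshold:
        # Below threshold: the neuron stays silent.
        return [0.0] * steps
    # Above threshold: an initial spike followed by exponential decay,
    # giving the activation persistence through time.
    return [net_input * decay ** t for t in range(steps)]
```

In this sketch, the persistence of the decaying pulse is what creates the "activation context" the abstract refers to: later inputs arrive while earlier activations are still partially active.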
