Neural Networks in Cognitive Science: An Introduction

Nooraini Yusoff (University of Surrey, UK), Ioana Sporea (University of Surrey, UK) and André Grüning (University of Surrey, UK)
DOI: 10.4018/978-1-61350-092-7.ch004


In this chapter we give a brief overview of the biological and technical background of artificial neural networks as are used in cognitive modelling and in technical applications. This will be complemented by three instructive case studies which demonstrate the use of different neural networks in cognitive modelling.
Chapter Preview



Classic neuroanatomical research tells us that the nervous system of animals essentially consists of nerve cells, the so-called neurons, and that these are connected to each other along axons and dendrites (longer and shorter tree-like branching outgrowths of the neuron body) via so-called synapses (Shepherd, 1994). This is certainly a simplified picture, but – while research into which neural details matter for supporting cognitive processes is still ongoing – it is a useful working hypothesis that neural computation takes place mainly through neurons and synapses. Most artificial network models that aim to explain nervous processing, or that are used in artificial intelligence, concentrate on these two ingredients (Müller et al., 1990; Rojas, 1996).

It appears that – compared to a computer in the technical sense – a single neuron is capable of only a very restricted set of computations, and that the complexity of the nervous system lies in the precise way these neurons are connected to each other through their synapses. Hence, where a classical computer has one or a few complex processor cores connected in a comparatively simple way, nervous systems have a large number of simple elementary processors connected in a complex way.

How do these components, neurons and synapses, interact? First of all, neurons fire spikes, i.e. they send electrical potential pulses down their axons and, via synapses, to the dendrites of other connected neurons. More precisely, the cell bodies of neurons maintain an electrical potential difference against the surrounding medium. This electrical potential is a function of the input such a neuron receives from other neurons (usually through the synapses on its dendrites). If the electrical potential exceeds a certain threshold, the neuron generates an electro-chemical pulse, the spike, which propagates along its axon. This pulse is then transmitted as input to the next neuron through a synapse, and the strength (or weight) of that synapse determines how much the potential of the post-synaptic cell changes as a result. The receiving cell usually needs a number of such pulses within a certain time in order to reach its own firing threshold and pass on a spike to its successor neurons; i.e., neurons often function approximately as leaky integrators of incoming spikes within a certain characteristic time (Gerstner & Kistler, 2002).
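The leaky-integrator behaviour described above can be sketched as a minimal leaky integrate-and-fire neuron. All numerical values (weight, time constant, threshold) are illustrative choices, not values from the chapter:

```python
def simulate_lif(input_spike_times, weight=2.0, tau=20.0,
                 threshold=10.0, t_max=100):
    """Simulate a leaky integrate-and-fire neuron in 1 ms steps.

    The membrane potential leaks towards rest (0 here) with time
    constant tau, jumps by the synaptic weight whenever an input
    spike arrives, and emits an output spike (then resets) when
    it crosses the firing threshold.
    """
    v = 0.0
    output_spikes = []
    inputs = set(input_spike_times)
    for t in range(t_max):
        v -= v / tau                 # leaky decay towards rest
        if t in inputs:
            v += weight              # incoming spike, scaled by synaptic weight
        if v >= threshold:
            output_spikes.append(t)  # threshold crossed: fire
            v = 0.0                  # reset after the spike
    return output_spikes

# A single input spike is not enough to reach threshold,
# but a rapid burst of inputs drives the cell over it:
print(simulate_lif([10]))                                # []
print(simulate_lif([10, 12, 14, 16, 18, 20, 22, 24]))    # [22]
```

This illustrates the temporal-summation point in the text: the cell fires only when enough excitation arrives within its characteristic integration time, before the leak has discharged it again.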

It is important to note that a synapse is a site of close contact between the pre-synaptic axon and the post-synaptic dendrite, which are however separated by the so-called synaptic cleft. Signal transmission at the synapse happens as follows: the change of electrical potential due to the spike in the pre-synaptic neuron causes it to release a chemical neurotransmitter at the synapse, which then diffuses across the synaptic cleft. When it reaches the post-synaptic side of the synapse, it causes a change of electrical potential in the receiving cell body. Synapses can be excitatory, i.e. the transmitted spike increases the post-synaptic potential, or inhibitory, i.e. the potential is decreased.

To summarise for use in artificial neural networks: neurons communicate with each other by sending spikes, short electrical pulses of stereotypical form. Receiving neurons sum these spikes within a time window, weighted by the strengths of the synapses over which they were received, and – if they have received enough input, depending on the parameters of the neuron (firing threshold, internal state history, etc.) – then fire a spike themselves.

How is information encoded in the nervous system? What is a significant neural firing pattern? This is still an open question, and its answer may depend on where one looks in the nervous system. It is known that there are parts of the nervous system that use the firing rate, i.e. the average number of spikes a neuron emits in a time interval, to encode, for example, the intensity of a sensory stimulus. However, precise spike times or time-locked spiking of several neurons can also be important for information encoding, for example in the owl auditory system, where precisely timed spikes encode the small interaural time differences needed for sound localisation (Carr, 1993).
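A rate code, as described above, is easy to read out: count the spikes in a time window and divide by the window's duration. The spike times below are invented for illustration:

```python
def firing_rate(spike_times_ms, window_ms):
    """Estimate a rate code: number of spikes in the observation
    window divided by its duration, converted to spikes/second (Hz)."""
    return 1000.0 * len(spike_times_ms) / window_ms

# Five spikes observed in a 200 ms window -> 25 Hz
print(firing_rate([12, 48, 103, 151, 198], 200))  # 25.0
```

Under a pure rate code the exact positions of the five spikes are irrelevant, which is precisely the information that a temporal code (as in the owl auditory system) does exploit.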

In artificial neural networks, rate-neuron models are often preferred over spiking ones for their technical simplicity, since the output of a neuron is then just a single real number per time step, whereas for spiking models the precise timing of spikes, and hence the internal state of the cell, needs to be modelled (Gerstner & Kistler, 2002).
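A minimal sketch of such a rate neuron, assuming the common sigmoidal activation function (one of several possible choices): instead of discrete spikes, the unit outputs a real number between 0 and 1 per time step, interpretable as a normalised firing rate:

```python
import math

def rate_neuron(inputs, weights, bias=0.0):
    """Rate-coded neuron: output is a smooth (sigmoid) function of the
    weighted input sum, i.e. a single real number per time step rather
    than a train of discretely timed spikes."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# With zero net input the sigmoid sits at its midpoint:
print(rate_neuron([0.0, 0.0], [2.0, -1.0]))  # 0.5
```

Contrast this with the spiking sketch earlier: the rate neuron needs no internal membrane state between time steps, which is exactly the simplification the text refers to.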
