Simulating the Behavior of the Human Brain Using Sparse Linear Algebra on Distributed Memory Platforms: Applying Tasking to MPI Communication


DOI: 10.4018/978-1-7998-7082-1.ch006

Abstract

This chapter presents novel approaches that show the effective use of tasking to solve linear algebra problems at the distributed-memory level. Encapsulating distributed memory transfer calls within tasks is an efficient, relatively easy, and transparent way to deal with the significant imbalance between communication and computation speed. Unlike the previous chapters, the authors use as a test case a real application that relies on some of the operations previously described, such as the computation of a batch of sparse and independent linear systems of equations. This application is one of the most challenging in computing today: the simulation of the human brain. The reader will see different techniques based on tasking that help not only to minimize the imbalance between communication and computation, but also to balance highly unbalanced problems, such as the simulation of a multi-morphology neuron network.

Introduction to Human Brain Simulation

The model presented for the simulation of the human brain consists of two major tasks (Akar et al., 2019): i) the computation of the voltage capacitance on the neuron morphologies, and ii) the exchange of spikes among the neurons connected through synapses. In this model, neurons are seen as multi-compartment cables composed of active electrical elements (see Figure 1).

Figure 1.

Multi-compartment model implemented for the simulation of the human brain (Peyser, 2017)


In the following, the authors present the equation behind the computation of the voltage capacitance on the morphology of neurons (Akar et al., 2019). This is one of the most time-consuming stages of the simulation. The formula to be solved in this step has the following general form:

$$f(x)\,\frac{\partial}{\partial x}\!\left(g(x)\,\frac{\partial V}{\partial x}\right) = C\,\frac{\partial V}{\partial t} + I(V)$$
f and g are functions of the x-dimension (the neuron morphology), and the current I and the capacitance C (Diaz-Pier et al., 2016) depend on the voltage V. A linear system is obtained by discretizing the previous formula on a particular morphology, and it needs to be solved at every time-step of the simulation. This system must be solved at each point (i) of the discretized morphology:

$$a_i\,V_{i-1}^{\,n+1} + d_i\,V_i^{\,n+1} + b_i\,V_{i+1}^{\,n+1} = rhs_i$$

The coefficients of the matrix can be represented as follows:

upper diagonal: $b_i = -\dfrac{f_i\,g_{i+1/2}}{\Delta x^2}$

lower diagonal: $a_i = -\dfrac{f_i\,g_{i-1/2}}{\Delta x^2}$

diagonal: $d_i = \dfrac{C_i}{\Delta t} - a_i - b_i$

right-hand side (rhs): $rhs_i = \dfrac{C_i}{\Delta t}\,V_i^{\,n} - I(V_i^{\,n})$

In the above formulas, ai and bi are constant (scalar) in time and are computed just once, at the beginning of the simulation. The other coefficients, the diagonal (di) and the right-hand side (rhsi), have to be computed and updated at every time-step. The discretization explained above can be extended or modified to include the branching of the neurons, where the spatial decomposition of the morphology of the neuron consists of a set of one-dimensional branches connected via nodes. For the sake of clarity, the authors illustrate an example of a very simple neuron morphology in Figures 2, 3 and 4, which is composed of a set of branches or segments (si, i = 1, ..., 4) and the connections of the branches, or nodes (ni, i = 1, ..., 6).

Figure 2.

Neuron morphology (Cumming, 2010)

Figure 3.

Numbering of the neuron morphology illustrated in Figure 2 (Cumming, 2010)

Figure 4.

Hines matrix representation of the numbering illustrated in Figure 3 (Cumming, 2010)


The graphs formed by the neuron morphologies are always acyclic, i.e., they contain no loops. The nodes are numbered using a scheme that allows the resulting matrix (called the Hines matrix) to be solved in linear time.

Key Terms in this Chapter

Weak Scaling: A technique used to evaluate the scalability of a computational problem by increasing both the size of the problem and the resources (usually the number of nodes).

Neuron: Also known as a nerve cell, an electrically excitable cell and the main component of the nervous system. Usually, neurons are connected to a large number of other neurons.

Synapses: Structures of the nervous system that connect neurons. Synapses allow a neuron to pass an electrical or chemical signal to another neuron.

Distributed Memory Systems: Systems consisting of a set of processing nodes interconnected by a high-speed network. Each node consists of a set of homogeneous or heterogeneous computing elements and local memory.

Strong Scaling: A technique that evaluates the scalability of a computer program and a computing platform by fixing the size of the problem and increasing the resources, usually the number of nodes, of the platform on which the problem is evaluated.

Hines Systems: Special linear systems of equations in which the distribution of the elements produces a sparse matrix with particular properties, such as being symmetric and acyclic. These systems are widely used for the simulation of electrical circuits and neuron morphologies.
