This chapter presents the possibilities for obtaining significant performance gains through advanced implementations of algorithms on dataflow hardware. A framework built on top of the dataflow architecture that provides tools for such advanced implementations is also described. In particular, the authors point out the following issues of interest for accelerating algorithms: (1) the dataflow paradigm is well suited to executing a certain class of high-performance computing algorithms, namely algorithms that work with big data and algorithms that repeat the same set of instructions many times; (2) the dataflow architecture can be configured using appropriate programming tools that define the hardware by generating VHDL files; (3) besides accelerating algorithms, the dataflow architecture also reduces power consumption, which is an important security factor in edge computing.
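As an illustration of point (1), consider a kernel in which one short sequence of arithmetic operations is repeated over a very large data stream. The C sketch below is purely illustrative (the function name and the specific operation are assumptions, not taken from the chapter): a dataflow toolchain would map the loop body onto a deep hardware pipeline, ultimately expressed as VHDL, so that one result is produced per clock cycle once the pipeline is full, rather than one full loop iteration at a time on a conventional processor.

```c
/*
 * Illustrative sketch only: the kind of kernel that suits a dataflow engine,
 * i.e., the same short sequence of instructions repeated over a large stream.
 * On dataflow hardware, each iteration of this loop body would become one
 * stage of a hardware pipeline generated by the toolchain (e.g., as VHDL).
 */
#include <stddef.h>

/* Hypothetical streaming kernel: y[i] = a * x[i] + b over a large array. */
void axpb_stream(const float *x, float *y, float a, float b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Identical multiply-add repeated n times: ideal for pipelining. */
        y[i] = a * x[i] + b;
    }
}
```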
Background
The 1906 Nobel Prize in Physiology or Medicine was awarded to Camillo Golgi and Ramón y Cajal for having visualized and identified the neuron, the structural and functional unit of the nervous system (Grant, 2007). Since then, it has been discovered that the human brain contains roughly 100 billion neurons and 1000 trillion synapses. Neurons interact through electrochemical signals, also known as action potentials (APs) or spikes, transmitted from one neuron to the next through synaptic junctions, forming functional and definable circuits that can be organized into larger ‘neuronal’ networks and anatomical structures. These networks integrate information from multiple brain regions as well as incoming information about the external environment (e.g., sound, light, smell, taste). The result is how we perceive the world and produce complex behavior and cognitive processes, including decision-making and learning (Kandel, 2012); over time, these processes also modify the structure and function of the networks through a process called neuroplasticity (Fuchs & Flugge, 2014).
Understanding how the brain works, with the ultimate goal of developing treatments for neurological disease, remains one of the greatest scientific challenges of this century. Indeed, neurological diseases carry a substantial social and economic burden (Wynford-Thomas & Robertson, 2017). In the US alone, the overall cost of neurological diseases (e.g., stroke, dementia, movement disorders, traumatic brain injury) amounts to nearly $1 trillion and will increase dramatically in the next few years due to population ageing. Alarmingly, the cost of dementias and stroke alone is expected to exceed $600 billion by 2030 (Gooch, Pracht, & Borenstein, 2017). To tackle this challenge, neuroscientists have developed a battery of increasingly complex tools, which have raised data storage and computational speed requirements to an unprecedented level, making the use of big data techniques and high-performance computing (HPC) resources such as supercomputers a necessity.
In the remainder of this chapter we briefly review current and future applications of supercomputers in neuroscience, focusing on computational neural models, brain imaging, and models of brain stimulation. These areas were chosen not only because their advances have been particularly driven by computational approaches, but also because they are highly interconnected: understanding how each area is evolving helps predict future research trends in the others. We also briefly discuss how the next generation of supercomputers might enable further advances in these areas.