The BioDynaMo Project: Experience Report

Roman Bauer, Lukas Breitwieser, Alberto Di Meglio, Leonard Johard, Marcus Kaiser, Marco Manca, Manuel Mazzara, Fons Rademakers, Max Talanov, Alexander Dmitrievich Tchitchigin
DOI: 10.4018/978-1-5225-1947-8.ch006

Abstract

Computer simulations have become a powerful tool for scientific research. Given the vast complexity of many open scientific questions, a purely analytical or experimental approach is often not viable. Biological systems, for example, exhibit an extremely complex organization and heterogeneous interactions across different spatial and temporal scales. To facilitate research on such problems, the BioDynaMo project aims to provide a general platform for computer simulations in biological research. Since scientific investigations require extensive computing resources, this platform should run on hybrid cloud computing systems, allowing for the efficient use of state-of-the-art computing technology. This chapter describes challenges during the early stages of the software development process. In particular, we discuss issues regarding the implementation and the highly interdisciplinary, international nature of the collaboration. We also explain the methodologies, the approach, and the lessons the team learned during these first stages.
Chapter Preview

Code Modernization

High performance and high scalability are prerequisites for addressing ambitious research questions like modeling epilepsy. Our code modernization efforts were driven by the goal of removing unnecessary overhead and updating the software design to tap the potential left unused since the paradigm shift to multi- and many-core systems. Prior to 2004, performance gains were driven by rising clock speeds: buying a processor of the next generation automatically increased application performance without changing a single line of code. Physical limitations then forced CPU vendors to change their strategy for pushing the edge of the performance envelope (Sutter, 2005). Although this paradigm shift improved theoretical throughput, sequential applications benefit only marginally from new processor generations. This puts an additional burden on application developers, who have to refactor existing applications and deal with the increased complexity of parallel programs. As core counts keep increasing, the share of unused computing resources will grow if these changes are not made. Furthermore, a modern processor offers multiple levels of parallelism that go beyond the number of cores: it features multiple threads per core, has multiple execution ports able to execute more than one instruction per clock cycle, is pipelined, and provides vector instructions, also referred to as SIMD (Single Instruction, Multiple Data).
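As an illustration, the following minimal sketch (not taken from the BioDynaMo code base) shows how two of these levels can be addressed in C++ with OpenMP: the parallel for construct distributes loop iterations across cores, while the simd clause asks the compiler to use vector instructions within each thread.

#include <cstddef>
#include <vector>

// Scales every element of `data`. The combined construct spreads the
// iterations across threads (core-level parallelism) and vectorizes
// each thread's chunk with SIMD instructions (lane-level parallelism).
void Scale(std::vector<double>& data, double factor) {
  const std::size_t n = data.size();
#pragma omp parallel for simd
  for (std::size_t i = 0; i < n; ++i) {
    data[i] *= factor;
  }
}

Compiled with OpenMP support (e.g., -fopenmp on GCC or Clang), a loop like this can exploit both the cores and the vector units of the processor; without such restructuring, only a single lane of a single core does the work.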

Last year’s Intel Modern Code Developer Challenge was about optimizing sequential C++ brain simulation code provided by Newcastle University in the UK. The task for the participating students was to improve the run-time as much as possible. Using data layout transformations (from array of structures, AoS, to structure of arrays, SoA), parallelization with OpenMP, a custom memory allocator, and Intel Cilk Plus array notation, the winner improved the run-time by a factor of 320. This reduced the execution time on an Intel Xeon Phi Knights Corner coprocessor with 60 cores from 45 hours to 8.5 minutes, which clearly shows the economic potential of code modernization efforts. Since this result was obtained on a small code sample, these optimizations still have to be integrated into the entire code base.
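To make the data layout transformation concrete, the following sketch contrasts the two layouts; the field names are hypothetical and not taken from the challenge code. In the AoS layout, a loop that touches only the membrane voltage strides over unused fields, whereas the SoA layout stores each field contiguously, giving the compiler and the hardware prefetcher a dense stream that is easy to vectorize.

#include <cstddef>
#include <vector>

// Array of structures (AoS): all fields of one neuron are adjacent in
// memory, so iterating over a single field wastes cache bandwidth.
struct NeuronAoS {
  double voltage;
  double diameter;
  double x, y, z;
};

// Structure of arrays (SoA): each field occupies its own contiguous
// array, which suits SIMD loads and stores.
struct NeuronsSoA {
  std::vector<double> voltage;
  std::vector<double> diameter;
  std::vector<double> x, y, z;
};

// Example kernel that benefits from the SoA layout: the voltage update
// reads and writes one dense array and vectorizes cleanly.
void DecayVoltages(NeuronsSoA& neurons, double factor) {
  const std::size_t n = neurons.voltage.size();
#pragma omp simd
  for (std::size_t i = 0; i < n; ++i) {
    neurons.voltage[i] *= factor;
  }
}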
