Cellular Automata and GPGPU: An Application to Lava Flow Modeling

Donato D’Ambrosio (University of Calabria, Italy), Giuseppe Filippone (University of Calabria, Italy), Rocco Rongo (University of Calabria, Italy), William Spataro (University of Calabria, Italy) and Giuseppe A. Trunfio (University of Sassari, Italy)
Copyright: © 2012 | Pages: 18
DOI: 10.4018/jghpc.2012070102


This paper presents an efficient implementation of the SCIARA Cellular Automata computational model for simulating lava flows, using the Compute Unified Device Architecture (CUDA) interface developed by NVIDIA and carried out on Graphics Processing Units (GPUs). GPUs are designed for efficiently processing graphics data sets; however, they have recently also been exploited to achieve excellent computational results in applications not directly connected with Computer Graphics. The authors describe an implementation of SCIARA and present results obtained on a Tesla GPU computing processor, an NVIDIA device specifically designed for High Performance Computing, and on a GeForce GT 330M commodity graphics card. The experiments carried out show that significant performance improvements are achieved, over a factor of 100, depending on the problem size and the type of memory optimization performed. The experiments confirm the effectiveness and validity of adopting graphics hardware as an alternative to expensive hardware solutions, such as clusters or multi-core machines, for the implementation of Cellular Automata models.
Article Preview


Nowadays, parallel computing is seen as a cost-effective method for the fast and efficient solution of computationally large and data-intensive problems (Grama et al., 2003). The great expansion of High Performance Computing (HPC) into different scientific and engineering fields has made numerical simulation a standard tool for solving the complex equation systems that govern the dynamics of real phenomena, through which researchers can model, for instance, a lava flow, fire spreading, or traffic. Usually, the modeler has to implement proper optimization strategies and, when possible, parallelize the program. The type of parallelization needed in this latter phase depends on the kind of parallel architecture available. For instance, on a distributed-memory machine (such as a Beowulf cluster), parallelization can be accomplished by means of MPI, the Message Passing Interface (Snir et al., 1995). By contrast, on a multicore architecture (such as Intel's Core i7 processor), a shared-memory, multithreaded implementation based on OpenMP (Chapman et al., 2007) can be a better and more efficient solution. In recent years, however, parallel computing has undergone a significant revolution with the introduction of GPGPU technology (General-Purpose computing on Graphics Processing Units), a technique that uses the graphics card processor (the GPU) for purposes other than graphics. Currently, GPUs outperform CPUs in both floating-point performance and memory bandwidth, in each case by a factor of roughly 100. As a confirmation of the increasing importance of GPUs, leading companies such as Intel have already integrated GPUs into their latest products, as in some releases of the Core i5 and Core i7 processing units.
Although the incredible processing power of graphics processors can be harnessed for general-purpose computation, a GPU may not be suitable for every computational problem: only a parallel program that is optimized for the GPU architecture can take full advantage of the GPU's performance. In fact, a GPGPU program that does not sufficiently exploit the GPU's capabilities can often perform worse than a simple sequential program running on a CPU, for example when data transfer between main memory and video memory becomes the bottleneck. Nevertheless, GPU applications in the important field of Computational Fluid Dynamics (CFD) are growing in both quantity and quality within the scientific community (e.g., Tölke & Krafczyk, 2008; Zuo et al., 2010).
