POP-C++ and Alpine3D: Petition for a New HPC Approach

Pierre Kuonen, Mathias Bavay, Michael Lehning
DOI: 10.4018/978-1-61520-987-3.ch015

Abstract

In the developed world, an ever better and finer understanding of the processes leading to natural hazards is expected. This is in part achieved using the invaluable tool of numerical modeling, which offers the possibility of applying scenarios to a given situation. This in turn leads to a dramatic increase in the complexity of the processes that the scientific community wants to simulate. A numerical model is becoming more and more like a galaxy of sub-process models, each with its own numerical characteristics. The traditional approach to High Performance Computing (HPC) can hardly face this challenge without rethinking its paradigms. A possible evolution would be to move away from the Single Program, Multiple Data (SPMD) approach and towards an approach that leverages the well-known object-oriented paradigm. This evolution is at the foundation of the POP parallel programming model presented here, as well as of its C++ implementation, POP-C++.
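
To make the direction of this proposal concrete, the following is a minimal, hypothetical sketch of what a parallel class might look like in POP-C++. The class name, its methods, and the requested computing power are invented for illustration and do not come from this chapter; only the general ingredients of the POP model are assumed: classes whose instances can be allocated on remote resources (described through an object description) and methods annotated with invocation semantics (synchronous or asynchronous on the caller side, sequential or concurrent on the object side).

// Hypothetical sketch of a POP-C++ parallel class ("parclass").
// All names and numbers are illustrative assumptions, not taken from the chapter.
parclass SnowCell {
public:
    // The object description asks the runtime to allocate this object
    // on a resource offering a given computing power.
    SnowCell(int cellId) @{ od.power(100); };

    // Asynchronous, concurrent method: the caller does not wait, and
    // several invocations may be served in parallel by the object.
    async conc void setMeteoData(double airTemperature, double precipitation);

    // Synchronous, sequential method: the caller waits for the result,
    // and calls are serialized on the object.
    sync seq double computeSnowHeight();

private:
    int id;
    double snowHeight;
};

In such a model, each sub-process model of a larger simulation (such as Alpine3D) could be encapsulated in a parallel object and placed on a suitable resource, instead of forcing every part of the code into a single SPMD structure.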

Introduction

In the developed world, there is a general trend of rising public expectations regarding protection from natural hazards. The goal is to reduce the number of fatalities caused by natural hazards as well as to contain the economic impact of such events. Moreover, a long-term understanding of potential trends in the frequency and size of hazardous events is also sought as necessary information for public policy planning (Marty et al., 2009).

In order to satisfy these requirements, various tools have been developed by the scientific community. These tools are designed to provide answers to questions such as “what impact will climate change have on water availability for a specific place” or “would a warmer climate lead to more frequent flooding”, that is, to predict general trends (forecasting, see for example Bavay et al., 2009). They are also used to help analyze a current situation (nowcasting): knowing only the situation at a discrete set of points, what is the full picture? Using the information provided by a limited set of sensors, can we predict the danger level?

Damage by natural hazards has an obvious economic cost. But a less obvious cost also has to be taken into account: being overly cautious may reduce the direct cost of natural hazards, but it creates a potentially massive indirect cost (business lost because of closed roads or evacuated towns, investment in unnecessary security equipment, as well as general public discontent over unnecessary safety measures). This means that the tools used for natural hazard forecasting and nowcasting have to walk a fine line between being too cautious and being too optimistic, striving to get as close to the truth as possible. Much recent research has tried to find optimal solutions for risk management in mountains (Lehning and Wilhelm, 2005; Bründl et al., 2009).

The goal of these nowcasting and forecasting tools is to predict a danger level or expected changes based on current or already forecasted data. Two approaches are commonly used: the first consists of building a statistical model on past data; the second consists of describing in detail the physical processes at play.

Usual Approach and Limitations

The first approach is statistical, as it is based on the assumption that past events describe future events and behavior. Therefore, a statistical correlation is sought between some data used as inputs (for example, measurements from weather stations) and the output data of interest (for example, a catchment outflow). The outcome is a purely statistical relationship between the input and output parameters that is of very low computational complexity. For example, the degree-day hydrological approach considers that a catchment's outflow depends on the time integral of positive air temperatures. As Ohmura (2001) has shown, however, this essentially statistical approach has a sound physical basis.
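
As a purely illustrative sketch (the degree-day factor and temperature threshold below are invented calibration parameters, not values from the chapter), the degree-day idea can be written in a few lines of C++: the melt, and hence the outflow contribution, is taken to be proportional to the sum of positive air temperatures over time.

// Minimal, hypothetical illustration of a degree-day model.
// The degree-day factor and threshold are calibration parameters;
// the temperature series in main() is invented.
#include <algorithm>
#include <iostream>
#include <vector>

// Melt estimate proportional to the sum of positive air temperatures.
double degreeDayMelt(const std::vector<double>& dailyAirTempC,
                     double degreeDayFactor,   // e.g. mm per degC per day
                     double thresholdC = 0.0)
{
    double melt = 0.0;
    for (double t : dailyAirTempC)
        melt += degreeDayFactor * std::max(t - thresholdC, 0.0);
    return melt;
}

int main()
{
    // One week of (invented) daily mean air temperatures in degC.
    const std::vector<double> temps = {-2.0, 0.5, 3.0, 4.5, 1.0, -1.0, 6.0};
    std::cout << "Melt estimate: " << degreeDayMelt(temps, 4.0) << " mm\n";
    return 0;
}

The whole computational cost of such a model is a single loop over the temperature series, which illustrates the very low computational complexity mentioned above.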

In practice, this method requires a calibration phase for every new setup, in which statistical relations between the input and output parameters are sought in a sufficiently large dataset. This calibration has to be redone when looking at other input or output parameters, or when looking at another geographical location. Even if the investment required to build such a model is quite small, it has to be fully redone for each new application. Moreover, the method requires that no statistically extraordinary event occur: since the relationship between the input and the output is based on normal, regular behavior, the model simply cannot be applied to such a case. These models therefore appear ill-suited for studying climate scenarios (which are characterized by a departure from normal trends) or extreme events.
