Parallel and Distributed Visualization Advances

Huabing Zhu, Lizhe Wang, Tony K.Y. Chan
DOI: 10.4018/978-1-60566-026-4.ch482

Abstract

Visualization is the process of mapping numerical values into perceptual dimensions, conveying insight through visible phenomena. Given visible phenomena, the human visual system can recognize and interpret complex patterns, detecting meaning and anomalies in scientific data sets. Another role of visualization is to display new data in order to uncover new knowledge. Hence, visualization has emerged as an important tool widely used in science, medicine, and engineering. As a consequence of our increased ability to model and measure a wide variety of phenomena, the data generated for visualization are far beyond the capability of desktop systems. In the near future, we anticipate collecting data at the rate of terabytes per day from numerous classes of applications. These applications process huge volumes of data produced by increasingly sensitive and accurate instruments, for example, telescopes, microscopes, particle accelerators, and satellites (Foster, Insley, Laszewski, Kesselman, & Thiebaux, 1999). Furthermore, the rate at which data are generated continues to increase. Therefore, to visualize large data sets, visualization systems impose greater demands on a variety of resources. For most users, it is difficult to meet all of these requirements on a single computing platform, or indeed in a single location. In a distributed computing environment, various resources are available, for example, large-volume data storage, supercomputers, and video equipment. At the same time, high-speed networks and the advent of multi-disciplinary science mean that the use of remote resources becomes both necessary and feasible (Foster et al., 1999).

Background

Visualization Process

The process of visualization is determined by the choice of representation. A process that supports 3-D, real-time interaction is much more complicated than one that produces still images. However a specific process behaves, it can be considered in three different but interrelated semantic contexts (see Figure 1): making data displayable by a computer, making them visible to one’s eyes, and making them visible to one’s mind (Brodlie et al., 2004). This section provides an overview of the process of creating a 3-D, real-time, interactive visualization.

Figure 1. Interactive visualization process

Interactive means that the system responds quickly enough for users to adjust the controlling inputs in rapid response to the output (Mueller, 2001). Real time means that the system must keep updating its outputs within a certain small, fixed amount of time (Mueller, 2001). Generally, real-time interactive visualization is defined as the process of creating images at rates between approximately 1 and 100 frames per second. In particular, if the latency exceeds approximately 1 second, humans feel that the computer’s response is too slow for continuous interaction. Conversely, increasing the frame rate from 100 frames per second to 1000 frames per second is of no benefit, owing to the perceptual characteristics of the human brain (Igehy, 2000).
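
To make these numbers concrete, the sketch below (a hypothetical Python example, not from the chapter) shows how a viewer’s main loop relates a target frame rate to a per-frame time budget and flags latency above the 1-second interaction threshold.

```python
import time

TARGET_FPS = 30                  # a common interactive target; any value in ~1-100 fps applies
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds available per frame

def render_frame():
    """Placeholder for the filter/represent/render stages of one frame."""
    time.sleep(0.01)  # simulate 10 ms of work

for _ in range(100):  # main loop of an interactive viewer (illustrative only)
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed > 1.0:
        print("latency > 1 s: continuous interaction breaks down")
    # sleep off any remaining budget so the loop does not exceed the target rate
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))
```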

Figure 1 shows the flow chart of real-time, interactive visualization; information flows along the arrows. First, data are generated by a source such as a mathematically based computational model or a collection of observed values. Data filtering then involves a wide range of operations, such as removing noise and replacing missing values, and makes the data readable to visualization software. These operations also refine the data by sifting out the most relevant aspects and removing unnecessary values.
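
As a rough illustration of this filtering stage, the following sketch (assuming NumPy; the function name filter_data is hypothetical) replaces missing values, clips outliers, and smooths high-frequency noise in a 1-D data set.

```python
import numpy as np

def filter_data(raw, vmin=0.0, vmax=1.0):
    """Typical filtering steps: replace missing values and suppress noise."""
    data = np.asarray(raw, dtype=float)
    # replace missing values (NaN) with the mean of the valid samples
    data = np.where(np.isnan(data), np.nanmean(data), data)
    # clip obvious outliers to the expected physical range
    data = np.clip(data, vmin, vmax)
    # simple moving-average smoothing to remove high-frequency noise
    kernel = np.ones(3) / 3.0
    return np.convolve(data, kernel, mode="same")

samples = [0.2, np.nan, 0.25, 5.0, 0.3]  # raw values with a gap and an outlier
print(filter_data(samples))
```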

After the data are filtered, the representation procedure maps them to some geometric form. At this point, a geometric scene graph is set up. The scene graph contains the geometric objects, their color values, materials, and so on. These parameters can also be driven by computational models, within the constraints imposed by the visualization software.

The rendering procedure takes the scene-graph information and computes the 2-D image for human eyes to see. These images are stored temporarily in a color buffer for display. The resultant images are then displayed on computer screens or other output devices, such as a projection wall or a head-mounted display (HMD).
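
The following sketch illustrates the representation and rendering stages together, assuming the VTK library as one possible toolkit; the chapter itself does not prescribe a particular package. A mapper and actor stand in for the scene-graph setup, and a renderer and window compute and display the 2-D image.

```python
import vtk

# Representation: map data to geometric form; a sphere source stands in
# for geometry produced by a real filtering stage.
source = vtk.vtkSphereSource()
source.SetThetaResolution(32)
source.SetPhiResolution(32)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()                       # scene-graph node: geometry + appearance
actor.SetMapper(mapper)
actor.GetProperty().SetColor(0.8, 0.3, 0.3)  # material/color parameters

# Rendering: compute a 2-D image of the scene and show it on screen.
renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.SetSize(640, 480)
window.Render()
```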

Using a graphical user interface (GUI), users can steer the visualization system. One can adjust it by modifying variables and data in the simulation, data-filtering, representation, and rendering stages, and receive a real-time response. For real-time interaction it is important that both the simulation and the graphics system provide real-time performance and allow for user input.
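
A minimal steering sketch, again assuming VTK: a keypress callback modifies a representation parameter and triggers a re-render, illustrating how user input feeds back into the pipeline.

```python
import vtk

source = vtk.vtkSphereSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

def on_key(obj, event):
    # steering: pressing 'r' refines the representation and re-renders
    if obj.GetKeySym() == "r":
        source.SetThetaResolution(source.GetThetaResolution() + 8)
        window.Render()

interactor.AddObserver("KeyPressEvent", on_key)
interactor.Initialize()
window.Render()
interactor.Start()
```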

Key Terms in this Chapter

Scene Graph: A scene graph is a data structure used to hierarchically organize and manage the contents of spatially oriented scene data.
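
A minimal illustration of the idea in Python (the class and method names are hypothetical): each node holds a local transform and children, and world transforms accumulate as the hierarchy is traversed.

```python
import numpy as np

class SceneNode:
    """Minimal scene-graph node: a local transform plus child nodes;
    world transforms accumulate down the hierarchy."""
    def __init__(self, name, transform=None):
        self.name = name
        self.transform = np.eye(4) if transform is None else transform
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, parent_world=np.eye(4)):
        world = parent_world @ self.transform  # accumulate parent transforms
        yield self.name, world
        for child in self.children:
            yield from child.traverse(world)

root = SceneNode("root")
arm = root.add(SceneNode("arm"))
arm.add(SceneNode("hand"))
for name, world in root.traverse():
    print(name)
```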

Graphics Pipeline: The graphics pipeline has two major steps. Starting with primitives (polygons) in object space, a geometry-processing step transforms the primitives into screen space. This is followed by a rasterization step that converts the primitives into a set of screen pixels, finishing with a set of appropriately colored pixels in the frame buffer. Each step includes several computationally intensive procedures.
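
The geometry-processing step can be sketched as follows, assuming NumPy; the to_screen helper is hypothetical. It applies a model-view-projection transform, the perspective divide, and the viewport transform, keeping depth for the later z-buffer test.

```python
import numpy as np

def to_screen(vertices, mvp, width, height):
    """Geometry processing: transform object-space vertices to screen space."""
    v = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
    clip = v @ mvp.T                 # model-view-projection transform
    ndc = clip[:, :3] / clip[:, 3:]  # perspective divide -> [-1, 1] cube
    x = (ndc[:, 0] + 1) * 0.5 * width     # viewport transform
    y = (1 - ndc[:, 1]) * 0.5 * height
    return np.column_stack([x, y, ndc[:, 2]])  # keep depth for the z-buffer

triangle = np.array([[0.0, 0.5, 0.0], [-0.5, -0.5, 0.0], [0.5, -0.5, 0.0]])
print(to_screen(triangle, np.eye(4), 640, 480))
```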

Z-Buffer: The Z-buffer is an area in graphics memory reserved for storing the Z-axis (depth) value of each pixel.
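
A toy rasterizer with a z-buffer depth test, assuming NumPy (the helper names are hypothetical), ties the two preceding definitions together: fragments inside a triangle are kept only if they are nearer than the depth already stored for that pixel.

```python
import numpy as np

def barycentric(pts, x, y):
    """Barycentric coordinates of (x, y) in a 2-D triangle, or None if outside."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    if d == 0:
        return None  # degenerate triangle
    l0 = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / d
    l1 = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / d
    l2 = 1.0 - l0 - l1
    if l0 < 0 or l1 < 0 or l2 < 0:
        return None  # point lies outside the triangle
    return np.array([l0, l1, l2])

def rasterize(tri, color, frame, zbuf):
    """Rasterize one screen-space triangle with a z-buffer depth test."""
    h, w = zbuf.shape
    xs, ys, zs = tri[:, 0], tri[:, 1], tri[:, 2]
    x0, x1 = int(max(xs.min(), 0)), int(min(xs.max(), w - 1))
    y0, y1 = int(max(ys.min(), 0)), int(min(ys.max(), h - 1))
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            b = barycentric(tri[:, :2], x + 0.5, y + 0.5)
            if b is None:
                continue
            z = b @ zs            # interpolate depth across the triangle
            if z < zbuf[y, x]:    # depth test: keep the nearer fragment
                zbuf[y, x] = z
                frame[y, x] = color

frame = np.zeros((480, 640, 3))
zbuf = np.full((480, 640), np.inf)   # initialize depths to "infinitely far"
tri = np.array([[100, 100, 0.5], [300, 120, 0.4], [200, 300, 0.6]])
rasterize(tri, np.array([1.0, 0.0, 0.0]), frame, zbuf)
```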

Computational Grid: A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities (Foster & Kesselman, 1998).

Distributed Memory: Distributed memory means that memory is associated with individual processors and each processor can address only its own memory. Some authors refer to this type of system as a multicomputer, reflecting the fact that the building blocks in the system are themselves small computer systems complete with a processor and memory.
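
A minimal message-passing sketch of this model, assuming the mpi4py package (not mentioned in the chapter): each process owns its own local data and can obtain another process’s data only through explicit send and receive operations.

```python
# run with e.g. `mpiexec -n 2 python example.py`
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# each process owns its own memory; data moves only via explicit messages
local_data = [rank * 10 + i for i in range(3)]

if rank == 0:
    comm.send(local_data, dest=1, tag=0)
elif rank == 1:
    received = comm.recv(source=0, tag=0)
    print("rank 1 received:", received)
```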

Rasterization: Rasterization is the process of converting a vertex representation to a pixel representation; it is also called scan conversion.

Rendering: Rendering is the computational process of generating an image from the abstract description of a scene (Crockett, 1994).
