Efficient Hair Rendering with a GPU Cone Tracing Approach

Jorge R. Martins, Vasco S. Costa, João M. Pereira
DOI: 10.4018/IJCICG.2017010101

Abstract

Rendering human hair is a hard task because of the high super-sampling rate required to render thin hair fibers without noticeable aliasing. Additionally, current state-of-the-art bounding volume hierarchies (BVHs) are not well suited to hair rendering: axis-aligned bounding boxes (AABBs) do not tightly bound hair primitives, which increases the number of intersection tests. Both limitations can severely degrade rendering performance. This article describes a GPU cone tracing approach coupled with a hybrid bounding volume hierarchy, which makes use of both oriented and axis-aligned bounding boxes, to tackle these problems. It is shown that this approach drastically reduces the super-sampling rate required to produce aliasing-free images while minimizing the number of intersection tests, achieving speedups of up to 4x, depending on the scene.
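To make the cone tracing idea concrete, the sketch below builds one cone per pixel whose aperture matches the pixel footprint, so a single cone covers the area that would otherwise need many supersampled rays. This is a minimal illustration under assumed names (Cone, makePixelCone, coneRadiusAt) and a simple pinhole camera model, not the authors' implementation.

#include <cmath>

struct Vec3 { float x, y, z; };

struct Cone {
    Vec3  origin;     // apex of the cone (camera position)
    Vec3  dir;        // normalized center direction through the pixel
    float tanHalf;    // tangent of the half-aperture angle
};

// One cone whose cross-section matches the pixel footprint replaces the
// many supersampled rays a sub-pixel-wide hair fiber would otherwise need.
Cone makePixelCone(Vec3 eye, Vec3 pixelDir, float fovY, int imageHeight) {
    Cone c;
    c.origin = eye;
    c.dir = pixelDir; // assumed already normalized
    // Angular size of one pixel: total vertical FOV divided by pixel rows.
    float pixelAngle = fovY / static_cast<float>(imageHeight);
    c.tanHalf = std::tan(0.5f * pixelAngle);
    return c;
}

// Cone radius at distance t along the axis. When testing a hair segment,
// the fiber overlaps the cone if its distance to the cone axis is below
// fiberRadius + coneRadiusAt(c, t), yielding smooth partial coverage.
float coneRadiusAt(const Cone& c, float t) {
    return t * c.tanHalf;
}

Because coverage falls out of the cone-fiber overlap rather than from counting ray hits, fibers thinner than a pixel no longer demand a high super-sampling rate to avoid aliasing.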
Article Preview

Our work is most related to techniques for accelerating the rendering process through the use of acceleration structures; techniques for handling hair transparency and simulating the hair's properties; and ray bundle techniques.

Studies for accelerating the rendering process have focused on structures such as grids (Amanatides & Woo, 1987; Cleary & Wyvill, 1988; Wald et al., 2006), kd-trees (Shevtsov et al., 2007) and bounding volume hierarchies (Kay & Kajiya, 1986). More recently, we have seen an increase in techniques for fast GPU construction of BVHs (Karras, 2012; Karras & Aila, 2013) as well as techniques for reducing the number of intersection tests through the use of a hybrid BVH of AABBs and OBBs (Woop et al., 2014).
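As a rough illustration of the hybrid BVH idea in the spirit of Woop et al. (2014), the sketch below defines a node that stores either an AABB or an OBB. All type and field names (HybridNode, BoundsKind, and so on) are hypothetical, not taken from that paper's implementation.

#include <cstdint>

struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 min, max;                 // axis-aligned box: two corner points
};

struct OBB {
    Vec3 center;                   // box center in world space
    Vec3 axis[3];                  // orthonormal local frame (rotation)
    Vec3 halfExtent;               // half-lengths along each local axis
};

enum class BoundsKind : uint8_t { Axis, Oriented };

// Nodes keep cheap AABBs where hair runs roughly axis-aligned and switch
// to OBBs where diagonal strands would inflate an AABB; the tighter fit
// prunes far more intersection tests during traversal.
struct HybridNode {
    BoundsKind kind;
    union {
        AABB aabb;
        OBB  obb;
    };
    int32_t left;                  // child indices; negative marks a leaf
    int32_t right;
};

The trade-off is that an OBB test is more expensive than an AABB test, so the win comes from reserving OBBs for the elongated, diagonal primitives that AABBs bound poorly.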

Techniques for handling hair transparency and simulating the hair's properties have also been developed, falling into two categories. On the one hand, there are pure physical models, such as path tracing (Kajiya, 1986), bi-directional path tracing (Lafortune, 1993), and photon mapping (Jensen, 1996a; Jensen, 1996b; Jensen & Christensen, 1998; Moon & Marschner, 2006). On the other hand, several techniques have been proposed to simulate the hair's properties, with scattering models for single hair fibers (Marschner et al., 2003; d'Eon et al., 2013; Pekelis et al., 2015), multiple scattering (Zinke et al., 2008) and natural illumination (Ren et al., 2010). Recently, we have also witnessed an increase in studies focusing on the animation of hair scenes in real time, with the AMD TressFX technique (Lacroix, 2013; Martin et al., 2014) and techniques for the animation of hair scenes with data-driven interpolation of guide strands (Chai et al., 2014) and by bundling hair fibers into hair meshes (Wu et al., 2016).
