Contour Based High Resolution 3D Mesh Construction Using HRCT and MRI Stacks

Ramakrishnan Mukundan (University of Canterbury, Christchurch, New Zealand)
DOI: 10.4018/IJMDEM.2017100104

In this paper, we consider the problem of extracting shape contours from High Resolution Computed Tomography (HRCT) and Magnetic Resonance Imaging (MRI) stacks and using them to construct a three-dimensional mesh surface of the underlying geometry at a high level of detail. While many reconstruction algorithms adopt volumetric approaches and ray casting methods, we propose a novel algorithm for the automatic segmentation of large volume sets, and a contour-based construction of a mesh representation that can be used in any rendering application or combined with larger meshes of anatomical parts. Several acceleration structures that reduce the complexity of the algorithm are also presented. Experimental results show that the proposed method provides a high level of detail and rendering quality, and may find useful applications in the fields of visualization and graphics.


The recovery of a 3D shape from its multiple views or projections has been an active area of research in the field of computer vision for decades. In the field of biomedical image analysis, extensive work has been done in the development of algorithms for 3D reconstructions from 2D projection data. Medical imaging modalities such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) combine multiple projections to form highly detailed 3D datasets (Birkfellner, 2014; Natarajan, 2006). The geometrical information of shapes and structures contained in these datasets can be extracted, combined and mapped to a three-dimensional volume. Such volumetric reconstructions become extremely valuable in a wide range of application domains such as computer aided diagnostics, surgical simulations, virtual reality systems and computer graphics.

Medical data visualization techniques are becoming increasingly important as both medical practitioners and researchers use ever-larger numbers of images and scans in day-to-day applications. Several image processing steps, such as edge detection, skeletonization, identification of connected components or homogeneous regions, noise and artefact removal, image enhancement and shape segmentation, are commonly used in the preprocessing stage of a visualization algorithm (Meyer-Baese & Schmid, 2014; Birkfellner, 2014). Surface-based volume rendering techniques use voxel representations of 3D surface segments, together with methods such as the marching cubes algorithm for triangulation (Preim & Bartz, 2007; Schroeder, Martin & Lorensen, 2006).

In this paper, we focus on the problem of automatically segmenting very large 3D datasets and efficiently extracting the contour data obtained from serial sections for the reconstruction of the complete mesh geometry at the highest possible resolution. The pre-processing part of the pipeline, which contains methods for segmentation and identification of the region of interest, provides the capability to extract contours automatically from each slice without any manual intervention. Considering the large number of contours, points and triangles that are processed, we also give importance to data structures and methods for minimising storage requirements and reducing computational complexity. The workings of the proposed method at various stages are demonstrated using data from an HRCT stack consisting of 210 transverse scans, each of size 512×512 pixels. This is a typical dataset size when a full stack is imaged with narrow beam collimation and an inter-slice spacing of less than 1.5 mm (Stern, 2010). The three-dimensional mesh reconstruction of the lung is rendered using the OpenGL-4 pipeline. We also demonstrate the working of the proposed algorithm on an MRI stack consisting of 150 scans, each of size 512×512 pixels.
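To illustrate the slice-level preprocessing described above, the following is a minimal sketch of threshold-based segmentation and contour extraction from a single slice. The paper's actual segmentation pipeline is more elaborate; the function name `extract_contour`, the simple global threshold, and the angular ordering around the centroid (adequate only for roughly star-shaped cross-sections) are assumptions made for this sketch, not the paper's method.

```python
import numpy as np

def extract_contour(slice_img, threshold):
    """Threshold a slice and return its boundary pixels, ordered by
    angle around the region centroid (a simplification that works for
    the star-shaped synthetic cross-section used below)."""
    mask = slice_img > threshold
    # A foreground pixel lies on the boundary if any 4-neighbour is background.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    ys, xs = np.nonzero(boundary)
    cy, cx = ys.mean(), xs.mean()
    order = np.argsort(np.arctan2(ys - cy, xs - cx))
    return np.column_stack([xs[order], ys[order]])  # (N, 2) ordered contour

# Synthetic 64x64 slice: a bright disc of radius 15 on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
contour = extract_contour(img, 0.5)
```

Applied slice by slice over the whole stack, a routine of this kind yields one or more closed contours per slice, which form the input to the mesh construction stage.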
One of the main advantages of generating a triangular mesh model at the highest resolution is that it can be stored and reused in any graphics rendering application, and models at lower levels of detail can easily be obtained by applying mesh simplification algorithms (Mukundan, 2014).

This paper is organised as follows. The next section gives a brief review of related work and explains how the proposed method advances the existing techniques. Section 3 details the preprocessing stage consisting of automatically identifying the region of interest, segmenting the image and extracting the two-dimensional contours from each slice. Section 4 discusses the problem of identifying the correspondence between the points on a contour belonging to a given slice and the points present on the previous slice. Methods for generating a proper mesh surface between two slices without any triangle overlap are also given in this section. Important aspects related to the complexity of the computational model are also discussed. Section 5 outlines the rendering aspects and presents the rendering of the generated mesh model from different viewpoints. Section 6 summarises the work presented in this paper and outlines future directions.
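The inter-slice correspondence and triangulation step outlined above can be sketched as a greedy stitching of two adjacent contours: walk along both rings simultaneously and, at each step, advance along whichever ring produces the shorter connecting edge. This shortest-diagonal heuristic is a common contour-stitching strategy and is given here only as an illustration, assuming reasonably aligned contours; the paper's own correspondence and overlap-avoidance methods are described in Section 4.

```python
import numpy as np

def stitch_contours(c0, c1):
    """Triangulate the band between two closed contours (arrays of 3D
    points on adjacent slices) using a greedy shortest-diagonal rule.
    Returned triangles index the concatenated vertex list [c0; c1]."""
    n0, n1 = len(c0), len(c1)
    i, j = 0, 0
    tris = []
    while i < n0 or j < n1:
        a, b = c0[i % n0], c1[j % n1]
        # Length of the diagonal created by advancing on each ring.
        d0 = np.linalg.norm(c0[(i + 1) % n0] - b) if i < n0 else np.inf
        d1 = np.linalg.norm(c1[(j + 1) % n1] - a) if j < n1 else np.inf
        if d0 <= d1:  # advance along the lower contour
            tris.append((i % n0, (i + 1) % n0, n0 + j % n1))
            i += 1
        else:         # advance along the upper contour
            tris.append((i % n0, n0 + (j + 1) % n1, n0 + j % n1))
            j += 1
    return tris

# Two square contours on slices z=0 and z=1.
sq = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
c0 = np.column_stack([sq, np.zeros(4)])
c1 = np.column_stack([sq, np.ones(4)])
tris = stitch_contours(c0, c1)
```

Each pass over a pair of slices produces exactly `len(c0) + len(c1)` triangles, so the cost of stitching the full stack grows linearly with the total number of contour points.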
