Introduction
The recovery of a 3D shape from multiple views or projections has been an active area of research in computer vision for decades. In the field of biomedical image analysis, extensive work has been done on the development of algorithms for 3D reconstruction from 2D projection data. Medical imaging modalities such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) combine multiple projections to form highly detailed 3D datasets (Birkfellner, 2014; Natarajan, 2006). The geometrical information about shapes and structures contained in these datasets can be extracted, combined and mapped to a three-dimensional volume. Such volumetric reconstructions are extremely valuable in a wide range of application domains, including computer-aided diagnostics, surgical simulations, virtual reality systems and computer graphics.
Medical data visualization techniques are becoming increasingly important as both medical practitioners and researchers use ever-larger numbers of images and scans in day-to-day applications. Several image processing steps, such as edge detection, skeletonization, identification of connected components or homogeneous regions, noise and artefact removal, image enhancement and shape segmentation, are commonly used in the preprocessing stage of a visualization algorithm (Meyer-Baese & Schmid, 2014; Birkfellner, 2014). Surface-based volume rendering techniques use voxel representations of 3D surface segments, together with methods such as the marching cubes algorithm for triangulation (Preim & Bartz, 2007; Schroeder, Martin & Lorensen, 2006).

In this paper, we focus on the problem of automatically segmenting very large 3D datasets and efficiently extracting the contour data obtained from serial sections for the reconstruction of the complete mesh geometry at the highest possible resolution. The preprocessing part of the pipeline, comprising methods for segmentation and identification of the region of interest, provides the capability to automatically extract contours from each slice without any manual intervention. Given the large number of contours, points and triangles that are processed, we also give importance to data structures and methods for minimising storage requirements and reducing computational complexity. The workings of the proposed method at various stages are demonstrated using data from an HRCT stack consisting of 210 transverse scans, each of size 512x512 pixels. This is the commonly generated size of a 3D dataset in which a full stack is imaged with narrow beam collimation and an inter-slice spacing of <1.5 mm (Stern, 2010). The three-dimensional mesh reconstruction of the lung is rendered using the OpenGL 4 pipeline. We also demonstrate the proposed algorithm on an MRI stack consisting of 150 scans, each of size 512x512 pixels.
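The per-slice segmentation and contour extraction step can be illustrated with a minimal sketch: a global threshold produces a binary mask, and contour pixels are then taken as foreground pixels with at least one 4-connected background neighbour. The function names, the threshold value and the synthetic disc phantom below are illustrative assumptions, not the paper's actual implementation, which extracts the region of interest automatically.

```python
import numpy as np

def segment_slice(slice_img, threshold):
    """Binary segmentation by global thresholding (a simple stand-in
    for automatic region-of-interest identification)."""
    return slice_img > threshold

def boundary_pixels(mask):
    """Return (row, col) coordinates of contour pixels: foreground
    pixels with at least one 4-connected background neighbour."""
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior if all four of its neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior)

# Synthetic 512x512 slice: a bright disc standing in for a lung region.
yy, xx = np.mgrid[0:512, 0:512]
slice_img = ((yy - 256) ** 2 + (xx - 256) ** 2 < 100 ** 2).astype(float)

mask = segment_slice(slice_img, 0.5)
contour = boundary_pixels(mask)
print(contour.shape)
```

The extracted pixel coordinates would then be ordered into a closed polygonal contour before the slice-to-slice correspondence stage described later in the paper.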
One of the main advantages of generating a triangular mesh model at the highest resolution is that it can be stored and reused in any graphics rendering application, and models at lower levels of detail can easily be obtained by applying mesh simplification algorithms (Mukundan, 2014).
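As a concrete illustration of deriving a lower level of detail from the full-resolution mesh, the sketch below uses vertex clustering: vertices are snapped to a uniform grid, vertices falling in the same cell are merged into their mean position, and triangles whose corners collapse together are discarded. This is only one of many possible simplification schemes; the function name, cell size and tiny sample mesh are illustrative assumptions.

```python
import numpy as np

def vertex_cluster_simplify(vertices, triangles, cell_size):
    """Simplify a triangle mesh by uniform-grid vertex clustering."""
    # Assign each vertex to a grid cell.
    keys = np.floor(vertices / cell_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    # Representative vertex of each cell = mean of its member vertices.
    new_vertices = np.zeros((len(uniq), vertices.shape[1]))
    counts = np.zeros(len(uniq))
    np.add.at(new_vertices, inverse, vertices)
    np.add.at(counts, inverse, 1)
    new_vertices /= counts[:, None]
    # Remap triangle indices and drop degenerate (collapsed) triangles.
    new_tris = inverse[triangles]
    keep = ((new_tris[:, 0] != new_tris[:, 1]) &
            (new_tris[:, 1] != new_tris[:, 2]) &
            (new_tris[:, 0] != new_tris[:, 2]))
    return new_vertices, new_tris[keep]

# Tiny sample mesh: two triangles sharing an edge, with two vertices
# close enough to merge at the chosen cell size.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.1, 0.1, 0.0]])
tris = np.array([[0, 1, 2], [0, 3, 1]])
sv, st = vertex_cluster_simplify(verts, tris, cell_size=0.5)
print(sv.shape, st.shape)
```

Here the second triangle degenerates once its two nearby vertices merge, so the simplified mesh keeps only the first; more sophisticated schemes such as iterative edge collapse preserve shape fidelity better at the cost of more bookkeeping.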
This paper is organised as follows. The next section gives a brief review of related work and explains how the proposed method advances existing techniques. Section 3 details the preprocessing stage, consisting of automatically identifying the region of interest, segmenting the image and extracting the two-dimensional contours from each slice. Section 4 discusses the problem of identifying the correspondence between the points on a contour belonging to a given slice and the points on the previous slice. Methods for generating a proper mesh surface between two slices without any triangle overlap are also given in this section, together with important aspects of the computational complexity of the model. Section 5 outlines the rendering aspects and presents the generated mesh model rendered from different viewpoints. Section 6 summarises the work presented in this paper and outlines future directions.