Concepts of efficiency, approximation, and efficacy are discussed in this chapter with reference to the existing literature, and frameworks grouping classes of interpolation functions are acknowledged. The properties of the unifying theory are addressed, along with the characteristics of the methodology employed to derive the SRE-based interpolation functions and the reasoning behind that methodology. As can be seen in this book, both the properties and the characteristics of the methodological development of the SRE-based interpolation functions are quite consistent across interpolators, and the empirical observations provided through this work set the grounds for the assertion that there exists a unifying theory for the improvement of the interpolation error. Such consistency across the spectrum of interpolation functions embraced by the unifying theory shows that the concepts embedded in both the properties and the characteristics of the methodology employed to derive the SRE-based interpolation functions remain the same across the diversity of model interpolators. It should also be acknowledged that Section V of the book, which is devoted to the Lagrange and Sinc functions, further expands on this consistency. The last section of this chapter addresses the Fourier properties of the Sub-pixel Efficacy Region.
Efficiency, Approximation, and Efficacy
When the efficiency of algorithms for the alignment of brain images was studied, a comparative evaluation of the computational demand of interpolation functions was reported (Joyeux & Penczek, 2002; Meijering et al., 2001; Ostuni et al., 1997). Several interpolation methods have been analyzed with the aim of finding the trade-off between accuracy and computational cost, the latter defined as a function in which time is the dependent variable and the numbers of multiplications and additions are the independent variables (Thevenaz et al., 2000). An extensive evaluation of the accuracy and efficiency of interpolation paradigms was reported (Lehmann et al., 1999) through the analysis of: (i) the Fourier properties, (ii) the visual quality, (iii) the quantitative error estimation, and (iv) the computational demand. An approach was designed to equate the effects of an ideal interpolator for the linear paradigm (Wang, 2001), focusing on error estimation and minimization over intervals rather than over the entire interpolation curve.
This book introduces the concept of interpolation function efficacy as the capability to generate minimal approximation. For the interpolation functions seen in the preceding chapters (bivariate and trivariate linear, quadratic and cubic B-Spline) it was shown that the capability of the interpolation function to determine minimal approximation is dependent on: (i) the magnitude of the misplacement, (ii) the relationships between intensity values at the target pixel to re-sample and the intensity values at the neighboring pixels, and (iii) the local curvature of the function.
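The three dependencies above can be made concrete with a minimal 1D sketch. The function name `linear_interp_error` and the test signals are illustrative choices, not taken from the book: the code evaluates the signed error of a linear interpolant and shows that it grows with the misplacement of the re-sampled point and scales with the local curvature of the underlying signal.

```python
def linear_interp_error(f, x0, x1, t):
    """Signed error of 1D linear interpolation between nodes x0 and x1,
    evaluated at misplacement t in [0, 1] along the interval."""
    x = x0 + t * (x1 - x0)
    p = (1.0 - t) * f(x0) + t * f(x1)  # linear interpolant at x
    return f(x) - p

# Quadratic test signal with constant curvature f''(x) = 2
f = lambda x: x ** 2

# (i) The error magnitude grows as the misplacement moves toward mid-interval
errors = [linear_interp_error(f, 0.0, 1.0, t) for t in (0.1, 0.3, 0.5)]

# (iii) Doubling the local curvature doubles the error at the same misplacement
g = lambda x: 2.0 * x ** 2
```

Dependence (ii) is implicit in the sketch: the interpolated value `p`, and hence the error, is built directly from the node intensities `f(x0)` and `f(x1)`, the 1D analogue of the neighboring-pixel values.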
This constitutes both a confirmation and an extension of the findings reported in studies of the Newton-based description of the interpolation error given within the context of mathematical analysis (Agarwal & Wong, 1993; Thevenaz et al., 2000). According to these studies the interpolation error depends on: (i) the step size (resolution), (ii) the location of the re-sampled point relative to the initial coordinate system (misplacement), and (iii) the value of the interpolation function at the node points. A specific expression of the Hermite polynomial interpolation error, formulated as dependent on optimized constants, was also reported (Agarwal & Wong, 1993). These constants minimize the approximation of the kernel and are dependent on both resolution and misplacement.
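The three dependencies listed above can be read directly from the classical remainder of polynomial interpolation, written here in standard textbook notation (not the specific optimized-constant form of Agarwal & Wong, 1993):

```latex
% Remainder of the degree-n polynomial p_n interpolating f
% at nodes x_0, \dots, x_n, for some \xi in the node interval:
f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)
```

The product term shrinks with the step size between nodes and vanishes at the nodes themselves, so its magnitude is governed by the misplacement of $x$ within the interval, while the derivative factor $f^{(n+1)}(\xi)$ carries the dependence on the local behavior of the function at and around the node points.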