## Preface

*“If I am anything, which I highly doubt, I have made myself so by hard work.”*

- Isaac Newton

This book presents a novel theoretical framework for the improvement of the interpolation error, from which a unified methodology is derived and applied to several interpolation paradigms: two- and three-dimensional linear, one-dimensional quadratic and cubic B-Splines, Lagrange, and Sinc. The framework is unifying in its purpose: it improves the approximation properties of the interpolation function regardless of its dimensionality or degree. Its mathematical formulation casts interpolation error improvement as dependent on the joint information content of the node intensity and the second order derivative of the interpolation function. The formulation is derived from two main intuitions, called the Intensity-Curvature Functional and the Sub-pixel Efficacy Region, which together set the grounds for the improvement of the interpolation error. From the theory it follows that, given a location where the interpolation function ought to estimate the value of the unknown signal, a novel sub-pixel (intra-node) re-sampling location is determined, and this location varies locally, pixel by pixel (node by node). The theory asserts, and demonstrates through *a-posteriori* knowledge, that interpolation error improvement can be achieved on the basis of the joint information content of the node intensity and the second order derivative of the interpolation function. Consequently, re-sampling is performed locally at locations that vary with the signal intensity of the neighborhood and the local curvature of the interpolator.

The book begins by presenting two mathematical intuitions; since absolute truth cannot be reached through them, the intuitions serve to derive novel conceptions of interpolation error improvement. These conceptions are determined through a process of deduction, and a theoretical framework is thus presented. The boundaries of the truth that the mathematical intuitions determine in explaining interpolation error improvement are tested empirically. The purpose of the theory is to unify, under the same approach, interpolators of diverse degree and dimensionality, so as to determine novel interpolation functions, called SRE-based interpolation functions, which possess improved approximation characteristics. The bridging concept between SRE-based and classic interpolation functions is the local curvature of the function, as expressed by the second order derivative. Extended mathematical descriptions of the theory are given, with detailed formulations that conceptualize the steps of the approach. The improved interpolation functions are also validated experimentally through a motion correction paradigm.

The motion correction paradigm shows both the error reduction and the resulting image quality after improvement. Spectral power analyses are performed, and the results show that local re-sampling changes the band-pass characteristics of the interpolation function. Validation is then extended to the estimation of signals at unknown locations, so as to simulate estimation beyond the Nyquist frequency in common experimental settings. The book cites the most current, relevant, and compelling literature in the field of signal-image interpolation and explains the need for advancing to the novel theory it proposes. It also explains its scope and methodological approach and details the mathematical characterization. Illustrations are given for the validation method, and the benefits and limitations of the theory are shown quantitatively and qualitatively. The implications for image and signal processing are outlined, with particular attention to biomedical imaging applications such as anatomical and functional Magnetic Resonance Imaging (MRI).

Given the level of detail that the book provides, the proofs that accompany the mathematics, and the inclusion of the source code, the intended audience is academic: students interested in exploring the innovative approach that the treatise proposes, within disciplines such as signal-image processing, medical imaging, and applied mathematics. It is known that students appreciate and benefit from detailed explanations (Hopcroft & Ullman, 1969).

The alternative approach that the book presents can be viewed with the intent of (i) devising novel mathematics for signal-image interpolation applications, and (ii) implementing them in software. The purpose of the book is therefore to complement current textbooks through the exploration of the alternative approach that is proposed. With the aim of further research and exercise, and because of the unifying challenge of this work, students can undertake the application of the methodology to interpolation paradigms that are not presented in the book.

While the book is appropriate for junior-level undergraduates, particularly in image processing and biomedical imaging courses within computer science and biomedical engineering programs, it can also be a compelling source of research material for graduate courses in the same majors.

This book is also dedicated to whoever believes in science as a combined effort of disciplines whose ultimate goal is the quest for truth. As doubt arises in the course of acquiring knowledge, it is necessary to remember that the ultimate goal of knowledge is the improvement of understanding. Intuitions led to the development of general conceptions, derived through a process of deduction, which subsequently became a unifying theory. Since absolute truth cannot be derived from intuitions, the conceptions had to undergo testing, and by means of the process of empirically validating the theory, the boundaries of the truth were derived.

**THE CHALLENGES**

Titles of distinguished books in the field of signal-image processing are available in the literature (Agarwal & Wong, 1993; Castleman, 1996; De Boor, 1978; Sonka et al., 1999). Mathematical formulations of the problem of signal interpolation, with in-depth coverage, are provided in (Agarwal & Wong, 1993; De Boor, 1978); these convey adopted and recognized protocols that are currently being studied in colleges at the undergraduate and graduate levels. These authors have contributed enormously to the effort of the research community in the development of signal processing applications that employ interpolation within a large and diverse web of tasks. For a comprehensive review of interpolation procedures and their applications, the reader is referred to recent surveys (Amidror, 2002; Meijering, 2002). The knowledge provided through the first two books (Agarwal & Wong, 1993; De Boor, 1978) is as specific as it is remarkable, and it focuses on signal-image interpolation with several approaches dedicated to the improvement of the approximation characteristics of the interpolation functions. The other two books (Castleman, 1996; Sonka et al., 1999) are compelling because of their comprehensive coverage of topics related to signal-image processing; their authors devote specific as well as remarkable focus to the solution of problems in computer vision and medical imaging contexts.

The challenges that the book addresses are listed here: (i) derivation through deduction, (ii) a unifying theoretical approach, and (iii) extendibility of theory and methodology.

- Derivation through deduction. Theory and methodology are derived through deduction, that is, through a process of posing a question and answering it with the appropriate mathematical tool. It follows that the book introduces novel mathematics and the corresponding novel conceptions, for which appropriate names have been given.
- Unifying theoretical approach. The book presents a unifying methodology for the improvement of the interpolation error, and thus for the improvement of the approximation characteristics of the interpolators. This is done through a unique and novel approach that is applied to interpolation functions regardless of their degree or dimensionality.
- Extendibility of theory and methodology. The book sets the grounds for a unifying theory under which interpolation functions of diverse degree and dimensionality are embraced under the same methodological approach. It is possible to extend the theory to other interpolation paradigms; consequently, the book allows the reader to produce knowledge that goes beyond the one presented herein.

**SEARCHING FOR A SOLUTION**

This book presents a novel approach in the quest to find the solution to the problem of devising a unifying theory and methodological approach for the improvement of the interpolation error, one that is extendible to diverse mathematical functions regardless of their degree and dimensionality.

Along the quest for the solution, the book delivers a message to the reader: "*there exists a unifying approach that, regardless of the dimensionality or the degree of the interpolation function, achieves interpolation error improvement*". The book and its approach are original and unique; both the theory and the methodology that are presented are absent from other signal processing textbooks. Some of the distinguishing features of the book are listed below.

- The mathematics are novel and original.

- The methodology is derived through deduction.

- Novel terminology is introduced specifically within the developmental effort.

- The validation paradigm is consistently applied to all of the interpolation functions.

- Results are discussed within the context of the most relevant literature in signal-image interpolation, as published in scholarly papers in leading journals.

- Consistent application of theory and methodology across interpolation functions determines a unifying approach to the improvement of the interpolation error.

Once the solution is found, the book provides the reader with knowledge. What follows is an anticipation of the meaning of the basic conception, the Sub-pixel Efficacy Region (SRE), which allowed the deduction of the mathematics that constitute the unifying theory.

Hence, to explain the meaning of the spatial set of points named the SRE, it is due to consider their effect on the interpolation function. Given a classic interpolation function, the values of the independent variable are re-organized within their domain through the projection determined by means of the SRE, so as to obtain novel values of the independent variable called novel re-sampling locations. Re-calculation of the classic interpolation function at the novel re-sampling location yields the SRE-based interpolation function. Thus, the effect of the SRE on the classic interpolation function is that of changing the function, in that the model of the data changes because of the re-organization of the independent variable within its domain.
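The mechanism described above can be sketched in a few lines of Python. The sketch below is illustrative only: the true SRE projection is derived in the book from the Intensity-Curvature Functional, so the `sre_shift` function here is a hypothetical stand-in that merely nudges the re-sampling location within the node interval. What the sketch does show faithfully is the structure of the idea: the classic model is unchanged, and only the location where it is evaluated moves.

```python
import numpy as np

def classic_linear(f0, f1, x):
    """Classic one-dimensional linear interpolation on the unit interval [0, 1]."""
    return f0 + (f1 - f0) * x

def sre_shift(f0, f1, x):
    """Hypothetical stand-in for the SRE projection; the actual formula in the
    book depends on the Intensity-Curvature Functional.  Here we simply nudge
    x toward the node with the larger intensity, clipped to the interval."""
    bias = 0.1 * np.sign(f1 - f0)
    return float(np.clip(x + bias * x * (1.0 - x), 0.0, 1.0))

def sre_based_linear(f0, f1, x):
    """SRE-based interpolation: evaluate the *same* classic model at the
    novel re-sampling location instead of at the requested location x."""
    x_novel = sre_shift(f0, f1, x)
    return classic_linear(f0, f1, x_novel)

# The shift depends on the local node intensities, so it varies node by node.
print(classic_linear(2.0, 6.0, 0.5))    # value at the requested location
print(sre_based_linear(2.0, 6.0, 0.5))  # value at the novel location
```

Note that at the nodes themselves the shift vanishes, so the SRE-based function still interpolates the data exactly.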

**ORGANIZATION OF THE BOOK**

The book is organized into 20 chapters, and a brief description of each chapter is given in the following. A prelude of philosophical nature precedes the chapters of the book.

Chapter I provides the reader with an introduction aimed at acknowledging the basic issue of interpolation for signal and image processing. The Magnetic Resonance Imaging (MRI) database is introduced. An outline is presented of how the review of the literature progresses throughout the book, along with an introduction to classic and SRE-based interpolation and to the most relevant signal processing techniques developed for the specific purpose of undertaking the research presented in the book. The unifying theory is introduced in parallel with a discussion of its significance and its implications for signal and image processing and for motion correction in functional MRI.

Chapter II opens Part I of the book and introduces to the reader the intuition that set the roots for the development of the work presented in the book. The intuition consists of a mathematical process that starts with definitions, leads to observations, and derives the definition of the Sub-pixel Efficacy Region (SRE); on its basis, the claim is made that the interpolation function attains its most accurate approximation within the spatial extent of the SRE. This claim is deemed to be a truth foreseen in the intuition and is analyzed in the following chapters with the purpose of deriving the true notion.

Chapter III provides the reader with a conception named the Intensity-Curvature Functional, which consists of a mathematical formulation derived on the basis of two intensity-curvature terms. The two intensity-curvature terms measure the energy level that the signal (image) possesses before interpolation (in its original form) and after interpolation. It is explained that the Intensity-Curvature Functional is well suited to measure the change in the energy level of the signal (image) caused by the model interpolation function.
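To make the before/after comparison concrete, the following sketch evaluates two intensity-curvature terms numerically for a toy one-dimensional quadratic model. The specific definitions used here (node intensity times curvature integrated over the interval for the "before" term, interpolated intensity times curvature for the "after" term, and their ratio as the functional) are a plausible reading for illustration; the exact formulas are derived per interpolation function in chapters III and IV.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule, written out explicitly to stay version-independent."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Toy one-dimensional quadratic model h(x) = a + b*x + c*x**2 on [0, 1].
a, b, c = 3.0, 1.0, 0.5
x = np.linspace(0.0, 1.0, 10001)
h = a + b * x + c * x**2
h2 = np.full_like(x, 2.0 * c)        # second order derivative: the curvature term

e_before = trapezoid(h[0] * h2, x)   # node intensity joined with the curvature
e_after = trapezoid(h * h2, x)       # interpolated intensity joined with the curvature
delta_e = e_before / e_after         # the functional as a ratio of the two terms

print(delta_e)
```

A ratio different from one signals that the model interpolation function has changed the intensity-curvature energy content of the signal across the interval.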

Chapter IV formulates another conception: *the Sub-pixel Efficacy Region*. What was an intuition in chapter II becomes a conception in chapter IV. Along this process, the concept of the Intensity-Curvature Functional is refined, in that it becomes the measure of the energy level change determined by the interpolation function for the particular re-sampling location where the signal is calculated.

Chapter V assesses the truth foreseen in the intuition presented in chapter II. The definition of the SRE given in chapter II is evaluated on the basis of the results of chapter IV. The purpose is to ascertain the degree of match between the theoretical definition of the SRE given in chapter II and the conception of the SRE derived in chapter IV, which embeds practical implications that derive from the conception of the Intensity-Curvature Functional. The deduction carried out in chapter IV shows that the SRE consists of the extremes (either minima or maxima) of the Intensity-Curvature Functional. The degree of match is found in the concept of the curvature of the interpolation function; this concept ascends to the role of the truth foreseen in the intuition, and it embeds practical implications for the estimation of signals of unknown nature.

Chapter VI concludes Part I of the book, deriving the notion on the basis of the evaluation of the truth foreseen in the intuition. Chapter IV asserts that the Sub-pixel Efficacy Region is the spatial set of intra-nodal points where the energy level change of the signal is minimum or maximum. Chapter VI clarifies that the Sub-pixel Efficacy Region targets the spatial set of intra-nodal points for which the local second order partial derivatives are smaller than the increment or decrement of the function measured on the tangent to the first order derivative. The notion is therefore that the curvature of the model interpolation function is quite relevant to the improvement of the interpolation error.

Chapter VII opens Part II of the book by introducing, studying, and solving the problem of the improvement of the bivariate linear interpolation function through the use of the SRE. The two intensity-curvature terms, calculated at the grid point and at the generic intra-pixel location (x, y) respectively, are employed to formulate the Intensity-Curvature Functional (∆E). The solution of the polynomial system of the first order derivatives of ∆E furnishes a sub-pixel set of points located in the domain of existence of the signal: the so-called SRE. The chapter also explains that, on the basis of the theory proposed in the book, re-sampling can vary on a pixel-by-pixel basis at the image space locations called novel re-sampling locations. The chapter concludes by presenting a mathematical formulation that attempts resilient interpolation. The validation of this formulation is left to the willing audience of this book.

Chapter VIII presents results on the performance of the SRE-based bivariate linear interpolation function, starting with a detailed description of the validation paradigm. The validation paradigm is employed on Magnetic Resonance Imaging (MRI) data within the context of three different modalities: T1-MRI, T2-MRI, and functional MRI. Additionally, the validation paradigm is employed on signals of known nature, such as trigonometric functions. Quantitative and qualitative analyses are presented of the reduction of the interpolation error and of the improved approximation properties of the SRE-based bivariate linear interpolation function.
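The core of a motion-correction validation paradigm of this kind can be illustrated with a minimal one-dimensional analogue: translate a signal by a sub-sample amount with the interpolation function under test, translate it back, and measure the root-mean-square error of the round trip against the original. The sketch below uses classic linear interpolation on a trigonometric signal (a signal of known nature, as in the chapter); the book's actual experiments operate on 2-D and 3-D MRI data, so this is an illustrative reduction, not the book's protocol.

```python
import numpy as np

def shift_linear(signal, delta):
    """Shift a 1-D signal by a sub-sample amount using classic linear
    interpolation (positions outside the support are clamped to the edges)."""
    n = len(signal)
    pos = np.arange(n) - delta
    i0 = np.clip(np.floor(pos).astype(int), 0, n - 1)
    i1 = np.clip(i0 + 1, 0, n - 1)
    frac = pos - np.floor(pos)
    return (1.0 - frac) * signal[i0] + frac * signal[i1]

def rmse(a, b):
    """Root-mean-square error between two signals."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Motion-correction style round trip: shift forward, shift back, compare.
t = np.linspace(0.0, 2.0 * np.pi, 256)
original = np.sin(t)
round_trip = shift_linear(shift_linear(original, 0.4), -0.4)
err = rmse(original[2:-2], round_trip[2:-2])  # trim edge samples
print(err)
```

The residual error quantifies the smoothing introduced by the interpolator; an improved (e.g. SRE-based) interpolation function would be expected to lower this RMSE under the same round trip.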

Chapter IX opens Part III of the book and reports on relevant literature that classifies interpolation procedures on the basis of the image processing task to perform and of the information content employed with the aim of reconstructing the continuous signal. The chapter informs the reader of the relevance of the interpolation error bound characterizations existing in the literature and of how they relate to the mathematical foundations of the theory presented in the book.

Chapter X outlines, studies, and solves the problem of the improvement of the trivariate linear interpolation function by means of the Sub-pixel Efficacy Region. This is carried out with detailed description leading to a comprehensive coverage of the mathematics.

Chapter XI presents the results of the Sub-pixel Efficacy Region based trivariate linear interpolation function. The chapter focuses on functional MRI data, and the presentation is given quantitatively through the root-mean-square error (RMSE) of the two classes of trivariate linear interpolation paradigms: (i) classic and (ii) SRE-based. Quantitative results are also presented that were obtained through the use of the signal processing tool called spectral power evolution. This tool complements the spectral information provided by the Fast Fourier Transform (FFT), showing the differences in frequency components among three types of images: (i) original, (ii) processed with classic interpolation, and (iii) processed with SRE-based interpolation.
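The kind of frequency-domain difference such analyses reveal can be demonstrated with a small FFT experiment. Re-sampling a signal midway between its nodes with linear interpolation amounts to a two-tap average, which is a low-pass operation; comparing magnitude spectra before and after shows that high-frequency components are attenuated more than low-frequency ones. This is a generic illustration of interpolation smoothing, not a reproduction of the book's spectral power evolution tool.

```python
import numpy as np

# Test signal with a low-frequency and a high-frequency component
# (integer numbers of cycles, so the FFT bins are exact).
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 40 * t / n)

# Mid-sample linear interpolation = two-tap averaging (circular for simplicity).
resampled = 0.5 * (signal + np.roll(signal, -1))

spec_orig = np.abs(np.fft.rfft(signal))
spec_resp = np.abs(np.fft.rfft(resampled))

# The high-frequency component (bin 40) is attenuated more than the low one (bin 5).
print(spec_resp[5] / spec_orig[5], spec_resp[40] / spec_orig[40])
```

An interpolation scheme that better preserves the original spectral content would push both ratios closer to one, which is the behavior the book attributes to the SRE-based functions.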

Chapter XII follows the path traced by the last section of chapter VII in attempting resilient interpolation for the case of the trivariate linear function. The chapter introduces the novel concept of a signal intensity value resilient to interpolation, and this concept is accompanied by the corresponding mathematical formulation. The concept has theoretical implications, in that the signal resilient to interpolation would be the one reconstructed through the model function under the constraint that the intensity-curvature content does not change with or without interpolation. The reconstructed signal attempts to be the one that could not be sampled because of the limitations imposed by the Nyquist theorem. The validation of this chapter is intended for the willing audience of this book available to undertake this research path; it might thus be possible to determine resilient interpolation.

Chapter XIII opens Part IV of the book, informs the reader of the quite extensive literature existing on piecewise polynomial interpolation functions, and devotes particular attention to the quadratic and cubic B-Splines. The chapter reinforces the two conceptions that were the object of experimentation and study in the previous chapters. These conceptions are: (i) interpolation error improvement can be formulated as dependent on the joint information content of the node intensity and the second order derivative of the interpolation function, and (ii) re-sampling is an issue of local relevance, which therefore depends on the neighboring signal intensity values.

Chapter XIV reports meticulously on the mathematics employed to devise the one-dimensional quadratic and cubic SRE-based interpolation functions. The methodological approach descends from the novel theory as an extension of the unifying approach that already characterized the SRE-based linear interpolation paradigms. The chapter reinforces the fact that the second order derivative of the function incorporates the curvature information content, which is joined by the signal intensity values to form the mathematical tools that set the grounds for the improvement of the approximation properties of the B-Spline functions. Consistently with chapters VII and XII, this chapter also presents the mathematical formulation that continues to outline the attempt to devise resilient interpolation.

Chapter XV presents results for the quadratic and cubic B-Spline SRE-based interpolation functions obtained through experimentation conducted on the basis of the motion correction validation paradigm employed consistently throughout the book. The data consist of T1-MRI, T2-MRI, and functional MRI. Qualitative and quantitative investigation is conducted through Fast Fourier Transform (FFT) analysis, complemented with the analysis of the spectral power evolutions of the resulting images and with the analysis of the error images obtained after processing. The analysis embraces the two classes of interpolators: (i) classic and (ii) SRE-based. This investigative method reveals the difference between classic and SRE-based B-Spline interpolation in terms of interpolation error and spectral frequency content.

Chapter XVI concludes Part IV of the book. There is specific reference to compelling literature and mention of the dependency of the mathematical formulation on the joint information content of the node intensity and the curvature of the interpolation function. It is finally reinforced that the mathematics derived through the unifying theory are capable of determining a common ground for the improvement of the interpolation error for functions of diverse degree and dimensionality. Emphasis is given to the Fourier properties of the SRE-based interpolation functions, which are capable of retaining the spectral components of the original images.

Chapter XVII opens Part V of the book and introduces to the reader the intent to improve the performance of two one-dimensional interpolation functions (Lagrange and Sinc) that have been shown through Fourier analysis to present excellent pass-band characteristics. The chapter gives due reference to compelling literature and also emphasizes the main innovation introduced through the SRE-based interpolation functions.

Chapter XVIII presents the Sub-pixel Efficacy Region of the one-dimensional cubic form of the Lagrange interpolation function and of the one-dimensional Sinc interpolation function. Following the approach of the unifying theory, the deduction of the mathematics leads, consistently with what is seen for the other functions treated in this book, to the determination of the novel re-sampling locations for the two interpolation functions. Additionally, the chapter provides reference to the characterization of the interpolation error and of the interpolation error improvement bounds, which in their formulation descend from the unifying approach that the theory presents throughout the book. Consistently with chapters VII, XII, and XIV, the chapter concludes with the presentation of the mathematical formulation that attempts to devise resilient interpolation. The validation of this mathematics is left to the willingness of the reader.

Chapter XIX presents results of the application of the Sub-pixel Efficacy Region to the cubic form of Lagrange interpolation and to Sinc interpolation. In this chapter, specific focus is given to the potential of these two SRE-based interpolation functions to improve the interpolation error and to preserve the spectral components of the images. Quantitative and qualitative evaluations are presented employing the following three modalities of MRI data: T1, T2, and T2* (functional MRI).

Chapter XX reports concluding remarks, reminding the reader of the intent of the book. The characteristics of the methodology outlined in the unifying theory are referenced to the current literature, pointing out the flexibility of the mathematics and their capability to group interpolators under the same methodological approach regardless of their degree and dimensionality. Among the implications of the SRE are: (i) preservation of the spectral frequencies in the interpolated images and (ii) reduction of the smoothing effect of the interpolation function. It is shown that the SRE-based interpolation functions can outperform classic interpolation functions in the estimation of signals at locations that are not sampled because of the Nyquist theorem constraint. Also, quantitative evaluation is conducted across the SRE-based polynomial interpolation functions and the Sinc function to elucidate which performs best in terms of interpolation error. Finally, qualitative evaluation is conducted to ascertain the influence of an important parameter on (i) the calculation of the novel re-sampling locations and (ii) the capability to preserve the spectral content of the original images. This parameter is used to scale the convolutions of the polynomial interpolation functions and the sums of cosine and sine functions of the Sinc interpolation function.

Because of the uniqueness of the theory and methodology and the originality of the mathematics that the book presents, it would not be appropriate to consider the present book as competing with any other in the field of signal-image processing. The book is intended as a vehicle to add to the current state-of-the-art knowledge in signal-image interpolation, and it is more suitable as a complement than as an alternative to current textbooks. *The keyword for this book is thus complement, not competition*. Furthermore, in the process of validating the theory and methodology, the book devotes specific focus to the case of Magnetic Resonance Imaging (MRI) and thus adds to the existing literature a source of reference specifically oriented to MRI applications.