
C. T. J. Dodson (University of Manchester, UK)

Copyright: © 2012 | Pages: 29

DOI: 10.4018/978-1-61350-116-0.ch013

Chapter Preview

A question: *"We already use statistical modeling, why should we bother with information geometry?"*

Information geometry is concerned with the natural geometrization of smoothly parametrized families of discrete probability functions or continuous probability density functions; the naturality stems from the fact that the metric structure arises as the covariance matrix of the gradients of log-likelihood. This metric yields a smooth Riemannian structure on the space of parameters, thereby adding the geometric concepts of curvature and arc length to the analytic tools for studying trajectories through probability distributions as statistical models evolve with time or during changes of system conditions. The development of the subject over the past 65 years has been substantially due to the work of C. R. Rao and S-I. Amari and coworkers; see for example (Rao, 1945; Amari, 1963; Amari, 1968; Amari, 1985; Amari et al., 1987; Amari & Nagaoka, 2000) and references therein. Information geometry and its applications remain vigorous research areas, as witnessed for example by the series of international conferences of the same name (IGA Conference, 2010). In phenomenological modeling applications, information geometric methods complement the standard statistical tools with techniques of representation similar to those used in physical field theories, where the analysis of curved geometrical spaces has contributed to the understanding of phenomena and the development of predictive models.
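The metric-from-covariance construction described above can be checked numerically. As an illustrative sketch (not from the chapter; the parameter values are arbitrary), the following estimates the information metric of the gamma family as the Monte Carlo covariance of the score, i.e. the gradients of the log-density, and compares it with the known closed form:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(0)

# Gamma log-density: log p(x; k, t) = (k-1) log x - x/t - k log t - log Gamma(k)
# Score (gradient of log p) components:
#   d/dk = log x - log t - psi(k)   (psi = digamma)
#   d/dt = x/t**2 - k/t
k, t = 2.0, 1.5                      # arbitrary shape and scale
x = rng.gamma(k, t, size=200_000)

s_k = np.log(x) - np.log(t) - digamma(k)
s_t = x / t**2 - k / t
S = np.vstack([s_k, s_t])

fisher_mc = S @ S.T / x.size         # covariance of the score (score has mean 0)

# Known closed form for the gamma family in (shape, scale) coordinates
fisher_exact = np.array([[polygamma(1, k), 1 / t],
                         [1 / t, k / t**2]])
```

With a few hundred thousand samples the Monte Carlo estimate agrees with the closed form to within a few percent, which is exactly the "covariance of gradients" origin of the metric.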

In many statistical models of practical importance there is a small range of probability density functions that has very wide application as a result of general theorems, and the spaces of these families have just a small number of dimensions. For example, the families of Gaussian and gamma distributions and their bivariate versions are widely applied, and moreover their information geometry is easily tractable (Arwini & Dodson, 2008). In particular, the family of gamma distributions is ubiquitous in modeling natural processes that involve scatter of a positive random variable around a target state, such as for inhomogeneous populations or features of elements in a collection. The reason for this ubiquity is that a defining characteristic of the gamma distribution is that the sample standard deviation is proportional to the sample mean. In practice, that property is commonly found to varying degrees of approximation; the case when the standard deviation *equals* the mean corresponds to the exponential distribution associated with a Poisson process, which is the fundamental reference process for statistical models. Sums of independent gamma random variables (hence also sums of independent exponential random variables) follow a gamma distribution, and products of gamma random variables have distributions closely approximated by gamma distributions. We shall provide below more details about the properties of the gamma family and its associated families, which include the uniform distribution, approximations to truncated Gaussians and a wide range of others.
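The defining proportionality of standard deviation to mean is easy to see in simulation. As a sketch (the distribution parameters below are chosen arbitrarily), for gamma samples with shape k and scale t the ratio sd/mean approaches 1/sqrt(k) regardless of the scale, with k = 1 recovering the exponential case sd = mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# For gamma(shape=k, scale=t): mean = k*t and sd = sqrt(k)*t,
# so sd/mean = 1/sqrt(k) -- constant in the scale t.
for k in (0.5, 1.0, 4.0):
    for t in (0.3, 2.0):
        x = rng.gamma(k, t, size=100_000)
        ratio = x.std() / x.mean()
        print(f"shape={k}, scale={t}: sd/mean = {ratio:.3f} "
              f"(theory {1 / np.sqrt(k):.3f})")
```

The shape parameter alone fixes the dispersion relative to the mean, which is why observing an approximately constant sd/mean ratio in data motivates a gamma model.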

Our case studies will show that gamma distributions model well the spacings between successive occurrences of each of the 20 different amino acids, which with differing abundances lie along a protein chain (Cai et al., 2002). Figure 6 illustrates the information distance in the space of gamma distributions for amino acid spacings along protein chains, measured from the exponential (Poisson) case. Intuitively we might expect the amino acids to be scattered around the reference exponential case; in fact they all lie on the clustered side of the distribution, with more variance than that expected by chance (the exponential case).

Random Variable: A variable that follows a well-defined discrete probability distribution or continuous probability density function.

Pseudorandom Number Generator: An algorithm that generates numbers from a given set such that each interval of the set has a probability of occurrence proportional to its length, thereby approximating a uniform distribution.

Information Entropy: The negative of the expectation of the logarithm of a probability density function.
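As a sketch of this definition (the exponential example and rate value are illustrative choices, not from the chapter), the information entropy of an exponential density p(x) = lam * exp(-lam * x) can be estimated as -E[log p(X)] by Monte Carlo and compared with the closed form 1 - log(lam):

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential density p(x) = lam * exp(-lam * x); its entropy is
# H = -E[log p(X)], with closed form 1 - log(lam).
lam = 0.5                                         # arbitrary rate
x = rng.exponential(scale=1 / lam, size=500_000)

log_p = np.log(lam) - lam * x                     # log-density at the samples
h_mc = -log_p.mean()                              # Monte Carlo entropy estimate
h_exact = 1 - np.log(lam)
```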

Integral Curve of a Gradient Field: A curve whose rate of change with time at each point equals the gradient vector of the field at that point.
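A minimal numerical sketch of this definition (the field f and the step size are illustrative assumptions): forward-Euler steps following the gradient of f(x, y) = -(x^2 + y^2) trace an integral curve that flows toward the origin, where f is maximal:

```python
import numpy as np

# f(x, y) = -(x**2 + y**2) has gradient (-2x, -2y); its gradient
# flow c'(t) = grad f(c(t)) carries any starting point toward the origin.
def grad_f(p):
    return -2.0 * p

dt = 0.01                       # forward-Euler step size
p = np.array([1.0, 0.5])        # starting point of the integral curve
path = [p.copy()]
for _ in range(500):
    p = p + dt * grad_f(p)      # c_{n+1} = c_n + dt * grad f(c_n)
    path.append(p.copy())
path = np.array(path)
```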

Inhomogeneous Rate Process: A first-order differential process defined on a population in which the rate of change of a cohort is proportional to the local density of that cohort.

Constrained Disordering: Perturbation of a structure defined by a probability distribution of its features, in which structural rules control the degree to which total disorder may be approached.

Information Metric: The Riemannian distance structure, and hence arc length function, defined by the covariance matrix function of a smooth family of probability density functions.

Voronoi Cells: Given a set of nodes in the plane R², each point of the plane has either one nearest node or at most two nearest nodes equidistant from it. The Voronoi cell of a node is the interior of the convex polygon of points nearest to that node. The definition extends to higher dimensions, giving convex polyhedra in R³.
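A simple way to see this definition in action (the nodes and grid resolution below are illustrative choices): assign each point of a grid to its nearest node; the points assigned to a given node fill out that node's Voronoi cell:

```python
import numpy as np

# Three nodes in the unit square; each grid point is labelled with
# the index of its nearest node, partitioning the grid into Voronoi cells.
nodes = np.array([[0.2, 0.3],
                  [0.7, 0.8],
                  [0.9, 0.1]])

xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
pts = np.column_stack([xs.ravel(), ys.ravel()])

# Squared distance from every grid point to every node, then argmin.
d2 = ((pts[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
cell = d2.argmin(axis=1)   # cell[i] = index of the node nearest pts[i]
```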

