1. Introduction
Information theory lies at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering, and it provides a rigorous treatment of the uncertainty inherent in randomness. Nyquist (1924, 1928) and Hartley (1928) were the first to discover the logarithmic nature of the measure of information. In his paper 'Certain Factors Affecting Telegraph Speed', Nyquist discussed the relation W = K \log m, where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. The impact of information theory has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. In 1948, Shannon introduced a new idea in this direction, now called Shannon entropy. His seminal paper 'A Mathematical Theory of Communication' marked the beginning of a separate branch of learning, namely information theory. Shannon entropy plays a central role in information theory and in fuzzy mathematics. Entropy is referred to as a measure of uncertainty: it is defined on a probability distribution and can be shown to be a good measure of randomness or uncertainty. Applications of entropy are widespread in areas such as decision making, quantum information theory, communication theory, image registration, finance, and pattern recognition. Shannon (1948) introduced the following measure of information:
H(P) = -\sum_{i=1}^{n} p_i \log p_i, \qquad (1.1)

where

\Gamma_n = \left\{ P = (p_1, p_2, \ldots, p_n) : p_i \geq 0, \ \sum_{i=1}^{n} p_i = 1 \right\} \quad \text{and} \quad \Delta_n = \left\{ P = (p_1, p_2, \ldots, p_n) : p_i \geq 0, \ \sum_{i=1}^{n} p_i \leq 1 \right\}, \quad n \geq 2,

denote the sets of all finite discrete complete and generalized probability distributions, respectively. Equation (1.1) is Shannon's entropy. The function H(P) represents the expected value of the uncertainty associated with the given probability distribution; it is uniquely determined by some rather natural postulates.
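As a quick numerical illustration of (1.1), Shannon entropy can be computed directly from a complete discrete distribution; the following is a minimal sketch (the function name and the base-2 logarithm are illustrative choices, not prescribed by the text):

```python
import math

def shannon_entropy(p, base=2):
    """Shannon entropy H(P) = -sum p_i log p_i of a complete
    discrete probability distribution P, as in equation (1.1)."""
    if abs(sum(p) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1 (complete distribution)")
    # Terms with p_i = 0 contribute nothing, by the convention 0 log 0 = 0.
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# A uniform distribution over 4 outcomes attains the maximal entropy
# log2(4) = 2 bits; a degenerate distribution has entropy 0.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))
print(shannon_entropy([1.0, 0.0, 0.0]))
```

Measured in base 2 the unit is bits; base e gives nats, reflecting the logarithmic nature of the measure noted above.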
A divergence measure is a distance or affinity between two probability distributions, also called the inaccuracy of information. Divergence measures involving two discrete probability distributions were first called discrimination functions; various authors later used names such as cross entropy and relative information. Kullback and Leibler (1951) developed the very important information and divergence measure given by:
D(P \,\|\, Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}, \qquad (1.2)

where P = (p_1, \ldots, p_n), Q = (q_1, \ldots, q_n) \in \Gamma_n. Jain and Saraswat (2012, 2013) introduced a new f-divergence measure given by:
S_f(P, Q) = \sum_{i=1}^{n} q_i \, f\!\left( \frac{p_i + q_i}{2 q_i} \right), \qquad (1.3)

where f : (0, \infty) \to \mathbb{R} is a convex function and P, Q \in \Gamma_n. The idea of 'probabilistic divergence', which in some sense assesses how 'close' two probability distributions are to one another, has been widely applied in probability, statistics, and information theory.