Frequentist Probability

Copyright: © 2021 | Pages: 36
DOI: 10.4018/978-1-7998-3871-5.ch002

Abstract

Frequentist probability is historically presented as an attempt both to overcome the limitations of the classical conception and to take account of the impressive development of the experimental sciences and of statistics. It is precisely because of this close link with the statistical sciences that it occupies a significant place in teaching, and it is an area in which the use of IT tools is crucial. This approach also makes it possible to calculate the probability of events, and from it the same rules examined under the classical definition are derived: it suffices to replace the ratio of favorable cases to possible cases with the limit of the relative frequency as the number of repeated trials tends to infinity. The chapter also introduces one of the fundamental concepts of the subject, the random variable, which describes events and their distribution.
Chapter Preview

Introduction

When the facts change, I change my mind. ~ John Maynard Keynes

The classical definition of probability rests on the assumption that all the possible cases of an event are equally possible. In reality this condition holds in relatively few situations, so the limitation must be overcome on the basis of a different conception. In an attempt to resolve the problems inherent in the classical definition, Richard von Mises (1883-1953) argued that probability should be evaluated on the basis of a large mass of observations, and formulated the frequentist definition: probability is measured through statistical observation, that is, through the frequency with which certain phenomena are recorded.

If you toss a fair coin repeatedly, experience shows that, as the number of trials carried out under the same conditions increases, the relative frequency of heads (or tails) tends to stabilize around a value that corresponds to the a priori theoretical probability. Large fluctuations become increasingly rare: this is the so-called empirical law of chance.

In practice we can say that the probability P of an event is given by the ratio between the number s of times the event occurs (the number of successes) and the number n of trials performed, as the number of trials tends to infinity:

P = lim_{n→∞} s/n

To obtain a reliable value for the probability, the frequency must be calculated over a large number of cases (in theory, infinitely many). This type of definition therefore cannot be used when the number of repeated trials is small or when the conditions have varied between trials.
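To see the relative frequency s/n stabilizing in the way the empirical law of chance describes, one can simulate repeated tosses of a fair coin. The short Python sketch below is purely illustrative: the sample sizes and the seeds are arbitrary choices, not taken from the chapter.

import random

# Simulate n tosses of a fair coin and return the relative frequency s/n of heads.
def relative_frequency(n_tosses, seed=0):
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# As n grows, the relative frequency settles near the theoretical value 0.5
# and large fluctuations become rarer (the empirical law of chance).
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7}  s/n = {relative_frequency(n, seed=n):.4f}")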

In the frequentist conception, probability is derived a posteriori, from the examination of data. A well-known case is that of the statistician Karl Pearson (1857-1936), who in one experiment tossed a coin 24,000 times and obtained 12,012 heads; the relative frequency, 0.5005, is very close to the “theoretical” value. Another is that of the mathematician John Kerrich, a prisoner of war during the Second World War, who recorded 10,000 tosses, almost equally divided between 5,067 heads and 4,933 tails.

This type of probability, also known as statistical probability, has an enormous field of application. For example, in order to construct life and death probability tables for the calculation of insurance premiums, it is first necessary to know the probability of the event, which is estimated from the frequencies observed in the past over a large mass of cases.

Denoting by x the integer (whole-year) age of an individual and by lx the number of persons alive at age x, an empirical relationship lx = f(x) can be observed. This relation, which depends on age, is called a biometric function.

A table containing the lx values, obtained with appropriate processing of census or death records, is called a demographic survival table. Starting from a fictitious group of 100,000 individuals of age x = 0, it reports the number of individuals surviving at each age up to the extreme age. A life table is thus a concise way of showing the probabilities of living or dying for a member of a particular population at a particular age.
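As a minimal sketch of such a table (with entirely invented lx figures, not data from the chapter), the survival table can be represented in Python as a mapping from integer age x to the number of survivors lx out of the initial cohort of 100,000; the probability that a newborn is still alive at age x is then read off as the frequentist ratio lx / l0.

# Hypothetical survival table: age x -> lx, the survivors out of an initial
# fictitious cohort of 100,000 newborns (figures invented for illustration;
# real tables are built from census or death records).
l = {0: 100_000, 10: 99_100, 30: 97_400, 50: 93_200,
     70: 74_800, 80: 51_500, 90: 15_900, 100: 600}

# Frequentist reading of the biometric function lx = f(x): the probability
# that a newborn is still alive at age x is the ratio lx / l0.
for age, survivors in l.items():
    print(f"P(alive at age {age:>3}) = {survivors / l[0]:.3f}")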

The table is used by actuaries, demographers, and others to present the mortality experience of a population. The method is applicable to the analysis of many measurable processes, above all in the insurance field. For a detailed analysis, see Namboodiri & Suchindran (2013). From the tables it is possible to obtain information to answer questions such as: what is the probability that a 70-year-old will survive for ten years? What is the average number of years of life remaining for men who have just reached their 50th birthday?

The idea is that the probability is calculated not as the ratio of the number of heads to the number of coin flips, but as the ratio of the number of survivors of a cohort at a later age to the size of that cohort at an earlier age.
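The two questions above can be answered directly from a life table in exactly this way. The sketch below uses a hypothetical yearly lx column for ages 70 to 100, generated from an assumed, steadily growing mortality rate (none of these figures come from the chapter): the ten-year survival probability of a 70-year-old is l80 / l70, and the expected number of further whole years of life is the sum of the future survivor counts divided by l70.

# Hypothetical yearly lx column for ages 70..100, generated from a toy,
# steadily increasing mortality rate (all figures invented for illustration).
l = {70: 75_000}
q = 0.03                      # assumed probability of dying within a year at age 70
for age in range(71, 101):
    l[age] = int(l[age - 1] * (1 - q))
    q = min(q * 1.15, 1.0)    # toy assumption: mortality grows about 15% per year of age

# Ten-year survival probability for a 70-year-old: survivors at 80 over survivors at 70.
p_survive_10 = l[80] / l[70]

# Curtate expectation of remaining life at 70: future survivor counts over current survivors.
e_70 = sum(l[age] for age in range(71, 101)) / l[70]

print(f"P(70-year-old survives ten more years) = {p_survive_10:.3f}")
print(f"Expected further whole years of life at 70 = {e_70:.1f}")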

The first survival table can be dated to the third century AD, but sufficiently reliable mortality studies did not appear until the end of the 17th century. Particularly notable is the study by the astronomer Edmond Halley (1656-1742), who determined probabilities of death from observations made on the population records of the city of Breslau (present-day Wrocław, Poland).

Key Terms in this Chapter

Empirical Law of Chance: In a long series of repeated trials, each of the possible events occurs with a relative frequency that is approximately equal to its probability.

Aleatory Variable: A variable whose values depend on the outcomes of a random phenomenon; also called a random variable.

Discrete Random Variable: A random variable that can take on only a finite or countably infinite set of distinct values.

Law of Large Numbers: States that, when the same experiment is performed a large number of times, the average of the observed results tends toward the expected value (see the sketch after this list).

Expected Value: The probability-weighted average of the possible values of a random variable; informally, the average amount one expects to win per trial in the long run.
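As an illustration of the last two terms (a sketch with arbitrary sample sizes and seed, not taken from the chapter), rolling a fair die has expected value 3.5, and the law of large numbers shows up as the sample mean drifting toward that value as the number of rolls grows.

import random

# Expected value of a fair die: probability-weighted average of its faces.
expected_value = sum(face * (1 / 6) for face in range(1, 7))   # = 3.5

# Law of large numbers: the sample mean approaches the expected value
# as the number of rolls increases. Sample sizes below are arbitrary.
rng = random.Random(42)
for n in (10, 1_000, 100_000):
    sample_mean = sum(rng.randint(1, 6) for _ in range(n)) / n
    print(f"n = {n:>7}  sample mean = {sample_mean:.3f}  (expected value = {expected_value})")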
