Review of Probability Distributions

Copyright © 2018 | Pages: 49
DOI: 10.4018/978-1-5225-5264-2.ch002

Abstract

In Chapter 2, probability distributions are presented; those covered are the distributions most closely related to the analysis and study of waiting lines. Discrete distributions: binomial, geometric, and Poisson; continuous distributions: uniform, exponential, Erlang, and normal. Confidence intervals are calculated for some of the parameters of the distributions. A brief example of the generation of pseudorandom exponential times using a spreadsheet is presented. The chapter closes with goodness-of-fit tests for probability distributions, especially the Anderson-Darling test. The R statistical programming language is used in the exercises, and several R codes are proposed to perform the calculations automatically.
Chapter Preview

Probability Distributions

When an experiment with random results is performed, the set of possible events is called the Sample Space. A function or mapping that identifies the events of the Sample Space with the set of real numbers is called a Random Variable.

In general terms, a probability distribution can be described as a rule for assigning probability values to a random variable. Random variables can be classified into two types: discrete and continuous. Continuous random variables take real values and are represented here by X, while discrete random variables take integer values and are represented by N. Integers are a subset of real numbers.

  • Cumulative Distribution Function: One particularly useful way of expressing a probability distribution is by means of mathematical expressions. One is called the probability density function, represented by f(x). The other is called the cumulative distribution function (CDF), represented by F_X(x) or F(x), which assigns probabilities to intervals of the random variable.

A cumulative distribution function can be expressed as:

F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt \quad \text{(continuous case)}, \qquad F(k) = P(N \le k) = \sum_{n \le k} P(N = n) \quad \text{(discrete case)}
(1) where: F(x) is the cumulative distribution function; f(x) is the probability density function; X is a continuous random variable; “x” (lower case x) is a numeric value of interest; N is a discrete random variable; and k is a numeric value of interest for the discrete case.
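
To connect this definition with the R language used throughout the chapter, the following is a minimal sketch; the distributions and parameter values below are illustrative assumptions, not taken from the chapter. R's built-in p* functions evaluate cumulative distribution functions directly:

# CDF of a continuous random variable: P(X <= 1.5) for X ~ Exponential(rate = 2)
pexp(1.5, rate = 2)

# CDF of a discrete random variable: P(N <= 4) for N ~ Binomial(size = 10, prob = 0.3)
pbinom(4, size = 10, prob = 0.3)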

Probability functions contain parameters that give every distribution different properties and behavior. These parameters are classified into three types: location parameters, scale parameters, and shape parameters.

Every distribution has a different combination of parameters that, together with the functional form of the mathematical expression, gives different properties to every probability function.
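
As a brief, hedged illustration (the particular distributions and numeric values below are chosen only for this sketch), R exposes these location, scale, and shape parameters as arguments of its density functions:

# Location (mean) and scale (sd) parameters of the normal density
dnorm(9, mean = 10, sd = 2)

# Shape and scale parameters of the gamma density; with an integer shape it is an Erlang density
dgamma(3, shape = 2, scale = 1.5)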

Additional functions have been defined that help us understand the behavior of a random variable. These functions are particular to each random variable. The ones most used in the study of queues are: expected value, variance, and coefficient of variation:

  • Expected Value: The expected value gives the point where the “center of gravity” of the probability distribution lies. This is the point of equilibrium of the distribution and is expressed as:

    E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx \quad \text{(continuous)}, \qquad E[N] = \sum_{n} n\,P(N = n) \quad \text{(discrete)}
    (2)

  • Variance: Variance is a measure of the dispersion of the distribution of the random variable and is defined as:

    Var(X) = E\left[(X - E[X])^{2}\right] = E[X^{2}] - (E[X])^{2}
    (3)

  • Coefficient of Variation: The coefficient of variation is the ratio of the square root of the variance, called the standard deviation, to the expected value of the random variable; it is defined in the same way for discrete and continuous variables:

    CV = \frac{\sqrt{Var(X)}}{E[X]} = \frac{\sigma}{E[X]}
    (4)

It is often more practical to use the coefficient of variation squared, which in this case is:

CV^{2} = \frac{Var(X)}{(E[X])^{2}}
(5)
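
These three quantities can be estimated in R from observed or simulated data. The sketch below assumes simulated exponential times chosen only for illustration; since the exponential distribution has a coefficient of variation of 1, the estimates should come out close to that value:

# Sample estimates of E[X], Var(X), CV and CV^2 for simulated exponential times
set.seed(1)                   # arbitrary seed, for reproducibility only
x <- rexp(1000, rate = 0.5)   # 1000 pseudorandom exponential times with mean 2

m  <- mean(x)                 # estimate of the expected value E[X]
v  <- var(x)                  # estimate of the variance Var(X)
cv <- sqrt(v) / m             # coefficient of variation
c(expected = m, variance = v, cv = cv, cv2 = cv^2)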
  • Quantile: Assuming that we have a cumulative probability value, p = F(x), for a numeric value x, the value x of the random variable that corresponds to that cumulative probability is called the quantile of order p, denoted q(p), and complies with:

    F(q(p)) = p, \quad 0 \le p \le 1
    (6)

If p = F(x), then q(p) = F^{-1}(p) = x.
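
This inverse relationship between the CDF and the quantile can be checked in R with the paired p* and q* functions; the exponential distribution and its rate below are assumptions made only for this sketch:

# q(p) = F^{-1}(p): the quantile function undoes the CDF
p <- pexp(1.5, rate = 2)   # p = F(1.5) for X ~ Exponential(rate = 2)
qexp(p, rate = 2)          # recovers 1.5, i.e. q(p) = x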

Given a sample of n randomly observed data, x_1, x_2, …, x_n
