
Hasan, Osman and Sofiène Tahar. "Discrete-Time Markov Chains in HOL." Formalized Probability Theory and Applications Using Theorem Proving. IGI Global, 2015. 65-86. Web. 22 Jan. 2018. doi:10.4018/978-1-4666-8315-0.ch006



Abstract

In this chapter, the authors describe the formalization of discrete-time Markov chains and the formal verification of some of their most important properties using the probability theory presented in the previous chapter. In order to illustrate the usefulness of this work, a binary communication channel model and a collection process model involved in the Automatic Mail Quality Measurement (AMQM) protocol are formally analyzed in HOL.

Given a probability space, a stochastic process { X_{t}: Ω → S } is defined as a sequence of random variables X_{t}, where t represents the time, which can be discrete (represented by non-negative integers) or continuous (represented by real numbers). The set of values taken by each X_{t}, commonly called states, is referred to as the state space. The sample space of the process consists of all the possible state sequences based on a given state space S. Now, based on these definitions, a Markov process can be defined as a stochastic process with the Markov property. If a Markov process has a finite or countably infinite state space, then it is called a Markov chain and satisfies the following Markov property: for 0 ≤ t_{0} ≤ … ≤ t_{n} and x_{0}, …, x_{n+1} in the state space:

P(X_{t_{n+1}} = x_{n+1} | X_{t_{n}} = x_{n}, …, X_{t_{0}} = x_{0}) = P(X_{t_{n+1}} = x_{n+1} | X_{t_{n}} = x_{n}) (6.1)

Equation (6.1) expresses that the future state X_{t_{n+1}} depends only on the current state X_{t_{n}} and is independent of the past states X_{t_{k}}, k < n. This chapter mainly focuses on discrete-time Markov chains (DTMCs) (Bhattacharya & Waymire, 1990; Kannan, 1979) with discrete states in a finite state space.
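The memorylessness expressed by the Markov property can be illustrated numerically. Below is a minimal Python sketch of a two-state DTMC, loosely in the spirit of the binary communication channel analyzed later in this chapter; the transition probabilities `a` and `b` are illustrative assumptions, not values from the text. Because each step samples the next state from the current state alone, the empirical transition frequency out of state 0 approaches `a` over a long run, independently of earlier states.

```python
import random

# Two-state DTMC sketch (states 0 and 1). The transition probabilities a, b
# are illustrative assumptions, not values from the chapter.
a, b = 0.2, 0.1  # P(X_{t+1}=1 | X_t=0) = a,  P(X_{t+1}=0 | X_t=1) = b

def step(state):
    """Sample the next state from the current state alone (Markov property)."""
    if state == 0:
        return 1 if random.random() < a else 0
    return 0 if random.random() < b else 1

def simulate(n, start=0):
    """Generate a path of n transitions starting from `start`."""
    xs = [start]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

random.seed(0)
path = simulate(100_000)
# Empirical P(X_{t+1}=1 | X_t=0): count transitions 0 -> 1 among all steps
# that leave state 0; the ratio approaches a for a long run.
num = sum(1 for t in range(len(path) - 1) if path[t] == 0 and path[t + 1] == 1)
den = sum(1 for t in range(len(path) - 1) if path[t] == 0)
print(num / den)  # close to a = 0.2
```

This simulation only motivates the definition; the point of the chapter is to establish such properties by deductive proof in HOL rather than by sampling.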

According to the Markov property given in Equation (6.1), the future state depends only on the current state and is independent of all the other past states (Chung, 1960). The Markov property can be formalized as follows (Liu, 2013):

Definition 6.1

⊢ ∀ X p s. mc_property X p s =
  (∀ t. random_variable (X t) p s) ∧
  ∀ f t n. increasing_seq t ∧ P(⋂_{k ∈ [0, n-1]} { x | X t_{k} x = f k }) ≠ 0 ⇒
    (prob p ({ x | X t_{n+1} x = f (n + 1) } | { x | X t_{n} x = f n } ∩ ⋂_{k ∈ [0, n-1]} { x | X t_{k} x = f k }) =
     prob p ({ x | X t_{n+1} x = f (n + 1) } | { x | X t_{n} x = f n }))

where increasing_seq t defines the notion of an increasing sequence, i.e., ∀ i j. i < j ⇒ t i < t j. The first conjunct indicates that the Markov property is based on a random process { X_{t}: Ω → S }. The quantified variable X represents a family of random variables indexed by the time t, which has the HOL4 type num (natural numbers). This ensures that the process is a discrete-time random process. The random variables in this process are functions built on the probability space p and a measurable space s. The conjunct P(⋂_{k ∈ [0, n-1]} { x | X t_{k} x = f k }) ≠ 0 ensures that the corresponding conditional probabilities are well-defined, where f k returns the k^{th} element of the state sequence.
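The increasing_seq side condition can be mirrored as a bounded check in Python. This is only an illustrative sketch of the condition ∀ i j. i < j ⇒ t i < t j on a finite prefix, not part of the HOL development itself.

```python
def increasing_seq(t, n):
    """Bounded check of the HOL side condition: ∀ i j < n. i < j ⇒ t(i) < t(j)."""
    return all(t(i) < t(j) for i in range(n) for j in range(i + 1, n))

# A strictly increasing time index satisfies the condition...
print(increasing_seq(lambda i: 2 * i, 5))        # True
# ...while a non-monotone one violates it (e.g., t(0) = 0 is not below t(2) = 0).
print(increasing_seq(lambda i: i * (i - 2), 5))  # False
```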

We also have to explicitly mention all the usually implicit assumptions stating that the states belong to the considered space.

In this chapter, P { x | X t x = i } represents prob p { x | (X t x = i) ∧ (x IN p_space p) }, where the term x IN p_space p ensures that x is in the sample space of the considered probability space p. Similarly, the conditional probability P ({ x | X (t + 1) x = j } | { x | X t x = i }) represents cond_prob p ({ x | (X (t + 1) x = j) ∧ (x IN p_space p) } | { x | (X t x = i) ∧ (x IN p_space p) }), based on the formalization presented in the previous section.
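The event-set reading of P { x | X t x = i } and cond_prob can be sketched on a small finite sample space in Python. The initial and transition probabilities below are illustrative assumptions; the point is that cond_prob p (A | B) = P(A ∩ B) / P(B) is applied only when P(B) ≠ 0, matching the side condition in Definition 6.1.

```python
from itertools import product
from fractions import Fraction

# Illustrative two-state chain; these probabilities are assumptions for the
# sketch, not values from the chapter.
p01, p10 = Fraction(1, 5), Fraction(1, 10)          # P(1|0), P(0|1)
trans = {(0, 0): 1 - p01, (0, 1): p01,
         (1, 0): p10, (1, 1): 1 - p10}
init = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Sample space: all length-2 state sequences (x0, x1), with their point measure.
measure = {(x0, x1): init[x0] * trans[(x0, x1)]
           for x0, x1 in product((0, 1), repeat=2)}
omega = set(measure)

def prob(event):
    """prob p A: sum the point measure over the outcomes in the event set."""
    return sum(measure[w] for w in event)

def cond_prob(a_set, b_set):
    """cond_prob p (A | B) = P(A ∩ B) / P(B), defined only when P(B) ≠ 0."""
    pb = prob(b_set)
    assert pb != 0  # the P(...) ≠ 0 side condition of Definition 6.1
    return prob(a_set & b_set) / pb

# Events {x | X 1 x = 1} and {x | X 0 x = 0} as subsets of the sample space.
A = {w for w in omega if w[1] == 1}
B = {w for w in omega if w[0] == 0}
print(cond_prob(A, B))  # prints 1/5, i.e., the one-step probability p01
```

Using exact `Fraction` arithmetic keeps the computation free of rounding noise, so the conditional probability comes out exactly equal to the one-step transition probability, as the event-set notation above predicts.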