Discrete-Time Markov Chains in HOL


Hasan, Osman and Sofiène Tahar. "Discrete-Time Markov Chains in HOL." Formalized Probability Theory and Applications Using Theorem Proving. IGI Global, 2015. 65-86. Web. 16 Nov. 2018. doi:10.4018/978-1-4666-8315-0.ch006


In this chapter, the authors describe the formalization of discrete-time Markov chains and the formal verification of some of their most important properties using the probability theory presented in the previous chapter. To illustrate the usefulness of this work, a binary communication channel model and a collection process model involved in the Automatic Mail Quality Measurement (AMQM) protocol are formally analyzed in HOL.

Given a probability space, a stochastic process { X_{t}: Ω → S } is defined as a sequence of random variables X_{t}, where t represents time, which can be discrete (represented by non-negative integers) or continuous (represented by real numbers). The set of values taken by each X_{t}, commonly called states, is referred to as the state space. The sample space of the process consists of all possible state sequences over a given state space S. Based on these definitions, a Markov process can be defined as a stochastic process with the Markov property. If a Markov process has a finite or countably infinite state space, then it is called a Markov chain and satisfies the following Markov property: for 0 ≤ t_{0} ≤ … ≤ t_{n+1} and x_{0}, …, x_{n+1} in the state space:

P(X_{t_{n+1}} = x_{n+1} | X_{t_{n}} = x_{n}, …, X_{t_{0}} = x_{0}) = P(X_{t_{n+1}} = x_{n+1} | X_{t_{n}} = x_{n})   (6.1)

This equation expresses that the future state X_{t_{n+1}} depends only on the current state X_{t_{n}} and is independent of the past states X_{t_{0}}, …, X_{t_{n-1}}. This chapter mainly focuses on discrete-time Markov chains (DTMCs) (Bhattacharya & Waymire, 1990; Kannan, 1979) with discrete states in a finite state space.
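The defining idea, that the next state is sampled from the current state alone, can be sketched in a few lines of Python. The two-state transition matrix below is an illustrative assumption, not an example from the chapter:

```python
import random

# Illustrative two-state chain over S = {0, 1}.
# P[i][j] approximates Pr(X_{t+1} = j | X_t = i).
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(state, rng):
    """Sample the next state; it depends only on the current state,
    which is exactly the Markov property."""
    return 0 if rng.random() < P[state][0] else 1

def simulate(n, start=0, seed=42):
    """Generate a state sequence of length n + 1 from a given start state."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n):
        states.append(step(states[-1], rng))
    return states

path = simulate(1000)
```

Note that `step` receives only the current state: the history of the chain never enters the sampling, which is the operational reading of Equation (6.1).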

According to the Markov property given in Equation (6.1), the future state depends only on the current state and is independent of all earlier states (Chung, 1960). The Markov property can be formalized as (Liu, 2013):

Definition 6.1

⊢ ∀ X p s. mc_property X p s =
(∀ t. random_variable (X t) p s) ∧
∀ f t n. increasing_seq t ∧ P(⋂_{k ∈ [0,n-1]} { x | X t_{k} x = f k }) ≠ 0 ⇒
(prob p ({ x | X t_{n+1} x = f (n + 1) } | { x | X t_{n} x = f n } ∩ ⋂_{k ∈ [0,n-1]} { x | X t_{k} x = f k }) =
prob p ({ x | X t_{n+1} x = f (n + 1) } | { x | X t_{n} x = f n }))

where increasing_seq t defines the notion of an increasing sequence, i.e., ∀ i j. i < j ⇒ t i < t j. The first conjunct indicates that the Markov property is based on a random process { X_{t}: Ω → S }. The quantified variable X represents a function of the random variables associated with time t, which has the HOL4 type num, i.e., the non-negative integers. This ensures that the process is a discrete-time random process. The random variables in this process are functions built on the probability space p and a measurable space s. The conjunct P(⋂_{k ∈ [0,n-1]} { x | X t_{k} x = f k }) ≠ 0 ensures that the corresponding conditional probabilities are well-defined, where f k returns the k^{th} element of the state sequence.

We also have to explicitly state all the usually implicit assumptions asserting that the states belong to the considered space.
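Outside of HOL, the collapse from full-history conditioning to one-step conditioning asserted by mc_property can be checked numerically on a small chain. The initial distribution and transition matrix below are illustrative assumptions:

```python
# Illustrative two-state chain: initial distribution pi0 and
# transition matrix P with P[i][j] = Pr(X_{t+1} = j | X_t = i).
pi0 = [0.3, 0.7]
P = [[0.9, 0.1],
     [0.4, 0.6]]

def joint(k, i, j):
    """Pr(X0 = k, X1 = i, X2 = j), built by the chain rule."""
    return pi0[k] * P[k][i] * P[i][j]

def cond_full(j, i, k):
    """Pr(X2 = j | X1 = i, X0 = k): conditioning on the full history."""
    return joint(k, i, j) / sum(joint(k, i, jj) for jj in (0, 1))

def cond_markov(j, i):
    """Pr(X2 = j | X1 = i): conditioning on the current state only."""
    num = sum(joint(k, i, j) for k in (0, 1))
    den = sum(joint(k, i, jj) for k in (0, 1) for jj in (0, 1))
    return num / den

# Both conditionals coincide (with P[i][j]), mirroring Equation (6.1).
markov_holds = all(abs(cond_full(j, i, k) - cond_markov(j, i)) < 1e-12
                   for i in (0, 1) for j in (0, 1) for k in (0, 1))
```

The denominators in both conditionals are nonzero for this chain, which is the numeric counterpart of the well-definedness conjunct P(⋂ …) ≠ 0 in Definition 6.1.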

In this chapter, P { x | X t x = i } represents prob p { x | (X t x = i) ∧ (x IN p_space p) }, where the term x IN p_space p ensures that x is in the sample space of the considered probability space p. Similarly, the conditional probability P ({ x | X (t + 1) x = j } | { x | X t x = i }) represents cond_prob p ({ x | (X (t + 1) x = j) ∧ (x IN p_space p) } | { x | (X t x = i) ∧ (x IN p_space p) }), based on the formalization presented in the previous section.
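As a rough analogue of this notation, a finite probability space can be modelled as a map from outcomes to probabilities, with events as sets of outcomes and conditional probability defined as the ratio prob p (A ∩ B) / prob p B. The concrete space and events here are invented for illustration:

```python
# Illustrative finite probability space: four equally likely two-symbol
# outcomes. Events are sets of outcomes, restricted to the sample space.
space = {"aa": 0.25, "ab": 0.25, "ba": 0.25, "bb": 0.25}

def prob(event):
    """prob p A: sum the probabilities of outcomes of A that lie in the
    sample space (the analogue of the x IN p_space p restriction)."""
    return sum(p for x, p in space.items() if x in event)

def cond_prob(event_a, event_b):
    """cond_prob p (A | B) = prob p (A INTER B) / prob p B,
    assuming prob p B is nonzero."""
    return prob(event_a & event_b) / prob(event_b)

# Pr(second symbol is 'a' | first symbol is 'a').
second_a = {"aa", "ba"}
first_a = {"aa", "ab"}
result = cond_prob(second_a, first_a)
```

The nonzero-denominator caveat in `cond_prob` is exactly what the conjunct P(⋂ …) ≠ 0 guarantees in Definition 6.1.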