Discrete-Time Markov Chains in HOL


Hasan, Osman and Sofiène Tahar. "Discrete-Time Markov Chains in HOL." Formalized Probability Theory and Applications Using Theorem Proving. IGI Global, 2015. 65-86. Web. 9 Apr. 2020. doi:10.4018/978-1-4666-8315-0.ch006


In this chapter, the authors describe the formalization of discrete-time Markov chains and the formal verification of some of their most important properties using the probability theory presented in the last chapter. In order to illustrate the usefulness of this work, a binary communication channel model and a collection process model involved in the Automatic Mail Quality Measurement (AMQM) protocol are formally analyzed in HOL.

Given a probability space, a stochastic process { X_{t}: Ω → S } is defined as a sequence of random variables X_{t}, where t represents the time, which can be discrete (represented by non-negative integers) or continuous (represented by real numbers). The set of values taken by each X_{t}, commonly called states, is referred to as the state space. The sample space of the process consists of all the possible state sequences based on a given state space S. Based on these definitions, a Markov process can be defined as a stochastic process with the Markov property. If a Markov process has a finite or countably infinite state space, then it is called a Markov chain and satisfies the following Markov property: for 0 ≤ t_{0} ≤ … ≤ t_{n+1} and x_{0}, …, x_{n+1} in the state space:

P(X_{t_{n+1}} = x_{n+1} | X_{t_{n}} = x_{n}, …, X_{t_{0}} = x_{0}) = P(X_{t_{n+1}} = x_{n+1} | X_{t_{n}} = x_{n})   (6.1)

This equation expresses that the future state X_{t_{n+1}} depends only on the current state X_{t_{n}} and is independent of the past states X_{t_{i}}, i < n. This chapter mainly focuses on discrete-time Markov chains (DTMC) (Bhattacharya & Waymire, 1990; Kannan, 1979) with a discrete, finite state space.
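As an informal illustration outside the theorem prover, a finite-state DTMC can be sketched in Python by sampling the next state from the row of a transition matrix. The function name, the two-state matrix, and the seed below are hypothetical examples for illustration only, not part of the chapter's formalization:

```python
import random

def simulate_dtmc(P, x0, n, seed=0):
    """Simulate n steps of a finite-state DTMC.

    P  : transition matrix as a list of rows; P[i][j] = P(X_{t+1}=j | X_t=i)
    x0 : initial state (an index into P)
    n  : number of steps to simulate
    Returns the state sequence [X_0, X_1, ..., X_n].
    """
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(n):
        r, acc = rng.random(), 0.0
        # Inverse-CDF sampling over the current row of P.
        for j, pij in enumerate(P[x]):
            acc += pij
            if r < acc:
                x = j
                break
        path.append(x)
    return path

# Hypothetical two-state chain: from state 0 stay with prob 0.9,
# from state 1 stay with prob 0.8.
P = [[0.9, 0.1],
     [0.2, 0.8]]
path = simulate_dtmc(P, 0, 10)
```

Note that the future state sampled at each step depends only on the current value of x, which is exactly the Markov property of Equation (6.1) in executable form.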

According to the Markov property given in Equation (6.1), the future state depends only on the current state and is independent of all the other past states (Chung, 1960). The Markov property can be formalized as (Liu, 2013):

Definition 6.1

⊢ ∀ X p s. mc_property X p s =
(∀ t. random_variable (X t) p s) ∧
(∀ f t n. increasing_seq t ∧ P (∩_{k ∈ [ 0,n-1 ]} { x | X t_{k} x = f k }) ≠ 0 ⇒
(prob p ({ x | X t_{n+1} x = f (n + 1) } | { x | X t_{n} x = f n } ∩ ∩_{k ∈ [ 0,n-1 ]} { x | X t_{k} x = f k }) =
prob p ({ x | X t_{n+1} x = f (n + 1) } | { x | X t_{n} x = f n })))

where increasing_seq t defines the notion of an increasing sequence, i.e., ∀ i j. i < j ⇒ t i < t j. The first conjunct indicates that the Markov property is based on a random process { X_{t}: Ω → S }. The quantified variable X represents a function of the random variables associated with time t, which has the HOL4 type num (non-negative integers). This ensures that the process is a discrete-time random process. The random variables in this process are functions built on the probability space p and a measurable space s. The conjunct P (∩_{k ∈ [ 0,n-1 ]} { x | X t_{k} x = f k }) ≠ 0 ensures that the corresponding conditional probabilities are well-defined, where f k returns the k^{th} element of the state sequence.
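The increasing_seq predicate has a direct executable analogue. The sketch below is an informal Python version: the HOL predicate quantifies over all i and j, whereas this check only covers a finite prefix (the bound upto is an arbitrary choice):

```python
def increasing_seq(t, upto=100):
    """Finite-prefix analogue of the HOL predicate increasing_seq t,
    i.e. ∀ i j. i < j ⇒ t i < t j, for a sequence given as a function
    t : index -> time. Checking adjacent pairs suffices on the prefix,
    since t(i) < t(i+1) for all i implies t(i) < t(j) for all i < j."""
    return all(t(i) < t(i + 1) for i in range(upto - 1))

strictly_increasing = increasing_seq(lambda i: 2 * i)   # True
not_increasing = increasing_seq(lambda i: i % 5)        # False: t(4)=4 > t(5)=0
```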

We also have to explicitly mention all the usually implicit assumptions stating that the states belong to the considered space.

In this chapter, P { x | X t x = i } represents prob p { x | (X t x = i) ∧ (x IN p_space p) }, where the term x IN p_space p ensures that x is in the sample space of the considered probability space p. Similarly, the conditional probability P ({ x | X (t + 1) x = j } | { x | X t x = i }) represents cond_prob p ({ x | (X (t + 1) x = j) ∧ (x IN p_space p) } | { x | (X t x = i) ∧ (x IN p_space p) }), based on the formalization presented in the previous section.
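To see concretely what Definition 6.1 asserts, one can estimate both conditional probabilities empirically on a simulated chain: conditioning on two past states should give (approximately) the same value as conditioning on the current state alone. This is an informal Python sketch; the two-state transition matrix, seed, and sample size are arbitrary illustrative choices, not from the chapter:

```python
import random

rng = random.Random(42)
P = [[0.9, 0.1],   # hypothetical matrix: P[i][j] = P(X_{t+1}=j | X_t=i)
     [0.2, 0.8]]

def step(x):
    # Next state of the 2-state chain from current state x.
    return 0 if rng.random() < P[x][0] else 1

# One long sample path.
path = [0]
for _ in range(200_000):
    path.append(step(path[-1]))

def cond_prob(num_pred, den_pred):
    """Empirical analogue of cond_prob p (A | B): #(A ∩ B) / #B.
    Returns None when the conditioning event never occurs, mirroring
    the P(...) ≠ 0 assumption that keeps Definition 6.1 well-defined."""
    den = [t for t in range(1, len(path) - 1) if den_pred(t)]
    if not den:
        return None
    return sum(1 for t in den if num_pred(t)) / len(den)

# P(X_{t+1}=1 | X_t=0, X_{t-1}=1)  vs  P(X_{t+1}=1 | X_t=0):
# the Markov property says both should be close to P[0][1] = 0.1.
p_two_steps = cond_prob(lambda t: path[t + 1] == 1,
                        lambda t: path[t] == 0 and path[t - 1] == 1)
p_markov    = cond_prob(lambda t: path[t + 1] == 1,
                        lambda t: path[t] == 0)
```

Both estimates agree up to sampling noise, which is the numerical counterpart of the equality between the two cond_prob terms in Definition 6.1; the HOL formalization replaces this approximate, finite-sample check with a machine-checked proof.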