Exact Markov Chain Monte Carlo Algorithms and Their Applications in Probabilistic Data Analysis and Inference

Dominic Savio Lee (University of Canterbury, New Zealand)
DOI: 10.4018/978-1-59904-982-3.ch010

Abstract

This chapter describes algorithms that use Markov chains for generating exact sample values from complex distributions, and discusses their use in probabilistic data analysis and inference. Its purpose is to disseminate these ideas more widely, thereby improving Monte Carlo simulation results and stimulating greater research interest in the algorithms themselves. The chapter begins by introducing Markov chain Monte Carlo (MCMC), which stems from the idea that sample values from a desired distribution f can be obtained from the stationary states of an ergodic Markov chain whose stationary distribution is f. To get sample values that have distribution f exactly, it is necessary to detect when the Markov chain has reached its stationary distribution. Under certain conditions, this can be achieved by means of coupled Markov chains; these conditions, the resulting exact MCMC or perfect sampling algorithms, and their applications are described.
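To make the coupling idea concrete, the following is a minimal sketch of coupling from the past (CFTP), one well-known exact MCMC (perfect sampling) scheme; the target distribution pi on {0, ..., K} and the one-step Metropolis update with ±1 proposals are illustrative assumptions, not the chapter's own examples. Two coupled chains are started from the minimal and maximal states at time -T and driven by the same random numbers; if they have coalesced by time 0, the common state is an exact draw from pi, otherwise T is doubled and the same randomness is reused for the more recent times.

```python
# Illustrative CFTP sketch (assumed target and update, not from the chapter).
import numpy as np

K = 10
pi = np.arange(1, K + 2, dtype=float)   # unnormalised target on {0, ..., K}
pi /= pi.sum()

def update(x, u_dir, u_acc):
    """One Metropolis step with a +/-1 proposal, driven by two uniforms."""
    y = x + 1 if u_dir < 0.5 else x - 1
    if y < 0 or y > K:
        return x                         # proposal off the state space: stay
    return y if u_acc < min(1.0, pi[y] / pi[x]) else x

def cftp(seed=0):
    """Return one exact draw from pi via coupling from the past."""
    rng = np.random.default_rng(seed)
    T = 1
    u = rng.uniform(size=(1, 2))        # row t holds the randomness for time -(t+1)
    while True:
        lower, upper = 0, K             # minimal and maximal states at time -T
        for t in range(T - 1, -1, -1):  # advance both chains from time -T to time 0
            lower = update(lower, u[t, 0], u[t, 1])
            upper = update(upper, u[t, 0], u[t, 1])
        if lower == upper:              # coalescence at time 0: an exact draw
            return lower
        # No coalescence: go further into the past, reusing the old randomness
        # for the more recent times (rows 0 .. T-1 keep their time indices).
        u = np.vstack([u, rng.uniform(size=(T, 2))])
        T *= 2

draws = [cftp(seed=s) for s in range(2000)]
print(np.bincount(draws, minlength=K + 1) / len(draws))   # compare with pi
```

Because the proposal moves at most one step, the coupled update preserves the ordering of states, so coalescence of these two bounding chains implies coalescence of chains started from every state; this is the monotonicity condition under which tracking only the minimal and maximal chains suffices.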
Chapter Preview

Markov Chain Monte Carlo

MCMC is based on the observation that the state of an ergodic Markov chain (see the Appendix to this chapter for a glossary of Markov chain properties) will eventually converge to a stationary distribution, no matter which state the chain starts in. Thus, to obtain a sample from a desired distribution f, an ergodic Markov chain with f as its stationary distribution can be constructed and then run until it is stationary. The required sample values are given by the states of the stationary chain. Let X_0, X_1, X_2, ... represent a sequence of states for an ergodic Markov chain and suppose that it reaches its stationary distribution f after transition T; then a dependent sample with distribution f is given by X_{T+1}, X_{T+2}, ..., X_{T+n}. This sample can be used in the same way as an independent sample for the estimation of expectations; by the strong law of large numbers, the sample average of a measurable function h will converge, almost surely, to the expectation under f:

(1/n) [h(X_{T+1}) + h(X_{T+2}) + ... + h(X_{T+n})] → E_f[h(X)] as n → ∞.

Note, however, that the sample variance cannot simply be used as an estimate of the Monte Carlo standard error, because of the dependency within the sample (Kass, Carlin, Gelman, & Neal, 1998). Instead, the autocorrelation in the sample must be estimated and taken into account when computing the standard error. If an independent sample is required, multiple chains with independent starting states must be used.
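The following sketch illustrates this point; the AR(1) example chain, the burn-in length T, and the function h are illustrative assumptions, not taken from the chapter. It estimates E_f[h(X)] from a dependent sample and adjusts the Monte Carlo standard error for autocorrelation through an effective sample size.

```python
# Illustrative sketch: expectation estimate with autocorrelation-adjusted error.
import numpy as np

def mcmc_estimate(chain, h, T=0, max_lag=100):
    """Estimate E_f[h(X)] from chain[T+1:], with a standard error that
    accounts for within-sample autocorrelation via an effective sample size."""
    x = np.asarray([h(s) for s in chain[T + 1:]], dtype=float)
    n = x.size
    mean = x.mean()
    var = x.var(ddof=1)

    # Sample autocorrelations rho_k, summed until the first non-positive value.
    xc = x - mean
    rho_sum = 0.0
    for k in range(1, min(max_lag, n - 1)):
        rho_k = np.dot(xc[:-k], xc[k:]) / ((n - k) * var)
        if rho_k <= 0:
            break
        rho_sum += rho_k

    ess = n / (1.0 + 2.0 * rho_sum)      # effective sample size
    std_err = np.sqrt(var / ess)         # autocorrelation-adjusted Monte Carlo error
    return mean, std_err

# Example: an AR(1) chain whose stationary distribution is N(0, 1/(1 - 0.9**2)).
rng = np.random.default_rng(0)
chain = [0.0]
for _ in range(20000):
    chain.append(0.9 * chain[-1] + rng.normal())
print(mcmc_estimate(chain, h=lambda s: s, T=1000))
```

Using sqrt(var / n) instead of sqrt(var / ess) here would understate the error, since the positive autocorrelation makes the dependent sample less informative than an independent one of the same size.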

Generic algorithms are available that allow an ergodic Markov chain with a specified stationary distribution to be constructed easily. Many of these algorithms can be regarded as variants of the Metropolis-Hastings algorithm, commonly attributed to Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953) and Hastings (1970). Let U(0, 1) represent the uniform distribution on the interval (0, 1).
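As a minimal sketch of how such a chain can be constructed, the code below implements Metropolis-Hastings with a symmetric random-walk proposal; the target (specified through log_f, known only up to a normalising constant), the proposal scale, and the function names are illustrative assumptions rather than the chapter's presentation. At each step a candidate y is proposed from the current state x and accepted with probability min(1, f(y)/f(x)), using a draw u from U(0, 1).

```python
# Illustrative random-walk Metropolis-Hastings sketch (assumed target and scale).
import numpy as np

def metropolis_hastings(log_f, x0, n_steps, proposal_scale=1.0, seed=0):
    """Generate a Markov chain whose stationary distribution has density
    proportional to exp(log_f)."""
    rng = np.random.default_rng(seed)
    x = x0
    chain = [x]
    for _ in range(n_steps):
        y = x + proposal_scale * rng.normal()            # propose y from a symmetric q(.|x)
        # Accept with probability min(1, f(y)/f(x)); compare on the log scale
        # using u ~ U(0, 1).
        if np.log(rng.uniform()) < log_f(y) - log_f(x):
            x = y                                        # accept the candidate
        chain.append(x)                                  # otherwise keep the current state
    return np.array(chain)

# Example target: a standard normal density, known up to a constant.
chain = metropolis_hastings(log_f=lambda x: -0.5 * x * x, x0=0.0, n_steps=10000)
print(chain[1000:].mean(), chain[1000:].std())
```

Because the proposal is symmetric, the Hastings correction q(x|y)/q(y|x) cancels and only the ratio of target densities appears in the acceptance probability.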
