Below please find a list of definitions for the term that you selected, drawn from multiple scholarly research resources.

What is a Markov Chain?

Encyclopedia of Information Science and Technology, Second Edition
A model that determines probabilities for the next event, or “state,” given the result of the previous event.
Published in Chapter:
Evaluating Computer Network Packet Inter-Arrival Distributions
Dennis Guster (St. Cloud State University, USA), David Robinson (St. Cloud State University, USA), and Richard Sundheim (St. Cloud State University, USA)
DOI: 10.4018/978-1-60566-026-4.ch232
Abstract
The past decade could be classified as the “decade of connectivity”; in fact, it is commonplace for computers to be connected to a LAN, which in turn is connected to a WAN, which provides an Internet connection. On an application level this connectivity allows access to data that even five years earlier were unavailable to the general population. This growth has not occurred without problems, however. The number of users and the complexity/size of their applications continue to mushroom. Many networks are over-subscribed in terms of bandwidth, especially during peak usage periods. Often network growth was not planned for, and these networks suffer from poor design. Also, the explosive growth has often necessitated that crisis management be employed just to keep basic applications running. Whatever the source of the problem, it is clear that proactive design and management strategies need to be employed to optimize available networking resources (Fortier & Desrochers, 1990). This is especially true in today’s world of massive Internet usage (Zhu, Yu, & Doyle, 2001).
More Results
Modeling Processes and Outcomes From Cybersecurity Talent Gaps in Global Labor Markets
Model-Based Multi-Objective Reinforcement Learning by a Reward Occurrence Probability Vector
A stochastic model describing a sequence of possible states in which the probability of each state depends only on the previous state. It can be viewed as a Markov decision process with the actions and rewards removed.
Modeling and Performance Evaluation
A mathematical system containing at least two states in which the system transitions from one state to another memorylessly, i.e., the next state depends only on the current state and is independent of all earlier states.
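In standard notation (a textbook formulation, not quoted from the chapter above), this memorylessness is the Markov property:

\Pr(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = \Pr(X_{n+1} = j \mid X_n = i)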
Bayesian Variable Selection
A random process where the next state only depends on the current state but not on the states preceding the current state.
Cyber Security Model of Artificial Social System Man-Machine
A mathematical system that undergoes transitions from one state to another among a finite or countable number of possible states.
Inevitable Battle Against Botnets
A Markov chain is a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
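As a minimal sketch of the definition above (hypothetical states and probabilities chosen for illustration, not taken from the cited chapter), the next state is drawn from a distribution determined solely by the current state:

import random

# Hypothetical transition probabilities: current state -> {next state: probability}.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current):
    # The distribution of the next state depends only on the current state.
    outcomes = list(transitions[current])
    weights = [transitions[current][s] for s in outcomes]
    return random.choices(outcomes, weights=weights, k=1)[0]

print(next_state("sunny"))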
A Multiscale Computational Model of Chemotactic Axon Guidance
A sequence of random objects X1, X2, X3, ... taking values in a set of states and satisfying the Markov property: given the present state, the future and past states are independent. Discrete-time (resp. continuous-time) homogeneous Markov chains are characterized by their transition probability matrix (resp. intensity matrix).
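To illustrate the transition-probability-matrix characterization mentioned in the definition above (a generic two-state sketch with made-up numbers, not code from the cited chapter), the n-step transition probabilities of a discrete-time homogeneous chain are the entries of the matrix raised to the n-th power:

import numpy as np

# Hypothetical 2-state transition matrix P (each row sums to 1); row i gives
# the distribution of the next state when the current state is i.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Entry (i, j) of P^n is the probability of being in state j after n steps,
# starting from state i.
print(np.linalg.matrix_power(P, 10))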
Formalizing Model-Based Multi-Objective Reinforcement Learning With a Reward Occurrence Probability Vector
A stochastic model describing a sequence of possible states in which the probability of each state depends only on the previous state. It can be viewed as a Markov decision process with the actions and rewards removed.
Prediction of Uncertain Spatiotemporal Data Based on XML Integrated With Markov Chain
A Markov chain is a random process that undergoes transitions from one state to another on a state space.
Probability and Simulations
A model in which the probability of each event depends only on the state attained in the previous event.
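A short simulation in the spirit of the definition above (an illustrative sketch with made-up probabilities): repeatedly sample the next state from the distribution determined by the current state and count how often each state is visited.

import random

# Hypothetical two-state chain with made-up transition probabilities.
transitions = {0: [0.9, 0.1], 1: [0.5, 0.5]}

def simulate(steps, start=0):
    counts = [0, 0]
    state = start
    for _ in range(steps):
        # The next state is drawn using only the current state's probabilities.
        state = random.choices([0, 1], weights=transitions[state], k=1)[0]
        counts[state] += 1
    return [c / steps for c in counts]

# Long-run visit frequencies approximate the chain's stationary distribution.
print(simulate(100_000))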
Stochastic Models for Cash-Flow Management in SME
A Markov chain is a stochastic (random) process in which the probability of the next state depends only on the current state, i.e., the immediately preceding state. It is used to analyze the likelihood of the next event given information about the current event.
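As a toy illustration of the next-event likelihood idea in the definition above (hypothetical cash-position states and probabilities, not from the cited chapter), the likelihoods for next month are read directly from the current state's row:

# Hypothetical monthly cash-position states and transition probabilities.
transitions = {
    "surplus": {"surplus": 0.7, "deficit": 0.3},
    "deficit": {"surplus": 0.4, "deficit": 0.6},
}

current = "deficit"
# Likelihoods of next month's cash position, given only the current month.
print(transitions[current])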