Use of Finite Markov Chains in Business Problems Involving Decision Making and Case-Based Reasoning

Michael Voskoglou
DOI: 10.4018/978-1-7998-5077-9.ch016


Artificial intelligence (AI) is the branch of computer science focusing on the creation of intelligent machines that mimic human reasoning and behaviour. Probability theory is among the mathematical tools used in AI applications to deal with situations of uncertainty caused by randomness. In particular, the Markov chain (MC) theory is a smart combination of probability and linear algebra that offers ideal conditions for modelling such situations. International business is about the trade of goods, services, technology, capital, and knowledge at a global level, while decision making (DM) and case-based reasoning (CBR) are among the processes that are frequently used in this field. In this chapter, an absorbing and an ergodic MC model are developed on the steps of DM and CBR respectively for representing mathematically those two processes, thus providing valuable information about their evolution. The examples presented are connected to international business applications.
Chapter Preview


Artificial Intelligence (AI) is the branch of Computer Science that focuses on the creation of intelligent machines which work and react like humans. The term AI was coined by John McCarthy (Figure 1) in 1956, when he organized the first academic conference on the subject at Dartmouth College, USA (Moor, 2006). However, the effort to understand whether machines can truly think began much earlier, even before Alan Turing's invention of the theoretical "learning machine" in 1936, which has been proven capable of simulating the logic of any computer algorithm (Hodges, 2012).

Figure 1.

J. McCarthy (1927-2011)


AI has roots in mathematics, engineering, technology and science and, as a synthesis of ideas from all those fields, has created a new situation that is only just beginning to generate enormous changes and benefits for human society.

Probability theory, which deals with situations of uncertainty characterized by randomness, is one of the main mathematical tools used in AI applications. In particular, the Markov Chain (MC) theory, being a smart combination of Probability and Linear Algebra, offers ideal conditions for modelling such situations (Domingos et al., 2009).

The purpose of this chapter is to present applications of finite MCs to problems of AI involving Decision Making (DM) and Case-Based Reasoning (CBR) systems. The rest of the chapter is organized as follows: In the second section the elements of the theory of finite MCs necessary for understanding the rest of the work are presented. In the third section an absorbing MC (AMC) is introduced on the phases of DM, and an application is presented illustrating the importance of the constructed model. In the fourth section the CBR process is modeled with the help of an ergodic MC (EMC) developed on its steps, and through it a measure of the effectiveness of a CBR system is obtained. The chapter closes with the final conclusions, presented in the fifth section.

Key Terms in this Chapter

Transition Probability pij: The probability that a MC, being in state i at a certain step, moves to state j at the next step.

Decision Making (DM): DM is the process of choosing a solution between two or more alternatives, aiming to achieve the best possible result for a given problem. Obviously, the above process makes sense if, and only if, there is more than one feasible solution and a suitable criterion that helps the decision maker to choose the best among those solutions.

Artificial Intelligence (AI): AI is the branch of computer science that focuses on the creation of intelligent machines which work and react like humans.

Probability Vector Pk: The row matrix whose entries are the probabilities for a FMC to be in each of its states at the k-th step of the corresponding stochastic process.
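As an illustration of this definition (all numbers hypothetical), the probability vector at step k can be computed as the product of the initial probability vector with the k-th power of the transition matrix, e.g. with NumPy:

```python
import numpy as np

# Hypothetical 2-state FMC; each row of the transition matrix sums to 1.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P0 = np.array([1.0, 0.0])   # initial probability vector: start in state 1

# Probability vector at step k: Pk = P0 * P^k (here k = 3)
P3 = P0 @ np.linalg.matrix_power(P, 3)
```

The entries of P3 still sum to 1, since each multiplication by the (row-stochastic) transition matrix preserves total probability.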

Finite Markov Chain (FMC): A MC with a finite number of states.

Markov Chain (MC): A MC is a stochastic process that moves in a sequence of steps (phases) through a set of states and has a one-step memory. This means that the probability of entering a certain state at a certain step depends on the state occupied at the previous step and not on earlier steps (Markov property).
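The one-step memory can be made concrete with a short simulation sketch (the 3-state chain below is hypothetical): at every step the next state is drawn only from the row of the transition matrix belonging to the current state.

```python
import numpy as np

# Hypothetical 3-state MC; row i of P holds the transition probabilities out of state i.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

rng = np.random.default_rng(0)

def simulate(P, start, n_steps, rng):
    """Markov property: each move is drawn only from the current state's row of P."""
    state, path = start, [start]
    for _ in range(n_steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

path = simulate(P, start=0, n_steps=20, rng=rng)
```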

Ergodic Markov Chain (EMC): A MC is said to be an EMC if it is possible to go between any two states, not necessarily in one step. It is well known that, as the number of its steps tends to infinity (long run), an EMC tends to an equilibrium situation, in which the probability vector takes a constant value called the limiting probability vector of the EMC.
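For a concrete (hypothetical) 2-state EMC, the limiting probability vector pi can be obtained by solving the stationarity equations pi P = pi together with the normalisation condition that the entries of pi sum to 1:

```python
import numpy as np

# Hypothetical 2-state EMC (every state reachable from every other).
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])

# The limiting probability vector pi solves pi P = pi with sum(pi) = 1.
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # stationarity rows + normalisation row
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
```

For this chain pi works out to (1/3, 2/3): in the long run the chain spends one third of its steps in the first state and two thirds in the second, regardless of where it started.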

Absorbing Markov Chain (AMC): A state of a MC is called absorbing if, once entered, it cannot be left, and a MC is said to be an AMC if it has at least one absorbing state and if from every state it is possible to reach an absorbing state, not necessarily in one step. Working with an AMC, one brings its transition matrix to its canonical (or standard) form by listing the absorbing states first, and with its help calculates the fundamental matrix, whose entries give the mean number of steps before absorption when the chain starts from any of its non-absorbing states.
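The fundamental matrix computation can be sketched as follows for a hypothetical AMC with one absorbing and two transient states; with the canonical form in place, Q denotes the transient-to-transient block of the transition matrix, and the fundamental matrix is N = (I - Q)^(-1):

```python
import numpy as np

# Hypothetical AMC in canonical form: one absorbing state listed first,
# two transient states. Q is the transient-to-transient block of the
# transition matrix (the remaining mass in each row leads to absorption).
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix N = (I - Q)^(-1)
t = N @ np.ones(2)                 # mean number of steps before absorption
```

The row sums of N (the vector t) give the expected number of steps before absorption from each transient starting state.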

Stochastic Process: A process depending on a family of random variables that take values from probability distributions. Usually the random variables are associated with, or indexed by, a set of numbers viewed as points in time, which gives the interpretation of the stochastic process evolving in time.

Transition Matrix: The square matrix whose entries are the transition probabilities of a FMC.

Case-Based Reasoning (CBR): CBR is the process of solving problems based on the solutions of previously solved analogous problems (past cases). The use of computers enables a CBR system to preserve a continuously increasing “library” of past cases and to retrieve each time the suitable one for solving a new problem.
