A Symbolic Approach to the Analysis of Multi-Formalism Markov Reward Models

Kai Lampka, Markus Siegle
Copyright: © 2014 |Pages: 26
ISBN13: 9781466646599|ISBN10: 1466646594|EISBN13: 9781466646605
DOI: 10.4018/978-1-4666-4659-9.ch009
Cite Chapter

MLA

Lampka, Kai, and Markus Siegle. "A Symbolic Approach to the Analysis of Multi-Formalism Markov Reward Models." Theory and Application of Multi-Formalism Modeling, edited by Marco Gribaudo and Mauro Iacono, IGI Global, 2014, pp. 170-195. https://doi.org/10.4018/978-1-4666-4659-9.ch009

Abstract

When modelling large systems, modularity is an important concept, since it helps modellers master the complexity of their models. Moreover, employing different modelling formalisms within the same modelling project can ease the description of the various parts or aspects of the overall system. In the area of performability modelling, formalisms such as stochastic reward nets, stochastic process algebras, stochastic automata, or stochastic UML state charts are often used, and several of these may be employed within one modelling project. This chapter presents an approach for efficiently constructing a symbolic representation, in the form of a zero-suppressed Binary Decision Diagram (BDD), of the Markov Reward Model underlying a multi-formalism high-level model. In this approach, the interaction between the submodels may be established either by the sharing of state variables or by the synchronisation of common activities. It is shown that the decision diagram data structure and the associated algorithms enable highly efficient state space generation and different forms of analysis of the underlying Markov Reward Model (e.g. the calculation of reward measures, or the verification of non-functional system properties by means of model checking techniques).
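To give a rough feel for the data structure named in the abstract, the following is a minimal Python sketch of a zero-suppressed decision diagram storing a set of bit-encoded states. All class and function names here are hypothetical illustrations; the chapter's actual encoding and construction algorithms for Markov Reward Models are considerably more elaborate.

```python
# Illustrative sketch only: a tiny zero-suppressed decision diagram (ZDD)
# over boolean state variables, showing how a set of encoded states can be
# stored with shared substructure. Names are hypothetical, not the chapter's.

TERM0, TERM1 = "0", "1"   # terminals: empty set / set containing the empty path

class ZDD:
    def __init__(self):
        # unique table: one entry per distinct internal node (measures diagram size)
        self.unique = {}

    def node(self, var, lo, hi):
        # Zero-suppressed reduction rule: a node whose 'hi' child is the
        # 0-terminal contributes nothing and is replaced by its 'lo' child.
        if hi == TERM0:
            return lo
        key = (var, lo, hi)                       # nodes are plain tuples, so
        return self.unique.setdefault(key, key)   # equal subgraphs are shared

    def from_states(self, states, nvars):
        # Recursively split the state set on each bit (state variable) in order.
        def build(subset, i):
            if not subset:
                return TERM0
            if i == nvars:
                return TERM1
            lo = build([s for s in subset if s[i] == 0], i + 1)
            hi = build([s for s in subset if s[i] == 1], i + 1)
            return self.node(i, lo, hi)
        return build(list(states), 0)

    def contains(self, root, state):
        # Follow the path selected by 'state'. A variable skipped on the path
        # was zero-suppressed, so the corresponding bit must be 0.
        node, i = root, 0
        while True:
            if node == TERM0:
                return False
            if node == TERM1:
                return all(b == 0 for b in state[i:])
            var, lo, hi = node
            if any(state[j] == 1 for j in range(i, var)):
                return False
            node = hi if state[var] == 1 else lo
            i = var + 1

states = {(0, 1), (1, 0)}             # a toy state space over two bits
z = ZDD()
root = z.from_states(states, nvars=2)
```

The zero-suppressed rule (dropping any node whose hi-child is the 0-terminal) is what makes such diagrams compact for sparse state spaces, where most state variables are 0 in most reachable states.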
