Structural Intervention and External Control for Markovian Regulatory Network Models

Xiaoning Qian, Ranadip Pal
DOI: 10.4018/978-1-5225-0353-8.ch004

Abstract

In order to derive systems-based methods for controlling the dynamic behavior of biological systems of interest, with a view toward future gene-based intervention therapeutics, two basic categories of intervention strategies have been studied based on Markov chain theory and Markov decision processes: structural intervention by function perturbation and external control based on state perturbation. The chapter reviews existing network analysis and control methods in these two categories and discusses their extensions toward more robust and clinically relevant intervention strategies that take into account the collateral damage caused by intervention.

1. Introduction

Mathematical modeling and dynamic analysis of gene regulatory networks open mathematical avenues for studying potential systems therapeutics: interventions designed to alter the modeled system dynamics toward desired behavior, which is one of the most important goals that systems biology pursues through mathematical modeling of biological networks. To date, network intervention has mostly been investigated in the context of probabilistic Boolean networks (PBNs) (Shmulevich & Dougherty, 2007; Shmulevich & Dougherty, 2010; Dougherty, Pal, Qian, Bittner, & Datta, 2010). Two basic categories of intervention strategies have been studied based on Markov chain theory and Markov decision processes: structural intervention and external control based on state perturbation.
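To make the Markov-chain view concrete, the following minimal sketch computes the steady-state distribution of a toy network's state-transition matrix. It is an illustrative example only, not taken from the chapter: the two-gene network and all transition probabilities below are assumed values.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of an ergodic transition matrix P (pi P = pi)."""
    n = P.shape[0]
    # Solve pi (P - I) = 0 together with the normalization sum(pi) = 1.
    A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 2-gene Boolean network: 4 states (00, 01, 10, 11); random gene
# perturbation makes the chain ergodic, so a unique steady state exists.
P = np.array([[0.70, 0.10, 0.10, 0.10],
              [0.20, 0.50, 0.20, 0.10],
              [0.10, 0.20, 0.50, 0.20],
              [0.10, 0.10, 0.10, 0.70]])
print(steady_state(P))  # long-run probability of each network state
```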

Structural intervention, or function perturbation, changes the regulatory relationships or wiring among genes in a network (Shmulevich, Dougherty, & Zhang, 2002a; Xiao & Dougherty, 2007; Qian & Dougherty, 2008). Such wiring changes alter the rule-based structure of PBNs and hence the state transitions of the underlying Markov chains, which in turn alter the long-run network behavior characterized by the steady-state distribution. Given a class of potential structural changes (u_s) in mathematical network models that simulate practical therapeutic strategies, such as drugs acting on gene products for pathway blockage, for example small interfering RNAs (siRNAs), the mathematical problem is to derive the structural intervention that yields an optimal alteration of the steady-state distribution. To characterize the dynamic changes caused by structural perturbations and derive effective intervention strategies, we need to mathematically address two critical issues: (1) efficient characterization of the effects caused by a perturbation, and (2) search for the optimal perturbation(s) yielding the most desirable steady-state distribution after intervention. We have recently adopted classical Markov chain perturbation theory to address these problems.
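A minimal sketch of this perturbation-theory shortcut follows. It uses the classical result that for a perturbed chain P~ = P + E, the new stationary distribution is pi~ = pi (I - E Z)^{-1}, where Z = (I - P + 1 pi)^{-1} is the fundamental matrix of the original chain; the two-state chain and the candidate perturbation E below are assumed toy values, not from the chapter.

```python
import numpy as np

def perturbed_steady_state(P, pi, E):
    """Steady state of P + E from that of P, without re-solving the chain.

    Uses pi_tilde = pi (I - E Z)^{-1} with fundamental matrix
    Z = (I - P + 1 pi)^{-1}; rows of E must sum to zero so that
    P + E remains a valid stochastic matrix.
    """
    n = P.shape[0]
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return pi @ np.linalg.inv(np.eye(n) - E @ Z)

# Toy two-state chain and one candidate structural change (assumed values).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4/7, 3/7])            # stationary distribution of P
E = np.array([[-0.2, 0.2],           # rewires state 0's outgoing transitions
              [ 0.0, 0.0]])
print(perturbed_steady_state(P, pi, E))   # -> [4/9, 5/9]
```

Ranking a class of candidate structural changes then amounts to evaluating this closed form for each E and keeping the perturbation that shifts the most probability mass toward desirable states.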

External control by state perturbation is generally based on flipping the state of a control gene, so that the concentration of its corresponding mRNA or protein product is knocked down or overexpressed to induce a beneficial change in network dynamics; the gene expression changes are forced by external perturbations that modulate the dynamics. This type of network intervention models feasible medical treatments for potential therapeutics such as gene knockdown or chemotherapy. Typically, optimal external control is derived in the framework of Markov decision processes (Shmulevich, Dougherty, & Zhang, 2002b; Datta, Choudhary, Bittner, & Dougherty, 2003; Bertsekas, 2005; Pal, Datta, & Dougherty, 2006; Datta, Pal, Choudhary, & Dougherty, 2007) for both finite-horizon and infinite-horizon control policies, in which the control action is chosen according to an objective function that rewards beneficial gene activity. Mathematically, we are interested in deriving a control policy u_g for a given control gene g, or for a set of control genes when necessary. At a time point t, if the network is in state x, the expression state of g is flipped if u_g(x) = 1, denoting that the control is on; otherwise, when u_g(x) = 0, the state of the control gene g remains unchanged. To achieve beneficial network behavior, we derive the optimal control policy u_g by minimizing an average cost function reflecting the undesirability of the gene activities. For the infinite-horizon problem, stationary control policies can be solved by dynamic programming for networks of reasonable size (Pal et al., 2006; Datta et al., 2007), but for large networks the optimal solution is often computationally prohibitive. Hence, greedy algorithms are needed to alter the steady-state network behavior so as to lessen the likelihood of being in undesirable states, for instance, metastatic states in cancer.
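A minimal value-iteration sketch of the infinite-horizon discounted control problem follows. The transition matrices, costs, and discount factor are assumed toy values, and this illustrates the generic dynamic-programming solution rather than the chapter's exact formulation: action a = 0 leaves the network alone, while a = 1 flips the control gene at a small treatment cost.

```python
import numpy as np

def value_iteration(P, cost, gamma=0.9, tol=1e-8):
    """Optimal stationary policy for a discounted MDP.

    P: list of transition matrices, one per action; cost[x, a]: per-step
    cost of taking action a in network state x; gamma: discount factor.
    """
    n = P[0].shape[0]
    V = np.zeros(n)
    while True:
        Q = np.stack([cost[:, a] + gamma * (P[a] @ V) for a in range(len(P))],
                     axis=1)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmin(axis=1), V_new   # policy u_g(x) and value function
        V = V_new

# Toy two-state example: state 1 is undesirable (e.g., proliferative).
P0 = np.array([[0.8, 0.2], [0.3, 0.7]])   # uncontrolled dynamics
P1 = np.array([[0.2, 0.8], [0.7, 0.3]])   # control gene flipped
cost = np.array([[0.0, 0.5],              # applying control adds a cost of 0.5
                 [5.0, 5.5]])
policy, V = value_iteration([P0, P1], cost)
print(policy)   # u_g(x) = 1 means "apply control in state x"
```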
