Financial Data Modeling using a Hybrid Bayesian Network Structured Learning Algorithm

Shun Li (Peking University, China), Da Shi (Peking University, China) and Shaohua Tan (Peking University, China)
DOI: 10.4018/jcini.2012010103

Abstract

In this paper, a group of hybrid incremental learning algorithms for Bayesian network structures is proposed. The central idea of these hybrid algorithms is to use a polynomial-time constraint-based technique to build a candidate parent set for each domain variable, followed by a hill-climbing search procedure that refines the current network structure under the guidance of the candidate parent sets. Experimental results show that the authors' hybrid incremental algorithms offer considerable savings in computational complexity while achieving better model accuracy than existing incremental algorithms. One of the hybrid algorithms is also used to model financial data generated from American stock exchange markets: it identifies the predictors of the stock return among hundreds of financial variables and, at the same time, recovers the movement trend of the stock return.

Introduction

Bayesian networks and other Bayesian models are becoming increasingly prominent across a broad spectrum of the cognitive sciences (Perfors, Tenenbaum, Griffiths, & Xu, 2011) and cognitive informatics (Wang, 2003, 2007, 2009, 2011; Wang & Kinsner, 2006; Wang, Kinsner, & Zhang, 2009; Wang et al., 2009). In recent years, these Bayesian models have been used to address animal learning (Courville, Daw, & Touretzky, 2006), human inductive learning and generalization (Tenenbaum, Griffiths, & Kemp, 2006), visual scene perception (Yuille & Kersten, 2006), motor control (Kording & Wolpert, 2006), semantic memory (Steyvers, Griffiths, & Dennis, 2006), language processing and acquisition (Chater & Manning, 2006), symbolic reasoning (Oaksford & Chater, 2001; Wang, 2011), causal learning and inference (Steyvers, Tenenbaum, Wagenmakers, & Blum, 2003; Griffiths & Tenenbaum, 2005; Griffiths & Tenenbaum, 2007; Pacer & Griffiths, 2011; Buchsbaum, Gopnik, Griffiths, & Shafto, 2011), social cognition (Baker, Tenenbaum, & Saxe, 2007), and other cognitive problems. However, all the Bayesian network models used in these studies are static models, which can only represent an average situation over a particular period of time. This paper focuses on incremental Bayesian network structure learning algorithms, which can be used to dynamically describe how cognitive processes change over time.

Over a decade of research into efficient incremental learning algorithms for Bayesian network structures has yielded quite a number of important results and computational algorithms (Buntine, 1991; Friedman & Goldszmidt, 1997; Lam, 1998; Lam & Bacchus, 1994b; Nielsen & Nielsen, 2007; Roure, 2004a, 2004b; Shi & Tan, 2007; Niinimaki, Parviaimen, & Koivisto, 2011; Pennock & Xia, 2011). It is well recognized, however, that these existing algorithms often suffer from high computational complexity, which prevents them from solving complex and large-scale practical problems.

In this paper, a hybrid learning template is proposed to overcome these computational complexity deficiencies, and a group of algorithms is developed based on the template. The template combines polynomial-time constraint-based techniques with a hill-climbing search procedure, decoupling the problem into two smaller and less complex computations. The constraint-based techniques make the best use of the information in the current network structure and the newly arrived datasets to build compact candidate parent sets for the domain variables. The hill-climbing search procedure is then employed to refine the current network structure under the guidance of the candidate parent sets.
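The two-stage template can be illustrated with a small, self-contained sketch. This is a simplified stand-in for the authors' algorithms, not their actual implementation: a mutual-information threshold plays the role of the constraint-based independence tests that prune each variable's candidate parent set, and a BIC-scored hill climb then adds or removes parents drawn only from those candidate sets. All function names, the threshold value, and the toy data are hypothetical.

```python
import math
import random

def bic_family_score(data, child, parents):
    """BIC score of one child variable given a parent set (binary variables)."""
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / tot)
    num_params = 2 ** len(parents)  # one free parameter per parent configuration
    return ll - 0.5 * num_params * math.log(n)

def candidate_parents(data, child, n_vars, threshold=0.01):
    """Constraint-based stage: keep only variables whose empirical mutual
    information with `child` exceeds a threshold (a cheap stand-in for a
    statistical independence test)."""
    n = len(data)
    cands = []
    for v in range(n_vars):
        if v == child:
            continue
        joint = {}
        for row in data:
            joint[(row[v], row[child])] = joint.get((row[v], row[child]), 0) + 1
        mi = 0.0
        for (a, b), c in joint.items():
            pa = sum(cnt for (x, _), cnt in joint.items() if x == a) / n
            pb = sum(cnt for (_, y), cnt in joint.items() if y == b) / n
            mi += (c / n) * math.log((c / n) / (pa * pb))
        if mi > threshold:
            cands.append(v)
    return cands

def is_acyclic(parents):
    """DFS cycle check over the parent-set representation of the graph."""
    state = {}  # 0 = visiting, 1 = done
    def visit(v):
        if state.get(v) == 0:
            return False  # back edge found: cycle
        if state.get(v) == 1:
            return True
        state[v] = 0
        ok = all(visit(p) for p in parents[v])
        state[v] = 1
        return ok
    return all(visit(v) for v in parents)

def hill_climb(data, n_vars, cand):
    """Search stage: greedily toggle parents drawn only from the candidate
    sets, keeping the acyclic move that most improves the BIC score."""
    parents = {v: set() for v in range(n_vars)}
    improved = True
    while improved:
        improved = False
        for v in range(n_vars):
            base = bic_family_score(data, v, sorted(parents[v]))
            best_delta, best_set = 1e-9, None
            for u in cand[v]:
                trial = parents[v] ^ {u}  # toggle: add if absent, remove if present
                if not is_acyclic({**parents, v: trial}):
                    continue
                delta = bic_family_score(data, v, sorted(trial)) - base
                if delta > best_delta:
                    best_delta, best_set = delta, trial
            if best_set is not None:
                parents[v] = best_set
                improved = True
    return parents

# Toy data (hypothetical): X1 copies X0 with 10% noise; X2 is independent.
random.seed(0)
data = []
for _ in range(500):
    x0 = random.randint(0, 1)
    x1 = x0 if random.random() < 0.9 else 1 - x0
    data.append([x0, x1, random.randint(0, 1)])

cand = {v: candidate_parents(data, v, 3) for v in range(3)}
learned = hill_climb(data, 3, cand)
```

Because the search only ever considers parents from the pruned candidate sets, each hill-climbing step examines far fewer neighbors than an unrestricted search over all variable pairs, which is the source of the complexity savings the template aims for.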
