Computer Aided Knowledge Discovery in Biomedicine

Vanathi Gopalakrishnan
Copyright: © 2009 | Pages: 16
DOI: 10.4018/978-1-60566-076-9.ch007

Abstract

This chapter provides a perspective on three important collaborative areas in systems biology research. These areas represent biological problems of clinical significance. The first area deals with macromolecular crystallization, which is a crucial step in protein structure determination. The second area deals with proteomic biomarker discovery from high-throughput mass spectral technologies, while the third is protein structure prediction and complex fold recognition from sequence and prior knowledge of structural properties. For each area, successful case studies are revisited from the perspective of computer-aided knowledge discovery using machine learning and statistical methods. Information about protein sequence, structure, and function is slowly accumulating in standardized forms within databases, and methods are needed to maximize the use of this prior information for prediction and analysis. This chapter provides insights into such methods, by which available information in existing databases can be processed and combined with systems biology expertise to expedite biomedical discoveries.
Chapter Preview

Background

Knowledge discovery in biomedicine today is very often the result of computational analyses combined with interpretation by domain experts. Langley (1998) states that artificial intelligence researchers have tried to develop intelligent artifacts that replicate the act of discovery, and identifies distinct steps in the scientific discovery process during which developers or users can influence the behavior of a computational discovery system. Furthermore, Langley (1998) suggests that such intervention is the preferred approach for using discovery software. In this chapter, we present an approach to data modeling and discovery that is consistent with this viewpoint.

Jurisica and Wigle (2006) define knowledge discovery (KD) as the process of extracting novel, useful, understandable, and usable information from large data sets. The authors review knowledge discovery in proteomics and present examples of algorithms in the literature that aid protein crystallization. The case studies presented in this chapter reflect state-of-the-art challenges in proteomics along with computer-aided solutions. Quantitative and qualitative discoveries are described, along with the methods by which they were obtained. The KD process in complex real-world domains requires multi-disciplinary methods involving both artificial intelligence and statistics applied to databases (Jurisica & Wigle, 2006).

Proteomics can be defined simply as the study of protein composition in a protein complex, organelle, cell, or entire organism (Russell, Old, Resing, & Hunter, 2004). Current high-throughput proteomic technologies require robotics and computational techniques to decipher signals within vast quantities of data. It is becoming clear that this high dimensionality poses a serious challenge to existing artificial intelligence tools for knowledge discovery and reasoning (Jurisica & Wigle, 2006). The unavailability of large numbers of samples, combined with the high dimensionality of the feature space, limits the usefulness of models obtained from such data. Moreover, uncertain and missing values in the data, combined with evolving knowledge of the underlying mechanisms, require an intelligent information system to be flexible and scalable (Jurisica & Wigle, 2006).

Key Terms in this Chapter

Clustering: The unsupervised grouping of data items in the absence of class labels.
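
As a concrete illustration (not part of the chapter itself), the minimal sketch below groups unlabeled feature vectors with k-means; the use of scikit-learn, the random toy data, and the choice of three clusters are all assumptions made purely for demonstration.

# Minimal k-means clustering sketch (assumes scikit-learn is installed; toy data).
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled feature vectors, e.g. per-sample measurements (random toy data).
X = np.random.RandomState(0).rand(60, 5)

# Group the samples into k clusters without using any class labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment for each sample
print(kmeans.cluster_centers_)  # centroid of each cluster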

Metabolomics: The study of small molecule metabolites and their expression within a system or organism.

Inductive Rule Learning: The development of IF-proposition-THEN-concept rule-based models from feature vectors, which are (attribute, value) pairs that describe the training examples. The rule-based models are expected to generalize to classify test examples accurately.
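
A minimal sketch of how such IF-proposition-THEN-concept rules might be applied to examples described by (attribute, value) pairs; the attributes, thresholds, and concept labels below are invented purely for illustration and are not taken from the chapter.

# Hypothetical rule-based classifier over (attribute, value) pairs.
rules = [
    # IF hydrophobicity > 0.6 AND charge < 0 THEN concept = "membrane"
    ({"hydrophobicity": lambda v: v > 0.6, "charge": lambda v: v < 0}, "membrane"),
    # IF hydrophobicity <= 0.6 THEN concept = "soluble"
    ({"hydrophobicity": lambda v: v <= 0.6}, "soluble"),
]

def classify(example, rules, default="unknown"):
    """Return the concept of the first rule whose propositions all hold."""
    for conditions, concept in rules:
        if all(test(example[attr]) for attr, test in conditions.items()):
            return concept
    return default

print(classify({"hydrophobicity": 0.8, "charge": -1.2}, rules))  # -> "membrane"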

Supervised Machine Learning: The use of class labels as prior knowledge to learn discriminative models from training examples consisting of feature vectors descriptive of the target class.
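
A minimal sketch of supervised learning from labeled feature vectors, assuming scikit-learn and randomly generated toy data for illustration only; logistic regression stands in for whatever discriminative learner a given study might use.

# Minimal supervised learning sketch (assumes scikit-learn; toy data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_train = rng.rand(40, 4)                    # feature vectors for training examples
y_train = (X_train[:, 0] > 0.5).astype(int)  # class labels supplied as prior knowledge

# Learn a discriminative model from the labeled training data.
model = LogisticRegression().fit(X_train, y_train)

X_test = rng.rand(5, 4)
print(model.predict(X_test))  # predicted class labels for unseen examples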

Feature Extraction: The process of extracting and building features from raw data such as the amino acid sequence of a protein. Feature functions are utilized to extract and process informative features that are useful for prediction.
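
A minimal sketch of one possible feature function, amino acid composition computed from a protein sequence; the example sequence and the choice of composition as the feature are illustrative assumptions, not the chapter's own feature set.

# Feature function: amino acid composition of a protein sequence.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """Return a 20-dimensional feature vector of amino acid frequencies."""
    sequence = sequence.upper()
    n = max(len(sequence), 1)
    return [sequence.count(aa) / n for aa in AMINO_ACIDS]

# Invented example sequence, purely for demonstration.
features = aa_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(features)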

X-Ray Crystallography: The most general experimental method for determining the three-dimensional structure of proteins and other macromolecules. A good-quality crystal is first obtained from a purified sample and then subjected to X-ray diffraction.

Conditional Random Fields (CRFs): These are undirected discriminative graphical models that directly compute the conditional likelihood of a hidden state sequence (y) given the observation sequence (x). This P(y|x) is proportional to the product of the potential functions over all the cliques in the graph. CRFs define each clique potential as an exponential function and guarantee that the global optimum is found, since the optimization function is convex (Lafferty et al., 2001). Forward and backward probability calculations are derived similarly to those for HMMs. Unlike HMMs, no assumptions are made about the independence of the observed features, and the feature definitions can be arbitrary, including overlapping features and long-range interactions (Liu et al., 2006).
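
For reference, the conditional likelihood described above can be written in the standard linear-chain form (following Lafferty et al., 2001); this is the textbook formulation in LaTeX notation, not an equation reproduced from the chapter:

% Linear-chain CRF: f_k are arbitrary (possibly overlapping, long-range)
% feature functions and lambda_k their learned weights.
P(\mathbf{y} \mid \mathbf{x})
  = \frac{1}{Z(\mathbf{x})} \prod_{t=1}^{T}
    \exp\!\Big( \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t) \Big),
\qquad
Z(\mathbf{x})
  = \sum_{\mathbf{y}'} \prod_{t=1}^{T}
    \exp\!\Big( \sum_{k} \lambda_k\, f_k(y'_{t-1}, y'_t, \mathbf{x}, t) \Big)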

Hidden Markov Models (HMMs): These are directed, chain-structured probabilistic graphical models that are generative in nature. They assume that the data are generated by a particular model and compute the joint distribution P(x, y) of the observation sequence x and the hidden state sequence y.
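
In the same notation, the joint distribution computed by a chain-structured HMM factorizes in the standard textbook way (again, not an equation reproduced from the chapter):

% HMM joint distribution: each hidden state depends only on its predecessor,
% and each observation only on its current hidden state.
P(\mathbf{x}, \mathbf{y})
  = P(y_1)\, P(x_1 \mid y_1)
    \prod_{t=2}^{T} P(y_t \mid y_{t-1})\, P(x_t \mid y_t)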
