Data Intensive Computing for Bioinformatics

Judy Qiu (Indiana University - Bloomington, USA), Jaliya Ekanayake (Indiana University - Bloomington, USA), Thilina Gunarathne (Indiana University - Bloomington, USA), Jong Youl Choi (Indiana University - Bloomington, USA), Seung-Hee Bae (Indiana University - Bloomington, USA), Yang Ruan (Indiana University - Bloomington, USA), Saliya Ekanayake (Indiana University - Bloomington, USA), Stephen Wu (Indiana University - Bloomington, USA), Scott Beason (Computer Sciences Corporation, USA), Geoffrey Fox (Indiana University - Bloomington, USA), Mina Rho (Indiana University - Bloomington, USA) and Haixu Tang (Indiana University - Bloomington, USA)
Copyright: © 2013 |Pages: 35
DOI: 10.4018/978-1-4666-3604-0.ch016

Abstract

Data intensive computing, cloud computing, and multicore computing are converging as frontiers to address massive data problems with hybrid programming models and/or runtimes including MapReduce, MPI, and parallel threading on multicore platforms. A major challenge is to utilize these technologies and large-scale computing resources effectively to advance fundamental science discoveries such as those in the Life Sciences. The recently developed next-generation sequencers have enabled large-scale genome sequencing in areas such as environmental sample sequencing, leading to metagenomic studies of collections of genes. Metagenomic research is just one of the areas that present a significant computational challenge because of the amount and complexity of data to be processed. This chapter discusses the use of innovative data-mining algorithms and new programming models for several Life Sciences applications. The authors particularly focus on methods that are applicable to large data sets coming from high-throughput devices of steadily increasing power. They show results for both clustering and dimension reduction algorithms, and for the use of MapReduce on modest-size problems. They identify two key areas where further research is essential, and propose to develop new O(N log N) complexity algorithms suitable for the analysis of millions of sequences. They suggest Iterative MapReduce as a promising programming model combining the best features of MapReduce with those of high-performance environments such as MPI.

Introduction

Overview

Data intensive computing, cloud computing, and multicore computing are converging as frontiers to address massive data problems with hybrid programming models and/or runtimes including MapReduce, MPI, and parallel threading on multicore platforms. A major challenge is to utilize these technologies and large-scale computing resources effectively to advance fundamental science discoveries such as those in the Life Sciences. The recently developed next-generation sequencers have enabled large-scale genome sequencing in areas such as environmental sample sequencing, leading to metagenomic studies of collections of genes. Metagenomic research is just one of the areas that present a significant computational challenge because of the amount and complexity of data to be processed.

This chapter builds on research we have performed (Ekanayake, Gunarathne, & Qiu, Cloud Technologies for Bioinformatics Applications, 2010) (Ekanayake J., et al., 2009) (Ekanayake, Pallickara, & Fox, MapReduce for Data Intensive Scientific Analyses, 2008) (Fox, et al., 2009) (Fox, Bae, Ekanayake, Qiu, & Yuan, 2008) (Qiu, et al., 2009) (Qiu & Fox, Data Mining on Multicore Clusters, 2008) (Qiu X., Fox, Yuan, Bae, Chrysanthakopoulos, & Nielsen, 2008) (Twister, 2011) on the use of Dryad (Microsoft’s MapReduce) (Isard, Budiu, Yu, Birrell, & Fetterly, 2007) and Hadoop (open source) (Apache Hadoop, 2009) to address problems in several areas, such as particle physics and biology. The latter often have the striking all-pairs (or doubly data parallel) structure highlighted by Thain (Moretti, Bui, Hollingsworth, Rich, Flynn, & Thain, 2009). We discuss here work on new algorithms in the “Innovations in Algorithms for Data Intensive Computing” section, and new programming models in the “Innovations in Programming Models Using Cloud Technologies” and “Iterative MapReduce with Twister” sections.
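The “all pairs” (doubly data parallel) structure mentioned above can be sketched in a few lines. This is an illustrative sketch, not code from Dryad, Hadoop, or the chapter’s pipeline: the kernel here is a toy dissimilarity, standing in for a real pairwise alignment, and the thread pool stands in for the block-decomposed map tasks a real deployment would use.

```python
from itertools import combinations
from concurrent.futures import ThreadPoolExecutor

def score(pair):
    """Toy symmetric kernel. In the applications discussed in this chapter,
    this slot would hold a pairwise sequence alignment such as
    Smith-Waterman-Gotoh."""
    a, b = pair
    return (a, b, abs(len(a) - len(b)))  # placeholder 'dissimilarity'

def all_pairs(items, workers=4):
    """'All pairs' pattern: each of the N*(N-1)/2 unordered pairs is an
    independent task, so the kernel calls can be partitioned freely across
    map tasks (Hadoop/Dryad) or, as here, a local thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score, combinations(items, 2)))
```

Because no task depends on another, the only coordination needed is collecting the results, which is why this structure maps so naturally onto MapReduce-style runtimes.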

We have robust parallel implementations of dimension reduction and Deterministic Annealing clustering, and a matching visualization package. We also have parallel implementations of two major dimension reduction algorithms, the SMACOF approach to MDS and Generative Topographic Mapping (GTM), described in the “Innovations in Algorithms for Data Intensive Computing” section. MDS is O(N²) and GTM is O(N), but since GTM requires the points to have (high-dimensional) vectors associated with them, only MDS can be applied to most sequence data. Also, since simultaneous multiple sequence alignment (MSA) is impractical for interesting biological datasets, MDS is the better approach to dimension reduction for sequence samples, because it only requires that sequences be aligned independently in pairs to calculate their dissimilarities. GTM, on the other hand, is attractive for analyzing high-dimensional database records, where well-defined vectors are associated with each point, in our case each database record. The distance calculations (Smith-Waterman-Gotoh), MDS, and clustering are all O(N²), so they will not scale properly to multi-million-sequence problems, and the hierarchical operations needed to address this are currently supported for MDS and clustering only in a clumsy manual fashion. In the final part of the “Innovations in Algorithms for Data Intensive Computing” section, we propose a new multiscale (hierarchical) approach to MDS that could reduce the complexity from O(N²) to O(N log N), using ideas related to approaches already well understood for O(N²) particle dynamics problems.
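To make the O(N²) per-sweep cost of MDS concrete, here is a minimal sketch of the SMACOF iteration with unit weights and a fixed iteration count. It is a small serial illustration, not the chapter’s parallel implementation, which distributes the N×N dissimilarity matrix and adds proper convergence control.

```python
import numpy as np

def smacof(D, dim=2, iters=1000, eps=1e-9, seed=0):
    """Minimal SMACOF iteration for metric MDS (unit weights).
    D is an N x N symmetric dissimilarity matrix; every sweep touches all
    N^2 pairs, which is the scaling bottleneck discussed in the text."""
    N = D.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, dim))            # random initial embedding
    for _ in range(iters):
        # Pairwise Euclidean distances of the current embedding
        diff = X[:, None, :] - X[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(dist, 1.0)              # avoid divide-by-zero
        B = -D / np.maximum(dist, eps)           # off-diagonal entries of B(X)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))      # rows of B(X) sum to zero
        X = B @ X / N                            # Guttman transform
    return X
```

Each iteration is dominated by the dense pairwise-distance computation and the matrix product, both O(N²) for a low fixed target dimension; the multiscale approach proposed later aims to avoid evaluating all pairs explicitly.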

In the “Innovations in Programming Models Using Cloud Technologies” and “Iterative MapReduce with Twister” sections, we chose to focus on MapReduce frameworks, as these stem from the commercial information retrieval field, which is perhaps currently the world’s most demanding data analysis problem. Exploiting commercial approaches offers a good chance of achieving high-quality, robust environments, and MapReduce has a mixture of commercial and open source implementations. In particular, we have looked at MapReduce and MPI, and shown how to analyze biological samples with modest numbers of sequences on a modern 768-core, 32-node cluster. We have learnt that MapReduce currently cannot efficiently perform the clustering and MDS (Multidimensional Scaling) steps, even though the corresponding MPI implementations only need reduction and broadcast operations and so architecturally fit the functions supported in MapReduce. In addition, since we need to support iterative operations, we propose the use of a modified MapReduce framework called Twister. An early prototype, described in the “Iterative MapReduce with Twister” section, has been run on kernels but, as of this time, not on complete bioinformatics applications. Research issues include fault tolerance, performance, and support for existing MPI programs, with the Twister runtime supporting a subset of MPI calls. We also demonstrate in the “Innovations in Programming Models Using Cloud Technologies” section how we take the “all-pairs” or “doubly data parallel” computations in two important bioinformatics sequencing applications, and use the results to compare two implementations of MapReduce (Dryad and Hadoop) with MPI. We describe an interesting technology developed to support rapid changes of the operating environment of our clusters. We focus on the effects of inhomogeneous data and set the scene for the discussion of Twister in the “Iterative MapReduce with Twister” section.
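The iterative pattern that motivates Twister can be illustrated with k-means clustering, a standard example of an iterative MapReduce computation: the same map and reduce are applied round after round, with the small reduced result (the centroids) broadcast back to the next iteration. The sketch below runs locally and its function names are illustrative, not Twister’s API; in Twister the map tasks stay resident across iterations so that only the centroid vector, not the data, moves between rounds.

```python
from collections import defaultdict

def nearest(point, centroids):
    """Index of the closest centroid (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda k: sum((p - c) ** 2
                                 for p, c in zip(point, centroids[k])))

def kmeans_iterative_mapreduce(points, centroids, iterations=10):
    """Iterative MapReduce pattern: map assigns each point to its nearest
    centroid, reduce averages each group into a new centroid, and the
    reduced result is fed ('broadcast') into the next iteration."""
    for _ in range(iterations):
        groups = defaultdict(list)              # 'shuffle' keyed by cluster
        for p in points:                        # map phase
            groups[nearest(p, centroids)].append(p)
        new_centroids = []
        for k in range(len(centroids)):         # reduce phase, one key each
            members = groups.get(k)
            if members:
                new_centroids.append(tuple(sum(xs) / len(members)
                                           for xs in zip(*members)))
            else:
                new_centroids.append(centroids[k])  # keep empty cluster fixed
        centroids = new_centroids
    return centroids
```

On a vanilla MapReduce runtime, each such iteration is a separate job that reloads the input data; avoiding that per-iteration overhead is precisely the gap Twister is designed to close.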
One of the biological applications, sequence assembly with Cap3, is purely “Map” and has no reduction operation. The other, the calculation of Smith-Waterman dissimilarities between sequences, has a significant reduction phase that concentrates data for the later MDS and clustering steps.
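For concreteness, here is the basic Smith-Waterman local alignment recurrence with a simple linear gap penalty. The pipeline described in this chapter uses the Gotoh variant with affine gap costs; this simplified sketch (with illustrative scoring parameters) shows the O(len(a)·len(b)) cost of one pairwise comparison that, applied to all pairs, yields the O(N²) distance stage.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score, linear gap penalty.
    Returns the best local alignment score between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # dynamic programming matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # restart: local alignment
                          H[i - 1][j - 1] + s,    # match / mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best
```

The raw score can then be converted into a dissimilarity (for example by normalizing against the self-alignment scores) before being handed to the MDS and clustering stages.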
