Scalable Data Mining, Archiving, and Big Data Management for the Next Generation Astronomical Telescopes

Chris A. Mattmann, Andrew Hart, Luca Cinquini, Joseph Lazio, Shakeh Khudikyan, Dayton Jones, Robert Preston, Thomas Bennett, Bryan Butler, David Harland, Brian Glendenning, Jeff Kern, James Robnett
Copyright: © 2014 |Pages: 26
DOI: 10.4018/978-1-4666-4699-5.ch009


Big data as a paradigm focuses on data volume, velocity, and on the number and complexity of data formats and their metadata (data that describes other data). This is nowhere better seen than in the development of the software to support next generation astronomical instruments, including the MeerKAT/KAT-7 Square Kilometre Array (SKA) precursor in South Africa, the Low Frequency Array (LOFAR) in Europe, two instruments led in part by the U.S. National Radio Astronomy Observatory (NRAO), the Expanded Very Large Array (EVLA) in Socorro, NM and the Atacama Large Millimeter Array (ALMA) in Chile, and other instruments such as the Large Synoptic Survey Telescope (LSST) to be built in northern Chile. This chapter highlights the big data challenges in constructing data management systems for these astronomical instruments, specifically the challenges of integrating legacy science codes, handling data movement and triage, building flexible science data portals and user interfaces, allowing for flexible technology deployment scenarios, and automatically and rapidly mediating the differences among science data formats and metadata models. The authors discuss these challenges and then suggest open source solutions to them based on software from the Apache Software Foundation, including Apache Object-Oriented Data Technology (OODT), Tika, and Solr. The authors have leveraged these solutions to effectively and expeditiously build many precursor and operational software systems to handle data from these astronomical instruments and to prepare for the coming data deluge from those not yet constructed. Their solutions are not specific to the astronomical domain and are already applicable to a number of science domains including Earth science, planetary science, and biomedicine.

1. Introduction

The next generation of astronomical telescopes, including MeerKAT/KAT-7 in South Africa (Jonas 2009), the Low Frequency Array (LOFAR) in Europe (De Vos, 2009), the Expanded Very Large Array (EVLA) in Socorro, New Mexico (Perley, 2011), the Atacama Large Millimeter Array (ALMA) in Chile (Wootten, 2003), and eventually, over the next decade, the cross-continental Square Kilometre Array (SKA) (Hall, 2004) and the Large Synoptic Survey Telescope (LSST) in northern Chile (Tyson, 2002), will generate unprecedented volumes of data, ranging from roughly a terabyte (TB) of data per day for the EVLA at the lower bound to roughly 700 TB of data per second for the SKA. These ground-based instruments will push the boundaries of Big Data (Lynch, 2008) (Mattmann, 2013) in several dimensions shown in Table 1. Table 1 represents the common challenges that users, educators, scientists, and other discipline users face when leveraging astronomical data: its size (volume, velocity); its variety of formats (complexity); the geographically distributed nature of these telescopes; and the limitations in bandwidth that prevent wide dissemination of the information to users throughout the world who desire access to it. Big data is the buzzword of the day, used to describe data sets so large and complex that traditional data management systems have difficulty handling them. There are three main challenges when dealing with big data: the amount of data collected (volume), the speed at which data must be analyzed (velocity), and the array of different data formats collected (complexity).
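A back-of-the-envelope calculation makes these rates concrete. The sketch below, assuming the figures quoted in Table 1 (SKA at 700 terabits per second, LOFAR at 138 terabits per day), shows why the SKA is described as producing exabytes in a matter of days:

```python
# Rough data-rate arithmetic, assuming the rates quoted in Table 1.
TERABIT_BYTES = 1e12 / 8     # bytes in one terabit
SECONDS_PER_DAY = 86_400

# SKA: ~700 Tb/s of raw data
ska_bytes_per_s = 700 * TERABIT_BYTES          # ~87.5 TB/s
ska_bytes_per_day = ska_bytes_per_s * SECONDS_PER_DAY

# LOFAR: ~138 Tb/day
lofar_bytes_per_day = 138 * TERABIT_BYTES      # ~17 TB/day

print(f"SKA:   {ska_bytes_per_day / 1e18:.2f} EB/day")
print(f"LOFAR: {lofar_bytes_per_day / 1e12:.1f} TB/day")
```

At roughly 7.6 exabytes (10^18 bytes) per day, the SKA alone would exceed the estimated size of today's Internet within days, which is the volume challenge (C1) in Table 1.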

Table 1.
Big data challenges and their mappings to upcoming or current astronomical instruments. Challenges are labeled as C1, C2 and C3.
Big Data Challenge / Description

C1 (Volume): Across all science domains, the SKA will set the precedent in many ways when it sees first light in 2020 in terms of data volume. For example, it will generate exabytes (10^18 bytes) in days, eclipsing the size of the current Internet in that same time span. LOFAR is already in the petabyte (10^15 bytes) per day range. The EVLA is generating hundreds of terabytes per experiment, and per month. ALMA will generate similar volumes.

C2 (Velocity): Not only are these astronomical instruments generating large volumes, but they are doing so rapidly. For example, the SKA will generate 700 Tb/s; LOFAR is already generating 138 Tb/day; other instruments such as the EVLA are generating on the order of terabytes per day. Some processing stages have larger data rates (e.g., staging of raw instrument measurements), while others (e.g., data reduction) may have comparatively smaller rates.

C3 (Complexity): Each of these ground-based instruments stores data in a number of different formats and metadata models. For example, the EVLA and ALMA store data both in a custom binary, directory-based metadata format called the Measurement Set (MS) and in the FITS format (Hanisch, 2001). Some of these communities, e.g., LOFAR and the SKA South Africa project, have already made the transition to HDF-5 (Fortner, 1998) for their image cubes. The need to automatically facilitate transformations between these different formats is also a characteristic of these projects as Big Data.
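The complexity challenge (C3) begins with simply identifying which format a given data product is in. The chapter's solutions use Apache Tika for this task; the sketch below is only a minimal illustration of the same idea, using the well-known file signatures of the formats named above (the HDF5 superblock signature, the FITS `SIMPLE` primary header keyword, and the fact that a Measurement Set is a directory rather than a flat file):

```python
import os

# Magic bytes for the formats discussed above.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"   # HDF5 superblock signature
FITS_MAGIC = b"SIMPLE"               # first keyword of a FITS primary header


def detect_format(path):
    """Classify an astronomical data product by its on-disk signature.

    A minimal sketch of signature-based detection; real pipelines would
    delegate this to a library such as Apache Tika.
    """
    if os.path.isdir(path):
        # A CASA/EVLA/ALMA Measurement Set is a directory of tables.
        return "measurement-set"
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(HDF5_MAGIC):
        return "hdf5"
    if head.startswith(FITS_MAGIC):
        return "fits"
    return "unknown"
```

Once a product's format is known, the appropriate reader or converter can be dispatched automatically, which is the kind of format mediation these projects require at scale.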
