Data Science and Distributed Intelligence

Alfredo Cuzzocrea (ICAR-CNR and University of Calabria, Italy) and Mohamed Medhat Gaber (Robert Gordon University, Aberdeen, UK)
DOI: 10.4018/978-1-4666-5888-2.ch166

Chapter Preview



The two terms Big Data (Stonebraker & Hong, 2012) and MapReduce (Dean & Ghemawat, 2008) have dominated the intelligent data analysis field during the last two years. They are, in fact, the cause and effect of the rapid growth of data observed in the digital world. The phenomenon of very large databases and very high-rate streaming data has recently been coined Big Data. The two largest databases at Amazon account for 42 terabytes of data in total, and YouTube receives at least 65,000 new videos per day. Such figures grow every day, and people are literally drowning in high waves of data. Making sense of this data has become more important than ever in the knowledge era. At the birth of learning from data streams, Muthukrishnan, in his later-published book (Muthukrishnan, 2005), defined data streams as “data arriving in a high rate that challenges our computation and communication capabilities.” This definition is even truer now than it was then: despite continuous advances in our computation and communication capabilities, data has grown much faster, and the problem has become even more challenging. As a natural reaction to this worsening, a number of advanced techniques for data streams have been proposed, ranging from compression paradigms (e.g., Cuzzocrea et al., 2004a, 2004b, 2005; Cuzzocrea & Chakravarthy, 2010), mainly inherited from previous experience in OLAP data cube compression (e.g., Cuzzocrea, 2005; Cuzzocrea & Serafino, 2009), to intelligent approaches that successfully exploit the nature of such data sources, such as their multidimensionality, to gain effectiveness and efficiency during the processing phases (e.g., Cuzzocrea, 2009), and to recent initiatives capable of dealing with complex characteristics of such data sources, such as their uncertainty and imprecision, as dictated by modern stream application settings (e.g., social networks, the Sensor Web, Clouds; Cuzzocrea, 2011).

Addressing such challenges has kept Data Mining and Machine Learning practitioners and researchers busy exploring possible solutions. MapReduce has emerged as a potentially effective solution for dealing with large datasets, by enabling the breakdown of the main process into smaller tasks. Each of these tasks can be performed in either a parallel or a distributed processing mode. This speeds up complex data processing tasks, in an attempt to keep pace with the high-speed, large-volume data generated by scientific applications (Jiang et al., 2010), such as in the promising contexts of analytics over large-scale multidimensional data (e.g., Cuzzocrea et al., 2011) and large-scale sensor network data processing (e.g., Yu et al., 2012). With Big Data and MapReduce at the front of the scene, a new term describing the process of dealing with very large datasets has been coined: Data Science.
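The breakdown described above can be illustrated with a minimal, single-machine sketch of the MapReduce pattern applied to the classic word-count job (the chunk contents and function names are illustrative, not taken from the chapter); in a real deployment each map task would run on a different node, but the two-phase structure is the same:

```python
from collections import defaultdict

def map_phase(chunk):
    """Map task: emit (word, 1) pairs for one chunk of input."""
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    """Reduce task: sum the counts emitted for each key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Each chunk stands in for a split of a large dataset; the map
# tasks are independent and could be scheduled on separate nodes.
chunks = ["big data big", "data science"]
intermediate = []
for chunk in chunks:
    intermediate.extend(map_phase(chunk))
result = reduce_phase(intermediate)
# result == {"big": 2, "data": 2, "science": 1}
```

Because the map tasks share no state, the framework can parallelize or distribute them freely; only the reduce phase needs to see all pairs for a given key, which is what the shuffle stage of a real MapReduce runtime provides.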

In line with this, when such datasets are processed on top of a service-oriented infrastructure such as the novel Cloud Computing one (Agrawal et al., 2011), the terms “Database as a Service” (DaaS) (Hacigumus et al., 2002) and “Infrastructure as a Service” (IaaS) arise, and it has become critical to understand how Data Science can be coupled with distributed, service-oriented infrastructures and their novel and promising computational metaphors. Hence, given the inherently distributed nature of computational infrastructures such as Clouds (but also Grids; Foster et al., 2001), it is natural to view Distributed Intelligence as the most natural underlying paradigm for novel Data Science challenges.

Key Terms in this Chapter

MapReduce: A programming model that uses a divide-and-conquer method to speed up the processing of large datasets, with a special focus on semi-structured data.

Big Data: A collection of models, techniques and algorithms that aim at representing, managing, querying and mining large-scale amounts of data (mainly semi-structured data) in distributed environments (e.g., Clouds).

Cloud Computing: A computational paradigm that aims at supporting large-scale, high-performance computing in distributed environments via innovative metaphors such as resource virtualization and de-location.

Distributed Intelligence: A paradigm that defines models, techniques and algorithms for supporting intelligent representation, management, querying and mining of large-scale amounts of data in distributed environments.

Data Science: A collection of models, techniques and algorithms that focus on the issues of gathering, pre-processing, and making sense of large repositories of data, which are seen as “data products.”

Data Warehousing: A collection of models, techniques and algorithms for storing, managing and processing large amounts of data according to a global and multidimensional vision of data.

OLAP (Online Analytical Processing): A collection of models, techniques and algorithms for supporting multidimensional, multi-level and multi-resolution analysis of large amounts of data.
