Survey of the Different Type of Data Analytics Algorithms

Michael Kevin Hernandez (Boeing, Charleston, SC, USA)
DOI: 10.4018/IJSITA.2017010104

Abstract

Since at least 1854, society has been gathering, processing, transforming, modeling, and visualizing data to drive data-driven decisions. Qualitatively, big data can be defined as data of high volume, velocity, and variety, whereas any quantitative definition of big data varies over time because it depends on the technology and processing capabilities of the day. To make use of big data in facilitating data-driven decisions, one should employ descriptive, predictive, or prescriptive analytics. This article discusses and summarizes the advantages and disadvantages of the algorithms that fall under descriptive and predictive analytics. Given the sheer number of different types of algorithms and the amount of versatile data mining software available, the best big data analytics results can sometimes come from combining two or three of the algorithms mentioned.
Article Preview

History Of Data Analytics

Data analytics existed well before 1854. Snow (1854) had a theory of how cholera outbreaks occur, and he used that theory to have the handle removed from a water pump that had become contaminated in the summer of 1854. Setting out to prove his hypothesis on how cholera epidemics originated, he drew his famous spot maps for the Board of Guardians of St. James’ parish in December 1854. These maps were shown in the eventual second edition of his book “On the Mode of Communication of Cholera” (Brody, Rip, Vinten-Johansen, Paneth, & Rachman, 2000; Snow, 1855). As Brody et al. (2000) stated, this case was one of the first famous examples of a theory being proven by data, although spot maps had been used earlier.

However, geospatial data analytics alone can be quite limiting in reaching a conclusive result if there is no underlying theory as to why the data are being recorded (Brody et al., 2000). Adding subject matter knowledge and subject matter relationships before data analytics gives the data context, which can help yield better results (Garcia, Ferraz, & Vivacqua, 2009). In the case of Snow’s analysis, anyone could have argued that the atmosphere in that region of London was causing the outbreak. However, because Snow’s original hypothesis concerned the transmission of cholera through water distribution systems, the data helped support his hypothesis (Brody et al., 2000; Snow, 1854). Thus, the suboptimal results generated from the outdated Edisonian-esque test-and-fail methodology can prove very costly in research and development compared with the results and insights gained from text mining and manipulation techniques (Chonde & Kumara, 2014).
