Big Data Analytics: Academic Perspectives

Muhammad D. Abdulrahman, Nachiappan Subramanian, Hing Kai Chan, Kun Ning
Copyright: © 2017 | Pages: 12
DOI: 10.4018/978-1-5225-0956-1.ch001

Abstract

This chapter discusses scholarly views on big data analytics with respect to the challenges of visualization and data-driven research in smart cities and ports. Academic experts share the prominent challenges and emerging research on structuring data, data mining algorithms and visualization, based on their ongoing research experience. Scholars agreed that the ability to analyse huge volumes of data at once is critical to the adoption and success of big data research and the utilization of its findings, particularly for entities with highly dynamic and complex demands such as cities and ports. It was noted that developing robust ways of handling and cleaning qualitative social media data, as well as finding well-trained and highly skilled human resources in all aspects of big data analysis and interpretation, remain major challenges.
Chapter Preview

Introduction

This chapter brings together academic perspectives on Data-Driven Research for Supply Chain Management. It includes scholars' presentations from the 9th International Conference on Operations and Supply Chain Management, held in Ningbo, China from 12-15 July 2015. Prominent challenges identified in the data analytics process include structuring data, visualization, data analytics and data-driven research in smart cities and ports.

A unique area with significant big data utilization is smart city development. Smart cities are being planned all over the world. A smart city, in essence, is a city that provides a high-quality and efficient life to its inhabitants through resource optimization (Calvillo et al., 2016). A key aspect of any smart city project is accurate and wide-ranging data collection. The expansion of big data and the evolution of Internet of Things (IoT) technologies have played an important role in making smart city initiatives feasible. Data collection requirements in smart city projects span many areas, such as security, transportation, logistics, shipping and IoT. Once collected, the data needs to be analysed; accurate analysis and interpretation are critical to the successful application and utilization of big data. A key aspect of big data analysis is the data structure.

A data structure is a term usually used to refer to systematic forms of organizing data (Tsitchizris & Lochovsky, 1982). As a concept, the data structure matters when analysing information, and it has long been used in the research and practice of the information disciplines (Beynon-Davies, 2016). The Computer Sciences Corporation (CSC) lists five top trends that will affect future supply chains: mobile communication, cloud computing, intelligent robots, manufacturing digitalization and deep learning. Firstly, computer science, which has already changed how we do things and communicate, will increasingly be embedded in man-made physical objects; in the near future most things, if not all, will be computerised. The second is statistics and its applications for understanding key issues and making decisions. There is an ongoing convergence between existing technologies.

Company C sells expensive graphics cards for gaming to students as part of its business portfolio, and engages some of those students in creating graphics for game players. In 2004 the company learned that a researcher at Stanford University had found that the graphics processing unit (GPU) could be used for general computation, and it immediately turned this into a business. The GPU has since become a computational engine used in simulation, computational finance and weather modelling. Company C is familiar with high-performance computing and features in the TOP500, the list of the 500 most powerful non-distributed computer systems in the world. TOP500 supercomputers consist of thousands of processing units in a single system, and GPUs are widely used for this purpose. There has thus been tremendous growth in the use of the GPU as a simulation engine.
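To make the idea of the GPU as a general computational engine concrete, the following is a minimal sketch (not from the chapter) of offloading a simple simulation loop to a GPU with the CuPy library; the grid size and the diffusion-style update rule are illustrative assumptions.

# Minimal sketch: a simulation-style computation offloaded to the GPU with CuPy.
# The 1-D heat-diffusion update and problem size are illustrative assumptions.
import cupy as cp  # GPU-backed, NumPy-compatible array library

n = 1_000_000
u = cp.zeros(n, dtype=cp.float32)
u[n // 2] = 1.0  # a single point heat source in the middle of the grid

alpha = 0.25  # diffusion coefficient (illustrative value)
for _ in range(100):
    # Element-wise updates over a million cells run in parallel
    # across the GPU's thousands of cores.
    u[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])

result = cp.asnumpy(u)  # copy the result back to host (CPU) memory
print(result[n // 2 - 2 : n // 2 + 3])

Apart from the final host copy, the same code runs on the CPU with numpy in place of cupy, which illustrates why the GPU has become attractive as a drop-in simulation engine.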

The GPU is also used for MapReduce, which enjoys wide publicity and usage in today's big data processing. A MapReduce program combines a filtering and sorting step with an associated summary operation, such as counting the number of items in a queue or yielding item frequencies. Essentially, it is a framework that allows massive data sets to be analysed. One of the challenges with large data sets containing millions of unstructured records is the difficulty of pooling and acquiring such data. A development by an MIT student shows that it is now possible to analyse complex and dynamic situations. During the Arab Spring (the revolutionary wave of demonstrations and protests that started in Tunisia in December 2010), an MIT research student wanted to analyse the tweets of those involved in the uprising but was frustrated to find that his database was extremely slow. To overcome this, the student simply developed his own database that allowed the kind of processing speed he wanted. The student, as expected, quickly became well known; his database company grew and attracted multimillion-dollar purchase offers from both Google and Facebook. The student is reported to be waiting for a billion-dollar offer before selling.
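As an illustration of the filter-and-sort (map) plus summary (reduce) pattern described above, here is a minimal single-machine word-count sketch in Python; real MapReduce frameworks such as Hadoop distribute these phases across many nodes, and the phase names used here are our own, not any framework's API.

# Minimal single-process sketch of the MapReduce word-count pattern.
# Real frameworks distribute these phases over many machines; the helper
# names below are illustrative, not taken from any particular library.
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    for word in document.lower().split():
        yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: sort pairs by key so identical words form adjacent groups.
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

def reduce_phase(groups):
    # Reduce: sum the counts within each group to get word frequencies.
    return {word: sum(n for _, n in group) for word, group in groups}

documents = ["big data needs big tools", "data tools for big data"]
pairs = (pair for doc in documents for pair in map_phase(doc))
print(reduce_phase(shuffle_phase(pairs)))
# {'big': 3, 'data': 3, 'for': 1, 'needs': 1, 'tools': 2}

Counting items and yielding item frequencies, as mentioned above, is exactly what the reduce step does once the map step has tagged each item.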
