1. Introduction
In this era, big data is driving a revolution in the day-to-day activities of social media, healthcare, banking, the military, and industry. The core difficulties of big data appear in its volume, variety, and velocity: controlling the data now generated by both humans and machines is no longer easy with older techniques, and the diversity of formats poses a major problem. Moreover, the speed of retrieving and accessing data from a data warehouse presents serious challenges for stream processing. Big data can be processed in batch, periodic, near-real-time, or real-time modes, and these modes place conflicting demands on cluster configuration. Batch processing does not support iterative, multi-pass operations. Storing digital data in structured, semi-structured, and unstructured formats is itself a challenging environment, and retrieval times from such clusters are high when the MapReduce method is used. All input is sent as a single pass, that is, as a group of smaller files; with the older methods, multiple passes and real-time data integration are not possible in MapReduce data processing. The Hadoop Distributed File System (HDFS) stores huge volumes of data using a scale-out architecture. Figure 1 shows the challenges and issues that arise in big data processing.
Figure 1. Big Data challenges and Solutions
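The single-pass batch model described above can be sketched in a few lines. The following is a minimal pure-Python illustration of the MapReduce word-count pattern (map, shuffle by key, reduce); the function names and in-memory shuffle are simplifications for illustration, not the actual Hadoop API.

```python
from collections import defaultdict

def mapper(record):
    # Map phase: emit an intermediate (key, value) pair per word
    for word in record.split():
        yield (word, 1)

def reducer(key, values):
    # Reduce phase: aggregate all values that share a key
    return (key, sum(values))

def map_reduce(records):
    # Shuffle: group intermediate pairs by key before reducing.
    # Note the single pass over the input; iterating again would
    # require re-reading the whole data set from storage.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

counts = map_reduce(["big data big", "data stream"])
print(counts)  # {'big': 2, 'data': 2, 'stream': 1}
```

Because each job reads its input once and writes results back to storage, any iterative algorithm must launch a new job, and pay the full I/O cost, for every pass.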
To reduce latency in big data processing, many tools and frameworks have been used, such as Apache Hadoop and Apache Spark. MapReduce and HDFS have been applied to the data-retrieval problem in large data sets (McCreadie et al., 2012), but the MapReduce phases take considerable time to complete multiple jobs, and storing and retrieving data from HDFS is likewise slow. Many data-mining concepts and algorithms provide partial solutions to this problem, and at best they achieve near-real-time processing; when a truly real-time retrieval scenario arises, none of these techniques addresses the time consumed in data retrieval. Ultimately, big data analytics can be framed by the CAP (Consistency, Availability, and Partition tolerance) theorem and Shared-Nothing Architecture (SNA) (Duggal & Paul, 2013). When big data is processed with MapReduce, machine-learning algorithms are used to segregate tasks based on their metadata. The work in the Map phase is divided into smaller tasks, which are processed by the mapper() function (Alfonseca et al., 2013). Separate keys are assigned to the tasks submitted by clients and are shared among the mapped function elements; various algorithms (Velusamy et al., 2013; R. Somula & Sasikala, 2019) have been used to share those keys, with computational logic and other mechanisms applied for security. Under these conditions, data processing in HDFS with MapReduce is cumbersome. Data-mining techniques such as association rules, K-means, and nearest-neighbor clustering likewise do not provide real-time data retrieval. Apache Spark offers a solution for real-time data processing through in-memory analytics: in Spark, the database and data-warehouse engines reside on the same block, so it performs very fast compared with the older techniques (R. Somula et al., 2019).
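The in-memory advantage Spark gains over disk-based MapReduce can be illustrated with a small sketch. The `Dataset` class below is a hypothetical toy loosely modeled on the idea behind Spark's `cache()`, not the Spark API: once a data set is cached in memory, repeated (iterative) passes no longer re-read it from storage.

```python
class Dataset:
    """Toy dataset with optional in-memory caching.

    Illustrative only: loosely modeled on the idea behind Spark's
    RDD cache(), not the actual Spark API.
    """

    def __init__(self, loader):
        self._loader = loader   # function that reads records from storage
        self._cache = None
        self.load_count = 0     # how many times storage was actually read

    def _load(self):
        self.load_count += 1    # simulate one full read from disk/HDFS
        return list(self._loader())

    def cache(self):
        # Materialize the records in memory once
        self._cache = self._load()
        return self

    def records(self):
        # Serve from memory when cached; otherwise re-read from storage
        return self._cache if self._cache is not None else self._load()

# MapReduce-style: every iterative pass pays the storage-read cost again
uncached = Dataset(lambda: range(5))
for _ in range(3):
    sum(uncached.records())
print(uncached.load_count)  # 3

# In-memory style: one read, then all passes are served from memory
cached = Dataset(lambda: range(5)).cache()
for _ in range(3):
    sum(cached.records())
print(cached.load_count)  # 1
```

The contrast in `load_count` is the essence of the latency argument above: iterative and multi-pass workloads amplify storage I/O under the disk-based model, while an in-memory engine pays that cost once.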