Big Data Computation Model for Landslide Risk Analysis Using Remote Sensing Data

Venkatesan M. (National Institute of Technology Karnataka, India) and Prabhavathy P. (VIT University, India)
DOI: 10.4018/978-1-5225-3643-7.ch002


Effective and efficient strategies to acquire, manage, and analyze data lead to better decision making and competitive advantage. The development of cloud computing and the big data era poses challenges to traditional data mining algorithms: the processing capacity, architecture, and algorithms of traditional database systems cannot cope with big data analysis. Big data are now growing rapidly in all science and engineering domains, including the biological and biomedical sciences and disaster management. Their complexity poses an extreme challenge for discovering useful knowledge. Spatial data is a complex form of big data. The aim of this chapter is to propose a multi-ranking decision tree big data approach to handle complex spatial landslide data. The proposed classifier's performance is validated on a massive real-time dataset. The results indicate that the classifier exhibits both time efficiency and scalability.
Chapter Preview


Very large amounts of geo-spatial data lead to complex relationships, which creates challenges for today's data mining research. Recent scientific advances have produced a flood of data from distinctive domains such as healthcare, scientific sensors, user-generated content, the Internet, and disaster management. Big data is data that exceeds the processing capacity of conventional database systems: the data is too big, moves too fast, or does not fit the strictures of existing database architectures. For instance, big data is commonly unstructured and requires more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing. Hadoop is a platform for distributing computing problems across a number of servers. First developed and released as open source by Yahoo, it implements the MapReduce approach pioneered by Google in compiling its search indexes. Hadoop's MapReduce involves distributing a dataset among multiple servers and operating on the data: the "map" stage. The partial results are then recombined: the "reduce" stage. To store data, Hadoop uses its own distributed file system, HDFS, which makes data available to multiple computing nodes.
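The map and reduce stages described above can be sketched in plain Python. This is a minimal single-process illustration of the programming model, not Hadoop itself; the `(region, rainfall)` records are hypothetical stand-ins for data that would normally be spread across HDFS blocks on many nodes.

```python
from collections import defaultdict

# Hypothetical input records: (region_id, rainfall_mm) pairs. On a real
# cluster these would be partitioned across servers before the map stage.
records = [("A", 120), ("B", 45), ("A", 200), ("B", 30), ("C", 80)]

# "Map" stage: each record is transformed into a (key, value) pair
# independently, which is what lets the work run in parallel.
mapped = [(region, rainfall) for region, rainfall in records]

# Shuffle: group intermediate values by key (done by the framework in Hadoop).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# "Reduce" stage: recombine the partial results for each key.
totals = {key: sum(values) for key, values in groups.items()}
print(totals)  # {'A': 320, 'B': 75, 'C': 80}
```

The same map/shuffle/reduce shape applies whether the reducer is summing rainfall, counting words, or aggregating partial classifier statistics.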

Natural disasters such as hurricanes, earthquakes, erosion, tsunamis, and landslides cause countless deaths and fearsome damage to infrastructure and the environment. Landslides are one of the major problems in hilly areas, and landslide risk can be identified using different methods based on GIS technology. In Ooty, in the Nilgiri district, landslides occurred due to heavy rainfall and frequent modification of land-use features. The damage could have been reduced if more had been known about forecasting and mitigation. So far, few attempts have been made to predict these landslides or to prevent the damage they cause. Previous studies applied various approaches to such problems, which show that landslides are difficult to understand and tricky to predict accurately. To analyze them, various factors such as rainfall, geology, slope, land use/land cover, soil, and geomorphology are considered, and the relevant thematic layers are prepared in GIS for landslide susceptibility mapping. Data collected from research institutes working on landslides helped to predict and analyze landslide susceptibility. Spatial landslide data is a complex form of big data. To handle such a large amount of landslide data, the weighted decision tree approach of the previous study is improved, and a Multi Ranking Decision Tree Classifier is proposed using the MapReduce programming model.
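To make the factor-based susceptibility idea concrete, the sketch below combines normalized thematic-layer scores for one grid cell with fixed weights. This is illustrative only: the factor names follow the paragraph above, but the weights, thresholds, and scoring are hypothetical and are not the chapter's Multi Ranking Decision Tree Classifier.

```python
# Hypothetical weights for the thematic layers named in the text; in practice
# these would be learned or assigned by the classifier, not hand-picked.
weights = {"rainfall": 0.30, "slope": 0.25, "geology": 0.15,
           "land_use": 0.15, "soil": 0.10, "geomorphology": 0.05}

def susceptibility(cell):
    """Weighted sum of normalized (0-1) factor scores for one grid cell."""
    return sum(weights[f] * cell[f] for f in weights)

# Example cell: heavy rainfall and steep slope dominate the score.
cell = {"rainfall": 0.9, "slope": 0.8, "geology": 0.4,
        "land_use": 0.7, "soil": 0.5, "geomorphology": 0.3}
score = susceptibility(cell)
label = "high" if score >= 0.6 else "moderate" if score >= 0.4 else "low"
print(round(score, 2), label)  # 0.7 high
```

In a MapReduce setting, the map stage would score each cell independently and the reduce stage would aggregate cells into a susceptibility map, which is what makes the approach scale to massive spatial datasets.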
