Modelling and Assessing Spatial Big Data: Use Cases of the OpenStreetMap Full-History Dump

Alexey Noskov, A. Yair Grinberger, Nikolaos Papapesios, Adam Rousell, Rafael Troilo, Alexander Zipf
Copyright: © 2019 | Pages: 29
DOI: 10.4018/978-1-5225-7927-4.ch002

Abstract

Many methods for the intrinsic quality assessment of spatial data are based on the OpenStreetMap full-history dump. Typically, such analysis is conducted at a high level; few approaches take the low-level properties of the data files into account. In this chapter, a low-level data-type analysis is introduced. It offers a novel framework for overviewing big data files and assessing the provenance (lineage) of full-history data. The developed tools generate tables and charts that facilitate the comparison and analysis of datasets. The resulting data also helped to develop a universal data model for optimally storing OpenStreetMap full-history data in a relational database. Databases for several pilot sites were evaluated through two use cases. First, a number of intrinsic data quality indicators and related metrics were implemented. Second, a framework for the inventory of the spatial distribution of massive data uploads is discussed. Both use cases confirm the effectiveness of the proposed data-type analysis and the derived relational data model.
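The chapter's actual data model is not reproduced in this preview. As a minimal sketch of what a relational store for OSM full-history data might look like, the following Python/SQLite snippet keeps one row per element version, so the full lineage of each feature is queryable. All table and column names are illustrative assumptions, not the authors' schema; only the standard OSM element attributes (id, version, timestamp, changeset, user, visibility, coordinates, tags) are assumed.

```python
import sqlite3

# Hypothetical minimal schema for OSM full-history data: every edit to
# an element becomes one row, preserving the element's full lineage.
conn = sqlite3.connect("osm_history.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS elements (
    element_type TEXT NOT NULL,      -- 'node', 'way', or 'relation'
    element_id   INTEGER NOT NULL,
    version      INTEGER NOT NULL,
    timestamp    TEXT NOT NULL,      -- ISO 8601 edit time
    changeset_id INTEGER,
    user_id      INTEGER,
    visible      INTEGER,            -- 0 when this version is a deletion
    lat          REAL,               -- populated for nodes only
    lon          REAL,
    PRIMARY KEY (element_type, element_id, version)
);
CREATE TABLE IF NOT EXISTS tags (
    element_type TEXT NOT NULL,
    element_id   INTEGER NOT NULL,
    version      INTEGER NOT NULL,
    key          TEXT NOT NULL,
    value        TEXT
);
""")

# Example lineage query: how many versions (edits) each node has
# accumulated -- a typical input to intrinsic quality indicators.
rows = conn.execute("""
    SELECT element_id, COUNT(*) AS n_versions
    FROM elements WHERE element_type = 'node'
    GROUP BY element_id ORDER BY n_versions DESC LIMIT 10
""").fetchall()
```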

Background

Nowadays, the processing of big data files often focuses on log data. Kreps et al. (2011) discussed this problem and proposed their solution. They noted that a vast amount of log data is produced by large internet services. For instance, China Mobile collects up to 8TB of phone call records every day, while Facebook harvests about 6TB of data related to user activity. Accordingly, various companies deliver distributed log aggregators, such as Facebook's Scribe, Yahoo's Data Highway, and Cloudera's Flume.

Such solutions can be described as traditional enterprise messaging systems; they act as an event bus for processing asynchronous data flows. For instance, IBM WebSphere MQ allows applications to insert messages into multiple queues atomically. Some systems do not support batching numerous messages into a single request, which raises performance issues. To resolve this, solutions like Facebook's Scribe aggregate logs separately and then periodically dump them to HDFS. Various similar solutions are offered by Cloudera, Yahoo, and LinkedIn. All of these applications can be described as "messaging systems," and modern messaging systems support asynchronous distributed logging and processing.
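To make the batching point concrete, the sketch below shows the pattern in miniature: a hypothetical aggregator buffers individual log records and flushes them as a single batch once a size or age threshold is reached, amortizing per-request overhead the way Scribe-style aggregators do before dumping to HDFS. All class and parameter names are illustrative, and a local file stands in for the HDFS sink.

```python
import json
import time

class BatchingLogAggregator:
    """Hypothetical sketch of a Scribe-style aggregator: records are
    buffered and written as one batch, avoiding the per-message
    request overhead noted above."""

    def __init__(self, sink_path, max_batch=1000, max_age_s=5.0):
        self.sink_path = sink_path   # stand-in for an HDFS sink
        self.max_batch = max_batch   # flush after this many records
        self.max_age_s = max_age_s   # ...or after this many seconds
        self.buffer = []
        self.oldest = None

    def append(self, record: dict) -> None:
        if self.oldest is None:
            self.oldest = time.monotonic()
        self.buffer.append(record)
        too_big = len(self.buffer) >= self.max_batch
        too_old = time.monotonic() - self.oldest >= self.max_age_s
        if too_big or too_old:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        # One write per batch instead of one write per message.
        with open(self.sink_path, "a", encoding="utf-8") as sink:
            for record in self.buffer:
                sink.write(json.dumps(record) + "\n")
        self.buffer.clear()
        self.oldest = None

agg = BatchingLogAggregator("aggregated.log")
for i in range(2500):
    agg.append({"event": "request", "seq": i})
agg.flush()  # drain whatever remains in the buffer
```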

Key Terms in this Chapter

Data Mining: The process of discovering patterns in large datasets using methods at the intersection of machine learning, statistics, and database systems.

IDE: Integrated development environment.

Hadoop (Apache Hadoop): A software framework developed within the Apache project; it enables distributed computation across numerous detached virtual and physical servers called nodes.

GUI: Graphical user interface.

MapReduce: A programming model for processing and generating large datasets in a parallel and distributed manner on a cluster (see the sketch after this list).

GIS: Geographic information systems.

D3 (or D3.js): A JavaScript library for generating interactive data visualizations, mainly charts, based on the SVG, HTML5, and CSS web standards.

Log Data (Log Files): Data files containing information recorded sequentially from oldest to newest (e.g., debug information produced by an application, user requests registered by a web server).

HDFS: A distributed file system developed as part of the Apache Hadoop project.
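As a concrete illustration of the MapReduce term above, the following is a minimal single-process sketch of the model using the canonical word-count example. In a real MapReduce system the map calls, the shuffle, and the reduce calls are distributed across cluster nodes; only the dataflow is shown here, and all names are illustrative.

```python
from collections import defaultdict

def map_phase(document: str):
    # map: emit (key, value) pairs, here (word, 1)
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(word: str, counts):
    # reduce: combine all values that share a key
    return word, sum(counts)

documents = ["big data spatial data", "spatial big data analysis"]

# shuffle: group intermediate values by key
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        grouped[word].append(count)

result = dict(reduce_phase(w, c) for w, c in grouped.items())
print(result)  # {'big': 2, 'data': 3, 'spatial': 2, 'analysis': 1}
```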
