Storage and Query Processing Architectures for RDF Data

Tanvi Chawla
Copyright: © 2023 | Pages: 16
DOI: 10.4018/978-1-7998-9220-5.ch019

Abstract

The escalating demand for the RDF format in knowledge representation and information management can be attributed to its flexibility. The RDF data model is increasingly used to share and integrate information and knowledge across several domains, including bioinformatics and search engines. As the amount of RDF data continues to grow, managing such large volumes becomes challenging. Scalability is therefore a major concern when handling large-scale RDF data, and scalable solutions become necessary. To address this, many researchers turn to distributed data management systems. In this article, the authors provide a detailed analysis of the RDF data management techniques used to make an RDF system more scalable. The objective of this article is to provide a brief description of centralized and distributed RDF frameworks.

Introduction

Resource Description Framework (RDF), also known as the Semantic Web data model, represents data in the form of triples (subject, predicate, object). RDF data can also be represented as a graph in which subjects and objects are the vertices. An RDF triple corresponds to an edge connecting two vertices, with the predicate (or property) serving as the edge label (Peng et al., 2016). RDF is a W3C-proposed standard used to model objects and represent Semantic Web data. RDF data shares several characteristics with Big Data: Velocity, Volume, Veracity and Variety (Özsu, 2016). As the use of RDF broadens beyond the Semantic Web, handling such large-scale RDF data is becoming quite difficult. The RDF model is used to represent resources on the web and to develop detailed descriptions (also called metadata) for these resources. SPARQL is the W3C-proposed query language for querying RDF data. Databases designed specifically for storing and querying RDF data are known as triplestores or RDF stores; one of the most popular is RDF-3x (Neumann & Weikum, 2008). Unlike relational databases, these triplestores are optimized to store only RDF data and not any other type of data. The SPARQL query language can be used to query RDF data from these stores (Banane & Belangour, 2019).
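To make the triple model concrete, the following sketch (plain Python, with hypothetical example data and abbreviated URIs, not code from the chapter) stores a small RDF graph as subject–predicate–object tuples and matches a simple triple pattern, which is analogous to how a SPARQL engine evaluates a basic graph pattern:

```python
# A tiny RDF graph as (subject, predicate, object) triples.
# Prefixes like "ex:" stand in for full URIs; the data is illustrative.
triples = [
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Bob",   "rdf:type", "ex:Person"),
]

def match(pattern, graph):
    """Return triples matching a pattern; None marks a variable position."""
    s, p, o = pattern
    results = []
    for ts, tp, to in graph:
        if (s is None or s == ts) and \
           (p is None or p == tp) and \
           (o is None or o == to):
            results.append((ts, tp, to))
    return results

# "Which resources are Persons?" -- the subject is left as a variable.
people = match((None, "rdf:type", "ex:Person"), triples)
# people == [("ex:Alice", "rdf:type", "ex:Person"),
#            ("ex:Bob", "rdf:type", "ex:Person")]
```

A real triplestore replaces this linear scan with indexes over permutations of (S, P, O), but the pattern-matching semantics are the same.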

Some of the main challenges in large-scale RDF management are storage and query processing. Applying existing SPARQL query optimization techniques to large RDF data is also a major concern, since many SPARQL queries contain complex joins that need to be executed efficiently over this huge volume of RDF data. Popular RDF engines focusing on query performance include RDF-3x (Neumann & Weikum, 2008), Virtuoso (Erling & Mikhailov, 2010), Hexastore (Weiss et al., 2008) and TripleBit (Yuan et al., 2013). These engines support RDF storage and SPARQL querying (Yuan et al., 2014). They are centralized and thus store RDF data on a single node, which makes them insufficient for large-scale RDF storage and for handling complex queries over such huge amounts of data. As a result, distributed RDF management systems came into existence to improve query performance on large RDF data. These systems partition RDF data among the nodes of a cluster and execute SPARQL queries over the partitioned data in a distributed manner. One of the main issues faced by distributed RDF systems is the cost of partitioning large RDF data (Harbi et al., 2015); these systems can also hit bottlenecks while loading or querying such large RDF data (Cheng & Kotoulas, 2015). Virtuoso Cluster Edition (Erling & Mikhailov, 2010), Clustered TDB (Owens et al., 2009) and 4store (Harris et al., 2009) are examples of distributed RDF systems built on a specialized computer cluster. The disadvantage of these specialized cluster systems is that they require dedicated infrastructure, a limitation that can be overcome by distributed RDF systems that use cloud-based solutions.
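One common (though by no means the only) partitioning strategy in such distributed systems is to hash each triple on its subject, so that all triples describing the same resource land on the same node and subject–subject joins can be answered locally. The sketch below (plain Python, with an assumed cluster size and hypothetical data) illustrates the idea:

```python
import hashlib

NUM_NODES = 4  # assumed cluster size, for illustration only

def node_for(subject):
    """Assign a triple to a node by hashing its subject URI."""
    digest = hashlib.md5(subject.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_NODES

triples = [
    ("ex:Alice", "ex:knows", "ex:Bob"),
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Bob",   "rdf:type", "ex:Person"),
]

# Route every triple to its partition.
partitions = {n: [] for n in range(NUM_NODES)}
for s, p, o in triples:
    partitions[node_for(s)].append((s, p, o))

# All triples sharing a subject end up on the same node, so joins on
# a common subject need no network traffic; joins across subjects
# (e.g., subject-object joins) may still require data exchange.
```

This locality for same-subject joins is exactly what makes partitioning cost and cross-node communication the central trade-offs discussed above.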

Key Terms in this Chapter

RDF: Resource Description Framework (RDF) is a popular model recommended by the World Wide Web Consortium (W3C) to represent resource information on the web.

MapReduce: MapReduce is a programming model used to process large datasets using a parallel and distributed algorithm in a reliable manner.
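As a local, single-process illustration of the model (a sketch of the programming pattern, not Hadoop itself), the classic word-count example can be written with explicit map, shuffle/sort, and reduce phases:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/sort: group pairs by key; Reduce: sum counts per word.
    pairs = sorted(pairs, key=itemgetter(0))
    return {word: sum(c for _, c in group)
            for word, group in groupby(pairs, key=itemgetter(0))}

counts = reduce_phase(map_phase(["rdf data rdf", "data model"]))
# counts == {"data": 2, "model": 1, "rdf": 2}
```

In a real MapReduce framework the map and reduce calls run in parallel across a cluster, with the framework handling data distribution and fault tolerance.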

Cloud: A cloud supports the storage and management of data on remote servers. It provides computer system resources, chiefly computing power and data storage, to users on demand. Many companies now offer cloud services for customer convenience and cost savings.

Big Data: Big data is any data that is difficult to handle and cannot be managed by standard data processing software. The three popular terms associated with Big Data, known as the three V's, are Volume, Velocity and Variety.

SPARQL: SPARQL (a recursive acronym for SPARQL Protocol and RDF Query Language) is the W3C-recommended query language for processing queries over RDF data. A SPARQL query comprises a Basic Graph Pattern (BGP).

NoSQL Databases: NoSQL ("not only SQL") databases store data in models that differ from those of relational databases. They support scalable storage and management of large datasets.

Hadoop: Hadoop is a framework used for processing massive datasets across a cluster of computers in a distributed manner by using a simple programming model.
