Introduction
Nearly eight decades ago, the Z1 was born with 64×22 bits of memory. Eighty years on, a modern desktop computer easily holds some 500 gigabytes and provides instant access, through the Internet, to petabytes, even exabytes, of data – far more than an individual human being could consume in an entire lifetime. Data, one of the fiercest “monsters” created by mankind, has overpowered its creator, and yet, ever more evidently, the word “Big” fails to convey a precise image of the size we are trying to tackle – data are still growing at an unprecedented rate, driven by, among other things, high-throughput scientific instruments (e.g. the Large Hadron Collider at CERN and the widely used DNA microarrays). More recently, with advances in Information Technology, the ungovernable nature of data has begun to show itself in our everyday business and personal lives: consider the overwhelming volume of enterprise data accumulated by ERP and BI systems, or the sheer number of emails that we send and receive every day.
Yet, volume is only one of the challenging characteristics of Big Data, alongside data velocity and data variety. In this paper, we concentrate on the variety aspect and envisage the merit of linking together heterogeneous data sources to reveal knowledge that does not manifest in any single data silo. Unveiling the knowledge that lies in the interactions among data sets is, however, not easy. To many, big interconnected data sets are no more than a messy swamp, and trudging through them requires not only trained minds but also tools that can help weave the intertwined data into an orderly, easily traversable knowledge network.
The Linked Open Data (LOD) paradigm (Bizer et al., 2009), an approach to coping with the variety of Big Data, has gained increasing popularity in the past few years and has started to reach beyond academia (Hu & Svensson, 2010). LOD is underpinned by the idea of imposing a machine-readable semantic layer upon data so as to allow computers to take over some of the data-analysis tasks previously exclusive to humans. At the heart of LOD is the Resource Description Framework, RDF1, a simple graph-based data-modelling language that enables semantic mark-up of data. With RDF, LOD tries to piece together data silos and transform the current archipelagic data landscape into a connected data graph upon which complicated data-analytics and business-intelligence applications can be built.
The LOD vision, while opening up great potential, brings with it tremendous challenges not previously seen in the Big Data field (Hausenblas et al., 2012). These include extending current storage systems to accommodate semantics, distributing semantic data operations, and automating semantic data analysis. The present work addresses the most fundamental of them: large-scale semantic (or triplified) data storage.
Large-scale semantic data storage builds on large-scale data storage in general, which has been extensively investigated by the database, Internet, and other relevant communities (cf. Toad World2 for a comprehensive survey). Thus far, the size issue has effectively been tackled either by scaling up with more high-end servers, or by scaling out onto low-cost commodity hardware available in massive numbers through the cloud. The inherent flexibility of the latter, together with its low acquisition and operating costs, has made it a nearly perfect storage solution for large amounts of data. The only missing piece of the envisaged Big Linked Data jigsaw is a semantic layer that explicitly captures the meaningful relationships among data items so as to express data semantics.