Doc2KG: Transforming Document Repositories to Knowledge Graphs

Nikolaos Stylianou, Danai Vlachava, Ioannis Konstantinidis, Nick Bassiliades, Vassilios Peristeras
Copyright: © 2022 | Pages: 20
DOI: 10.4018/IJSWIS.295552

Abstract

Document Management Systems (DMS) have been used for decades to store large amounts of information in textual form. Their technology paradigm is based on storing vast quantities of textual information enriched with metadata to support searchability. However, this approach exhibits limitations, as it treats textual information as a black box and relies exclusively on user-created metadata, a process that suffers from quality and completeness shortcomings. The use of knowledge graphs in DMS can substantially improve searchability by providing the ability to link data and enabling semantic search. Recent approaches focus on either creating knowledge graphs from document collections or updating existing ones. In this paper, we introduce Doc2KG (Document-to-Knowledge-Graph), an intelligent framework that handles both creation and real-time updating of a knowledge graph, while also exploiting domain-specific ontology standards. We use DIAVGEIA (clarity), an award-winning Greek open government portal, as our case study and discuss new capabilities for the portal by implementing Doc2KG.

Introduction

A huge amount of new data is created and stored every minute by users in order to be retrievable and discoverable. In modern organisations, in both the private and the public sector, textual information in electronic documents is stored in large volumes in Document Management Systems (DMS). DMS were first introduced in enterprise environments over 30 years ago to receive, track, manage, and store documents. Over time, despite the dramatic increase in the pace of data creation and in storage needs, these systems saw little improvement in their information retrieval functionalities. This results in difficulties in locating, identifying, and retrieving information in collections that often expand to millions of documents. The reason is that these systems cannot “look into” the textual information they store, but rather treat it as a black box described by user-provided metadata. Inevitably, this human-created metadata often suffers from low quality.

With the rise of open government and open data rhetoric and practices, some of these public sector DMS publish their content as open data on the Web to improve transparency and accessibility. Besides increased transparency, the benefits of open data include democratic control, improved or new public products and services, improved government services, innovation and new knowledge creation from combined data sources, and the possibility to identify patterns in large data volumes, among others (Pereira et al., 2017). For these reasons, in the last decade, open government policies have gained ground in an increasing number of countries globally, while numerous projects based on open data are being carried out all over the world (Mohamed et al., 2020; Zuiderwijk & Janssen, 2014).

This is the case with the Greek portal DIAVGEIA (in English: Clarity), in which all public sector administrative decisions are published, as mandated by law, forming a huge and fast-growing collection of more than 43 million documents. Essentially, DIAVGEIA provides access to the governmental DMS that stores all the documents. The huge volume of this textual information, combined with the lack of high-quality and standardised metadata, poses several problems and processing challenges, justifying the use of the term “big data” to describe such a corpus of information.

Open (big) data must be available in a convenient and modifiable form so that it is easy to exploit, i.e., so that data interoperability increases and different datasets can be combined. Towards improving information and knowledge extraction, Semantic Web technologies such as RDF and OWL were developed and standardized in the form of (meta-)data graphs consisting of elementary vertex-edge-vertex triples (subject, predicate, object) (Zaveri et al., 2016). Tim Berners-Lee (2010), the inventor of the Web and initiator of linked data, suggested a 5-star deployment scheme for open data quality that constitutes the status quo in Semantic Web best practices (Hasnain & Rebholz-Schuhmann, 2018). This scheme proposes publishing machine-readable structured data, using open standards from the W3C, and linking it to other linked open data. These linked data principles can also provide the basis for aligning data with other recommendations adopted by the research community, such as the Findable, Accessible, Interoperable, Reusable (FAIR) principles, which indicate that data resources should support discovery and reusability by different stakeholders (Garijo & Poveda-Villalón, 2020).
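To make the triple model concrete, the short Python sketch below (using the rdflib library) illustrates how a single administrative decision could be expressed as (subject, predicate, object) statements and serialised as machine-readable Turtle, in line with the 5-star scheme. This is an illustrative assumption, not the paper's implementation: the namespace, property names, and document identifier are hypothetical.

# Illustrative sketch only; namespace, properties, and identifier are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/diavgeia/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

decision = URIRef(EX["decision/ADA-12345"])  # hypothetical document identifier

# Each statement is a (subject, predicate, object) triple, i.e. one
# vertex-edge-vertex element of the knowledge graph.
g.add((decision, RDF.type, EX.AdministrativeDecision))
g.add((decision, RDFS.label, Literal("Procurement decision", lang="en")))
g.add((decision, EX.issuedBy, URIRef(EX["organisation/ministry-of-finance"])))
g.add((decision, EX.issueDate, Literal("2022-01-15", datatype=XSD.date)))

# Serialising as Turtle yields structured, machine-readable open data that
# can be linked to other resources.
print(g.serialize(format="turtle"))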
