Deep Embedding Learning With Auto-Encoder for Large-Scale Ontology Matching

Meriem Ali Khoudja, Messaouda Fareh, Hafida Bouarfa
Copyright: © 2022 | Pages: 18
DOI: 10.4018/IJSWIS.297042

Abstract

Ontology matching is an effective method for establishing interoperability among heterogeneous ontologies. Large-scale ontology matching remains a major challenge because of its long running time and large memory consumption. The current solution to this problem, ontology partitioning, is itself challenging. This paper presents DeepOM, an ontology matching system that addresses this large-scale heterogeneity problem without partitioning, using deep learning techniques. It creates semantic embeddings for the concepts of the input ontologies using a reference ontology, then uses them to train an auto-encoder that learns more accurate, lower-dimensional representations of those concepts. The experimental results of its evaluation on large ontologies, and its comparison with different ontology matching systems that participated in the same test challenge, are very encouraging, with a precision score of 0.99. They demonstrate the efficiency of the proposed system in improving the performance of the large-scale ontology matching task.

Introduction

Ontologies are the cornerstone of the semantic web. They support the representation, sharing, and reuse of knowledge, serving as a communication tool for applications developed in different ways. An ontology is an explicit description of the concepts, properties, relationships, and individuals that may exist in a particular domain; it reflects knowledge from a certain domain of discourse (Zamazal, 2020). However, most applications require access to multiple ontologies, and with the rapid development of the semantic web, the construction of ontologies by various experts leads to heterogeneity at different levels.

Therefore, it is essential to identify correspondences between semantically related entities of heterogeneous ontologies, which allows agents using different ontologies to interoperate. These correspondences, called an alignment or mapping, are the backbone of the ontology matching task, the most promising solution to this semantic heterogeneity problem. Automatic and semi-automatic ontology matching techniques should therefore be developed to reduce the burden of manual alignment creation and maintenance (Khiat & Benaissa, 2015).
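To make the notion of an alignment concrete, the following is a minimal sketch, not DeepOM's method: it represents an alignment as a set of correspondence tuples (entity, entity, relation, confidence), using a simple token-overlap similarity over concept labels. The concept names and the Jaccard measure are illustrative assumptions only.

```python
# Hypothetical sketch: an alignment as a set of correspondences.
# The labels and the Jaccard similarity are illustrative assumptions,
# not the embedding-based measure used by DeepOM.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two concept labels."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def match(source_concepts, target_concepts, threshold=0.5):
    """Return correspondences (e1, e2, '=', confidence) above a threshold."""
    alignment = []
    for e1 in source_concepts:
        for e2 in target_concepts:
            sim = jaccard(e1, e2)
            if sim >= threshold:
                alignment.append((e1, e2, "=", round(sim, 2)))
    return alignment

pairs = match(["heart_valve", "blood_vessel"], ["cardiac_valve", "vessel"])
```

Real matchers replace the label similarity with semantic measures; the quadratic source-target comparison above is exactly what makes the task expensive at large scale.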

Likewise, the ontologies used in many application domains (such as medicine and astronomy) are large, and large ontologies exhibit high conceptual heterogeneity. This can decrease the efficiency of ontology matching systems, which already face challenges such as memory shortage and long processing times. These issues make scaling up the ontology matching process a very important problem.

Deep learning techniques have recently been used to address important problems in many research areas, such as image processing, natural language processing, information retrieval, and signal processing (Gupta et al., 2019; Sedik et al., 2021; Al-Smadi et al., 2018). Sophisticated artificial intelligence systems use deep learning to solve computational tasks and complex problems quickly (Fiorini, 2020). These techniques are well suited to large datasets: they can analyse and interpret massive amounts of data that require efficient and effective computational tools.

Although deep learning techniques are very appropriate for dealing with large datasets, they have seen limited use in ontology matching. Even the few approaches that employ these computational models aim at enhancing the performance of the ontology matching task, not at handling the large-scale heterogeneity problem (Portisch & Paulheim, 2018; Chang et al., 2019; Hertling & Paulheim, 2018; Monych et al., 2020; Roussille et al., 2018); consequently, these methods were tested on ontologies of small sizes. The commonly adopted solution for dealing with the large-scale ontology matching issue is partitioning (Tran et al., 2012; Laadhar et al., 2019; Laadhar et al., 2018; Jiménez-Ruiz et al., 2018; Balachandran et al., 2019). It consists of dividing the input ontologies into several sub-ontologies; the overall result is obtained by combining the individual results of matching the sub-ontologies. However, the partitioning phase is also challenging: the method of dividing the input ontologies, the number and sizes of the resulting partitions, and all other parameters of the partitioning process are delicate to define. Moreover, several semantic links inside the ontologies are expected to be lost when they are divided, which affects the matching quality.
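The dimensionality-reduction idea behind an auto-encoder, as used by the abstract's description of DeepOM, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's architecture: concept embeddings are toy NumPy vectors near a low-dimensional subspace, and a tied-weight linear auto-encoder is trained by gradient descent to reconstruct them from compressed codes. All sizes and the learning rate are illustrative choices.

```python
# Minimal sketch of auto-encoder compression for concept embeddings.
# Toy data, tied linear weights, and hyperparameters are assumptions,
# not the configuration reported for DeepOM.
import numpy as np

rng = np.random.default_rng(0)

# Toy "concept embeddings": 200 vectors in 16 dimensions that lie near
# a 4-dimensional subspace, so compression can preserve most structure.
Z = rng.normal(size=(200, 4))
B = rng.normal(size=(4, 16))
X = Z @ B + 0.05 * rng.normal(size=(200, 16))

d_in, d_hid = 16, 4                              # compress 16 dims to 4
W = rng.normal(scale=0.1, size=(d_in, d_hid))    # tied encoder/decoder weights

def reconstruction_mse(W):
    X_hat = (X @ W) @ W.T                        # encode then decode
    return float(np.mean((X_hat - X) ** 2))

loss_before = reconstruction_mse(W)
lr = 0.001
for _ in range(1000):
    H = X @ W                                    # encode: low-dim codes
    err = (H @ W.T) - X                          # reconstruction error
    # Gradient of ||X W W^T - X||^2 with respect to the tied weights W
    grad = 2 * (X.T @ err @ W + err.T @ X @ W) / len(X)
    W -= lr * grad
loss_after = reconstruction_mse(W)
```

After training, the codes `H` are the lower-dimensional concept representations; matching can then compare these compact vectors instead of the original high-dimensional embeddings, which is what makes the approach attractive at large scale.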
