Introduction
Short Text Semantic Similarity (STSS) measures have become increasingly important across many research areas and applications. They play a fundamental role in tasks such as information retrieval (AlMousa et al., 2021), question generation and answering (Gao et al., 2021; Jiang et al., 2019), automatic essay scoring (Zhang, 2021), automatic short answer grading (Chaturvedi & Basak, 2021; Henderi & Winarno, 2021), machine translation (Wang et al., 2019), text summarization (Prudhvi et al., 2021; Magdum & Rathi, 2021), sentiment analysis (Al-Smadi et al., 2018), and others.
Finding the semantic similarity or relatedness between two short Arabic texts poses many challenges, such as morphological inflection and orthographic ambiguity due to optional diacritization. This results in a larger number of homographs, adding more ambiguity than is found in English (Habash, 2010). For example, the Arabic word “كتب” “Ktb” can be the verb “wrote” “Ktb” or the noun “books” “Kutub” depending on its diacritical marks, which are also called Tashkil or Harakat (Farghaly & Shaalan, 2009). This ambiguity is harder to resolve for texts on the web, which are typically published without any diacritical marks. A standard morphological tool can distinguish between the two meanings based on the part-of-speech tag and the context of the word.
In this article, lemmatization is applied to Arabic words to overcome the diacritization problem and its effects. Lemmatization transforms an inflected word form into its dictionary look-up form, the lemma (Ismail et al., 2016). The lemma is the smallest form that captures all semantic features of the word. For the example above, the Arabic word “كتب” “Ktb” is represented as “wrote” “Ktb” for all verb forms and as “book” “KitAb” for all noun forms of the word.
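At its simplest, lemmatization can be sketched as a dictionary look-up that maps each inflected form to its lemma. The sketch below uses a hypothetical toy lemma table (the listed forms and the `lemmatize` helper are illustrative assumptions); a real system would rely on a full Arabic morphological analyzer that also uses part-of-speech and context to choose between verb and noun lemmas.

```python
# Toy lemma look-up table (assumed forms, for illustration only).
TOY_LEMMAS = {
    "كتبوا": "كتب",   # "they wrote" -> verb lemma "wrote"
    "تكتب": "كتب",    # "she writes" -> verb lemma "wrote"
    "الكتب": "كتاب",  # "the books"  -> noun lemma "book"
}

def lemmatize(tokens, lemma_table):
    """Map each inflected token to its dictionary lemma,
    leaving unknown tokens unchanged."""
    return [lemma_table.get(token, token) for token in tokens]
```

After this step, all verb forms of “كتب” collapse to one lemma and all noun forms to another, so the ambiguity introduced by missing diacritics no longer multiplies the vocabulary.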
Techniques for measuring the similarity between sentences are of two types: lexical and semantic. Lexical-based similarity treats a sentence as a sequence of characters and hence measures the similarity between those characters; it therefore ignores any semantic aspects. Semantic-based similarity, on the other hand, depends on the meaning of the sentences: measuring it requires finding the degree of relatedness between them. According to the adopted methodology, semantic-based similarity approaches are classified into three categories: alignment-based approaches, vector space-based approaches, and machine learning-based approaches (Abo-Elghit, 2020). In this work, a combined approach between the alignment-based and vector space-based categories is presented.
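To make the lexical side concrete, the sketch below scores character-level overlap between two strings with a Jaccard ratio over character trigrams (the function names and the trigram choice are illustrative assumptions, not the method of this article). Because it compares surface characters only, it cannot recognize that two differently worded sentences share a meaning.

```python
def char_ngrams(text, n=3):
    """Set of overlapping character n-grams; spaces are replaced
    so word boundaries still contribute to the n-grams."""
    text = text.replace(" ", "_")
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def lexical_similarity(s1, s2, n=3):
    """Jaccard overlap of character n-gram sets: purely
    surface-level, blind to meaning."""
    a, b = char_ngrams(s1, n), char_ngrams(s2, n)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

Two sentences with no characters in common score 0.0 even if they are synonymous, which is exactly the limitation that motivates the semantic-based approaches discussed next.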
Word embedding has become one of the most widespread approaches for many text tasks, including textual semantic similarity. It is concerned with the distributed representation of words in a vector space. This vector space depends on a handcrafted semantic network for words that encodes their meanings and the relations between them (Dzone, 2018). The traditional way to calculate the degree of similarity is the cosine of the angle, or the Euclidean distance, between the vectors representing the words. While intuitive, this approach has at least one significant shortcoming: cosine and Euclidean distance are inherently symmetric measures and therefore cannot capture the asymmetries observed in human similarity judgments (Nematzadeh et al., 2017). In this article, a word representation model is used as a semantic vector space to measure the similarity between two short Arabic texts, with an alternative measurement tool that avoids the drawbacks of cosine similarity.
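For reference, the traditional cosine measure mentioned above can be computed as a minimal NumPy sketch (the function name is illustrative). The symmetry shortcoming is easy to see: swapping the two vectors never changes the score.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: the dot
    product divided by the product of the vector norms."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Since `cosine_similarity(u, v)` always equals `cosine_similarity(v, u)`, the measure cannot express that, for instance, people judge a specific term as more similar to a general one than the reverse.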