Recent Trends in the Use of Graph Neural Network Models for Natural Language Processing

BURCU YILMAZ, Hilal Genc, Mustafa Agriman, Bugra Kaan Demirdover, Mert Erdemir, Gokhan Simsek, Pinar Karagoz
DOI: 10.4018/978-1-7998-1192-3.ch016


Graphs are powerful data structures that can represent varying relationships within data. In the past, due to the time complexity of processing graph models, graphs were rarely involved in machine learning tasks. In recent years, especially with new advances in deep learning techniques, an increasing number of graph models for feature engineering and machine learning have been proposed. In particular, there has been an increase in approaches that automatically learn to encode graph structure into low-dimensional embeddings. These approaches are accompanied by models for machine learning tasks, and they fall into two categories: the first focuses on feature engineering techniques on graphs, while the second incorporates the graph structure itself into the machine learning model so that it learns over graph neighborhoods. In this chapter, the authors focus on advances in applying graphs to NLP using recent deep learning models.
Chapter Preview

1 Introduction

A graph is a powerful data structure that can express complex data and embody the relationships between its entities, but it also poses some challenges. A graph has no node ordering that would allow it to be processed by a Convolutional Neural Network or a Recurrent Neural Network, nor does it have a fixed number of nodes. Extracting features from the k-hop neighbourhood of a node, while preserving the complex structure of the graph and without losing information, is not an easy task. In recent years, the success of deep learning in many domains has drawn attention to graph neural models, and a number of models applying deep learning to graphs have been proposed for many tasks.
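The order- and size-invariance problem described above is typically addressed by aggregating a node's neighbourhood with a permutation-invariant function such as the mean. The sketch below (with a hypothetical toy graph and feature vectors, not from the chapter) illustrates why: the result is the same no matter how the neighbours are ordered, and it works for any neighbourhood size.

```python
# Sketch: aggregating a node's 1-hop neighbourhood with a
# permutation-invariant function (the mean). The graph and the
# feature vectors are illustrative toy data. An order- or
# size-sensitive operator, as used in a CNN or RNN, could not be
# applied here, because neighbours form an unordered,
# variable-sized set.

def aggregate_neighbourhood(adjacency, features, node):
    """Return the mean of the feature vectors of `node`'s neighbours."""
    neighbours = adjacency[node]
    if not neighbours:                      # isolated node: keep its own features
        return features[node]
    dim = len(features[node])
    agg = [0.0] * dim
    for n in neighbours:
        for i in range(dim):
            agg[i] += features[n][i]
    return [x / len(neighbours) for x in agg]

# Toy graph: edges 0--1 and 0--2.
adjacency = {0: [1, 2], 1: [0], 2: [0]}
features = {0: [1.0, 0.0], 1: [0.0, 2.0], 2: [0.0, 4.0]}

print(aggregate_neighbourhood(adjacency, features, 0))  # → [0.0, 3.0]
```

Stacking k such aggregation steps propagates information from the k-hop neighbourhood, which is the core idea behind the graph neural models surveyed in this chapter.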

Graph structures have proven useful in various domains such as social media, chemistry, and biology. Another interesting domain that has recently benefited from graph structures is natural language processing (NLP). Conventionally, NLP solutions treat sentences as sequences of tokens; hence, sequence-based representations and sequence-based deep learning approaches have been widely applied. With the use of graph models, graph embedding and graph-based deep learning solutions have produced successful results for NLP problems.

There is a rich variety of NLP problems that benefit from graph embedding and graph deep learning models (Goldberg, & Hirst, 2017). For natural language data, such as text or voice converted to text, there are several ways to represent the text as a graph: capturing relationships between a document and its words, or semantic or co-occurrence based relationships between words. Furthermore, such a graph can be enriched with other types of entities, such as authors and subjects, and the relationships among them. Once a text is represented as a graph, graph embedding facilitates applications that rely on finding the similarity between two texts, such as news clustering and blog post recommendation.
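As a concrete illustration of one of the representations mentioned above, the sketch below builds a word co-occurrence graph: an undirected, weighted graph with an edge between any two words that appear within a fixed window of each other. The window size and the example sentence are illustrative choices, not taken from the chapter.

```python
from collections import defaultdict

def cooccurrence_graph(tokens, window=2):
    """Build an undirected co-occurrence graph from a token list.

    Edges are keyed by a sorted word pair and weighted by how often
    the two words appear within `window` positions of each other.
    """
    edges = defaultdict(int)
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if word != tokens[j]:           # no self-loops
                edge = tuple(sorted((word, tokens[j])))
                edges[edge] += 1
    return dict(edges)

tokens = "graph models help graph learning".split()
print(cooccurrence_graph(tokens, window=2))
# → {('graph', 'models'): 2, ('graph', 'help'): 2, ('help', 'models'): 1,
#    ('graph', 'learning'): 1, ('help', 'learning'): 1}
```

A graph built this way can then be fed to a node or graph embedding method, after which text similarity reduces to comparing the resulting vectors.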

There are also NLP applications that make use of graph deep learning models. For example, semantic parsing involves generating a formal meaning representation of a given sentence so that it can be processed automatically. Recent solutions for semantic parsing use various neural models involving graphs, in which the sentence is first represented as a graph and a graph-processing neural model then converts it into the desired representation. Machine translation is another popular NLP application that benefits from neural models. Machine translation is the transformation of a text in one natural language, such as English, into a text with the same meaning in another natural language, such as Turkish. This can also be considered as the problem of transforming one sequence into another; however, recent approaches involve graph representations and graph-processing neural models.

Below listed are the main objectives and contributions of this chapter:

  • To the best of our knowledge, there is very limited work that presents the state-of-the-art graph deep learning models for natural language processing tasks. This survey summarizes the recent trends in the use of graph neural network models for NLP problems.

  • To provide a basis for the use of graph neural network models for NLP, a summary of embedding methods for graphs and nodes is given. Additionally, we present a time performance comparison on the same benchmark data set.

  • Two basic graph deep learning models, the Graph Convolutional Network (GCN) and the Graph-to-Sequence (Graph2Seq) model, which are frequently applied to NLP problems, are described in detail.
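To give a flavour of the first of these two models before the detailed treatment, a single GCN propagation step can be sketched as H' = ReLU(D̂^(-1/2) Â D̂^(-1/2) H W), where Â is the adjacency matrix with added self-loops and D̂ its degree matrix, following the widely used renormalisation of Kipf and Welling. The matrices below are illustrative toy values, and the weight matrix would normally be learned.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step:
    H' = ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W).

    A: (n, n) adjacency matrix, H: (n, d) node features,
    W: (d, d') learnable weights (fixed here for illustration).
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^-1/2
    H_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(0.0, H_new)               # ReLU

# Toy 2-node graph with one edge, identity features, all-ones weights.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.eye(2)
W = np.ones((2, 2))
print(gcn_layer(A, H, W))  # → [[1. 1.] [1. 1.]]
```

Stacking several such layers lets each node's representation absorb information from progressively larger neighbourhoods, which is what the NLP applications in Section 4 exploit.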

The rest of the chapter is organized as follows. In the next section, we present state-of-the-art graph and node embedding methods together with benchmarks. In Section 3, recent GCN and Graph2Seq models are presented. In Section 4, we describe the application areas of these recent models in natural language processing. The chapter is concluded with an overview in Section 5.
