Graph Classification Using Back Propagation Learning Algorithms

Abhijit Bera (OmDayal Group of Institutions, India), Mrinal Kanti Ghose (OmDayal Group of Institutions, India) and Dibyendu Kumar Pal (Asansol Engineering College (AEC), India)
DOI: 10.4018/ijsssp.2020070101

Abstract

With the proliferation of graph data, there has been a sharp focus on developing effective methods for classifying graph objects. Most of the proposed graph classification techniques, though effective, are constrained by high computational overhead, so there is a consistent effort to improve upon existing classification algorithms in terms of higher accuracy and lower computational time. In this paper, an attempt has been made to classify graphs by extracting various features and selecting the important features using feature selection algorithms. Since not all extracted graph-based features are equally important, only the most important features are selected using a back propagation learning algorithm. The results of the proposed feature-based approach using a back propagation learning algorithm show higher classification accuracy with faster computational time in comparison to other graph kernels. The approach also appears to be more effective for large unlabeled graphs.

1. Introduction

Graphs play a pivotal role in the computational domains of various fields such as telecommunication networks (Shi & Malik, 2000), social network analysis (Bouckaert et al., 2011), bioinformatics (Backstrom & Leskovec, 2011), chemical compounds (Ralaivola et al., 2005), and vision analysis (Agrawal et al., 2013), wherein the input data is represented by graphs in order to compute the labels of the nodes and edges, or that of the graph itself. Graphs from different domains are collected and various features are computed, yielding a dataset for the collection of graphs; each graph of a particular domain belongs to the same class. For a collection of N graphs Gi = (Vi, Ei), i = 1, 2, …, N, with vertices Vi = {vi1, …, vin} and edges Ei = {(vij, vik) | vij, vik ∈ Vi}, defined over a dataset D, where each Gi belongs to a class yi ∈ C = {1, …, k}, C being the set of k class labels of different categories, the purpose of graph classification is to train a model f: D → C on a representative sample of graphs with known class labels, called the training set, so as to predict the class label of any graph. Classification accuracy is obtained by comparing the computed output y′i = f(Gi) on the testing dataset with the true class label yi. The labelling of Gi with respect to its nodes and edges may be drawn from a common set of labels for the entire dataset D.
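The setup above can be sketched in code. The features below (vertex count, edge count, density, average degree) are illustrative choices, not the paper's exact feature set, and the two-graph dataset is a hypothetical toy example:

```python
# Minimal sketch: mapping each graph G = (V, E) to a feature vector,
# producing the dataset D of (feature vector, class label) pairs.
# Feature choices here are illustrative, not the paper's exact set.

def graph_features(n_vertices, edges):
    """Compute simple structural features of a graph given as an edge list."""
    m = len(edges)
    density = 2 * m / (n_vertices * (n_vertices - 1)) if n_vertices > 1 else 0.0
    avg_degree = 2 * m / n_vertices if n_vertices else 0.0
    return [n_vertices, m, density, avg_degree]

# Toy dataset D: each entry is ((|V|, E), y_i) with y_i the class label.
D = [
    ((4, [(0, 1), (1, 2), (2, 3), (3, 0)]), 0),                   # 4-cycle
    ((4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), 1),   # K4
]
X = [graph_features(n, e) for (n, e), _ in D]   # feature vectors
y = [label for _, label in D]                   # class labels
```

A classifier f: D → C is then trained on (X, y) for a labelled training split and evaluated on held-out graphs.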

Due to the proliferation of graph data, several researchers have proposed various effective methods for classifying graph objects. Usually, the values of the features of each graph are computed first, and then a suitable feature selection algorithm is used to identify the relevant features from among them. It is worth noting that for any classifier, the identification of relevant features is most important for yielding the best graph classification result in terms of accuracy.

One general approach is to use support vector machines (Gartner et al., 2003) by converting the graph data into a vector representation of lower dimension. Another way is to compare two graphs by applying graph kernels to the structural features extracted from the graphs. Here, the similarities between any two graphs in D are measured by constructing a kernel matrix K = {κ(Gi, Gj)}, i, j = 1, …, N, where the kernel function κ(Gi, Gj) is the inner product between the vectors of Gi and Gj in the N-dimensional feature space (Scholkopf & Smola, 2001). The support vector machine (SVM) can then be applied to carry out the graph classification for a well-defined kernel matrix K (Vapnik, 1995).
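For the linear case, where feature vectors are explicit, the kernel matrix construction can be made concrete. The feature values below are placeholders; real graph kernels (random walk, Weisfeiler-Lehman, etc.) compute the inner product implicitly rather than from explicit vectors:

```python
import numpy as np

# Sketch: building K = {k(G_i, G_j)} where k(G_i, G_j) is the inner
# product of the graphs' feature vectors. Feature values are illustrative.
Phi = np.array([            # one feature vector phi(G_i) per graph
    [4.0, 4.0, 0.67, 2.0],
    [4.0, 6.0, 1.00, 3.0],
    [5.0, 5.0, 0.50, 2.0],
])
K = Phi @ Phi.T             # K[i, j] = <phi(G_i), phi(G_j)>

# A valid kernel matrix is symmetric and positive semi-definite:
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) >= -1e-9)
```

Such a precomputed matrix K can be passed directly to an SVM implementation, e.g. scikit-learn's `SVC(kernel='precomputed')`.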

Neural networks can also be used to find a hidden representation by extracting structural features as a set of latent parameters that cannot be directly observed from the graph structure. Here, each node representation is learned as a low-dimensional vector for each vertex of the graph, such that the vectors of corresponding vertices lie near each other in Euclidean space. Such low-dimensional representations can then be used for graph clustering and classification by various algorithms.
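One simple way to turn per-vertex representations into a single graph-level vector is mean pooling, sketched below. The embedding values are placeholders; in practice they would come from a node-embedding method such as DeepWalk or node2vec, which the source does not specify:

```python
import numpy as np

# Sketch: pooling learned node embeddings into one graph representation.
# The embedding values are illustrative placeholders.
node_embeddings = np.array([   # one low-dimensional vector per vertex
    [0.10, 0.90],
    [0.20, 0.80],
    [0.15, 0.85],
])
graph_vector = node_embeddings.mean(axis=0)   # mean pooling over vertices
```

The resulting graph vectors can then be fed to any standard clustering or classification algorithm.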

Most of the proposed graph classification techniques, though effective, are constrained by high computational overhead. Developing a graph classification algorithm that is effective in terms of high accuracy and low computational complexity is thus an important subject of study.

In this paper, a back propagation learning algorithm has been used to classify graphs: various graph-based features are extracted, and the most relevant features, those required to classify future graphs and predict unknown patterns in the datasets, are selected using feature extraction and selection algorithms. The experimental results reveal that the proposed feature-based back propagation learning classification algorithm attains better accuracy and lower computational time in comparison to other graph kernels. As the proposed technique is capable of capturing the structural differences between classes effectively, it can offer equally good results for large unlabeled graphs.
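The core mechanism, back propagation of errors through a small network over graph feature vectors, can be sketched from first principles. The toy features, labels, network size, and learning rate below are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Sketch: one-hidden-layer network trained by back propagation on graph
# feature vectors. Data, architecture, and hyperparameters are illustrative.
rng = np.random.default_rng(0)

X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])  # features
y = np.array([[1.0], [1.0], [0.0], [0.0]])                       # labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error through each layer
    d_out = (out - y) * out * (1 - out)        # dLoss/d(pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

After training, `preds` holds the predicted class for each input graph's feature vector; in the paper's setting the inputs would be the selected graph features rather than this toy data.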
