Enhancing Clustering Performance Using Topic Modeling-Based Dimensionality Reduction

T. Ramathulasi, M. Rajasekhara Babu
Copyright © 2022 | Pages: 16
DOI: 10.4018/IJOSSP.300755

Abstract

At present, services and their working procedures are typically described in natural-language text. We group services by their functional similarity to reduce the search space and search time in service discovery. Major topic models such as LSA, LDA, and CTM perform poorly on these descriptions because they are short: each document contains few words, and word co-occurrence is sparse or absent. To solve the issues created by brief text, a Dirichlet Multinomial Mixture model (DMM) with feature representation learned via Gibbs sampling has been developed to reduce dimensionality in clustering and enhance performance. The experimental results show that DMM-Gibbs outperforms all other compared methods when combined with agglomerative or K-means clustering. Clustering performance was evaluated using both internal and external criteria. Using this model, dimensionality can be reduced by 93.13% while achieving better clustering performance.
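To make the DMM-with-Gibbs-sampling idea concrete, the following is a minimal sketch of a GSDMM-style collapsed Gibbs sampler for short texts, in the spirit of Yin and Wang's movie group process. The hyperparameters, the simplified conditional (which assumes words rarely repeat within one short description), and the toy corpus are illustrative assumptions, not the authors' exact formulation.

import random
from collections import defaultdict

def gsdmm(docs, K=8, alpha=0.1, beta=0.1, iters=15):
    """docs: list of token lists; returns one cluster label per document."""
    V = len({w for d in docs for w in d})        # vocabulary size
    m_z = [0] * K                                # documents per cluster
    n_z = [0] * K                                # total words per cluster
    n_zw = [defaultdict(int) for _ in range(K)]  # per-cluster word counts
    z = []
    for d in docs:                               # random initialisation
        k = random.randrange(K)
        z.append(k)
        m_z[k] += 1
        n_z[k] += len(d)
        for w in d:
            n_zw[k][w] += 1
    for _ in range(iters):
        for i, d in enumerate(docs):
            k = z[i]                             # remove doc i from its cluster
            m_z[k] -= 1
            n_z[k] -= len(d)
            for w in d:
                n_zw[k][w] -= 1
            # Conditional p(z_i = k | everything else); assumes words rarely
            # repeat within one short description (GSDMM's simplified form).
            probs = []
            for c in range(K):
                p = m_z[c] + alpha
                for j, w in enumerate(d):
                    p *= (n_zw[c][w] + beta) / (n_z[c] + V * beta + j)
                probs.append(p)
            r = random.uniform(0, sum(probs))    # sample a new cluster for doc i
            k, acc = 0, probs[0]
            while acc < r and k < K - 1:
                k += 1
                acc += probs[k]
            z[i] = k                             # reinsert doc i
            m_z[k] += 1
            n_z[k] += len(d)
            for w in d:
                n_zw[k][w] += 1
    return z

# Toy usage on hypothetical short service descriptions:
docs = [["send", "sms", "message"], ["sms", "voice", "api"],
        ["geocode", "street", "address"], ["address", "map", "lookup"]]
print(gsdmm(docs, K=2, iters=30))

In this style of sampler, clusters that attract no documents empty out on their own, so K acts as an upper bound on the number of clusters rather than a fixed count.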

1. Introduction

The number of web API services is growing rapidly with the development of SOA (Service-Oriented Architecture). A wide variety of companies, such as Amazon, IBM, and Microsoft, offer substantial benefits to their customers through web services. At present, web API services are classified into two streams: services based on SOAP (Simple Object Access Protocol) and services based on REST (Representational State Transfer). In SOAP-based services, the service provider creates the service and publishes its WSDL (Web Services Description Language) file in UDDI (Universal Description, Discovery and Integration); from this UDDI registry, the client finds a service matching its request and communicates with the service provider to use the service (Bhardwaj & Sharma, 2015). REST-based services rely primarily on messaging over HTTP and on URIs for resource identification and interaction. Web API services are formally defined by XML-based languages such as WSDL and WADL, but in practice the description of a service's functionality is usually given in natural-language text (Zhang, Wang, He, Li, & Huang, 2019).

More than 22,500 web API services published with WADL, WSDL, or natural-language descriptions had been crawled and collected from the popular service registry ProgrammableWeb (PW) by June 15, 2020 (Bhardwaj & Sharma, 2015). Because the service descriptions published in PW are unstructured natural-language text, finding services that match a customer's request is difficult. As a result, discovering appropriate and effective services has become a key challenge in service computing. Web service portals and search engines are the primary resources for service discovery, but the keyword-based matching these engines rely on often yields inaccurate and incomplete results. This problem can be overcome, and semantic service discovery enabled, by annotating services with their semantic meanings; annotating meanings manually, however, is laborious and time-consuming (Nisa & Qamar, 2015).

Over the past few years, various researchers have demonstrated that clustering can improve the performance and accuracy of web API discovery. For clustering, the TF-IDF (Term Frequency - Inverse Document Frequency) method is used to represent service description files in a vector space. This produces a very large number of features, and because the resulting TF-IDF matrix is typically very sparse, the service-relevant features must be extracted from it. Topic modeling techniques offer an effective form of dimensionality reduction for isolating these relevant features from abundant textual data (Bukhari & Liu, 2018). Topic modeling determines, through statistical and mathematical methods, the semantic structure hidden in service descriptions (Crain, Zhou, Yang, & Zha, 2012). There is a close relationship among topic modeling, clustering, and dimensionality reduction. The main idea of clustering is to group similar services based on their functionality and thereby narrow the search space for service discovery; the relationship between a service and its groups is captured by soft clustering. Topic modeling maps the soft clusters and their corresponding properties into a vector space of reduced dimensionality, leading to effective representations of services based on domain topics.
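As an illustration of the TF-IDF representation step described above, the following is a minimal sketch using scikit-learn; the sample service descriptions are hypothetical, and the paper's own preprocessing may differ.

from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = [                                   # hypothetical examples
    "REST API for sending SMS and voice messages",
    "Web service for geocoding street addresses to coordinates",
    "API that converts currencies using live exchange rates",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(descriptions)

# X is a sparse document-term matrix: every distinct term becomes a
# feature, so dimensionality grows with vocabulary size and most
# entries are zero -- the sparsity problem noted above.
print(X.shape)                                     # (3, number_of_terms)
print(f"{X.nnz / (X.shape[0] * X.shape[1]):.2%} of entries are non-zero")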

Various topic modeling methods have been developed, and research has explored their benefits for web API discovery. In these approaches, each web API is represented as a topic vector; when web APIs are converted to vectors in this way, each topic is characterized only by its associated set of words, which constitutes a form of dimensionality reduction (DR) (Zhao, Wang, Wang, & He, 2018). LDA (Latent Dirichlet Allocation) and its modified versions use these words to derive the domains of a web API (Cao, Liu, Liu, & Tang, 2017). However, traditional topic modeling methods such as LSA and LDA are not accurate enough for interpreting short texts (Jipeng, Zhenyu, Yun, Yunhao, & Xindong, 2019).
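To illustrate the topic-vector representation, the following is a minimal sketch that fits LDA with scikit-learn and reduces each service description to a k-dimensional topic distribution; the corpus, the number of topics, and the random seed are illustrative assumptions.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

descriptions = [                                   # hypothetical examples
    "REST API for sending SMS and voice messages",
    "messaging API for email and push notifications",
    "web service for geocoding street addresses",
    "address lookup and map tile web service",
]

counts = CountVectorizer(stop_words="english").fit_transform(descriptions)

# Each service drops from a vocabulary-sized count vector to a
# k-dimensional topic distribution -- the dimensionality reduction the
# text refers to; on very short texts these estimates are noisy, which
# is the weakness that motivates the DMM approach.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_vectors = lda.fit_transform(counts)          # shape: (4, 2)
print(topic_vectors.round(3))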
