Automatic Folder Allocation System for Electronic Text Document Repositories Using Enhanced Bayesian Classification Approach

Wou Onn Choo (Faculty of Information Technology and Sciences, INTI International University, Nilai, Malaysia), Lam Hong Lee (School of Computing, Faculty of Science and Technology, Quest International University Perak, Ipoh, Malaysia), Yen Pei Tay (School of Computing, Faculty of Science and Technology, Quest International University Perak, Ipoh, Malaysia), Khang Wen Goh (School of Computing, Faculty of Science and Technology, Quest International University Perak, Ipoh, Malaysia), Dino Isa (Department of Electrical and Electronic Engineering, Faculty of Engineering, The University of Nottingham, Semenyih, Malaysia) and Suliman Mohamed Fati (INTI International University, Nilai, Malaysia)
Copyright: © 2019 | Pages: 19
DOI: 10.4018/IJIIT.2019040101

Abstract

This article proposes a system equipped with enhanced Bayesian classification techniques to automatically assign folders for storing electronic text documents. Despite computer technology advancements in the information age, where electronic text files are pervasive in information exchange, almost every document created or downloaded from the Internet requires manual classification by the user before being deposited into a folder on a computer. Not only does such a tedious task cause inconvenience to users, but the time taken to repeatedly classify and allocate a folder for each text document also impedes productivity, especially when dealing with a huge number of files and deep layers of folders. To overcome this, a prototype system is built to evaluate the performance of the enhanced Bayesian text classifier for automatic folder allocation, categorizing text documents based on the existing types of text documents and folders present in the user's hard drive. In this article, the authors deploy a High Relevance Keyword Extraction (HRKE) technique and an Automatic Computed Document Dependent (ACDD) Weighting Factor technique in a Bayesian classifier in order to obtain better classification accuracy, while maintaining the low training cost and simple classifying process of the conventional Bayesian approach.

1. Introduction

With the meteoric growth of personal computing and rapid advancements in cloud computing technologies, electronic data, most notably electronic text documents, is pervasive in Internet information exchange. In today's Big Data era, the high-volume and high-velocity nature of electronic data exchange motivates the creation of a utility that automatically categorises and assigns newly created or incoming electronic text documents to the most appropriate folders on a computer. In the absence of such a feature, an average computer user must perform at least three manual steps: (1) reviewing the type of the file or filename to be stored, (2) determining the appropriate folder to store the file, and (3) moving the actual file to the desired folder on the computer. Not only does such a tedious task inconvenience many modern computer users, but the time taken to repeatedly review, classify and allocate a folder for each text document also impedes productivity, especially when dealing with a huge number of files and deep layers of folders. To overcome this, an enhanced Bayesian text classifier system is built to perform automatic folder allocation by categorizing new incoming text documents based on the existing types of text documents and folders present in the user's computer hard drive.

With the automatic folder allocation technique, computers are able to recognize and classify incoming text documents and determine the most appropriate folder for storing, without requiring extensive manual interventions from the user. This can greatly reduce the time taken for human-computer interaction in allocating the appropriate folder path to store similar documents while also improving the overall information retrieval process.

Text document classification denotes the task of assigning text documents to one or more pre-defined categories. This is a direct application of machine learning: a set of labelled categories is declared to represent the documents, and a statistical classifier is trained on a labelled training set. Classification is the process by which objects are recognized, differentiated and understood, and implies that objects are grouped into categories, usually for some specific purpose. Ideally, a category represents a relationship between the subject and object of knowledge. Classification is fundamental to prediction, inference, and decision-making. There is, however, a variety of ways to approach the classification task. An increasing number of supervised approaches have been developed for document classification, such as decision tree induction (Greiner and Schaffer, 2011), rule induction (Apte et al., 1994), k-nearest neighbour classification (Han et al., 1999), maximum entropy (Nigam et al., 1999), artificial neural networks (Chen et al., 2005; Diligenti et al., 2003), support vector machines (Isa et al., 2008; Joachims, 1998; Joachims, 1999) and Bayesian classification (Androutsopoulos et al., 2000; Chen et al., 2009; Domingos and Pazzani, 1997; Eyheramendy et al., 2003; Kim et al., 2002; Lee et al., 2010; Lee et al., 2012-a; Lee and Isa, 2010; McCallum and Nigam, 2003; Rish, 2001; Sahami et al., 1998). Besides the supervised classification approaches, unsupervised clustering approaches, such as k-means and self-organizing maps, have also been introduced for text document segmentation (Adami et al., 2005; Isa et al., 2009; Lee and Yang, 2003; Takamura, 2003).
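For concreteness, the following is a minimal from-scratch sketch of the conventional multinomial naive Bayes classifier mentioned above, with Laplace (add-one) smoothing. It illustrates the low training cost and simple classification process the article refers to; it is not the authors' enhanced implementation, and the tokenization (lowercased whitespace splitting) is a simplifying assumption:

```python
import math
from collections import Counter

class NaiveBayesTextClassifier:
    """Multinomial naive Bayes with Laplace smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.class_doc_count = Counter(labels)      # for class priors
        self.total_docs = len(docs)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_total_words = Counter()
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.class_total_words[label] += len(words)
            self.vocab.update(words)

    def predict(self, doc):
        words = doc.lower().split()
        v = len(self.vocab)
        best, best_score = None, float("-inf")
        for c in self.classes:
            # log prior P(c) plus summed log likelihoods P(w|c),
            # with add-one smoothing over the vocabulary
            score = math.log(self.class_doc_count[c] / self.total_docs)
            for w in words:
                score += math.log((self.word_counts[c][w] + 1) /
                                  (self.class_total_words[c] + v))
            if score > best_score:
                best, best_score = c, score
        return best
```

Training amounts to a single counting pass over the corpus, and classification is a linear scan over classes and words, which is what makes the Bayesian approach so cheap relative to, say, training a support vector machine.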
