Improving the Naïve Bayes Classifier

Liwei Fan, Kim Leng Poh
Copyright © 2009 | Pages: 5
DOI: 10.4018/978-1-59904-849-9.ch130

Abstract

A Bayesian Network (BN) combines a graph structure with a probability distribution. In the past, BNs were mainly used for knowledge representation and reasoning. Recent years have seen numerous successful applications of BNs to classification, among which the Naïve Bayes classifier was found to be surprisingly effective in spite of its simple mechanism (Langley, Iba & Thompson, 1992). It is built upon the strong assumption that different attributes are independent of each other. Despite its many advantages, a major limitation of the Naïve Bayes classifier is that real-world data may not satisfy the independence assumption among attributes, which can make its prediction accuracy highly sensitive to correlated attributes. To overcome this limitation, many approaches have been developed to improve the performance of the Naïve Bayes classifier. This article gives a brief introduction to the approaches that attempt to relax the independence assumption among attributes or use certain pre-processing procedures to make the attributes as independent of each other as possible. Previous theoretical and empirical results have shown that these approaches can improve the performance of the Naïve Bayes classifier significantly, although the computational complexity also increases to a certain extent.
Chapter Preview

Improving the Naïve Bayes Classifier

This section introduces two groups of approaches that have been used to improve the Naïve Bayes classifier. In the first group, the strong independence assumption is relaxed by learning a restricted network structure. The second group selects a subset of major (and approximately independent) attributes from the original attributes, or transforms them into new attributes, which can then be used by the Naïve Bayes classifier.
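As an illustration of the second group, here is a minimal sketch (not from the chapter) that decorrelates the attributes with Principal Component Analysis before handing them to a Naïve Bayes classifier; the use of scikit-learn, the Iris dataset, and the component count are all illustrative assumptions:

```python
# A minimal sketch of the second group of approaches: transform correlated
# attributes into (approximately) independent ones with PCA, then apply
# Naive Bayes. Dataset, component count, and split are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Plain Naive Bayes versus PCA-preprocessed Naive Bayes.
plain = GaussianNB().fit(X_train, y_train)
piped = make_pipeline(PCA(n_components=3), GaussianNB()).fit(X_train, y_train)

print("plain NB accuracy:", plain.score(X_test, y_test))
print("PCA + NB accuracy:", piped.score(X_test, y_test))
```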

Key Terms in this Chapter

Decision Trees: A decision tree is a classifier in the form of a tree structure, where each node is either a leaf node or a decision node. A decision tree classifies an instance by starting at the root of the tree and moving through it until a leaf node is reached, which provides the classification of the instance. A well-known and frequently used decision tree algorithm is C4.5.
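To make the root-to-leaf traversal concrete, here is a minimal sketch of leaf and decision nodes and the classification walk; the node layout and the example tree are hypothetical, not C4.5 itself:

```python
# A minimal sketch of decision tree classification: start at the root and
# follow decision nodes until a leaf is reached. Hypothetical example tree.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    label: Optional[str] = None      # set on leaf nodes only
    attribute: Optional[str] = None  # attribute tested at a decision node
    threshold: float = 0.0           # split point for the test
    left: Optional["Node"] = None    # branch taken when value <= threshold
    right: Optional["Node"] = None   # branch taken when value > threshold


def classify(node: Node, instance: dict) -> str:
    """Walk from the root to a leaf; the leaf's label is the prediction."""
    while node.label is None:
        if instance[node.attribute] <= node.threshold:
            node = node.left
        else:
            node = node.right
    return node.label


# Example: a one-split tree over a single numeric attribute.
tree = Node(attribute="petal_length", threshold=2.5,
            left=Node(label="setosa"),
            right=Node(label="versicolor"))
print(classify(tree, {"petal_length": 1.4}))  # -> setosa
```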

Naïve Bayes Classifier: The Naïve Bayes classifier, also called the simple Bayesian classifier, is essentially a simple Bayesian Network (BN). There are two underlying assumptions in the Naïve Bayes classifier. First, all attributes are independent of each other given the classification variable. Second, all attributes are directly dependent on the classification variable. The Naïve Bayes classifier computes the posterior of the classification variable given a set of attributes by using the Bayes rule under the conditional independence assumption.
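The posterior computation can be sketched directly: under conditional independence, the posterior of class c is proportional to the prior times the product of per-attribute likelihoods. The priors and likelihood tables below are made-up illustrative numbers for a two-class example, not taken from the chapter:

```python
# A minimal sketch of the Naive Bayes posterior for discrete attributes:
# P(c | x1..xn) is proportional to P(c) * product_i P(xi | c).
priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    # P(attribute present | class), one entry per attribute; made-up numbers
    "spam": {"contains_offer": 0.7, "has_attachment": 0.3},
    "ham":  {"contains_offer": 0.1, "has_attachment": 0.4},
}

def posterior(observed_attrs):
    # Unnormalized scores: prior times product of per-attribute likelihoods.
    scores = {}
    for c, prior in priors.items():
        score = prior
        for a in observed_attrs:
            score *= likelihoods[c][a]
        scores[c] = score
    # Normalize by the evidence so the posteriors sum to one (Bayes rule).
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

print(posterior(["contains_offer", "has_attachment"]))
```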

Greedy Search: At each point in the search, the algorithm considers all local changes to the current set of attributes, makes its best selection, and never reconsiders this choice. The forward selection sketch below gives a concrete instance of this strategy.

UCI Repository: This is a repository of databases, domain theories and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms.

Principal Component Analysis (PCA): PCA is a popular tool for multivariate data analysis, feature extraction and data compression. Given a set of multivariate measurements, the purpose of PCA is to find a smaller set of variables with less redundancy, where redundancy is measured by the correlations between data elements.
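As a concrete illustration, here is a minimal sketch of PCA via the eigendecomposition of the covariance matrix, run on randomly generated data (the data, dimensions, and component count are all illustrative assumptions):

```python
# A minimal sketch of PCA: the principal components are uncorrelated linear
# combinations of the original attributes, ordered by explained variance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))           # illustrative data, 4 attributes
X = X - X.mean(axis=0)                  # center before computing covariance

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]       # sort components by variance

components = eigvecs[:, order[:2]]      # keep the top two components
X_reduced = X @ components              # project onto the principal axes
print(X_reduced.shape)                  # (200, 2)
```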

Forward Selection and Backward Elimination: A forward selection method starts with the empty set and successively adds attributes, while a backward elimination process begins with the full set and removes unwanted attributes.
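Here is a minimal sketch of greedy forward selection wrapped around a Naïve Bayes classifier, which also instantiates the greedy search strategy defined above; the dataset and the cross-validation scoring are illustrative assumptions, not the chapter's setup:

```python
# A minimal sketch of greedy forward selection around Naive Bayes: starting
# from the empty set, repeatedly add the single attribute that most improves
# accuracy, and never reconsider a choice once it is made.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    # Greedy step: score every candidate extension of the current set.
    scores = {
        a: cross_val_score(GaussianNB(), X[:, selected + [a]], y, cv=5).mean()
        for a in remaining
    }
    best_attr = max(scores, key=scores.get)
    if scores[best_attr] <= best_score:
        break                        # no candidate improves the score; stop
    best_score = scores[best_attr]
    selected.append(best_attr)       # commit to the choice permanently
    remaining.remove(best_attr)

print("selected attribute indices:", selected, "score:", round(best_score, 3))
```

A backward elimination variant would start with `remaining` as the full attribute set inside `selected` and greedily remove the attribute whose deletion most improves the score.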

Independent Component Analysis (ICA): ICA is a more recently developed technique for finding hidden factors or components that give a new representation of multivariate data. ICA can be thought of as a generalization of PCA: PCA finds uncorrelated variables to represent the original multivariate data, whereas ICA seeks statistically independent variables to represent it.
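To illustrate the contrast, here is a minimal sketch that applies PCA and ICA to the same mixed signals using scikit-learn's FastICA; the two-source mixing setup is a standard illustrative example, not taken from the chapter:

```python
# A minimal sketch contrasting PCA and ICA on the same mixed signals.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 1000)
sources = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]  # independent sources
mixing = np.array([[1.0, 0.5], [0.4, 1.0]])
X = sources @ mixing.T                                   # observed mixtures

# PCA yields uncorrelated components; ICA aims for statistically independent
# ones, which here recovers the original sources (up to scale and order).
X_pca = PCA(n_components=2).fit_transform(X)
X_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
print(X_pca.shape, X_ica.shape)
```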
