Making an Electronic Nose Versatile: The Role of Incremental Learning Algorithms

Nabarun Bhattacharyya (Centre for the Development of Advanced Computing (C-DAC), India), Bipan Tudu (Jadavpur University, India) and Rajib Bandyopadhyay (Jadavpur University, India)
Copyright: © 2011 |Pages: 24
DOI: 10.4018/978-1-61520-915-6.ch003


Because of these factors, the system must be flexible enough to update an existing classifier without degrading its classification performance on old data; such classifiers should be both stable and plastic. Conventional pattern classification algorithms require the entire dataset during training and therefore fail to be plastic and stable at the same time. Incremental learning algorithms possess both properties, and electronic nose systems equipped with such classifiers become highly versatile. In this chapter, the authors describe different incremental learning algorithms for machine olfaction.
Introduction To Incremental Learning

In conventional supervised classifiers, the entire dataset is required during training, which severely limits their role in some applications of machine olfaction systems. In such applications, it may not be possible to collect the entire dataset within a short time; data collection may spread over multiple seasons or even years. Hence, the pattern classifier should have the following two important features:

  1. Plasticity: the ability to incorporate new knowledge without accessing the old dataset.

  2. Stability: the ability to retain previously acquired knowledge.

Plasticity and stability are contradictory requirements for machine learning algorithms, a conflict commonly known as the stability–plasticity dilemma (Giraud-Carrier, 2000). Conventional training algorithms fail to meet both requirements, as the models require the entire dataset before training commences. Augmenting new knowledge requires both the old and the new dataset, and after training on the augmented dataset there may be some loss of previously learned knowledge. This is a severe limitation when introducing a new technology employing a machine olfaction system. The situation changes significantly if the user industry can obtain some results from only a few samples, even though the classification may not be very accurate initially. Here, incremental learning algorithms can play a very important role: they can learn perpetually without forgetting acquired knowledge, and they can start classifying from very few samples. For example, when an electronic nose is equipped with a computational model featuring incremental learning, the instrument may be moved from one field or plant to another and trained with the new samples. When presented with a sample, it will attempt to classify its signature and, at the same time, learn the new pattern without forgetting previous knowledge. Since the instrument, once trained with some samples, gives a classification result, the user industry may either be satisfied with the result or, if desired, retrain the instrument. This feature makes the instrument versatile, so an electronic nose with an incremental classifier is more likely to be acceptable to the user industry.

To summarize, an incremental learning algorithm should meet the following criteria (Polikar et al., 2001):

  a. It should be able to learn additional information from new data.

  b. It should not require access to the original data used to train the existing classifier.

  c. It should not forget previously acquired knowledge.

  d. It should be able to accommodate new classes that may be introduced with new data.
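The four criteria above can be made concrete with a deliberately simple sketch (not from the chapter): a nearest-centroid classifier that keeps only a running mean per class. It learns from new data (a), stores no past samples (b), leaves old centroids intact (c), and accepts previously unseen classes (d). All names here are illustrative.

```python
import numpy as np

class IncrementalCentroidClassifier:
    """Toy incremental classifier: one running-mean centroid per class."""

    def __init__(self):
        self.centroids = {}   # class label -> running mean vector
        self.counts = {}      # class label -> number of samples seen

    def partial_fit(self, X, y):
        for x, label in zip(np.asarray(X, dtype=float), y):
            if label not in self.centroids:
                # Criterion (d): a class never seen before is simply added.
                self.centroids[label] = x.copy()
                self.counts[label] = 1
            else:
                # Running mean update: no old samples are stored (criterion b),
                # and other classes' centroids are untouched (criterion c).
                self.counts[label] += 1
                self.centroids[label] += (x - self.centroids[label]) / self.counts[label]
        return self

    def predict(self, X):
        labels = list(self.centroids)
        C = np.stack([self.centroids[l] for l in labels])
        d = np.linalg.norm(np.asarray(X, dtype=float)[:, None, :] - C[None, :, :], axis=2)
        return [labels[i] for i in d.argmin(axis=1)]
```

A real electronic nose classifier is of course richer than this, but the same update-without-replay structure underlies the fuzzy and neural models discussed below.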

There are many approaches to designing such classifiers; in this chapter, designs employing both fuzzy logic and neural networks are discussed. Among these classifiers, the fuzzy logic based incremental learner relies on the principle of on-line rule generation, while the radial basis function (RBF) neural network model adds new kernels to its hidden layer. There are many incremental models based on the back-propagation multilayer perceptron (BP-MLP); we discuss a simple model in which the incremental feature is obtained by merging multiple networks in parallel. In all the models, we assume that the electronic olfactory system has multiple sensors as inputs and a single output attribute.
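As a hedged illustration of the RBF idea mentioned above, adding a new kernel to the hidden layer whenever a sensor pattern looks novel, the following sketch grows a Gaussian-kernel network. The novelty threshold and kernel width are illustrative assumptions, not values from the chapter.

```python
import numpy as np

class GrowingRBF:
    """Sketch of an incrementally growing RBF classifier: a new Gaussian
    kernel is added only when no existing kernel responds strongly enough."""

    def __init__(self, width=1.0, novelty_threshold=0.5):
        self.width = width
        self.threshold = novelty_threshold
        self.centers = []   # kernel centres (stored sensor patterns)
        self.labels = []    # class label attached to each kernel

    def _activations(self, x):
        # Gaussian activation of every kernel for input pattern x.
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def learn(self, x, label):
        x = np.asarray(x, dtype=float)
        if not self.centers or self._activations(x).max() < self.threshold:
            # Novel pattern: grow the hidden layer; old kernels are untouched,
            # so previously learned responses are preserved.
            self.centers.append(x)
            self.labels.append(label)

    def classify(self, x):
        a = self._activations(np.asarray(x, dtype=float))
        return self.labels[int(a.argmax())]
```

Patterns close to an existing kernel are absorbed without changing the network, while sufficiently novel patterns (including those of an entirely new class) create a kernel of their own.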


Fuzzy Logic Based Incremental Learning

Fuzzy incremental learning techniques have been applied to classification (Singh, 1999; Mouchaweh et al., 2002) and found to be quite useful. For improving decision making in real environments, fuzzy techniques are reported to outperform conventional non-fuzzy methods (Klir & Folger, 1989; Pal & Majumder, 1986). Here we describe the incremental fuzzy model using the Wang–Mendel method (Wang & Mendel, 1992) for on-line rule generation.
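To make the on-line rule-generation idea concrete, here is a minimal sketch of Wang–Mendel-style incremental rule learning, assuming evenly spaced triangular fuzzy regions. Each sample yields one rule (the best-matching region per variable); a rule with the same antecedent as an existing one replaces it only if its degree is higher, so no past samples need to be kept. The class and parameter names are illustrative, not from the chapter.

```python
import numpy as np

def memberships(value, centers, width):
    """Triangular membership degrees of a value in regions at `centers`."""
    return np.maximum(0.0, 1.0 - np.abs(value - np.asarray(centers, float)) / width)

class WangMendelIncremental:
    """Sketch of on-line Wang-Mendel fuzzy rule generation."""

    def __init__(self, in_centers, out_centers, width):
        self.in_centers = [np.asarray(c, float) for c in in_centers]
        self.out_centers = np.asarray(out_centers, float)
        self.width = width
        self.rules = {}   # antecedent region tuple -> (output region, degree)

    def learn(self, x, y):
        antecedent, degree = [], 1.0
        for xi, centers in zip(x, self.in_centers):
            m = memberships(xi, centers, self.width)
            antecedent.append(int(m.argmax()))   # best region per sensor
            degree *= m.max()
        mo = memberships(y, self.out_centers, self.width)
        degree *= mo.max()
        key = tuple(antecedent)
        # Conflict resolution: keep the rule with the highest degree.
        if key not in self.rules or self.rules[key][1] < degree:
            self.rules[key] = (int(mo.argmax()), degree)
```

Because rules are stored per antecedent and updated one sample at a time, the rule base grows (or refines itself) incrementally, which is exactly the property needed for an electronic nose retrained in the field.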
