Virtual Sampling with Data Construction Analysis

Chun-Jung Huang (National Tsing Hua University, Taiwan, ROC), Hsiao-Fan Wang (Chinese Academy of Sciences, China) and Shouyang Wang (City University of Hong Kong, Hong Kong)
DOI: 10.4018/978-1-59904-982-3.ch018

Abstract

One of the key problems in supervised learning is the insufficient size of the training data set. The natural way for an intelligent learning process to counter this problem and generalize successfully is to exploit prior information that may be available about the domain or that can be learned from prototypical examples. Following the concept of creating virtual samples, the intervalized kernel method of density estimation (IKDE) was proposed to improve learning from a small data set. To demonstrate its theoretical validity, we provide a theorem based on Decomposition Theory. In addition, we propose an alternative approach that achieves better learning performance than IKDE.

Introduction

Lack of reference data is very often responsible for poor learning performance. In many cases the difficulty, if not impossibility, of collecting additional data leads to unsatisfactory solutions. For example, because product life cycles have become shorter, management knowledge must be acquired in the early stages of a product's life, yet the information accumulated by then is often insufficient to support decisions; this has been a standing challenge for managers. Likewise, because data are scarce in the initial period of a manufacturing system, the management model derived from them may be neither reliable nor stable.

A major contribution to this issue was made by Abu-Mostafa (1993), who developed a methodology for integrating various kinds of “hints” (prior knowledge) into the usual learning-from-examples procedure. In this way, the “hints” can be represented by new examples generated from the existing data set by applying transformations that are known to leave the function to be learned invariant. Niyogi, Girosi, and Poggio (1998) then recast “hints” as “virtual samples” and applied them to improve the learning performance of artificial neural networks such as Back-Propagation and Radial Basis Function networks. Indeed, it is evident that generating more resembling samples from a small training set can make learning tools perform well.
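As a minimal sketch of this transformation idea (an illustration of the general principle, not the chapter's exact procedure): if the target function is known to be invariant under some transformation T, every real example (x, y) yields a virtual example (T(x), y). Here we assume, hypothetically, that the target is an even function, so T(x) = -x leaves the label unchanged.

```python
# Abu-Mostafa-style "hint": a known invariance of the target function lets
# each real training example generate a virtual one with the same label.

def generate_virtual_samples(data, transform):
    """Return the original samples plus their transformed (virtual) copies."""
    virtual = [(transform(x), y) for x, y in data]
    return data + virtual

# Toy training set for an even function y = x**2 (labels unchanged by x -> -x).
train = [(1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
augmented = generate_virtual_samples(train, lambda x: -x)

print(len(augmented))  # three real samples plus three virtual ones
```

The training set effectively doubles without collecting any new data, which is precisely the appeal of virtual samples when data are scarce.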

Recently, Li and Lin (2006) combined the concept of virtual sample generation with the method of Intervalized Kernel Density Estimation (IKDE) to overcome the difficulty of learning with insufficient data in the early manufacturing stages. Li and Lin's results show that, as the number of virtual samples increases, the average learning accuracy decreases. The reason is that, when IKDE is used for virtual sample generation, a probability density function must first be estimated from the original data set; because the universe of discourse of this estimated density is unbounded, the more virtual samples are generated, the larger the Type II error in the statistical sense, as shown in Figure 1 (Wackerly, Mendenhall, & Scheaffer, 1996). Therefore, even though creating virtual samples from an estimated function is a way to overcome the difficulty of learning with insufficient data, the prediction capability remains a critical issue.
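The density-estimation route can be sketched with a plain (non-intervalized) Gaussian KDE; this is a simplified stand-in for IKDE, under the assumed data and bandwidth rule. Sampling from a Gaussian KDE amounts to picking a real observation at random and perturbing it with N(0, h²) noise:

```python
import random
import statistics

def kde_virtual_samples(data, n_virtual, seed=0):
    """Draw virtual samples from a Gaussian kernel density estimate of `data`.

    Each draw picks one real observation uniformly at random and adds
    N(0, h^2) noise, where h follows Silverman's rule of thumb.
    """
    rng = random.Random(seed)
    n = len(data)
    h = 1.06 * statistics.stdev(data) * n ** (-1 / 5)  # Silverman bandwidth
    return [rng.choice(data) + rng.gauss(0.0, h) for _ in range(n_virtual)]

small_sample = [9.8, 10.1, 10.4, 9.9, 10.2]  # hypothetical early-stage data
virtual = kde_virtual_samples(small_sample, 20)
print(len(virtual))
```

Because the Gaussian kernels have unbounded support, some virtual samples inevitably fall outside the observed range, which is the mechanism behind the growing Type II error noted above.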

Figure 1. The relationship among the population, obtained data, and virtual data (Li & Lin, 2006)

In this chapter, after introducing the main concept and procedure of IKDE, we use Decomposition Theory to provide theoretical support for using IKDE to improve learning from a small data set. In addition, to overcome the possible Type II error in prediction, an alternative method named the Data Construction Method (DCM) is proposed. To compare their performance, we demonstrate a numerical case adopted from Li and Lin, and finally present discussion and conclusions.


The Procedure of Virtual Sample Generation

Because insufficient data is very often responsible for poor learning performance, how to extract significant information for inference is a critical issue. One of the basic theorems in Statistics is the Central Limit Theorem (Ross, 1987), which asserts that when the sample size is large (≥ 25 or 30), the distribution of the sample mean x̄ is approximately normal regardless of the population distribution. Therefore, a sample of fewer than 25 (or 30) observations is regarded as a small sample. Although “small” is not precisely defined, it is in general related to the concept of accuracy. Huang (2002) clarified this concept, which we adopt in this study and state in the following definition:
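The Central Limit Theorem behavior invoked above is easy to check empirically (a quick illustrative simulation, not part of the chapter's method): sample means of size-30 draws from a heavily skewed exponential population already cluster normally around the population mean.

```python
import random
import statistics

# Simulate many samples of size n = 30 from an exponential population
# (population mean 1) and examine the distribution of their sample means.
rng = random.Random(42)
n, trials = 30, 2000
means = [statistics.fmean(rng.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

# The mean of the sample means should be near 1, and their standard
# deviation near 1 / sqrt(30) ~= 0.18, as the CLT predicts.
print(statistics.fmean(means))
print(statistics.stdev(means))
```

With n well below 30 the same experiment shows a visibly skewed distribution of means, which is why samples under that size are treated as "small."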

  • Definition 2.1 Small Samples (Huang, 2002)
