Below is a list of definitions for the term you selected, drawn from scholarly research resources.

What is Model Selection and Two Stage Implementation?

Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques
It refers to selecting an appropriate structure from a family of infinitely many candidate structures that share the same configuration but differ in scale, each labeled by a scale parameter k given as one integer or a set of integers. Selecting an appropriate k means obtaining a structure with an appropriate number of free parameters. Maximum likelihood (ML) learning alone is usually not good for model selection. Classically, model selection is made in a two stage implementation. First, enumerate a candidate set K of values of k and, for each k in K, estimate the unknown parameters by ML learning. Second, use a model selection criterion to choose the best k. Several criteria are available for this purpose, such as AIC, CAIC, BIC, and cross validation.
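As a concrete illustration of the two stage procedure (a minimal sketch, not the chapter's own algorithm), the snippet below selects the number of components k of a Gaussian mixture: ML estimation is run for each candidate k via the EM algorithm, and BIC is then used to pick the best k. It assumes NumPy and scikit-learn's GaussianMixture are available; the data and candidate set K are made up for the example.

import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative two stage model selection for a Gaussian mixture.
# Stage 1: ML (EM) estimation for every candidate scale k in K.
# Stage 2: pick the k that minimizes a selection criterion (BIC here).

rng = np.random.default_rng(0)
# Synthetic data drawn from three well-separated Gaussian clusters.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
    rng.normal(loc=6.0, scale=1.0, size=(200, 2)),
    rng.normal(loc=12.0, scale=1.0, size=(200, 2)),
])

K = range(1, 8)                      # candidate set K of scale parameters k
fitted = {}                          # ML solution for each k
scores = {}                          # criterion value for each k

for k in K:
    model = GaussianMixture(n_components=k, random_state=0).fit(X)  # Stage 1: ML via EM
    fitted[k] = model
    scores[k] = model.bic(X)         # Stage 2 criterion (AIC or cross validation also possible)

best_k = min(scores, key=scores.get) # best k = smallest BIC
print(f"selected k = {best_k}")      # expected to recover k = 3 for this data
best_model = fitted[best_k]

BIC is used here only because it is one of the criteria named above; swapping in AIC or a cross-validation score changes only the Stage 2 line.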
Published in Chapter:
Learning Algorithms for RBF Functions and Subspace Based Functions
Lei Xu (Chinese University of Hong Kong and Beijing University, PR China)
DOI: 10.4018/978-1-60566-766-9.ch003
Abstract
Among extensive studies on radial basis functions (RBF), one stream consists of studies on normalized RBF (NRBF) and its extensions. Within a probability-theoretic framework, NRBF networks relate to decades of nonparametric studies in the statistics literature, and the line of work then proceeds in machine learning with further advances not only to mixture-of-experts models and alternatives but also to subspace based functions (SBF) and temporal extensions. These studies are linked to theoretical results adopted from nonparametric statistics, and further to a general statistical learning framework called Bayesian Ying Yang harmony learning, with a unified perspective that summarizes maximum likelihood (ML) learning with the EM algorithm, RPCL learning, and BYY learning with automatic model selection, as well as their extensions for temporal modeling. This chapter outlines these advances, with a unified elaboration of their corresponding algorithms and a discussion of possible trends.