Constrained Nonlinear Optimization in Information Science


Copyright © 2018 | Pages: 13
DOI: 10.4018/978-1-5225-2255-3.ch399

Abstract

This chapter provides an overview of constrained nonlinear optimization methods, with background, theory, and examples. Coverage includes Lagrange multipliers for equality-constrained optimization, illustrated with a Cobb-Douglas example from information science, and the Karush-Kuhn-Tucker (KKT) conditions for inequality-constrained optimization, illustrated with a smartphone production example. An overview and discussion of numerical methods and techniques is also provided, along with a brief list of technologies available to assist in solving these constrained nonlinear optimization problems.
Chapter Preview

Background

The general constrained nonlinear programming (NLP) problem is to find $\mathbf{x}^*$ that optimizes the objective function in equation (1) subject to the constraints of the problem shown in equation (2).

  • Maximize or Minimize $f(\mathbf{x})$ (1)

subject to

$g_i(\mathbf{x}) \;\{\le,\, =,\, \ge\}\; b_i$, for $i = 1, 2, \ldots, m$. (2)
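
To make formulation (1)-(2) concrete, the following is a minimal sketch using SciPy's SLSQP solver; the particular objective, constraint, and starting point are illustrative assumptions, not taken from the chapter.

import numpy as np
from scipy.optimize import minimize

# Illustrative objective: minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# Illustrative constraint g(x) = x1^2 + x2^2 <= 2, rewritten in SciPy's
# convention as b - g(x) >= 0.
constraints = [{"type": "ineq",
                "fun": lambda x: 2.0 - (x[0] ** 2 + x[1] ** 2)}]

x0 = np.array([0.0, 0.0])  # starting values; see Initial Values below
result = minimize(f, x0, method="SLSQP", constraints=constraints)
print(result.x)    # approximate optimizer x*
print(result.fun)  # objective value at x*

Because the unconstrained minimizer (2, 1) violates the constraint, the constraint is satisfied at equality at the reported optimum; this connects to the Binding Constraint entry in the key terms below.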

Classical constrained optimization appeared with equality constraints and Lagrange multipliers, named for Joseph-Louis Lagrange, in the late 1700s. Almost two hundred years later, Kuhn and Tucker (1951) presented their famous Kuhn-Tucker (KT) conditions. Scholars later found that Karush (1939) had done considerable work in his thesis in the area of constrained optimization, and his name was added to create the Karush-Kuhn-Tucker (KKT) conditions. Bellman (1952; 1957) created dynamic programming in the 1950s to handle sequential constraints in optimization.

Key Terms in this Chapter

Kuhn-Tucker: A nonlinear optimization methodology that involves inequality constraints.

Numerical Methods: When a problem cannot be solved in closed form, an appropriate numerical technique may be used to approximate the solution (a minimal sketch follows this entry).
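
As one example of such a technique, here is a minimal sketch of Newton's method applied to the first-order condition f'(x) = 0; the function, starting value, and tolerance are illustrative assumptions.

import math

def newton_optimize(df, d2f, x, tol=1e-10, max_iter=50):
    """Iterate x <- x - f'(x)/f''(x) until the step is near zero."""
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: minimize f(x) = e^x + x^2, whose minimizer has no simple
# closed form; here f'(x) = e^x + 2x and f''(x) = e^x + 2.
x_star = newton_optimize(lambda x: math.exp(x) + 2 * x,
                         lambda x: math.exp(x) + 2,
                         x=0.0)
print(x_star)  # approx -0.3517, where f'(x*) = 0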

Decision Variables: The variables under the decision maker's control whose values affect the solution.

Initial Values: Starting values for numerical methods. Default values are usually all 0s or all 1s, but care must be taken to ensure convergence.

Feasible Region: The region bounded by the constraints; that is, the set of points that satisfy all of the constraints simultaneously.

Binding Constraint: A constraint that is satisfied at equality.

Optimization: The process of maximizing or minimizing an objective function.

Lagrange Multiplier: The Lagrange multiplier, λ, is used in sensitivity analysis in optimization problems. In general, the objective function changes by approximately the value of λ for each unit change in the resource (a minimal sketch follows this entry).
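
Here is a minimal sketch of this sensitivity interpretation, assuming the illustrative problem of minimizing x² + y² subject to x + y = b; SymPy does the algebra.

import sympy as sp

x, y, lam, b = sp.symbols("x y lam b", real=True)
f = x**2 + y**2   # illustrative objective
g = x + y - b     # illustrative equality constraint, g = 0

# Solve the stationarity conditions of the Lagrangian L = f - lam * g.
L = f - lam * g
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam], dict=True)[0]

f_star = f.subs(sol)       # optimal value as a function of the resource b
print(sol[lam])            # multiplier: lam = b
print(sp.diff(f_star, b))  # d f*/db = b, matching lam

The multiplier solves to λ = b, and differentiating the optimal value f* = b²/2 with respect to the resource b gives the same b, so a unit change in the resource changes the optimum by approximately λ.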

Lagrange: A nonlinear optimization methodology that involves equality constraints.

Constraints: A resource constraint is one that either limits the amount of a resource available or sets a minimum amount of a resource required.
