Selected Topics in Robust Optimization

Ihsan Yanikoglu
Copyright: © 2018 | Pages: 27
DOI: 10.4018/978-1-5225-5091-4.ch006

Abstract

In this chapter, the authors give a brief introduction to the important concepts of the RO paradigm. The remainder of the chapter is organized as follows: Section 2 gives an introduction to optimization under uncertainty and briefly compares the well-known sub-fields of optimization under uncertainty, namely RO, stochastic programming (SP), and fuzzy optimization (FO). Section 3 presents important methodologies of the RO paradigm. Section 4 gives insights into alternative ways of choosing the uncertainty set. Section 5 shows alternative methods of assessing the quality of a robust solution and presents miscellaneous topics. Finally, Section 6 summarizes conclusions and gives future research directions.
Chapter Preview

Introduction

Robust optimization (RO) is an active research field that has been developed mainly over the course of the last twenty years. The goal of robust optimization is to find solutions that are ‘immune’ to the uncertainty of the parameters in a given mathematical optimization problem. RO is well known because it yields computationally tractable solution methods for uncertain optimization problems. RO does not suffer from the curse of dimensionality, and its methods are well suited to real-life applications and can be tailored to the information at hand. Many publications demonstrate the value of RO in a wide range of application fields, including finance (Lobo 2000), energy (Bertsimas et al. 2013, Babonneau et al. 2010), supply chain (Ben-Tal et al. 2005, Lim 2013), inventory management (Bertsimas and Thiele 2006), healthcare (Fredriksson et al. 2011), engineering (Ben-Tal and Nemirovski 2002a), scheduling (Yan and Tang 2009), and marketing (Wang and Curry 2012). For a quick overview of the associated RO literature, we refer to the survey papers by Ben-Tal and Nemirovski (2002b), Bertsimas et al. (2011), Beyer and Sendhoff (2007), Gabrel et al. (2014), and Gorissen et al. (2015).

Optimization Under Uncertainty

Mathematical optimization problems often have uncertainty in their parameters because of measurement/rounding, estimation/forecasting, or implementation errors. Measurement/rounding errors arise when an actual measurement is rounded to the nearest value according to some rule, e.g., to the nearest tenth or hundredth, or when the true value of a parameter cannot be measured with high precision. For example, if the reported parameter value is 1.5 according to the nearest tenth, then the actual value can be anywhere between 1.45 and 1.55, i.e., it is uncertain. Estimation/forecasting errors stem from a lack of true knowledge about a problem parameter or from the impossibility of estimating the true characteristics of the data. For example, demand and cost parameters are often subject to such estimation/forecasting errors. Implementation errors are often caused by “ugly” real numbers that can hardly be implemented with the same precision in practice. For example, suppose the optimal voltage in a circuit, as calculated by an optimization tool, is 3.81231541; the decimal part of this optimal solution can hardly be implemented in practice, since the hardware cannot deliver that precision. Aside from the errors listed above, a parameter can also be inherently stochastic or random. For example, the hourly number of customers arriving at a bank may follow a Poisson distribution.
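To make the rounding example concrete, the sketch below (an illustration under assumed data, not code from the chapter) treats each rounded coefficient of a linear constraint a'x ≤ b as lying in a box around its reported value and checks a candidate solution against the worst case over that box; the function robust_feasible and the specific numbers are our own assumptions.

```python
# Minimal sketch (illustrative assumptions, not from the chapter): a coefficient
# reported as 1.5 after rounding to the nearest tenth actually lies in
# [1.45, 1.55], i.e., within +/- 0.05 of the reported value. For a constraint
# a'x <= b with each a_j in [a_bar_j - delta_j, a_bar_j + delta_j], the worst
# case over this box is  a_bar'x + delta'|x| <= b  (the box-robust counterpart).
import numpy as np

def robust_feasible(x, a_bar, delta, b):
    """Check x against the worst-case coefficient realization in the box."""
    worst_case_lhs = a_bar @ x + delta @ np.abs(x)
    return worst_case_lhs <= b

a_bar = np.array([1.5, 2.3])     # reported (rounded) coefficients
delta = np.array([0.05, 0.05])   # half-widths implied by rounding to the nearest tenth
b = 10.0
x = np.array([2.0, 3.0])

print(a_bar @ x)                            # ~9.9 -> nominally feasible (9.9 <= 10)
print(robust_feasible(x, a_bar, delta, b))  # False -> worst case is ~10.15 > 10
```

Even though the candidate solution satisfies the constraint with the reported coefficients, it fails once the rounding-induced uncertainty is taken into account.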

Optimization based on nominal parameter values often leads to severe infeasibilities: even a small uncertainty in the problem data can render the nominal solution completely useless. A case study in Ben-Tal and Nemirovski (2000) shows that perturbations as small as 0.01% in the problem coefficients result in constraint violations of more than 50% for 13 of the 90 NETLIB linear programming problems considered in the study. In 6 of these 13 problems the violations exceeded 100%, with 210,000% being the highest (i.e., seven orders of magnitude larger than the tested perturbation). Therefore, a practical optimization methodology that provides immunity against uncertainty is needed whenever uncertainty heavily affects the quality of the nominal solution.
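The following toy experiment (constructed data, not the NETLIB case study itself) shows how 0.01% coefficient perturbations can produce constraint violations above 50% when a constraint is active at the nominal solution and its large terms nearly cancel; all numbers here are hypothetical.

```python
# Toy illustration (assumed data, not the NETLIB study): perturb each coefficient
# of an active constraint a'x <= b by at most 0.01% and record the largest
# relative violation (a_tilde'x - b) / |b| of the nominal solution.
import numpy as np

rng = np.random.default_rng(0)

a_nominal = np.array([10000.0, -24000.0, 40.0])
x_nominal = np.array([3.0, 1.25, 0.25])   # a_nominal @ x_nominal = 10 = b (active)
b = 10.0

rho = 1e-4                                # 0.01% relative perturbation level
worst_violation = 0.0
for _ in range(10_000):
    zeta = rng.uniform(-1.0, 1.0, size=a_nominal.shape)   # per-coefficient shift
    a_tilde = a_nominal * (1.0 + rho * zeta)
    violation = max(0.0, a_tilde @ x_nominal - b) / abs(b)
    worst_violation = max(worst_violation, violation)

# Because the large terms of the constraint nearly cancel, the worst observed
# relative violation is typically well above 50% despite the 0.01% perturbations.
print(f"worst relative violation: {worst_violation:.1%}")
```

The cancellation of large terms is what amplifies tiny coefficient errors into large violations, which is the same mechanism behind the extreme NETLIB cases reported above.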
