Heuristic Approaches for Non-Convex Problems: Application to the Design of Structured Controllers and Spiral Inductors

Rosario Toscano, Ioan Alexandru Ivan
Copyright: © 2014 |Pages: 25
DOI: 10.4018/ijaie.2014010105

Abstract

This paper aims at solving difficult optimization problems arising in many engineering areas. To this end, two recently developed optimization methods are introduced: the heuristic Kalman algorithm (HKA) and quasi geometric programming (QGP). The principle of HKA is to consider the optimization problem as a measurement process intended to give an estimate of the optimum. A specific procedure, based on the Kalman estimator, is developed to improve the quality of the estimate obtained through the measurement process. A significant advantage of HKA over other stochastic methods lies mainly in the small number of parameters that have to be set by the user. This paper also introduces an extension of standard geometric programming (GP) problems, called quasi geometric programming (QGP) problems. The consideration of this particular kind of nonlinear and possibly non-smooth optimization problem is motivated by the fact that many engineering problems can be formulated as a QGP. To solve this kind of problem, an algorithm is proposed which is based on the resolution of a succession of standard GPs. An interesting feature of the proposed approach is that it does not require developing a specific solver and works well with any existing solver able to handle conventional GP. The last part of the paper shows that HKA and QGP can be efficiently used to solve difficult non-convex optimization problems. In particular, the problems of robust structured control and on-chip spiral inductor design are addressed. Numerical experiments exemplify the resolution of these problems.

Introduction

In all areas of engineering and of the physical and social sciences, one encounters problems involving the optimization of some objective function. Usually, the problem to solve can be formulated precisely but is often difficult or impossible to solve either analytically or through conventional numerical procedures. This is the case when the problem is non-convex and thus inherently nonlinear and multimodal. Indeed, it is now well established that the frontier between efficiently solvable optimization problems and the others rests on convexity (Rockafellar, 1993). This is confirmed by the fact that very efficient algorithms exist for solving convex problems (Boyd & Vandenberghe, 2004), whereas the problem of non-convex optimization remains largely open despite an enormous amount of effort devoted to its resolution.

In this context, several heuristic methods, also called metaheuristics, have been developed in the last two decades; they have demonstrated a strong ability to solve problems that were previously difficult or impossible to solve (Fogel, 2006; Kirkpatrick & Gelatt, 1983; Toscano, 2013). These metaheuristics include simulated annealing (SA), genetic algorithms (GA), and particle swarm (PS) optimization, to cite only those most used in the framework of continuous optimization problems.

Simulated annealing (SA) is a random-search method introduced by S. Kirkpatrick in 1983 and by V. Cerný in 1985 (Kirkpatrick & Gelatt, 1983; Cerny, 1985). The name comes from a technique used in metallurgy, called annealing, which consists in heating and then slowly cooling a metal to obtain a “well ordered” solid state of minimal energy (Dréo et al., 2006). An interesting property of SA is its ability to avoid getting stuck in local minima. This is obtained by using a random procedure which not only accepts changes that decrease the cost function J (assuming a minimization problem), but also some changes that increase it. The latter are accepted with probability exp(-ΔJ/T), where ΔJ is the increase in J and T is a control parameter which, by analogy with physical annealing, is known as the system temperature. The main advantage of SA is that it achieves a good-quality solution, i.e. the absolute error with respect to the global minimum is generally lower than that obtained via other metaheuristics. Moreover, it is versatile and easy to implement. The drawbacks of SA lie mainly in the choice of the various parameters involved in the algorithm: the results obtained are very sensitive to the parameter settings. Consequently, the selection of “good parameters” for a given cost function is a crucial issue which is not yet entirely solved. Another weakness of the method, linked to the problem of parameter setting, is its excessive computing time in most applications. More detailed developments on SA, both practical and theoretical, can be found in (Spall, 2003; Dréo et al., 2006).
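The acceptance rule described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the cost function, cooling rate, step size, and iteration budget are all hypothetical choices made for the example.

```python
import math
import random

def simulated_annealing(J, x0, T0=1.0, alpha=0.95, step=1.0,
                        n_iter=2000, seed=0):
    """Minimize the cost function J starting from x0.

    T0 is the initial temperature, alpha the geometric cooling
    rate, and step the perturbation size (all illustrative values).
    Returns the best candidate found and its cost.
    """
    rng = random.Random(seed)
    x, Jx = x0, J(x0)
    best_x, best_J = x, Jx
    T = T0
    for _ in range(n_iter):
        # Random perturbation of the current candidate
        x_new = x + rng.uniform(-step, step)
        J_new = J(x_new)
        dJ = J_new - Jx
        # Always accept improvements; accept degradations with
        # probability exp(-dJ/T) (the Metropolis criterion)
        if dJ < 0 or rng.random() < math.exp(-dJ / T):
            x, Jx = x_new, J_new
            if Jx < best_J:
                best_x, best_J = x, Jx
        T *= alpha  # geometric cooling schedule
    return best_x, best_J

# Illustrative multimodal cost with global minimum at x = 0
f = lambda x: x**2 + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_star, J_star = simulated_annealing(f, x0=4.0)
```

Note how the two tuning parameters highlighted in the text, the initial temperature `T0` and the cooling rate `alpha`, directly govern how often cost-increasing moves are accepted, which is exactly the sensitivity issue discussed above.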
