Penguins Huddling Optimisation

Mohammad Majid al-Rifaie
Copyright: © 2014 | Pages: 29
DOI: 10.4018/ijats.2014040101

Abstract

In everyday life, people deal with many optimisation problems, some of which are trivial and some more complex. These problems have frequently been addressed using multi-agent, population-based approaches. One of the main sources of inspiration for techniques applicable to complex search and optimisation problems is nature. This paper proposes a new metaheuristic, Penguin Huddling Optimisation (PHO), whose inspiration is drawn from the huddling behaviour of emperor penguins in Antarctica. The simplicity of the algorithm, which implements this paradigm for continuous optimisation, facilitates the analysis of its behaviour and the derivation of the optimal value for its single adjustable parameter in the update equation. A series of experimental trials confirms the promising performance of the optimiser over a set of benchmarks, as well as its competitiveness against a few other well-known population-based algorithms.

Introduction

Studies of the behaviour of social insects and animals have suggested several new metaheuristics for use in collective intelligence and multi-agent systems. This has given rise to a concomitant increasing interest in distributed computation through the interaction of simple agents in nature-inspired optimisation techniques.

In the swarm intelligence literature, search and optimisation are often used interchangeably. However, in computer science, search is defined in at least three broad (and overlapping) ways: data search, path search and solution search (Mitchell, 1996). In the first definition, search refers to finding a (target) model in a search space, and the goal of the algorithm is to find a match, or the closest match, to the target in the search space. This is defined as data search and is considered the classical meaning of search in computer science; one example in this category is binary search (more such methods are described in (Knuth, 1973)). In the second type, the goal is finding a path (path search), and the list of steps leading to a certain solution is what the search algorithm tries to produce. In this type of search, paths do not exist explicitly but are rather created during the course of the search (a simple example used in AI courses is the "8-puzzle" problem, where a set of tiles numbered 1-8 is placed in a square leaving one space empty; classical algorithms for solving problems of this nature are "depth-first search", "breadth-first search", "A*", etc. (Russell, Norvig, Canny, Malik, & Edwards, 1995)). In the third definition, solution search, the goal is to find a solution in a large problem space of candidate solutions. As in path search, where paths do not exist explicitly, the search space consists of candidate solutions which are not stored explicitly but are rather created and evaluated during the search process. However, in contrast to path search, the steps taken to find the solution are not the goal of the algorithm. Population-based swarm intelligence algorithms fit within this category; traditional examples are genetic algorithms, particle swarm optimisation, etc.
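To make the first category concrete, a minimal binary search (an illustrative sketch, not tied to Knuth's presentation) returns the index of the target in a sorted list, or -1 when no match exists:

```python
def binary_search(items, target):
    """Data search: locate `target` in the sorted list `items`."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # probe the midpoint of the remaining range
        if items[mid] == target:
            return mid             # exact match found
        elif items[mid] < target:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1                      # no match in the search space

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # index of the match: 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1: target absent
```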

In optimisation, which is similar to the third definition of search, the model of the first definition is replaced with an objective or fitness function which is used to evaluate possible solutions. In both search and optimisation, the positions of the optima are not known in advance (even though the optimum value itself might be known a priori). The task of the fitness function is to measure the proximity of the candidate solutions to the optima based on the criteria provided by each optimisation problem. The algorithm compares the output of the function to the outputs of previously located candidate solutions and, in the case of a minimisation problem, the smaller the output the better the solution. Data search can be seen as a special case of optimisation in which the objective function tests the equality of the candidate solution to the model.
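The comparison loop described above can be sketched as a pure random search on a minimisation problem; the sphere function and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import random

def sphere(x):
    """Illustrative fitness function: sum of squares, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(fitness, dim, bounds, evaluations, seed=0):
    """Generate candidates, evaluate them, and keep the smallest fitness seen."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(evaluations):
        candidate = [rng.uniform(*bounds) for _ in range(dim)]
        score = fitness(candidate)
        if score < best_score:      # minimisation: the smaller the output, the better
            best, best_score = candidate, score
    return best, best_score

best, score = random_search(sphere, dim=2, bounds=(-5.0, 5.0), evaluations=2000)
print(score)   # small value, approaching the known optimum of 0
```

Note that the position of the optimum is never consulted: only fitness comparisons between previously located candidates drive the search, exactly as described above.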

Global (continuous) optimisation, however, is concerned with locating the optimal, real-valued solution within the entire search space, and one of the main difficulties that global optimisers face is the existence of local optima within the problem space.
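A small sketch of this difficulty (my own illustration, not an algorithm from the paper): a greedy neighbourhood descent on a simple double-well function converges to whichever minimum lies in the basin of its starting point:

```python
def double_well(x):
    """Two basins: a local minimum near x = 0.96 and the global one near x = -1.04."""
    return (x * x - 1) ** 2 + 0.3 * x

def greedy_descent(f, x, step=0.01, max_steps=10000):
    """Move to the best of the two neighbours until neither improves on x."""
    for _ in range(max_steps):
        best = min((x - step, x, x + step), key=f)
        if best == x:       # no neighbour improves: a (possibly only local) minimum
            break
        x = best
    return x

print(round(greedy_descent(double_well, -0.5), 2))   # reaches the global minimum
print(round(greedy_descent(double_well, 0.5), 2))    # trapped in the local minimum
```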

According to Neumaier (2004), global optimisation techniques fall into four categories:

  • Incomplete: This technique uses clever intuitive heuristics for searching, but provides no safeguard if the search gets stuck in a local minimum;

  • Asymptotically complete: This technique reaches a global minimum with certainty, or at least with probability one, assuming it is allowed to run indefinitely long, but provides no means of knowing when a global minimum has been found;

  • Complete: This technique reaches a global minimum with certainty, assuming exact computations and an indefinitely long run time, and knows after a finite time that an approximate global minimum has been found (within prescribed tolerances);

  • Rigorous: This technique reaches a global minimum with certainty and within given tolerances even in the presence of rounding errors, except in near-degenerate cases where the tolerances may be exceeded.
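The contrast between the first two categories can be sketched in code (an illustration under my own assumptions, not an example from Neumaier, 2004): plain greedy descent on a multimodal function is incomplete, while wrapping it in uniform random restarts makes it asymptotically complete, since the global basin is sampled with probability one as the number of restarts grows, yet nothing signals when the global minimum has actually been found:

```python
import math
import random

def rastrigin(x):
    """1-D Rastrigin function: global minimum 0 at x = 0, with many local minima."""
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def greedy(f, x, step=0.01, max_steps=10000):
    """Incomplete: no safeguard once the search settles in a local minimum."""
    for _ in range(max_steps):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

def restart_search(f, bounds, restarts, seed=0):
    """Asymptotically complete: unbounded restarts hit the global basin with
    probability one, but there is no test confirming success."""
    rng = random.Random(seed)
    return min((greedy(f, rng.uniform(*bounds)) for _ in range(restarts)), key=f)

trapped = greedy(rastrigin, 4.3)                          # settles in a distant basin
found = restart_search(rastrigin, (-5.12, 5.12), restarts=50)
print(rastrigin(trapped))   # far above the global minimum of 0
print(rastrigin(found))     # close to 0 after enough restarts
```

Even when `found` lies essentially at the global minimum, the restart loop itself cannot certify this; completeness and rigour, the stronger two categories, are precisely about providing such guarantees.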
