Posterior Sampling using Particle Swarm Optimizers and Model Reduction Techniques

J. L. Fernández Martínez (Stanford University, University of California-Berkeley, USA and University of Oviedo, Spain), E. García Gonzalo (University of Oviedo, Spain), Z. Fernández Muñiz (University of Oviedo, Spain), G. Mariethoz (Stanford University, USA) and T. Mukerji (Stanford University, USA)
DOI: 10.4018/978-1-4666-1749-0.ch010

Abstract

Inverse problems are ill-posed, and posterior sampling provides an estimate of the uncertainty based on a finite subset of the family of models that fit the observed data within the same tolerance. Monte Carlo methods are commonly used for this purpose but are highly inefficient. Global optimization methods address the inverse problem as a sampling problem; in particular, Particle Swarm Optimization is a very interesting algorithm that is typically used in an exploitative form. Although PSO was not originally designed to perform importance sampling, the authors show practical applications in the domain of environmental geophysics where, used in its explorative form, it provides a proxy for the posterior distribution. Finally, this chapter presents a hydrogeological example showing how to perform a similar task for inverse problems in high-dimensional spaces through the combined use of PSO with model reduction techniques.
Chapter Preview

Particle Swarm Optimization (PSO) Applied to Inverse Problems

Particle swarm optimization is a stochastic evolutionary computation technique inspired by the social behavior of individuals (called particles) in nature, such as bird flocking and fish schooling (Kennedy & Eberhart, 1995).

Let us consider an inverse problem of the form F(m) = d, where m ∈ M ⊂ R^n are the model parameters, d ∈ R^s is the vector of discrete observed data, and

F(m) = (f_1(m), f_2(m), ..., f_s(m))

is the vector field representing the forward operator, f_j(m) being the scalar field that accounts for the j-th datum. Inverse problems are very important in science and technology and are sometimes referred to as parameter identification, reverse modeling, etc. The "classical" goal of inversion, given a particular data set (often affected by noise), is to find a unique set of parameters m such that the data prediction error ||d − F(m)||_p in a certain norm p is minimized.
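The prediction-error objective just described can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the linear kernel G, the function names forward and misfit, and the noise-free synthetic data are all assumptions made for the example.

```python
import numpy as np

# Toy forward operator F(m): a linear model d_pred = G @ m with an assumed
# kernel G. In a real application F would be a physics-based simulation.
def forward(m):
    G = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    return G @ m

def misfit(m, d_obs, p=2):
    # Data prediction error ||d_obs - F(m)||_p in the chosen norm p.
    return np.linalg.norm(d_obs - forward(m), ord=p)

m_true = np.array([1.0, -0.5])
d_obs = forward(m_true)          # noise-free synthetic data
print(misfit(m_true, d_obs))     # zero misfit at the true model
```

With noisy data the misfit of the true model is no longer zero, which is why the family of models fitting the data within a tolerance, rather than a single minimizer, is the object of interest.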

The PSO algorithm to approach this inverse problem is at first glance very easy to understand and implement:

  • 1.

    A prismatic space of admissible models, M, is defined:

M = {m ∈ R^n : l_j ≤ m_j ≤ u_j, j = 1, ..., n},

where l_j, u_j are the lower and upper limits for the j-th coordinate of each particle in the swarm, n is the number of parameters in the optimization problem, and N_size is the swarm size.

  • 2.

    The misfit for each particle of the swarm is calculated, and for each particle i its best position found so far (called l_i^k) is determined, as well as the minimum over all of them, called the global best (g^k).

  • 3.

    The algorithm updates at each iteration the positions and velocities of each model in the swarm. The velocity of each particle i at each iteration k is a function of three major components:

    • a.

      The inertia term, which consists of the old velocity of the particle, weighted by a real constant, ω, called inertia.

    • b.

      The social learning term, which is the difference between the global best position found so far (called g^k) and the particle's current position (x_i^k).

    • c.

      The cognitive learning term, which is the difference between the particle's best position (called l_i^k) and the particle's current position (x_i^k). The resulting update is

v_i^{k+1} = ω v_i^k + φ_1 (g^k − x_i^k) + φ_2 (l_i^k − x_i^k), x_i^{k+1} = x_i^k + v_i^{k+1},

where φ_1 = r_1 a_g and φ_2 = r_2 a_l, r_1 and r_2 being random numbers uniformly distributed in (0, 1), and a_g, a_l the social and cognitive accelerations.
