Optimization Introduced
An optimization problem exists when a decision maker seeks to maximize or minimize a function value (e.g., profit or cost) and controls at least some of the independent variables affecting that value.  If the formula mapping inputs to output is known and relatively straightforward, the decision maker may optimize the inputs mathematically.  But if the formula is unknown or mathematically complicated, an optimization algorithm is the better approach.

The number of inputs, or decision variables, to be optimized constitutes the problem dimensionality, n.  The higher the dimensionality, the more complicated the optimization problem becomes.  Each position a particle assumes as it flies through the n-dimensional search space is a decision vector, or candidate solution, to the optimization problem.  Ideally, the decision maker seeks a global optimizer: a decision vector whose degree of optimization cannot be outperformed by any feasible alternative.  It is often easier, however, to find a local optimizer: a relatively well-performing solution that cannot be outperformed by any other candidate solution in its vicinity of the n-dimensional search space.
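To make the distinction concrete, here is a hypothetical one-dimensional (n = 1) cost function with both a local and a global minimizer; the function and feasible interval are invented for illustration and do not come from the text:

```python
import numpy as np

# An illustrative 1-D cost function with two minimizers:
# a shallow local minimum near x = 1.13 and a deeper
# global minimum near x = -1.30 (values are approximate).
def f(x):
    return x**4 - 3 * x**2 + x

# Coarse grid search over an assumed feasible interval [-2, 2].
xs = np.linspace(-2.0, 2.0, 4001)
global_x = xs[np.argmin(f(xs))]
print(f"global minimizer is roughly {global_x:.2f}, f = {f(global_x):.2f}")
```

A hill-climbing method started near x = 1 would settle into the local minimizer; the decision maker ideally wants the global one on the left, which is what population-based optimizers such as PSO are designed to seek.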

Particle Swarm Introduced
Particle Swarm Optimization (PSO) is a member of the Swarm Intelligence family of population-based optimizers.  It was presented by James Kennedy and Russell Eberhart in 1995: one paper that year focused on the now popular global best model (Gbest PSO) [1] and another on the local or neighborhood best variant of the algorithm (Lbest PSO) [2].  Each particle follows simple position and velocity update equations; yet as particles interact, the collective behavior becomes elaborate enough for the swarm to solve complicated optimization problems.
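Those update equations can be sketched as follows for a single particle. Note that the inertia-weight form shown here (coefficient w) postdates the 1995 papers, and the function name and all parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# One velocity/position update for a single particle.
# w, c1, c2 are assumed, commonly cited settings; r1 and r2 are
# fresh uniform random numbers per dimension each iteration.
def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

x = np.array([1.0, -2.0])      # current position
v = np.zeros(2)                # current velocity
pbest = np.array([0.5, -1.0])  # this particle's best position so far
gbest = np.array([0.0, 0.0])   # best position found by the whole swarm
x, v = pso_step(x, v, pbest, gbest)
```

Each term pulls the particle in a simple direction (momentum, its own best position, the swarm's best position), yet iterating this rule across an interacting swarm yields the elaborate collective behavior described above.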

Initialization of the Swarm (i.e. of the Search Team)
Each dimension of each particle’s initial position vector is randomly selected from a uniform distribution of feasible values (i.e., all feasible values have an equal probability of selection).  Fig. 2 of the HESTEC poster uses n = 2 for visual illustration.  Each dimension of each velocity vector is similarly initialized from a uniform distribution on [-vmax, vmax], where vmax is a reasonable maximum speed.
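A minimal initialization sketch, assuming an illustrative feasible range, swarm size, and vmax (none of these values come from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2                     # problem dimensionality (assumed)
swarm_size = 30           # number of particles (assumed)
lo, hi = -5.0, 5.0        # feasible range per dimension (assumed)
vmax = 0.5 * (hi - lo)    # one common choice of maximum speed

# Every dimension of every position is drawn uniformly from the
# feasible range; velocities are drawn uniformly from [-vmax, vmax].
positions = rng.uniform(lo, hi, (swarm_size, n))
velocities = rng.uniform(-vmax, vmax, (swarm_size, n))
```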

Personal Bests
Iterations can be thought of as discrete samplings of continuous time, similar to turns in a turn-based strategy game.  Each particle evaluates the cost function at its initial position and at each ensuing position as it flies through the search space like a bird searching for food [1].  In terms of function value, the best position a particle has found through the current iteration is its personal best, toward which it accelerates (i.e., iteratively adjusts its velocity).
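The personal-best bookkeeping can be sketched as follows (for a minimization problem), using an invented sphere cost function and a random perturbation as a stand-in for the actual PSO movement step:

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    # Illustrative cost function: minimum of 0 at the origin.
    return np.sum(x**2, axis=-1)

swarm_size, n = 5, 2
x = rng.uniform(-5.0, 5.0, (swarm_size, n))

# At initialization, each particle's personal best is simply
# its starting position.
pbest = x.copy()
pbest_val = sphere(x)

# After the swarm moves, a personal best is replaced only when
# the new position achieves a better (lower) function value.
x = x + rng.uniform(-1.0, 1.0, (swarm_size, n))  # stand-in for a PSO step
val = sphere(x)
improved = val < pbest_val
pbest[improved] = x[improved]
pbest_val[improved] = val[improved]
```

Each particle's velocity update then accelerates it toward its own pbest (and, in Gbest PSO, toward the best pbest in the entire swarm).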


Particle Swarm Optimization (Page 1)

George Evers, MSE

Electrical Engineer

To contact me:
george [at] georgeevers.org
