# Particle Swarm Optimization and General Pattern Search

My post-doc supervisor is using a hybridization of particle swarm optimization with a general pattern search for his oil-well placement and parameterization optimization problem. PSO and GPS are described below:

## PSO
Particle Swarm Optimization is a gradient-free optimization method that uses a stochastic, meta-heuristic approach to find the optimum value. The algorithm begins by generating an initial array of candidate solutions, called the swarm. The individual particles of the swarm then move through the search space based on each particle's knowledge of where a better solution is located, as well as the swarm's knowledge, as a whole, of where the optimum value is located. Previous successful solutions therefore act as attractors towards which the swarm migrates. The position of each particle (and of the swarm alike) is defined in terms of its previous position, $x$, and its velocity, $v$, which describes a particle's tendency to move in a certain direction. We therefore define the position and velocity updates as:

$$x_{n+1} = x_n + v_n$$

$$v_{n+1} = v_n + c_1 r_1 (x^{*}_p - x_n) + c_2 r_2 (x^{*}_s - x_n)$$

where $c_{1,2}$ are weighting parameters specified by the user, $r_{1,2}$ are random numbers on the interval $(0,1)$ which enforce exploration of the search space, $x^{*}_p$ denotes the best previous position of that particle, and $x^{*}_s$ denotes the best previous position of the swarm. Qualitatively, the velocity component of PSO is composed of three terms:
1. Inertia: the tendency of the particle to continue moving in its current direction.
2. Cognitive: the tendency of the particle to move towards its own best found solution.
3. Social: the tendency of a particle to move towards the swarm's best found solution.
In general, PSO handles bound-constrained optimization problems, as well as problems with linear and nonlinear general constraints.
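The update equations above can be sketched in a few lines of Python. This is a minimal illustrative implementation of my own (not Humphries' code): the function name, default parameter values, and the choice to clip positions back onto the bound constraints are all assumptions, and no inertia weighting or velocity clamping is applied beyond the plain update rule given above.

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=100, c1=1.5, c2=1.5):
    """Minimize f over box bounds with a basic particle swarm (sketch)."""
    dim = len(bounds)
    # Initialize particle positions uniformly within the bounds; velocities at zero.
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]          # x*_p: each particle's best position
    p_val = [f(xi) for xi in x]
    g_idx = min(range(n_particles), key=lambda i: p_val[i])
    g_best, g_val = p_best[g_idx][:], p_val[g_idx]  # x*_s: the swarm's best

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # v_{n+1} = v_n + c1 r1 (x*_p - x_n) + c2 r2 (x*_s - x_n)
                v[i][d] += (c1 * r1 * (p_best[i][d] - x[i][d])
                            + c2 * r2 * (g_best[d] - x[i][d]))
                # x_{n+1} = x_n + v_n, clipped to the bound constraints (assumption).
                lo, hi = bounds[d]
                x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))
            val = f(x[i])
            if val < p_val[i]:
                p_best[i], p_val[i] = x[i][:], val
                if val < g_val:
                    g_best, g_val = x[i][:], val
    return g_best, g_val
```

Since the personal and swarm bests only ever improve, the returned value is monotone over iterations even when individual particles overshoot.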

## General Pattern Search
Though we will not be implementing a General Pattern Search (GPS) algorithm, the method is worth mentioning as Humphries has incorporated GPS along with PSO as a hybridized optimization technique. GPS begins at a user-provided initial point $x_0$. Unless otherwise specified, the first iteration polls with a mesh size of 1 in all $n$ directions, where $n$ is the dimension of the objective function. The objective function is computed at each of the mesh points until a value less than the current objective value is found. Once it has done so, the algorithm moves to that point and begins polling again. After a successful poll, the mesh size is multiplied by a factor of 2. If a poll is unsuccessful, the algorithm remains at its current position and polls once again, this time with a mesh size half the former one. GPS can handle both bound-constrained and generally constrained optimization problems. Humphries' hybridization technique involves running PSO until we encounter an iteration that does not improve our solution. From there we perform one step of GPS to poll for a better solution. If no better solution is found, we reduce the stencil size and poll again.
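The poll-expand-contract loop described above can be sketched as follows. This is an illustrative coordinate-direction GPS of my own construction, not Humphries' implementation: the function names and the opportunistic "accept the first improvement" poll are assumptions, and the stopping tolerance on the mesh size is an arbitrary choice.

```python
def gps_poll(f, x, f_x, mesh):
    """One poll step: evaluate f at +/- mesh along each coordinate direction.

    Returns (new_x, new_f, success); accepts the first improving point found
    (an opportunistic poll, which is an assumption of this sketch)."""
    for d in range(len(x)):
        for step in (mesh, -mesh):
            trial = x[:]
            trial[d] += step
            f_trial = f(trial)
            if f_trial < f_x:
                return trial, f_trial, True
    return x, f_x, False

def gps_minimize(f, x0, mesh=1.0, tol=1e-6):
    """Run GPS from x0 until the mesh shrinks below tol."""
    x, f_x = x0[:], f(x0)
    while mesh > tol:
        x, f_x, ok = gps_poll(f, x, f_x, mesh)
        # Expand the mesh by 2 after a successful poll, halve it otherwise.
        mesh = mesh * 2.0 if ok else mesh * 0.5
    return x, f_x
```

In the hybrid scheme described above, only the one-step `gps_poll` would be invoked when a PSO iteration stalls, with the mesh (stencil) size contracting on each failed poll.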