I have been inspired by Varadi’s latest series of posts at http://cssanalytics.wordpress.com on heuristic optimization techniques (particularly Genetic Algorithms, GAs, and Particle Swarm Optimization, PSO) to perform a quick analysis comparing these techniques. I highly recommend you read the http://cssanalytics.wordpress.com blog posts from Sept. 3-6, 2013, for a detailed explanation of GAs and PSO before viewing my analysis.

**A Quick Analysis of Heuristic Optimization by Stochastic Genetic Algorithms and Particle Swarm**

We investigate the optimization performance of both Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In particular, we are interested in comparing the efficiency and accuracy of these techniques in optimizing a variety of well-known test functions. The test functions used are outlined below.

We consider three classes of GAs:

- Matlab’s built-in traditional GA (MatlabGA).
- A traditional GA as outlined by Varadi at *http://cssanalytics.wordpress.com/* (GA).
- A simplistic micro-GA (mGA) as outlined by Varadi at *http://cssanalytics.wordpress.com/*.

**A Brief Review of Traditional GAs**

Recall that, like most evolutionary algorithms, a GA is a stochastic sampling method that repeatedly refines a population, or set, of candidate solutions. The method is based on the Darwinian notion of “Survival of the Fittest,” in which only the elite solutions (those that return the most optimal objective function values) are kept, while sub-optimal solutions are discarded.

A typical GA begins by randomly generating an initial population of candidate solutions, called the parents. The population of each generation is then enlarged through production of a combination of *cross-over children* and/or *mutation children*. Cross-over children are derived by mixing and matching individual components of the parents, in what Varadi describes as mating. Formally, at each component of the child vector, the default crossover function randomly selects a value from one of the two parents and assigns it to the corresponding component of the cross-over child. Mutation children are created by randomly perturbing the parent solutions, often via a Gaussian distribution.

Next, we evaluate the objective function at each potential solution in the current population. Only the fraction of individuals in the current population that provide the most optimal objective function values continues onward to form the parent population of the next generation.
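
The loop described above can be sketched in Python. This is a minimal illustration, not the code used in the experiments: the population parameters, crossover/mutation split, and the sphere test function are all my own assumptions.

```python
import random

def sphere(x):
    # Illustrative convex test function: global minimum of 0 at the origin.
    return sum(xi * xi for xi in x)

def ga_minimize(f, dim, pop_size=100, max_gens=200,
                crossover_frac=0.7, mutation_sigma=0.1, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Randomly generate the initial parent population.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(max_gens):
        children = []
        for _ in range(pop_size):
            if random.random() < crossover_frac:
                # Cross-over child: each component drawn at random
                # from one of two parents ("mating").
                p1, p2 = random.sample(pop, 2)
                child = [random.choice(pair) for pair in zip(p1, p2)]
            else:
                # Mutation child: Gaussian perturbation of a parent.
                p = random.choice(pop)
                child = [xi + random.gauss(0.0, mutation_sigma) for xi in p]
            children.append(child)
        # Survival of the fittest: the elite half of parents + children
        # forms the next parent population.
        pop = sorted(pop + children, key=f)[:pop_size]
    return min(pop, key=f)

best = ga_minimize(sphere, dim=2)
```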

The mGA, on the other hand, is a simplified version of the traditional GA and involves a tournament phase for determining which parent solutions are worthy to mate. Those parents deemed worthy then produce cross-over children, as described above and in *http://cssanalytics.wordpress.com/*.
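
A tournament phase of this kind might be sketched as follows. The pairing scheme and function names below are my own illustrative assumptions, not Varadi’s exact implementation.

```python
import random

def tournament_select(population, fitness, n_winners):
    """Pit random pairs against each other; the fitter of each
    pair (lower objective value, for minimization) is worthy to mate."""
    winners = []
    while len(winners) < n_winners:
        a, b = random.sample(population, 2)
        winners.append(a if fitness(a) < fitness(b) else b)
    return winners

# Toy usage: 1-D candidates, squared-value objective.
pop = [[random.uniform(-5, 5)] for _ in range(10)]
parents = tournament_select(pop, lambda x: x[0] ** 2, n_winners=4)
```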

For consistency among the varying GAs, we have kept each population size and maximum number of generations fixed at 100 and 100d, respectively, where d is the number of input dimensions of the test function. Convergence is established either by reaching the maximum number of generations or when the average relative difference between a generation’s solution and the best solution found over the most recent 50 generations is less than a specified tolerance.
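
One way to implement such a stopping rule is sketched below; the tolerance value and the small guard added to the denominator are assumptions, not the exact settings used here.

```python
def has_converged(best_history, tol=1e-6, window=50):
    """Average relative change of the best objective value over the
    most recent `window` generations falls below `tol`."""
    if len(best_history) < window + 1:
        return False
    recent = best_history[-(window + 1):]
    # Relative difference between consecutive best values
    # (small guard avoids division by zero).
    rel = [abs(recent[i + 1] - recent[i]) / (abs(recent[i]) + 1e-12)
           for i in range(window)]
    return sum(rel) / window < tol
```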

**A Brief Review of PSO**

PSO is a gradient-free, stochastic optimization algorithm. Like GAs and mGAs, PSO begins by first generating an initial set of candidate solutions, or particles, collectively called the swarm. At each iteration, individual particles of the swarm navigate the search space based on each particle’s personal best position and the overall best position found by the swarm. Therefore, previous locations that minimize the objective function, f, act as attractors toward which the swarm will migrate. The position of each particle is defined in terms of its previous position, x, and the velocity, v, of the particle (or swarm). The velocity (or momentum factor) describes a particle’s tendency to move in a certain direction. The position and velocity are defined as:

x(t+1) = x(t) + v(t+1)

and

v(t+1) = v(t) + c1·r1·(p − x(t)) + c2·r2·(g − x(t)),

where r1 and r2 are random numbers on the interval [0, 1], c1 and c2 are acceleration constants, p denotes the best previous position of that particle, and g denotes the best previous position of the swarm.
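
These update rules can be sketched in a few lines of Python. The inertia weight w and the acceleration constants c1 and c2 below are typical textbook values, not necessarily those used in the experiments.

```python
import random

def pso_minimize(f, dim, swarm_size=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]          # each particle's personal best position
    gbest = min(pbest, key=f)[:]         # best position found by the swarm
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity: inertia plus pulls toward personal and swarm bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=2)
```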

For optimization of each test function, I have chosen to use a swarm size of 20. For a fair comparison with the GAs, the stopping criterion for PSO is that the average relative difference of the best solution found over the most recent 50 iterations is less than a specified tolerance.

Lastly, note that since both the GA and PSO are stochastic optimization routines, the results presented below have each been averaged over 100 simulations.

**Numerical Results**

We analyze three quantities:

- Accuracy: Average Percent Error = |f_found − f*| / |f*| × 100, where f* is the true, known, global optimum value.
- Efficiency: Average number of Function Evaluations (FEs) required for convergence.
- General Performance Assessment: GPA, a combined score of accuracy and efficiency (FEs) in which accuracy carries twice the weight of efficiency; smaller GPA values indicate better performance.
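
The accuracy metric can be computed as sketched below, assuming the standard relative-error formula and a nonzero true optimum f*; the exact GPA weighting is not reproduced here.

```python
def average_percent_error(found_values, true_optimum):
    """Mean of |f_found - f*| / |f*| * 100 over a set of simulation runs
    (assumes the true optimum f* is nonzero)."""
    return sum(abs(v - true_optimum) / abs(true_optimum) * 100
               for v in found_values) / len(found_values)

# Three runs that found 0.98, 1.02, and 1.01 against a true optimum of 1.0:
average_percent_error([0.98, 1.02, 1.01], true_optimum=1.0)  # ≈ 1.67%
```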

Simulation results for each test function can be found in Table 1. From Table 1 we can draw the following conclusions:

- In general, MatlabGA outperforms both my own traditional GA and mGA, with the exception of optimizing the 2-D Hump function and the 6-D Hartmann function. This is not surprising given the level of sophistication of Matlab’s built-in optimization algorithms compared to the 1-day coding blitz behind my GAs.
- The mGA is indeed the most efficient of the GAs, as measured by the number of FEs required to establish convergence. Of course, improved efficiency comes at the cost of decreased accuracy – ultimately, the user’s optimization goals will play a huge role in algorithm selection.
- As expected, PSO outperforms all GAs for objective functions that are strictly convex, with minimal noise and a single global optimum.
- As Varadi suggests, PSO (or a hybrid PSO method) may be the preferred technique for several portfolio optimization problems.
- Objective surfaces such as the Schwefel Function and the Rastrigin Function are noisy and multi-modal, containing several local optima. For global optimization of both these functions, MatlabGA proves to outperform PSO.
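
For reference, the d-dimensional Rastrigin function mentioned above has the standard form f(x) = 10d + Σᵢ (xᵢ² − 10 cos(2π xᵢ)), with global minimum 0 at the origin and a dense grid of local optima:

```python
import math

def rastrigin(x):
    # Highly multi-modal surface: global minimum of 0 at the origin,
    # surrounded by many regularly spaced local optima.
    d = len(x)
    return 10 * d + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

rastrigin([0.0, 0.0])  # → 0.0 (the global minimum)
```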

For full test function details see:

*http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm*
