DIviding RECTangles = DIRECT: A Partitioning Algorithm

DIRECT, which is shorthand for DIviding RECTangles, is a derivative-free, sampling-based optimization algorithm. More specifically, DIRECT is a partitioning algorithm that samples points in a given domain and then refines the search domain at each iteration based on the function values at the sampled points. Formally, DIRECT is a global optimization algorithm; that is, given enough function evaluations and a sufficiently tight stopping criterion, DIRECT will always find the global optimum of a function within a given domain. Of course, this exhaustive search can be computationally expensive and thus impractical for the optimization of \mathcal{L}_\beta. Therefore, we will use DIRECT to find a sub-region within the feasible region that contains the global optimum, while simultaneously reducing the chance of searching in regions that contain only local optima.

The first step of DIRECT is to transform the user-supplied domain of the problem into a d-dimensional unit hypercube. Therefore, the feasible region, as described by Finkel, is
\Omega = \{x \in \mathbb{R}^d : 0 \leq x_i \leq 1\}.
The objective function is first evaluated at the center of this hypercube, c_1. DIRECT then divides the hypercube into smaller hyper-rectangles by evaluating the objective function at a distance \delta, equal to one-third of the hypercube's side length, from the center along each unit vector e_i,
c_1 \pm \delta e_i, \ i=1,...,d.
The first division is performed so that the region containing the best (lowest) function value is given the largest share of the space.
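
As a concrete illustration of this set-up, the sketch below (a Python illustration assumed here, not taken from any reference implementation) maps a user-supplied box onto the unit hypercube and generates the center c_1 together with the 2d initial sample points c_1 \pm \delta e_i, with \delta = 1/3. The helper names, the box bounds, and the placeholder objective are all illustrative.

import numpy as np

def to_unit_cube(x, lower, upper):
    """Map a point x from the original box [lower, upper] onto [0, 1]^d."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return (np.asarray(x, float) - lower) / (upper - lower)

def from_unit_cube(u, lower, upper):
    """Map a point u in [0, 1]^d back to the original box before evaluating f."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return lower + np.asarray(u, float) * (upper - lower)

def initial_samples(d, delta=1.0 / 3.0):
    """Center of the unit hypercube plus the 2d points c_1 +/- delta * e_i."""
    c1 = np.full(d, 0.5)
    points = [c1]
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0
        points.append(c1 + delta * e_i)
        points.append(c1 - delta * e_i)
    return np.array(points)

# Illustrative usage with a made-up 2-d box and placeholder objective.
lower, upper = np.array([-5.0, 0.0]), np.array([5.0, 10.0])
f = lambda x: float(np.sum(x ** 2))
for u in initial_samples(d=2):
    print(u, f(from_unit_cube(u, lower, upper)))

Evaluating the objective in the original coordinates, as from_unit_cube does above, keeps the normalization purely internal to the algorithm.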

Once the first division is complete, the next step is to identify potentially optimal hyper-rectangles. Finkel provides the following definition for selecting potentially optimal hyper-rectangles.

DEFINITION: Let \epsilon > 0 be a positive constant and let f_{min} be the current best function value. A hyper-rectangle j is said to be potentially optimal if there exists \hat{K} > 0 such that
f(c_j) - \hat{K}d_j \leq f(c_i) - \hat{K}d_i, \ \forall \ i, \ \text{and}
f(c_j) - \hat{K}d_j \leq f_{min} - \epsilon |f_{min}|.

Here, c_j is the center of hyper-rectangle j and d_j is the distance from the center to its vertices. In general, a value of \epsilon = 10^{-4} is used to ensure that a potentially optimal hyper-rectangle promises a non-trivial improvement over f_{min}.
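
One way to implement this definition, sketched below in Python (an assumed illustration, not Finkel's code), is to treat it as a feasibility test on \hat{K}: among hyper-rectangles of equal size, f(c_j) must be the smallest; smaller hyper-rectangles bound \hat{K} from below; larger ones bound it from above; and the second inequality adds a further lower bound. Hyper-rectangle j is potentially optimal when the resulting interval for \hat{K} is non-empty. The function name and the example values are illustrative.

import numpy as np

def potentially_optimal(f_values, sizes, eps=1e-4):
    """Indices j for which some K_hat > 0 satisfies both inequalities.
    f_values[i] = f(c_i) and sizes[i] = d_i for every current hyper-rectangle."""
    f = np.asarray(f_values, float)
    d = np.asarray(sizes, float)
    f_min = f.min()
    selected = []
    for j in range(len(f)):
        # Among hyper-rectangles of equal size, j must have the lowest value.
        if f[j] > f[d == d[j]].min():
            continue
        # First inequality: smaller rectangles bound K_hat from below,
        # larger rectangles bound it from above.
        smaller, larger = d < d[j], d > d[j]
        k_low = 0.0
        if smaller.any():
            k_low = max(k_low, np.max((f[j] - f[smaller]) / (d[j] - d[smaller])))
        k_high = np.inf
        if larger.any():
            k_high = np.min((f[larger] - f[j]) / (d[larger] - d[j]))
        # Second inequality adds a further lower bound on K_hat.
        k_low = max(k_low, (f[j] - f_min + eps * abs(f_min)) / d[j])
        # Potentially optimal if the feasible interval for K_hat is non-empty.
        if k_low <= k_high:
            selected.append(j)
    return selected

# Illustrative call with made-up sizes and center values for three rectangles.
print(potentially_optimal([1.0, 0.4, 0.7], [np.sqrt(2) / 3, 1 / 3, 1 / 3]))

In practice this test is often carried out geometrically, by taking the lower-right convex hull of the points (d_i, f(c_i)), but the brute-force check above is easier to relate directly to the definition.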

From the definition, we can make the following observations:
1. If a hyper-rectangle i is potentially optimal, then f(c_i) \leq f(c_j) for all hyper-rectangles of equal size, d_i = d_j.
2. If d_i \geq d_k for all other hyper-rectangles k, and f(c_i) \leq f(c_j) for all hyper-rectangles of equal size, d_i = d_j, then i is potentially optimal.
3. If d_i \leq d_k for all other hyper-rectangles k, and f(c_i) = f_{min}, then i is potentially optimal.

Once a hyper-rectangle is selected as potentially optimal, DIRECT divides it into smaller hyper-rectangles. First, the objective function is evaluated at the points c \pm \delta e_i, where \delta is one-third of the longest side length, for every longest dimension i. Of course, if the hyper-rectangle is a hypercube, then the objective function is evaluated in all coordinate directions e_i, which is identical to the initial sampling step. We then trisect the hyper-rectangle along each of these dimensions, in an order determined by the objective function values returned during the sampling phase, so that the best values end up in the largest of the new hyper-rectangles.
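
A sketch of this division step is given below (again an assumed Python illustration, not a reference implementation): it samples c \pm \delta e_i along every longest dimension, then trisects the hyper-rectangle dimension by dimension, starting with the dimension whose best sampled value is lowest, so that the most promising points become the centers of the largest new hyper-rectangles. The function signature and the example values are illustrative.

import numpy as np

def divide(center, side_lengths, f, f_center):
    """Trisect a potentially optimal hyper-rectangle along its longest sides."""
    center = np.asarray(center, float)
    sides = np.asarray(side_lengths, float)
    longest = np.flatnonzero(np.isclose(sides, sides.max()))
    delta = sides.max() / 3.0
    dim = len(center)

    # Sample c +/- delta * e_i along every longest dimension i.
    samples = {}
    for i in longest:
        e_i = np.zeros(dim)
        e_i[i] = 1.0
        samples[i] = (f(center + delta * e_i), f(center - delta * e_i))

    # Split first along the dimension whose best sampled value is lowest, so that
    # the most promising new points sit at the centers of the largest new boxes.
    order = sorted(longest, key=lambda i: min(samples[i]))
    central = (center, sides.copy(), f_center)
    new_rects = []
    for i in order:
        c, s, fc = central
        s = s.copy()
        s[i] /= 3.0                      # the two outer thirds along dimension i
        e_i = np.zeros(dim)
        e_i[i] = 1.0
        new_rects.append((c + delta * e_i, s.copy(), samples[i][0]))
        new_rects.append((c - delta * e_i, s.copy(), samples[i][1]))
        central = (c, s, fc)             # the shrinking central hyper-rectangle
    return [central] + new_rects

# Illustrative usage: divide the unit square about its center.
f = lambda x: float(np.sum((np.asarray(x) - 0.2) ** 2))
for c, s, fc in divide([0.5, 0.5], [1.0, 1.0], f, f([0.5, 0.5])):
    print(c, s, fc)

Note that only the longest sides are divided, which prevents the hyper-rectangles from becoming overly elongated as the iterations proceed.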

The three figures below provide a 2-dimensional visualization of how DIRECT samples and divides its search space. The bold regions are the potentially optimal rectangles, which are to be divided. The first iteration, (a), shows that the right-most rectangle is potentially optimal, and it is therefore sampled and divided along its vertical dimension. The second iteration, (b), shows that both the left-most rectangle and the central rectangle on the far right are potentially optimal. Here we apply the second condition for selecting potentially optimal rectangles: the longest side of the left-most rectangle is greater than the longest side of all other rectangles, and the function value at its center is by default the smallest among all rectangles of equal size, since it is the only rectangle of that size. Again, we sample and divide the rectangles along their longest dimensions. This process of selecting potentially optimal rectangles, sampling, and dividing is repeated in the third iteration, (c), and continues for subsequent iterations until the stopping criterion is met.

(a) After the initial sampling phase, the rectangle on the far right is potentially optimal and is therefore divided into thirds along its longest side.


(b) The central rectangle on the far right is potentially optimal; since it is a cube, its division is identical to that of the initial sampling phase. The rectangle on the far left is also potentially optimal, as it has a side that is longer than the sides of all other rectangles.


(c) The cube in the center of the far-right rectangle is potentially optimal and is divided. In this case, the central cube on the bottom row, which has a side length greater than or equal to that of all other rectangles, is also potentially optimal and is therefore divided.


