The negative log-likelihood depends on several components, namely the mean and variance estimators $\hat{\mu}(\beta)$ and $\hat{\sigma}^2(\beta)$, as well as the inverse and determinant of the correlation matrix $R$, all of which depend on the hyper-parameter $\beta$. Recall that $\beta$ is a vector of parameters, with individual components $\beta_i$, for all $i \in \{1,\dots,d\}$. Additionally, the nugget parameter depends on the condition number $\kappa(R)$, which in turn depends on $\beta$. For this reason, it is difficult, if not impossible, to extract analytic gradient information from the negative log-likelihood. It follows that optimization methods that rely on the user providing an accurate expression for the gradient are of no benefit. We can, however, approximate the gradient numerically through finite differencing, as is done in the BFGS and Implicit Filtering (IF) algorithms. Methods that do not rely on an exact gradient of the objective function are known as derivative-free optimization algorithms.
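As a minimal sketch of the finite-differencing idea, the snippet below approximates the gradient of an objective via central differences. The quadratic `neg_log_lik` is a hypothetical stand-in; the real objective would involve $R(\beta)$, its inverse and determinant, and $\hat{\sigma}^2(\beta)$.

```python
import numpy as np

def fd_gradient(f, beta, h=1e-6):
    """Central finite-difference approximation to the gradient of f at beta."""
    beta = np.asarray(beta, dtype=float)
    grad = np.zeros_like(beta)
    for i in range(beta.size):
        step = np.zeros_like(beta)
        step[i] = h
        # (f(beta + h*e_i) - f(beta - h*e_i)) / (2h)
        grad[i] = (f(beta + step) - f(beta - step)) / (2.0 * h)
    return grad

# Hypothetical stand-in for the negative log-likelihood (minimum at beta = 1).
neg_log_lik = lambda b: float(np.sum((b - 1.0) ** 2))

g = fd_gradient(neg_log_lik, np.array([0.0, 2.0]))
```

This is exactly the kind of gradient surrogate that BFGS falls back on when no analytic gradient is supplied, at the cost of extra objective evaluations per step.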

Even with derivative-free techniques, the optimization remains challenging. The objective function is often very rough around the global optimum and can contain numerous local optima and flat regions. It is not uncommon for the likelihood value at these sub-optimal solutions to be close to that of the global optimum; the corresponding parameterizations, however, can differ significantly, resulting in a poor-quality model fit. To ensure that the GP model is reliable, convergence to the global optimum with high precision is crucial, and thus a highly accurate and robust global optimization technique is required.
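One common way to guard against the local optima described above is a multistart strategy: run a local optimizer from many starting points and keep the best result. The sketch below does this with SciPy's L-BFGS-B (which uses finite-difference gradients when none are given) on a hypothetical multimodal stand-in objective; the grid of starts and the toy function are assumptions, not the author's method.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical multimodal stand-in for a rough negative log-likelihood:
# many local minima, unique global minimum at beta = 0 with value 0.
def objective(beta):
    beta = np.atleast_1d(beta)
    return float(np.sum(beta**2 + 0.5 * (1.0 - np.cos(3.0 * np.pi * beta))))

def multistart_minimize(f, lo, hi, n_starts=13):
    """Run a local optimizer from a coarse grid of starting points
    and keep the best local solution found."""
    best = None
    for x0 in np.linspace(lo, hi, n_starts):
        res = minimize(f, np.array([x0]), method="L-BFGS-B",
                       bounds=[(lo, hi)])
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart_minimize(objective, -2.0, 2.0)
```

A single local run started in the wrong basin would return one of the near-optimal but poorly parameterized local minima; the multistart wrapper trades extra function evaluations for robustness against exactly that failure mode.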

For a list of optimization algorithms, see Categories → Optimization Algorithms.
