StoGO is a global optimization algorithm that works by systematically dividing the search space---which must be bound-constrained---into smaller hyper-rectangles via a branch-and-bound technique, and searching them using a gradient-based local-search algorithm (a BFGS variant), optionally including some randomness.
Usage
stogo(
x0,
fn,
gr = NULL,
lower = NULL,
upper = NULL,
maxeval = 10000,
xtol_rel = 1e-06,
randomized = FALSE,
nl.info = FALSE,
...
)
Arguments
- x0
initial point for searching the optimum.
- fn
objective function that is to be minimized.
- gr
optional gradient of the objective function.
- lower, upper
lower and upper bound constraints.
- maxeval
maximum number of function evaluations.
- xtol_rel
stopping criterion; the optimization stops when the relative change in the parameters falls below this tolerance.
- randomized
logical; shall the randomized variant of StoGO be used?
- nl.info
logical; shall the original NLopt info be shown?
- ...
additional arguments passed to the objective function; see the sketch after this list.
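## A minimal sketch of forwarding extra arguments: the shift parameter 'a'
## below is hypothetical, not part of stogo(). Any named argument that stogo
## does not recognize, such as 'a = 1' here, is passed on to the objective.
rbf_shift <- function(x, a) {(a - x[1]) ^ 2 + 100 * (x[2] - x[1] ^ 2) ^ 2}
stogo(x0 = c(-1.2, 1), fn = rbf_shift, lower = c(-3, -3), upper = c(3, 3),
      a = 1)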
Value
List with components:
- par
the optimal solution found so far.
- value
the function value corresponding to par.
- iter
number of (outer) iterations, see maxeval.
- convergence
integer code indicating successful completion (> 0) or a possible error number (< 0).
- message
character string produced by NLopt and giving additional information.
References
S. Zertchaninov and K. Madsen, "A C++ Programme for Global Optimization," IMM-REP-1998-04, Department of Mathematical Modelling, Technical University of Denmark.
Examples
## Rosenbrock Banana objective function
rbf <- function(x) {(1 - x[1]) ^ 2 + 100 * (x[2] - x[1] ^ 2) ^ 2}
x0 <- c(-1.2, 1)
lb <- c(-3, -3)
ub <- c(3, 3)
## The function as written above has a minimum of 0 at (1, 1)
stogo(x0 = x0, fn = rbf, lower = lb, upper = ub)
#> $par
#> [1] 0.9999934 0.9999865
#>
#> $value
#> [1] 5.618383e-11
#>
#> $iter
#> [1] 10000
#>
#> $convergence
#> [1] 1
#>
#> $message
#> [1] "NLOPT_SUCCESS: Generic success return value."
#>
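## StoGO's local search is gradient-based, so an analytic gradient can be
## supplied through 'gr'. A minimal sketch: the function below is the
## standard analytic Rosenbrock gradient, and 'randomized = TRUE' selects
## the randomized variant. Output is omitted and may differ from the run
## above.
rbf_gr <- function(x) {
  c(-2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1] ^ 2),
    200 * (x[2] - x[1] ^ 2))
}
stogo(x0 = x0, fn = rbf, gr = rbf_gr, lower = lb, upper = ub,
      randomized = TRUE)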