Sequential (least-squares) quadratic programming (SQP) algorithm for nonlinearly constrained, gradient-based optimization, supporting both equality and inequality constraints.
Usage
slsqp(
x0,
fn,
gr = NULL,
lower = NULL,
upper = NULL,
hin = NULL,
hinjac = NULL,
heq = NULL,
heqjac = NULL,
nl.info = FALSE,
control = list(),
deprecatedBehavior = TRUE,
...
)
Arguments
- x0
starting point for searching the optimum.
- fn
objective function that is to be minimized.
- gr
gradient of the function fn; will be calculated numerically if not specified.
- lower, upper
lower and upper bound constraints.
- hin
function defining the inequality constraints, that is hin <= 0 for all components. This is new behavior in line with the rest of the nloptr arguments. To use the old behavior, please set deprecatedBehavior to TRUE.
- hinjac
Jacobian of the function hin; will be calculated numerically if not specified (a short sketch of supplying analytic derivatives follows this argument list).
- heq
function defining the equality constraints, that is heq = 0 for all components.
- heqjac
Jacobian of the function heq; will be calculated numerically if not specified.
- nl.info
logical; shall the original NLopt info be shown.
- control
list of options, see
nl.opts
for help.
- deprecatedBehavior
logical; if TRUE (default for now), the old behavior of the inequality constraints is used, where feasibility requires hin >= 0 instead of hin <= 0. This will be reversed in a future release and eventually removed.
- ...
additional arguments passed to the function.
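The sketch below is only an illustration of the shapes that user-supplied derivatives are assumed to take; it is not part of the package examples, and the names fn.sk, gr.sk, hin.sk, and hinjac.sk are made up. It assumes gr returns the gradient as a vector of length(x) and hinjac returns the Jacobian of hin with one row per constraint component.
## Hypothetical problem: minimize (x1 - 1)^2 + (x2 - 2)^2
## subject to x1 + x2 <= 2, written as hin(x) <= 0 (the new behavior).
library(nloptr)
fn.sk <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2
gr.sk <- function(x) c(2 * (x[1] - 1), 2 * (x[2] - 2))   # gradient, length(x)
hin.sk    <- function(x) x[1] + x[2] - 2                 # <= 0 when feasible
hinjac.sk <- function(x) matrix(c(1, 1), nrow = 1)       # one row per constraint
S.sk <- slsqp(c(0, 0), fn = fn.sk, gr = gr.sk,
              hin = hin.sk, hinjac = hinjac.sk,
              deprecatedBehavior = FALSE)
S.sk$par  # analytic solution is (0.5, 1.5)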
Value
List with components (see the short illustration after the list):
- par
the optimal solution found so far.
- value
the function value corresponding to par.
- iter
number of (outer) iterations, see maxeval.
- convergence
integer code indicating successful completion (> 0) or a possible error number (< 0).
- message
character string produced by NLopt and giving additional information.
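A small, self-contained illustration of these components (a sketch only; the unconstrained toy problem below is made up and not part of the package examples):
res <- slsqp(c(3, 3), fn = function(x) sum((x - 1)^2))
res$par          # optimal solution found so far
res$value        # objective value at res$par
res$iter         # number of (outer) iterations used
res$convergence  # positive on success, negative on error
res$message      # text returned by NLopt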
Details
The algorithm optimizes successive second-order (quadratic/least-squares) approximations of the objective function (via BFGS updates), with first-order (affine) approximations of the constraints.
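No example on this page uses heq, so here is a hedged sketch (the names fn.eq and heq.eq are made up) that projects the point (1, 2, 3) onto the plane x1 + x2 + x3 = 1; because this equality constraint is affine, the first-order approximation used by the algorithm is exact for it.
fn.eq  <- function(x) sum((x - c(1, 2, 3))^2)  # squared distance to (1, 2, 3)
heq.eq <- function(x) sum(x) - 1               # heq(x) = 0 enforces sum(x) = 1
S.eq <- slsqp(c(0, 0, 0), fn = fn.eq, heq = heq.eq,
              deprecatedBehavior = FALSE)
S.eq$par    # analytic solution is (-2/3, 1/3, 4/3)
S.eq$value  # 25/3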
Note
See more information at https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/.
References
Dieter Kraft, "A software package for sequential quadratic programming", Technical Report DFVLR-FB 88-28, Institut fuer Dynamik der Flugsysteme, Oberpfaffenhofen, July 1988.
Examples
## Solve the Hock-Schittkowski problem no. 100 (derivatives approximated numerically)
## See https://apmonitor.com/wiki/uploads/Apps/hs100.apm
x0.hs100 <- c(1, 2, 0, 4, 0, 1, 1)
fn.hs100 <- function(x) {(x[1] - 10) ^ 2 + 5 * (x[2] - 12) ^ 2 + x[3] ^ 4 +
3 * (x[4] - 11) ^ 2 + 10 * x[5] ^ 6 + 7 * x[6] ^ 2 +
x[7] ^ 4 - 4 * x[6] * x[7] - 10 * x[6] - 8 * x[7]}
hin.hs100 <- function(x) {c(
2 * x[1] ^ 2 + 3 * x[2] ^ 4 + x[3] + 4 * x[4] ^ 2 + 5 * x[5] - 127,
7 * x[1] + 3 * x[2] + 10 * x[3] ^ 2 + x[4] - x[5] - 282,
23 * x[1] + x[2] ^ 2 + 6 * x[6] ^ 2 - 8 * x[7] - 196,
4 * x[1] ^ 2 + x[2] ^ 2 - 3 * x[1] * x[2] + 2 * x[3] ^ 2 + 5 * x[6] -
11 * x[7])
}
S <- slsqp(x0.hs100, fn = fn.hs100, # no gradients and jacobians provided
hin = hin.hs100,
nl.info = TRUE,
control = list(xtol_rel = 1e-8, check_derivatives = TRUE),
deprecatedBehavior = FALSE)
#> Checking gradients of objective function.
#> Derivative checker results: 0 error(s) detected.
#>
#> eval_grad_f[1] = -1.800000e+01 ~ -1.8e+01 [ 3.023892e-10]
#> eval_grad_f[2] = -1.000000e+02 ~ -1.0e+02 [ 8.540724e-14]
#> eval_grad_f[3] = 0.000000e+00 ~ 0.0e+00 [ 0.000000e+00]
#> eval_grad_f[4] = -4.200000e+01 ~ -4.2e+01 [ 4.384556e-12]
#> eval_grad_f[5] = 0.000000e+00 ~ 0.0e+00 [ 0.000000e+00]
#> eval_grad_f[6] = -1.877429e-08 ~ 0.0e+00 [-1.877429e-08]
#> eval_grad_f[7] = -8.000000e+00 ~ -8.0e+00 [ 6.102499e-10]
#>
#> Checking gradients of inequality constraints.
#> Derivative checker results: 0 error(s) detected.
#>
#> eval_jac_g_ineq[1, 1] = 4.0e+00 ~ 4.0e+00 [2.355338e-11]
#> eval_jac_g_ineq[2, 1] = 7.0e+00 ~ 7.0e+00 [2.278881e-10]
#> eval_jac_g_ineq[3, 1] = 2.3e+01 ~ 2.3e+01 [5.297235e-11]
#> eval_jac_g_ineq[4, 1] = 2.0e+00 ~ 2.0e+00 [1.311518e-11]
#> eval_jac_g_ineq[1, 2] = 9.6e+01 ~ 9.6e+01 [1.985688e-08]
#> eval_jac_g_ineq[2, 2] = 3.0e+00 ~ 3.0e+00 [2.191188e-10]
#> eval_jac_g_ineq[3, 2] = 4.0e+00 ~ 4.0e+00 [5.631432e-10]
#> eval_jac_g_ineq[4, 2] = 1.0e+00 ~ 1.0e+00 [4.978373e-11]
#> eval_jac_g_ineq[1, 3] = 1.0e+00 ~ 1.0e+00 [5.631432e-10]
#> eval_jac_g_ineq[2, 3] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[3, 3] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[4, 3] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[1, 4] = 3.2e+01 ~ 3.2e+01 [7.463696e-09]
#> eval_jac_g_ineq[2, 4] = 1.0e+00 ~ 1.0e+00 [2.909929e-09]
#> eval_jac_g_ineq[3, 4] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[4, 4] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[1, 5] = 5.0e+00 ~ 5.0e+00 [9.378596e-11]
#> eval_jac_g_ineq[2, 5] = -1.0e+00 ~ -1.0e+00 [2.909929e-09]
#> eval_jac_g_ineq[3, 5] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[4, 5] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[1, 6] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[2, 6] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[3, 6] = 1.2e+01 ~ 1.2e+01 [1.720122e-10]
#> eval_jac_g_ineq[4, 6] = 5.0e+00 ~ 5.0e+00 [5.781509e-12]
#> eval_jac_g_ineq[1, 7] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[2, 7] = 0.0e+00 ~ 0.0e+00 [0.000000e+00]
#> eval_jac_g_ineq[3, 7] = -8.0e+00 ~ -8.0e+00 [2.355338e-11]
#> eval_jac_g_ineq[4, 7] = -1.1e+01 ~ -1.1e+01 [3.114761e-12]
#>
#>
#> Call:
#> nloptr(x0 = x0, eval_f = fn, eval_grad_f = gr, lb = lower, ub = upper,
#> eval_g_ineq = hin, eval_jac_g_ineq = hinjac, eval_g_eq = heq,
#> eval_jac_g_eq = heqjac, opts = opts)
#>
#>
#> Minimization using NLopt version 2.7.1
#>
#> NLopt solver status: 4 ( NLOPT_XTOL_REACHED: Optimization stopped because
#> xtol_rel or xtol_abs (above) was reached. )
#>
#> Number of Iterations....: 60
#> Termination conditions: stopval: -Inf xtol_rel: 1e-08 maxeval: 1000 ftol_rel: 0 ftol_abs: 0
#> Number of inequality constraints: 4
#> Number of equality constraints: 0
#> Optimal value of objective function: 680.630057364075
#> Optimal value of controls: 2.330497 1.951371 -0.4775421 4.36573 -0.6244872 1.038139 1.594228
#>
#>
## The optimum value of the objective function should be 680.6300573
## A suitable parameter vector is roughly
## (2.330, 1.9514, -0.4775, 4.3657, -0.6245, 1.0381, 1.5942)
S
#> $par
#> [1] 2.3304967 1.9513713 -0.4775421 4.3657298 -0.6244872 1.0381386 1.5942275
#>
#> $value
#> [1] 680.6301
#>
#> $iter
#> [1] 60
#>
#> $convergence
#> [1] 4
#>
#> $message
#> [1] "NLOPT_XTOL_REACHED: Optimization stopped because xtol_rel or xtol_abs (above) was reached."
#>
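As a hedged follow-up (not part of the original example output), the same inequality constraints could be written under the old, deprecated convention by flipping their sign, since deprecatedBehavior = TRUE expects hin(x) >= 0; this path may emit a deprecation warning.
## Old convention: feasibility means hin(x) >= 0, so negate the constraints.
hin.hs100.old <- function(x) -hin.hs100(x)
S.old <- slsqp(x0.hs100, fn = fn.hs100, hin = hin.hs100.old,
               control = list(xtol_rel = 1e-8),
               deprecatedBehavior = TRUE)
S.old$par  # should agree with S$par above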