desc.optimize.lsq_auglag
- class desc.optimize.lsq_auglag(fun, x0, jac, bounds=(-inf, inf), constraint=None, args=(), x_scale=1, ftol=1e-06, xtol=1e-06, gtol=1e-06, ctol=1e-06, verbose=1, maxiter=None, callback=None, options={})
Minimize a function with constraints using an augmented Lagrangian method.
The objective function is assumed to be vector valued, and is minimized in the least squares sense.
- Parameters:
fun (callable) – Objective to be minimized. Should have a signature like fun(x, *args) -> 1d array.
x0 (array-like) – Initial guess.
jac (callable) – Function to compute the Jacobian matrix of fun.
bounds (tuple of array-like) – Lower and upper bounds on independent variables. Defaults to no bounds. Each array must match the size of x0 or be a scalar, in the latter case a bound will be the same for all variables. Use np.inf with an appropriate sign to disable bounds on all or some variables.
constraint (scipy.optimize.NonlinearConstraint) – Constraint to be satisfied.
args (tuple) – Additional arguments passed to fun and jac.
x_scale (array_like or 'hess', optional) – Characteristic scale of each variable. Setting x_scale is equivalent to reformulating the problem in scaled variables xs = x / x_scale. An alternative view is that the size of a trust region along the jth dimension is proportional to x_scale[j]. Improved convergence may be achieved by setting x_scale such that a step of a given size along any of the scaled variables has a similar effect on the cost function. If set to 'hess', the scale is iteratively updated using the inverse norms of the columns of the Hessian matrix.
ftol (float or None, optional) – Tolerance for termination by the change of the cost function. The optimization process is stopped when dF < ftol * F and there was an adequate agreement between a local quadratic model and the true model in the last step. If None, termination by this condition is disabled.
xtol (float or None, optional) – Tolerance for termination by the change of the independent variables. Optimization is stopped when norm(dx) < xtol * (xtol + norm(x)). If None, termination by this condition is disabled.
gtol (float or None, optional) – Absolute tolerance for termination by the norm of the gradient. The optimizer terminates when max(abs(g)) < gtol. If None, termination by this condition is disabled.
ctol (float, optional) – Tolerance for stopping based on the infinity norm of the constraint violation. The optimizer terminates when max(abs(constr_violation)) < ctol AND one or more of the other tolerances (ftol, xtol, gtol) are met.
verbose ({0, 1, 2}, optional) –
0 : work silently.
1 (default) : display a termination report.
2 : display progress during iterations.
maxiter (int, optional) – Maximum number of iterations. Defaults to size(x)*100.
callback (callable, optional) – Called after each iteration. Should be a callable with the signature callback(xk, *args) -> bool, where xk is the current parameter vector and args are the same arguments passed to fun and jac. If callback returns True, the algorithm execution is terminated.
options (dict, optional) – Dictionary of optional keyword arguments to override default solver settings.
"initial_penalty_parameter"
: (float or array-like) Initial value for the quadratic penalty parameter. May be array like, in which case it should be the same length as the number of constraint residuals. Default 10."initial_multipliers"
: (float or array-like or"least_squares"
) Initial Lagrange multipliers. May be array like, in which case it should be the same length as the number of constraint residuals. If"least_squares"
, uses an estimate based on the least squares solution of the optimality conditions, see ch 14 of [1]. Default 0."omega"
: (float) Hyperparameter for determining initial gradient tolerance. See algorithm 14.4.2 from [1] for details. Default 1.0"eta"
: (float) Hyperparameter for determining initial constraint tolerance. See algorithm 14.4.2 from [1] for details. Default 1.0"alpha_omega"
: (float) Hyperparameter for updating gradient tolerance. See algorithm 14.4.2 from [1] for details. Default 1.0"beta_omega"
: (float) Hyperparameter for updating gradient tolerance. See algorithm 14.4.2 from [1] for details. Default 1.0"alpha_eta"
: (float) Hyperparameter for updating constraint tolerance. See algorithm 14.4.2 from [1] for details. Default 0.1"beta_eta"
: (float) Hyperparameter for updating constraint tolerance. See algorithm 14.4.2 from [1] for details. Default 0.9"tau"
: (float) Factor to increase penalty parameter by when constraint violation doesn’t decrease sufficiently. Default 10"max_nfev"
: (int > 0) Maximum number of function evaluations (each iteration may take more than one function evaluation). Default is5*maxiter+1
"max_dx"
: (float > 0) Maximum allowed change in the norm of x from its starting point. Default np.inf."initial_trust_radius"
: ("scipy"
,"conngould"
,"mix"
or float > 0) Initial trust region radius."scipy"
uses the scaled norm of x0, which is the default behavior inscipy.optimize.least_squares
."conngould"
uses the norm of the Cauchy point, as recommended in ch17 of [1]."mix"
uses the geometric mean of the previous two options. A float can also be passed to specify the trust radius directly. Default is"scipy"
."initial_trust_ratio"
: (float > 0) A extra scaling factor that is applied after one of the previous heuristics to determine the initial trust radius. Default 1."max_trust_radius"
: (float > 0) Maximum allowable trust region radius. Defaultnp.inf
."min_trust_radius"
: (float >= 0) Minimum allowable trust region radius. Optimization is terminated if the trust region falls below this value. Defaultnp.finfo(x0.dtype).eps
."tr_increase_threshold"
: (0 < float < 1) Increase the trust region radius when the ratio of actual to predicted reduction exceeds this threshold. Default 0.75."tr_decrease_threshold"
: (0 < float < 1) Decrease the trust region radius when the ratio of actual to predicted reduction is less than this threshold. Default 0.25."tr_increase_ratio"
: (float > 1) Factor to increase the trust region radius by when the ratio of actual to predicted reduction exceeds threshold. Default 2."tr_decrease_ratio"
: (0 < float < 1) Factor to decrease the trust region radius by when the ratio of actual to predicted reduction falls below threshold. Default 0.25."tr_method"
:"svd"
,"cho"
) Method to use for solving the trust region subproblem."cho"
uses a sequence of cholesky factorizations (generally 2-3), while"svd"
uses one singular value decomposition."cho"
is generally faster for large systems, especially on GPU, but may be less accurate for badly scaled systems. Default"svd"
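For concreteness, the constraint and callback arguments might be constructed as below. This is a sketch using only scipy and numpy (the unit-circle constraint c(x) = 0 and the norm-tracking callback are illustrative choices, not part of the DESC API); an equality constraint is expressed as a NonlinearConstraint with equal lower and upper bounds:

```python
import numpy as np
from scipy.optimize import NonlinearConstraint

def c(x):
    # example equality constraint residual: x lies on the unit circle
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

def c_jac(x):
    # Jacobian of c, shape (n_constraints, n_variables)
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

# lb == ub == 0 makes this an equality constraint c(x) = 0
constraint = NonlinearConstraint(c, lb=0.0, ub=0.0, jac=c_jac)

# a callback matching the documented signature; returning True
# would terminate the solver early
history = []

def callback(xk, *args):
    history.append(np.linalg.norm(xk))
    return False
```

The resulting `constraint` and `callback` objects would then be passed as the corresponding keyword arguments of lsq_auglag.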
- Returns:
res (OptimizeResult) – The optimization result represented as an OptimizeResult object. Important attributes are: x, the solution array, and success, a Boolean flag indicating whether the optimizer exited successfully.
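To illustrate the method itself, here is a minimal sketch of an augmented Lagrangian outer loop for least-squares problems, using scipy.optimize.least_squares as the inner solver. It is not the DESC implementation (which uses the more elaborate update schedules of algorithm 14.4.2 in [1]), but it shows the roles of the penalty parameter, the multipliers, and the "tau" update on a small problem: projecting the point (1, 2.5) onto the line x0 + x1 = 2.

```python
import numpy as np
from scipy.optimize import least_squares

def fun(x):
    # objective residuals: distance of x from the point (1, 2.5)
    return np.array([x[0] - 1.0, x[1] - 2.5])

def con(x):
    # equality constraint residual: x0 + x1 - 2 = 0
    return np.array([x[0] + x[1] - 2.0])

def auglag_lsq(fun, con, x0, mu=10.0, tau=10.0, maxiter=20, ctol=1e-8):
    """Sketch of an augmented Lagrangian outer loop.

    Each outer iteration solves the unconstrained subproblem
    min 0.5*||fun(x)||^2 + 0.5*mu*||con(x) + lmbda/mu||^2 by stacking
    scaled constraint residuals onto the objective residuals, then
    updates the multipliers and, if the constraint violation did not
    shrink enough, grows the penalty parameter.
    """
    x = np.asarray(x0, dtype=float)
    lmbda = np.zeros_like(con(x))  # cf. "initial_multipliers" default 0
    viol_prev = np.inf
    for _ in range(maxiter):
        def aug(x):
            # augmented residual vector; half its sum of squares equals
            # the augmented Lagrangian up to a constant
            return np.concatenate([fun(x), np.sqrt(mu) * (con(x) + lmbda / mu)])
        x = least_squares(aug, x).x        # inner unconstrained solve
        c = con(x)
        viol = np.max(np.abs(c))
        if viol < ctol:
            break
        lmbda = lmbda + mu * c             # first-order multiplier update
        if viol > 0.9 * viol_prev:         # insufficient decrease, cf. "beta_eta"
            mu *= tau                      # cf. "tau": increase the penalty
        viol_prev = viol
    return x

x = auglag_lsq(fun, con, x0=np.zeros(2))
# analytic solution: projection of (1, 2.5) onto x0 + x1 = 2, i.e. (0.25, 1.75)
```

Each outer iteration leaves a small constraint violation proportional to the current multiplier error, which is why the multiplier update alone drives convergence here without ever triggering the penalty increase.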
References