
15.3.3  Solving general nonlinear programming problems

The nlpsolve command computes the optimum of a multivariate objective function subject to equality and/or inequality constraints with continuous and/or integer variables.

Input syntax.
nlpsolve takes the following arguments:
Entering problems directly.
nlpsolve(obj ⟨,constr,bd,opt ⟩) returns a list [optobj,optdec], where optobj is the optimal value of obj and optdec is a list of optimal values of the decision variables. If the optimization fails, then optobj is the error message and optdec is the last point on the path taken by the solver in its attempt to find the optimal objective value. If the problem is infeasible or if the solver fails to find a feasible point, then an error is returned.
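As a minimal illustration of this output format (a sketch; the exact figures may vary slightly between versions), minimizing x²+y² subject to x+y≥ 2 on the box 0≤ x,y≤ 5 can be entered directly as

nlpsolve(x^2+y^2,[x+y>=2],x=0..5,y=0..5)

which should return a list close to [2.0,[x=1.0,y=1.0]].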
Loading problems from files.
nlpsolve(fname ⟨,opt ⟩) solves the problem stored in the file pointed to by fname, which must be written in the AMPL modeling language (such files commonly have the extension .mod). nlpsolve contains a very basic parser for such files and is able to read the problem variables, the objective, and the constraints. For example, problems from https://www.minlplib.org/instances.html can be loaded this way. Solver parameters can be specified in opt.
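For example, assuming that a model has been saved to a file named model.mod (a hypothetical name) in the current directory and that the file name is passed as a string, it can be loaded and solved with

nlpsolve("model.mod")

Solver parameters, if needed, are appended after the file name, as in the direct-input case.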
Automatic selection of the optimization method.
If the objective and the constraints are twice differentiable, then an interior-point method is used. If no initial point is provided and there are only box constraints, then differential evolution is applied first to find a good starting point. In the non-differentiable case, nlpsolve applies the Nelder-Mead method and switches to differential evolution in case of failure. Differential evolution is also selected when there are integrality constraints, as well as for box-constrained problems without additional constraints and without initial point(s). Linear problems are solved by calling the lpsolve command, while univariate optimization on a segment is performed using Brent’s algorithm.
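For instance, a smooth univariate problem on a segment, such as the sketch

nlpsolve(x^2+sin(x),x=-2..2)

is dispatched to Brent’s algorithm and should locate the minimum near x≈−0.45, while the smooth multivariate constrained problems in the examples below are handled by the interior-point method.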
Local vs. global optimization.
When an initial point is provided, nlpsolve acts as a local optimizer. Without initial point(s), it uses a multistart technique in an attempt to find the global minimum: starting points are generated automatically using the probabilistic restart technique described by Luersen, Riche, and Guyon (2003) until the maximum number of iterations is reached. This technique avoids previously generated points as well as the points of convergence, while staying relatively close to them; the search space is thus explored in a probabilistic but systematic way. An exception is differential evolution, which is a “global” method by design (although not a rigorous one); its initial population is generated uniformly when the search space is compact.
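To restrict the search to a single local optimization, an initial point can be supplied; the sketch below assumes that the nlp_initialpoint option is available in your version and that it accepts a list of variable=value assignments. For a function with two minima such as x⁴−2x²+x/4, starting near the non-global minimizer is likely to yield the local minimum near x≈1 rather than the global one near x≈−1:

nlpsolve(x^4-2x^2+x/4,x=-2..2,nlp_initialpoint=[x=1.5])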
Automatic detection of variable bounds.
If the constraint derivatives are available, nlpsolve determines rough bounds on the decision variables and is thus able to detect whether the search domain is compact. In particular, this information is used by the probabilistic restart technique.
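For instance, in the sketch below no explicit variable ranges are given, but the linear constraints together with the nonnegativity conditions imply the rough bounds 0≤ x,y≤ 3, so the search domain is recognized as compact:

nlpsolve((x-1)^2+(y-2)^2,[x+y<=3,x>=0,y>=0])

The returned minimum should be close to zero, attained near x=1, y=2.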
(Mixed) integer optimization.
When integrality constraints are provided, nlpsolve applies a suitable method for finding integer-feasible solutions. For differentiable input, the branch-and-bound method is used unless the problem is convex, in which case the outer-approximation method is used instead. In the non-differentiable case, differential evolution is applied.

Examples

The continuous function f defined by

f(x) = min( √|x+4| − 1, √|x+1| − 1.005, √|x−3| + 0.5 )

has a unique global minimum in [−5,5] at x=−1, with f(−1)=−1.005.

f:=min(sqrt(abs(x+4))-1,sqrt(abs(x+1))-1.005,sqrt(abs(x-3))+0.5):; nlpsolve(f,x=-5..5)
     
[−1.0049992998,[x=−1.0]]           

Minimize z=x1 x4 (x1+x2+x3)+x3 subject to

     
  x1 x2 x3 x4≥ 25,
  x1²+x2²+x3²+x4²=40,

where 1≤ xi≤ 5, i=1,2,3,4.

nlpsolve(x1*x4*(x1+x2+x3)+x3, [x1*x2*x3*x4-25>=0,x1^2+x2^2+x3^2+x4^2-40=0],[x1,x2,x3,x4]=1..5)
     
[17.0140172892,[x1=1.0,x2=4.74299973362,x3=3.82114985829,x4=1.3794083106]]           

Minimize z=(x1−1)²+(x1−x2)²+(x3−1)²+(x4−1)⁴+(x5−1)⁶ subject to

  x1² x4+sin(x4−x5)=2√2,
  x2+x3⁴ x4²=8+√2.
nlpsolve((x1-1)^2+(x1-x2)^2+(x3-1)^2+(x4-1)^4+(x5-1)^6, [x1^2*x4+sin(x4-x5)=2*sqrt(2),x2+x3^4*x4^2=8+sqrt(2)])
     
[0.241505128809,[x1=1.16617119669,x2=1.18211040318,x3=1.3802572722,         
x4=1.50603586392,x5=0.610913318325]]          

Maximize z=3x1 x2−x1+6x2 subject to 5x1 x2−4x1−4.5x2≤ 32, where 1≤ x1,x2≤ 5 are integers.

nlpsolve(3x1*x2-x1+6x2,[5x1*x2-4x1-4.5x2<=32],[x1,x2]=1..5,assume=integer,maximize)
     
[58.0,[x1=2.0,x2=5.0]]           

Maximize z=2x1+3x2+6x3−2x1 x2−x1 x3−4x2 x3, where all variables are binary.

nlpsolve(2x1+3x2+6x3-2x1*x2-x1*x3-4x2*x3,assume=nlp_binary,maximize)
     
[7.0,[x1=1.0,x2=0,x3=1.0]]           

Minimize z=5y−2ln(x+1) subject to

     
  e^(x/2)−√y/2≤ 1,
  −2ln(x+1)−y+2.5≤ 0,
  x+y≤ 4,

where x∈[0,2] and y is an integer in [1,3].

nlpsolve(5y-2*ln(x+1),[exp(x/2)-sqrt(y)/2-1<=0,-2*ln(x+1)-y+2.5<=0,x+y-4<=0], x=0..2,y=1..3,nlp_integervariables=[y])
     
[8.54528930252,[x=1.06959999348,y=2.0]]           

Minimize z=x1²+x2²+3x3²+4x4²+2x5²−8x1−2x2−3x3−x4−2x5 subject to

     
  x1+x2+x3+x4+x5≤ 400,         
x1+2x2+2x3+x4+6x5≤ 800,         
2x1+x2+6x3≤ 200,         
x3+x4+5x5≤ 200,         
x1+x2+x3+x4+x5≥ 55,         
x1+x2+x3+x4≥ 48,         
x2+x4+x5≥ 34,         
6x1+7x5≥ 104,          

where 0≤ xi≤ 99 are integers for i=1,…,5.

obj:=x1^2+x2^2+3x3^2+4x4^2+2x5^2-8x1-2x2-3x3-x4-2x5; constr:=[x1+x2+x3+x4+x5<=400,x1+2x2+2x3+x4+6x5<=800, 2x1+x2+6x3<=200, x3+x4+5x5<=200, x1+x2+x3+x4+x5>=55, x1+x2+x3+x4>=48, x2+x4+x5>=34, 6x1+7x5>=104]; nlpsolve(obj,constr,[x1,x2,x3,x4,x5]=0..99,assume=integer)
     
[807.0,[x1=16.0,x2=22.0,x3=5.0,x4=5.0,x5=7.0]]           

Minimize the function

f(x1,x2) = ∫_{x1}^{x2} e^{−t} t^a sin(t+x1) cos(2t−x2) dt   if x1<x2,
f(x1,x2) = 0   if x1≥ x2,

for 0≤ x1,x2≤ 10 and a=2.

With the following program compiled in Xcas:

f:=proc(x1,x2,a) local t; purge(t); if x1<x2 then return approx(int(exp(-t)*t^a*sin(t+x1)*cos(2t-x2),t=x1..x2)); else return 0; fi; end:;

input:

nlpsolve(quote(f(x1,x2,2)),[x1,x2]=0..10)
     

[−0.445124159544,[x1=2.15550857079,x2=5.66106529292]]

          

Since f is defined programmatically, it cannot be differentiated symbolically in Xcas; hence differential evolution is selected for minimization (note that the problem is box-constrained and no initial point was provided). Also, observe that the procedure f has three input arguments, so the first argument of nlpsolve has to be quoted in order to fix the parameter a to 2. (This technique should be used for “black box” objectives that may require parameter specifications.)

Additionally, to visualize the surface f, you can input:

plot3d(quote(f(x1,x2,2)),x1=0..5,x2=0..10,display=filled+yellow)
