Symbolic Optimisation¶
Write the objective as a symbolic expression and let numerics handle differentiation, compilation, and optimisation.
- np_minimize — expression mode: gradient computed via diff() and compiled automatically
- np_compile_gradient — compile expression + gradient for reuse
- np_minimize_cobyla — constrained optimisation (derivative-free)
load("numerics")$
load("numerics-optimize")$
load("ax-plots")$
1. Expression-Mode np_minimize¶
Just write the math — the gradient is derived and compiled behind the scenes.
$$\min_{x_1, x_2} \; x_1^2 + x_2^2$$
[x_opt, f_opt, ok] : np_minimize(x1^2 + x2^2, [x1, x2],
ndarray([3.0, 4.0]));
print("Minimum at:", np_to_list(x_opt), " f =", f_opt)$
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 2.500000000000000D+01 GNORM= 1.000000000000000D+01
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 2 1.600000000000000D+01 8.000000000000000D+00 1.000000000000000D-01
2 3 1.972152263052530D-30 2.808666774861361D-15 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
[ndarray(#<ndarray 2 DOUBLE-FLOAT (2)>),1.9721522630525295e-30,true]
Minimum at: [-4.440892098500626e-16,-1.3322676295501878e-15] f = 1.9721522630525295e-30
Rosenbrock Function¶
$$\min_{x, y} \; (1-x)^2 + 100(y - x^2)^2$$
A classic benchmark with a narrow curved valley.
[x_rb, f_rb, ok_rb] : np_minimize(
(1-x)^2 + 100*(y - x^2)^2, [x, y],
ndarray([-1.0, 1.0]), 1e-10, 500);
print("Rosenbrock minimum:", np_to_list(x_rb))$
print("f =", f_rb, " converged:", ok_rb)$
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 4.000000000000000D+00 GNORM= 4.000000000000000D+00
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 3 3.991066466970313D+00 2.971572623024246D+00 1.672278466186611D-03
42 55 0.000000000000000D+00 0.000000000000000D+00 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
[ndarray(#<ndarray 4 DOUBLE-FLOAT (2)>),0.0,true]
Rosenbrock minimum: [1.0,1.0]
f = 0.0 converged: true
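The extra trailing arguments in the call above (1e-10 and 500) appear to be a convergence tolerance and an iteration cap; assuming that interpretation, loosening the tolerance trades accuracy for fewer iterations. A minimal sketch under that assumption:

```
/* Sketch: assumes the 4th and 5th arguments are a gradient
   tolerance and an iteration cap, as suggested by the call above */
[x_lo, f_lo, ok_lo] : np_minimize(
    (1-x)^2 + 100*(y - x^2)^2, [x, y],
    ndarray([-1.0, 1.0]), 1e-4, 100);
print("Loose tolerance:", np_to_list(x_lo), " f =", f_lo)$
```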
Comparison: Expression vs Function Mode¶
Expression mode compiles the objective and its gradient (via diff()) into native Lisp closures — the L-BFGS loop does not involve the Maxima evaluator.
Function mode calls user-provided f and grad through the evaluator on each iteration, but can leverage vectorized ndarray operations (np_matmul, np_dot, etc.) backed by BLAS/LAPACK.
Both modes drive the same underlying L-BFGS solver and converge to the same solution.
/* Same Rosenbrock via function mode, for comparison */
f_rosen(p) := block([px, py],
px : np_ref(p, 0), py : np_ref(p, 1),
(1 - px)^2 + 100*(py - px^2)^2)$
g_rosen(p) := block([px, py],
px : np_ref(p, 0), py : np_ref(p, 1),
ndarray([-2*(1 - px) - 400*px*(py - px^2),
200*(py - px^2)]))$
[x_fn, f_fn, ok_fn] : np_minimize(f_rosen, g_rosen,
ndarray([-1.0, 1.0]), 1e-10, 500);
print("Function mode:", np_to_list(x_fn), " f =", f_fn)$
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 4.000000000000000D+00 GNORM= 4.000000000000000D+00
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 3 3.991066466970313D+00 2.971572623024246D+00 1.672278466186611D-03
42 55 0.000000000000000D+00 0.000000000000000D+00 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
[ndarray(#<ndarray 62 DOUBLE-FLOAT (2)>),0.0,true]
Function mode: [1.0,1.0] f = 0.0
2. np_compile_gradient¶
Compile expression + gradient once, then use with np_minimize in function mode. Useful when you want to inspect or reuse the compiled functions.
/* Compile a 3-variable objective */
[f_cg, g_cg] : np_compile_gradient(
(a - 1)^2 + 2*(b - 3)^2 + 5*(c + 1)^2, [a, b, c]);
/* Evaluate at a test point */
print("f(0,0,0) =", f_cg(ndarray([0.0, 0.0, 0.0])))$
print("grad(0,0,0) =", np_to_list(g_cg(ndarray([0.0, 0.0, 0.0]))))$
/* Minimize */
[x_cg, f_cg_opt, ok_cg] : np_minimize(f_cg, g_cg,
ndarray([0.0, 0.0, 0.0]));
print("Minimum at:", np_to_list(x_cg))$
[np_compiled_f1,np_compiled_g1]
f(0,0,0) = 24.0
grad(0,0,0) = [-2.0,-12.0,10.0]
*************************************************
N= 3 NUMBER OF CORRECTIONS= 3
INITIAL VALUES
F= 2.400000000000000D+01 GNORM= 1.574801574802362D+01
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 2 1.144553263907315D+01 9.823910534330865D+00 6.350006350009525D-02
10 13 6.162975822039155D-32 1.110223024625157D-15 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
[ndarray(#<ndarray 81 DOUBLE-FLOAT (3)>),6.162975822039155e-32,true]
Minimum at: [1.0,3.0,-0.9999999999999999]
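Because f_cg and g_cg are ordinary compiled functions, the same pair can be reused for further solves without recompiling — for instance from a different starting point:

```
/* Reuse the compiled objective/gradient pair from another start */
[x_cg2, f_cg2, ok_cg2] : np_minimize(f_cg, g_cg,
    ndarray([10.0, -10.0, 10.0]));
print("From a new start:", np_to_list(x_cg2))$
```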
3. np_minimize_cobyla¶
Derivative-free constrained optimisation. Constraints are written symbolically as inequalities or equalities in the optimisation variables.
Inequality Constraint: Unit Disk¶
$$\min_{x_1, x_2}\; x_1 + 2x_2 \quad \text{s.t.} \quad x_1^2 + x_2^2 \le 1$$
The minimum lies on the boundary at $-(1, 2)/\sqrt{5}$, with $f = -\sqrt{5} \approx -2.236$.
[x_disk, f_disk, nev, cob_info] : np_minimize_cobyla(
x1 + 2*x2, [x1, x2], [0.0, 0.0],
[1 - x1^2 - x2^2 >= 0]);
print("Minimum:", np_to_list(x_disk))$
print("f =", f_disk, " evals:", nev, " info:", cob_info)$
[ndarray(#<ndarray 82 DOUBLE-FLOAT (2)>),-2.2360679775019228,40,0]
Minimum: [-0.44721448992757556,-0.8944267437871737]
f = -2.2360679775019228 evals: 40 info: 0
Equality Constraint¶
$$\min\; x_1^2 + x_2^2 \quad \text{s.t.} \quad x_1 + x_2 = 1$$
The minimum is at $(0.5, 0.5)$ with $f = 0.5$.
[x_eq, f_eq, nev_eq, inf_eq] : np_minimize_cobyla(
x1^2 + x2^2, [x1, x2], [0.8, 0.2],
[x1 + x2 = 1]);
print("Minimum:", np_to_list(x_eq), " f =", f_eq)$
[ndarray(#<ndarray 83 DOUBLE-FLOAT (2)>),0.500000000001141,52,0]
Minimum: [0.5000007553481848,0.4999992446518151] f = 0.500000000001141
Mixed Constraints: Box + Budget¶
A resource allocation problem:
$$\min\; -(x_1 x_2) \quad \text{s.t.} \quad x_1 + x_2 \le 10, \; x_1 \ge 0, \; x_2 \ge 0$$
Maximize the product of two non-negative quantities with a budget constraint. By AM-GM, the optimum is $x_1 = x_2 = 5$.
[x_box, f_box, nev_box, inf_box] : np_minimize_cobyla(
-(x1 * x2), [x1, x2], [1.0, 1.0],
[10 - x1 - x2 >= 0, x1 >= 0, x2 >= 0]);
print("Optimal allocation:", np_to_list(x_box))$
print("Max product:", -f_box)$
[ndarray(#<ndarray 84 DOUBLE-FLOAT (2)>),-24.99999999999939,64,0]
Optimal allocation: [5.00000078108792,4.99999921891208]
Max product: 24.99999999999939
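As a further sketch (not one of the original examples), the Rosenbrock objective from section 1 can be handed to np_minimize_cobyla with a disk constraint that excludes the unconstrained minimum at $(1, 1)$; the constrained optimum then lies on the boundary, near $(0.79, 0.62)$:

```
/* Rosenbrock restricted to the disk x^2 + y^2 <= 1 */
[x_crb, f_crb, nev_crb, inf_crb] : np_minimize_cobyla(
    (1-x)^2 + 100*(y - x^2)^2, [x, y], [0.0, 0.0],
    [1 - x^2 - y^2 >= 0]);
print("Constrained Rosenbrock:", np_to_list(x_crb), " f =", f_crb)$
```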
Summary¶
| Function | Gradient | Constraints | Use case |
|---|---|---|---|
| np_minimize (expression) | Auto via diff() + compile | None | Small-dimensional, symbolic objectives |
| np_minimize (function) | User-provided | None | Array-scale, procedural, or BLAS-backed objectives |
| np_compile_gradient | Compile for reuse | None | Inspect or cache compiled gradient |
| np_minimize_cobyla | Derivative-free | Inequality + equality | Constrained problems |