Inverse Problems: Recovering Physical Parameters¶
Given noisy measurements from a physical system, recover the underlying model parameters by minimizing the data-model misfit. The symbolic-numeric bridge is central: we solve the governing ODE symbolically with ode2(), obtaining an analytic formula for the model predictions, then fit the unknown parameters numerically with np_minimize.
- Spring-mass-damper [S+N] — recover $k$ and $c$ from position data
- Radioactive decay [S+N] — recover half-life from count data
- RLC circuit [S+N] — recover $R$ and $L$ from voltage response
- Tikhonov regularization — stabilizing ill-conditioned recovery
load("numerics")$
load("numerics-optimize")$
load("ax-plots")$
/* Finite-difference gradient helper (central differences) */
fd_gradient(f, x, eps) := block([nd, g],
    nd : np_size(x), g : np_zeros([nd]),
    for j : 0 thru nd - 1 do block([xp, xm],
        xp : np_copy(x), np_set(xp, j, np_ref(x, j) + eps),
        xm : np_copy(x), np_set(xm, j, np_ref(x, j) - eps),
        np_set(g, j, (f(xp) - f(xm)) / (2 * eps))),
    g)$
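As a quick sanity check (a sketch assuming the `np_*` numerics API loaded above; `f_quad` and `g_fd` are illustrative names), the helper should reproduce the analytic gradient of a simple quadratic:

```maxima
/* f(x) = sum(x_j^2) has gradient 2x; compare fd_gradient against it */
f_quad(x) := np_sum(np_pow(x, 2))$
g_fd : fd_gradient(f_quad, ndarray([1.0, 2.0]), 1e-6)$
print("FD gradient:", np_to_list(g_fd), " analytic: [2.0, 4.0]")$
```

Central differences are second-order accurate, so with eps around 1e-6 the finite-difference values should agree with the analytic gradient to several digits.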
1. Spring-Mass-Damper System [S+N]¶
Symbolic Phase: Solving the ODE¶
A damped harmonic oscillator with mass $m$, damping $c$, and spring constant $k$: $$m\ddot{x} + c\dot{x} + kx = 0, \quad x(0) = x_0,\; \dot{x}(0) = v_0$$
Maxima's ode2() solves this symbolically. The solution depends on the unknown parameters $(k, c)$.
/* Damped harmonic oscillator */
assume(m > 0, k_spring > 0, c_damp > 0, 4*k_spring*m - c_damp^2 > 0)$
ode_spring : m * 'diff(x, t, 2) + c_damp * 'diff(x, t) + k_spring * x = 0;
/* Solve with ode2 (underdamped case) */
sol_general : ode2(ode_spring, x, t);
/* Apply ICs: x(0) = 1, x'(0) = 0, m = 1 */
sol_ic : ic2(sol_general, t = 0, x = 1, 'diff(x, t) = 0)$
sol_m1 : subst(m = 1, sol_ic)$
x_sym : rhs(sol_m1);
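For reference, with $m = 1$, $x(0) = 1$, $\dot{x}(0) = 0$, the underdamped solution returned by ode2/ic2 is equivalent to

$$x(t) = e^{-ct/2}\left(\cos \omega t + \frac{c}{2\omega}\sin \omega t\right), \qquad \omega = \sqrt{k - \frac{c^2}{4}},$$

which for $k = 8$, $c = 0.5$ gives $\omega \approx 2.8174$ and $c/2\omega \approx 0.0887$ — matching the true-parameter solution printed below.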
Generating Synthetic Data¶
We choose "true" parameters $k = 8$ and $c = 0.5$ (underdamped), generate the true trajectory from the symbolic solution, then add noise.
/* True parameters */
true_params : [k_spring = 8, c_damp = 0.5]$
x_true_sym : subst(true_params, x_sym)$
print("True solution:", x_true_sym)$
/* Sample at discrete times */
n_obs : 40$
t_obs : np_linspace(0, 6, n_obs)$
t_list : np_to_list(t_obs)$
/* Evaluate symbolic solution at observation times */
x_true_nd : np_eval(x_true_sym, t, t_obs)$
/* Add noise */
noise_level : 0.05$
x_noisy : np_add(x_true_nd, np_scale(noise_level, np_randn([n_obs])))$
print("Generated", n_obs, "noisy observations")$
True solution:
%e^-(0.25*t)*(0.08873565094161139*sin(2.817356917396161*t)
+cos(2.817356917396161*t))
Generated 40 noisy observations
ax_draw2d(
color="blue", line_width=2, name="True x(t)",
explicit(float(x_true_sym), t, 0, 6),
color="red", marker_size=5, name="Noisy observations",
points(t_list, np_to_list(x_noisy)),
title="Spring-Mass-Damper: Synthetic Data",
xlabel="Time (s)", ylabel="Position x(t)",
grid=true, showlegend=true
)$
Defining the Misfit Function¶
For trial parameters $(k, c)$, we evaluate the symbolic solution at the observation times and compute the sum of squared residuals: $$J(k, c) = \frac{1}{2n}\sum_{i=1}^n \bigl(x_{\text{obs}}(t_i) - x_{\text{model}}(t_i; k, c)\bigr)^2$$
The cost function calls np_eval to evaluate the symbolic solution at each observation time — a genuine S+N bridge where the forward model is symbolic.
/* Misfit cost: evaluate symbolic x(t; k, c) at each observation time */
spring_cost(p) := block([k_v, c_v, x_model, pred, res],
    k_v : exp(np_ref(p, 0)), /* log-transform for positivity */
    c_v : exp(np_ref(p, 1)),
    x_model : subst([k_spring = k_v, c_damp = c_v], x_sym),
    pred : np_eval(x_model, t, t_obs),
    res : np_sub(pred, x_noisy),
    np_sum(np_pow(res, 2)) / (2 * n_obs))$
spring_grad(p) := fd_gradient(spring_cost, p, 1e-5)$
/* Initial guess (deliberately wrong): k=4, c=1 */
p0_spring : ndarray([log(4.0), log(1.0)])$
print("Initial cost:", spring_cost(p0_spring))$
[p_spring, cost_spring, conv_spring] : np_minimize(
spring_cost, spring_grad, p0_spring, 1e-8, 200)$
k_rec : exp(np_ref(p_spring, 0))$
c_rec : exp(np_ref(p_spring, 1))$
print("Recovered: k =", k_rec, ", c =", c_rec)$
print("True: k = 8, c = 0.5")$
print("Cost:", cost_spring, " Converged:", conv_spring)$
Initial cost: 0.0866860961629057
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 8.668609616290570D-02 GNORM= 8.068459223382649D-02
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 2 2.677310664885135D-02 6.673934897436920D-02 1.239394006109579D+01
9 13 1.024169250563708D-03 1.373441423896458D-08 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
Recovered: k = 7.960144242726255 , c = 0.5078136074170632
True: k = 8, c = 0.5
Cost: 0.0010241692505637084 Converged: true
x_rec_sym : subst([k_spring = k_rec, c_damp = c_rec], x_sym)$
ax_draw2d(
color="red", marker_size=5, name="Noisy data",
points(t_list, np_to_list(x_noisy)),
color="blue", line_width=2, name="Recovered model",
explicit(float(x_rec_sym), t, 0, 6),
color="gray", dash="dash", name="True model",
explicit(float(x_true_sym), t, 0, 6),
title="Inverse Problem: Recovered Parameters",
xlabel="Time (s)", ylabel="Position x(t)",
grid=true, showlegend=true
)$
2. Radioactive Decay — Recovering Half-Life [S+N]¶
The decay law $N(t) = N_0 e^{-\lambda t}$ is exact. Given count data, recover the decay constant $\lambda$ (and hence half-life $t_{1/2} = \ln 2 / \lambda$). Here we derive the gradient analytically.
/* Symbolic decay model and gradient */
N_model(lam, N0, t_i) := N0 * exp(-lam * t_i)$
L_decay(lam, N0, t_i, y_i) := (1/2) * (y_i - N_model(lam, N0, t_i))^2$
dL_dlam : diff(L_decay(lam, N0, t[i], y[i]), lam);
dL_dN0 : diff(L_decay(lam, N0, t[i], y[i]), N0);
%e^-(t[i]*lam)*N0*t[i]*(y[i]-%e^-(t[i]*lam)*N0)
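In conventional notation, the two per-sample derivatives are

$$\frac{\partial L}{\partial \lambda} = N_0\, t_i\, e^{-\lambda t_i}\bigl(y_i - N_0 e^{-\lambda t_i}\bigr), \qquad \frac{\partial L}{\partial N_0} = -e^{-\lambda t_i}\bigl(y_i - N_0 e^{-\lambda t_i}\bigr),$$

which are summed over $i$ and divided by $n$ inside decay_grad below (with the sign flipped because the code's residual is $\text{pred} - y_i$).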
/* True: lambda=0.1, N0=1000 */
n_decay : 30$
t_decay : np_linspace(0, 20, n_decay)$
N_true : np_scale(1000.0, np_exp(np_scale(-0.1, t_decay)))$
N_noisy : np_add(N_true, np_scale(30.0, np_randn([n_decay])))$
/* Cost and analytic gradient */
decay_cost(p) := block([lam_v, N0_v, pred, res],
    lam_v : np_ref(p, 0), N0_v : np_ref(p, 1),
    pred : np_scale(N0_v, np_exp(np_scale(-lam_v, t_decay))),
    res : np_sub(pred, N_noisy),
    np_sum(np_pow(res, 2)) / (2 * n_decay))$
decay_grad(p) := block([lam_v, N0_v, e_lt, pred, res],
    lam_v : np_ref(p, 0), N0_v : np_ref(p, 1),
    e_lt : np_exp(np_scale(-lam_v, t_decay)),
    pred : np_scale(N0_v, e_lt),
    res : np_sub(pred, N_noisy),
    ndarray([
        -np_sum(np_mul(np_mul(res, np_scale(N0_v, t_decay)), e_lt)) / n_decay,
        np_sum(np_mul(res, e_lt)) / n_decay
    ]))$
[p_decay, _, _] : np_minimize(decay_cost, decay_grad, ndarray([0.5, 500.0]))$
lam_rec : np_ref(p_decay, 0)$
N0_rec : np_ref(p_decay, 1)$
print("Recovered: lambda =", lam_rec, ", N0 =", N0_rec)$
print("Half-life:", float(log(2) / lam_rec), " (true:", float(log(2) / 0.1), ")")$
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 8.986792389940278D+04 GNORM= 5.564974945276942D+04
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 7 2.027820530940025D+04 2.848700714991189D+03 8.349205297821827D-06
24 31 3.216629009449158D+02 4.431067968884647D-07 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
Recovered: lambda = 0.09947552497891572 , N0 = 1012.1149167426076
Half-life: 6.9680173159 (true: 6.931471805599453 )
ax_draw2d(
color="red", marker_size=5, name="Noisy counts",
points(np_to_list(t_decay), np_to_list(N_noisy)),
color="blue", line_width=2, name="Recovered fit",
explicit(N0_rec * exp(-lam_rec * t), t, 0, 20),
color="gray", dash="dash", name="True decay",
explicit(1000 * exp(-0.1 * t), t, 0, 20),
title="Radioactive Decay: Parameter Recovery",
xlabel="Time", ylabel="Count N(t)",
grid=true, showlegend=true
)$
3. RLC Circuit Parameter Recovery [S+N]¶
Given measured capacitor voltage $v_C(t)$ in a series RLC circuit with step input, recover the resistance $R$ and inductance $L$ (assuming $C$ is known). This is system identification — the same state-space formulation as the control-systems notebook, but now used inversely.
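With state $\mathbf{x} = [i_L,\, v_C]^T$ (inductor current, capacitor voltage) and unit step input $u$, the series RLC dynamics simulated below are

$$\dot{\mathbf{x}} = \begin{pmatrix} -R/L & -1/L \\ 1/C & 0 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 1/L \\ 0 \end{pmatrix} u,$$

i.e. $L\,\dot{i}_L = u - R\,i_L - v_C$ (Kirchhoff's voltage law) and $C\,\dot{v}_C = i_L$.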
/* Simulate RLC step response via state-space + matrix exponential */
simulate_rlc(R_val, L_val, C_val) := block(
    [A_rlc, B_rlc, eAdt, Bu, x_rlc, v_out, dt_rlc, nt_rlc],
    dt_rlc : 0.02, nt_rlc : 250,
    A_rlc : ndarray(matrix([-R_val/L_val, -1/L_val], [1/C_val, 0])),
    B_rlc : ndarray(matrix([1/L_val], [0])),
    eAdt : np_expm(np_scale(dt_rlc, A_rlc)),
    Bu : np_scale(dt_rlc, B_rlc),
    x_rlc : np_zeros([2, 1]),
    v_out : np_zeros([nt_rlc]),
    for k : 0 thru nt_rlc - 1 do (
        np_set(v_out, k, np_ref(x_rlc, 1, 0)),
        x_rlc : np_add(np_matmul(eAdt, x_rlc), Bu)),
    v_out)$
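A note on the discretization above: the homogeneous part of the update is exact ($\mathbf{x}_{k+1} = e^{A\Delta t}\mathbf{x}_k$), while the forced part approximates the input integral to first order,

$$\int_0^{\Delta t} e^{A\tau} B\, u \, d\tau \;\approx\; \Delta t \, B \quad (u = 1).$$

Crucially, the same forward model generates both the synthetic data and the fitted predictions, so this approximation introduces no bias into the recovered parameters.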
/* True: R=1, L=0.5, C=1 */
C_known : 1.0$
v_true : simulate_rlc(1.0, 0.5, C_known)$
v_noisy : np_add(v_true, np_scale(0.02, np_randn([250])))$
rlc_cost(p) := block([R_v, L_v, v_pred, res],
    R_v : exp(np_ref(p, 0)),
    L_v : exp(np_ref(p, 1)),
    v_pred : simulate_rlc(R_v, L_v, C_known),
    res : np_sub(v_pred, v_noisy),
    np_sum(np_pow(res, 2)) / (2 * 250))$
rlc_grad(p) := fd_gradient(rlc_cost, p, 1e-4)$
[p_rlc, _, conv_rlc] : np_minimize(rlc_cost, rlc_grad,
ndarray([log(2.0), log(1.0)]), 1e-6, 100)$
print("Recovered: R =", exp(np_ref(p_rlc, 0)),
", L =", exp(np_ref(p_rlc, 1)))$
print("True: R = 1, L = 0.5")$
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 2.425274468881477D-02 GNORM= 6.321217731368883D-02
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 2 7.674897663442143D-03 3.239368238191037D-02 1.581973667886055D+01
8 10 2.079612822389576D-04 2.553915708514768D-07 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
Recovered: R = 0.9966581897783504 , L = 0.49752402189312744
True: R = 1, L = 0.5
t_rlc : makelist(0.02 * i, i, 0, 249)$
v_rec : simulate_rlc(exp(np_ref(p_rlc, 0)), exp(np_ref(p_rlc, 1)), C_known)$
ax_draw2d(
color="red", marker_size=3, name="Noisy v_C(t)",
points(t_rlc, np_to_list(v_noisy)),
color="blue", line_width=2, name="Recovered model",
lines(t_rlc, np_to_list(v_rec)),
title="RLC Circuit: Parameter Recovery",
xlabel="Time (s)", ylabel="v_C(t)",
grid=true, showlegend=true
)$
4. Tikhonov Regularization¶
When data is scarce or noisy, the inverse problem may be ill-conditioned. Tikhonov regularization adds a penalty on parameter deviation from a prior: $$J_{\text{reg}}(\theta) = J_{\text{data}}(\theta) + \alpha \|\theta - \theta_{\text{prior}}\|^2$$
We demonstrate with the spring-mass problem using only 8 noisy observations.
/* Sparse, noisy observations */
n_sparse : 8$
t_sparse : np_linspace(0, 6, n_sparse)$
x_sparse : np_add(
np_eval(x_true_sym, t, t_sparse),
np_scale(0.15, np_randn([n_sparse])))$
/* Unregularized cost */
spring_cost_sp(p) := block([k_v, c_v, x_model, pred, res],
    k_v : exp(np_ref(p, 0)), c_v : exp(np_ref(p, 1)),
    x_model : subst([k_spring = k_v, c_damp = c_v], x_sym),
    pred : np_eval(x_model, t, t_sparse),
    res : np_sub(pred, x_sparse),
    np_sum(np_pow(res, 2)) / (2 * n_sparse))$
/* Regularized: add alpha * ||p - p_prior||^2 (p already holds log-parameters) */
alpha_reg : 0.1$
p_prior : ndarray([log(5.0), log(0.3)])$ /* prior guess */
spring_cost_reg(p) := spring_cost_sp(p) +
alpha_reg * np_sum(np_pow(np_sub(p, p_prior), 2))$
spring_grad_sp(p) := fd_gradient(spring_cost_sp, p, 1e-5)$
spring_grad_reg(p) := fd_gradient(spring_cost_reg, p, 1e-5)$
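The data term still needs finite differences here, but the penalty's gradient is available in closed form, $2\alpha(p - p_{\text{prior}})$. A hybrid gradient could exploit that (a sketch using the same np_* API; `spring_grad_hybrid` is an illustrative name, not used below):

```maxima
/* FD gradient for the data misfit + exact gradient of the Tikhonov penalty */
spring_grad_hybrid(p) := np_add(
    fd_gradient(spring_cost_sp, p, 1e-5),
    np_scale(2 * alpha_reg, np_sub(p, p_prior)))$
```

This saves nothing for a 2-parameter problem, but for many parameters it avoids differencing the cheap quadratic penalty along with the expensive data term.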
[p_unreg, _, _] : np_minimize(spring_cost_sp, spring_grad_sp,
ndarray([log(4.0), log(1.0)]))$
[p_reg, _, _] : np_minimize(spring_cost_reg, spring_grad_reg,
ndarray([log(4.0), log(1.0)]))$
print("Unregularized: k =", exp(np_ref(p_unreg, 0)),
", c =", exp(np_ref(p_unreg, 1)))$
print("Regularized: k =", exp(np_ref(p_reg, 0)),
", c =", exp(np_ref(p_reg, 1)))$
print("True: k = 8, c = 0.5")$
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 1.163723747782267D-01 GNORM= 1.252287990512722D-01
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 4 3.207124403668811D-02 3.840245537947538D-02 5.718144445160766D+00
9 16 1.222115307687371D-02 2.122266915831231D-08 5.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
*************************************************
N= 2 NUMBER OF CORRECTIONS= 2
INITIAL VALUES
F= 2.663067305831843D-01 GNORM= 2.848381654481724D-01
*************************************************
I NFN FUNC GNORM STEPLENGTH
1 2 4.624399858615044D-02 8.910871538831745D-02 3.510765484767717D+00
8 11 3.172977568292989D-02 1.091056461555555D-08 1.000000000000000D+00
THE MINIMIZATION TERMINATED WITHOUT DETECTING ERRORS.
IFLAG = 0
Unregularized: k = 7.758659296555438 , c = 0.41327495738669406
Regularized: k = 7.501113350682185 , c = 0.31723500659760673
True: k = 8, c = 0.5
x_unreg_sym : subst([k_spring = exp(np_ref(p_unreg, 0)),
c_damp = exp(np_ref(p_unreg, 1))], x_sym)$
x_reg_sym : subst([k_spring = exp(np_ref(p_reg, 0)),
c_damp = exp(np_ref(p_reg, 1))], x_sym)$
ax_draw2d(
color="red", marker_size=6, name="Sparse data (n=8)",
points(np_to_list(t_sparse), np_to_list(x_sparse)),
color="orange", line_width=2, name="Unregularized",
explicit(float(x_unreg_sym), t, 0, 6),
color="blue", line_width=2, name="Regularized",
explicit(float(x_reg_sym), t, 0, 6),
color="gray", dash="dash", name="True",
explicit(float(x_true_sym), t, 0, 6),
title="Tikhonov Regularization Effect",
xlabel="Time (s)", ylabel="x(t)",
grid=true, showlegend=true
)$
Summary¶
| System | Parameters | Forward Model | Gradient |
|---|---|---|---|
| Spring-mass-damper | $k$, $c$ | ode2() symbolic solution | Finite differences |
| Radioactive decay | $\lambda$, $N_0$ | $N_0 e^{-\lambda t}$ | Analytic via diff() |
| RLC circuit | $R$, $L$ | State-space + np_expm | Finite differences |
The symbolic-numeric bridge enables a powerful workflow: solve the forward model symbolically once, then evaluate it cheaply many times during optimization. Log-parameterization keeps physical parameters positive without constrained optimization, and Tikhonov regularization stabilizes the recovery when data is scarce.