API reference

The Minimize class

class varprox._minimize.Minimize(x0, w, Ffun, DFfun, *args, **kwargs)

This class contains methods to minimize a separable non-linear least-squares criterion of the form:

\[h(x, y) = \frac{1}{2} \sum_{n=1}^N \epsilon_n(x, y)^2 \quad \mathrm{with} \quad \epsilon_n(x,y) = F_n(x) y - w_n\]
Parameters:
  • x0 (numpy.ndarray of size (K,)) – initial value of the first variable \(x\) of the criterion \(h\).

  • y (numpy.ndarray of size (J,)) – second variable \(y\) of the criterion \(h\).

  • w (numpy.ndarray of size (N,)) – set of observations \((w_n)_n\).

  • Ffun (callable) –

    function which computes the mappings \(F_n\) of the criterion, with the signature Ffun(x, *args, **kwargs).

    The argument x passed to Ffun is an array of size (K,). Ffun must allocate and return an array of shape (N, J) whose nth row F[n, :] corresponds to the vector \(F_n(x)\) (a sketch is given after this parameter list).

  • DFfun (callable) –

    function which computes the Jacobian matrices \(DF_n(x)\) of the mappings \(F_n\), with the signature DFfun(x, *args, **kwargs).

    The argument x passed to DFfun is an array of size (K,). DFfun must allocate and return an array of shape (N, J, K) whose nth term DF[n] corresponds to the Jacobian matrix \(DF_n(x)\) of \(F_n\) at \(x\). DF[n, j, k] is the partial derivative of the jth component \(F_n(x)_j\) with respect to the kth variable \(x_k\).

  • F (numpy.ndarray of size (N, J)) – current values of \(F_n\).

  • DF (numpy.ndarray of size (N, J, K)) – current Jacobian matrices of \(F_n\).

  • eps (numpy.ndarray of size (N, 1)) – residuals of Equation (3). eps[n] corresponds to \(\epsilon_n(x, y)\).

  • eps_jac_x (numpy.ndarray of size (N, K)) – Jacobian of the residuals \(\epsilon_n\) with respect to \(x\). eps_jac_x[n, k] is the partial derivative of \(\epsilon_n\) with respect to \(x_k\).

  • bounds_x (2-tuple of array_like, optional) – Lower and upper bounds on \(x\). Defaults to no bounds. Each array must match the size of x0 or be a scalar; in the latter case a bound will be the same for all variables. Use np.inf with an appropriate sign to disable bounds on all or some variables.

  • bounds_y (2-tuple of array_like, optional) – Lower and upper bounds on \(y\).

  • args, kwargs (optional) – Additional arguments passed to Ffun and DFfun. Empty by default.
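As an illustration, here is a minimal sketch of an Ffun/DFfun pair for a hypothetical two-exponential model \(w_n \approx \sum_j y_j e^{-x_j t_n}\) (so K = J); the sample points t and the synthetic observations are invented for the example:

    import numpy as np
    from varprox._minimize import Minimize

    t = np.linspace(0.0, 1.0, 50)                   # N = 50 sample points
    w = 2.0 * np.exp(-3.0 * t) + np.exp(-0.5 * t)   # synthetic observations

    def Ffun(x):
        # (N, J) array whose nth row is F_n(x), with F_n(x)_j = exp(-x_j * t_n)
        return np.exp(-np.outer(t, x))

    def DFfun(x):
        # (N, J, K) array with J = K; only the terms with j = k are nonzero
        DF = np.zeros((t.size, x.size, x.size))
        for k in range(x.size):
            DF[:, k, k] = -t * np.exp(-x[k] * t)
        return DF

    m = Minimize(np.array([1.0, 1.0]), w, Ffun, DFfun)
    x_hat, y_hat = m.argmin_h()   # documented below to return a minimizing pair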

val_res(x)

Compute the residuals \(\epsilon_n\) in (3).

Parameters:

x (numpy.ndarray of size (K,)) – Point at which to compute the residuals

Returns:

Values of the residuals at the given point

jac_res_x(x)

Compute the Jacobian of residuals with respect to \(x\).

Parameters:

x (numpy.ndarray of size (K,)) – Point at which to compute the Jacobian of the residuals

Returns:

Value of the Jacobian of the residuals at the point \(x\).

gradient_g(x)

Compute the gradient of the function \(g\).

h_value()

Compute the value of the criterion \(h\) in (1) using Equation (4).

Returns:

Value of \(h\) at the current point \(x\).

argmin_h_x(param)

Minimize \(h\) with respect to \(x\).

Parameters:

param (Parameters) – Parameters for the algorithm

Returns:

Minimizer of \(h\) with respect to \(x\)

argmin_h_y(x)

Minimize \(h\) with respect to \(y\).

Parameters:

x (numpy.ndarray of size (K,)) – Point at which to evaluate \(F\)

Returns:

Minimizer of \(h\) with respect to \(y\)

Note

This operation corresponds to the variable projection.
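For fixed \(x\), \(h(x, \cdot)\) is an ordinary linear least-squares problem in \(y\), so (ignoring bounds_y) the projection step amounts to the computation below. This is a sketch of the underlying mathematics, not the library's internal code:

    import numpy as np

    def variable_projection_sketch(F, w):
        # unconstrained minimizer of 0.5 * ||F @ y - w||^2 over y
        y, *_ = np.linalg.lstsq(F, w, rcond=None)
        return y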

argmin_h()

Minimize \(h\) with respect to \((x, y)\).

Returns:

A pair \((x, y)\) that minimizes \(h\)

rfbpd()

Implementation of the rescaled primal-dual forward-backward algorithm (RFBPD) for solving the optimization problem:

(1)\[\min_{x\in\mathbb{R}^{n}} f(x) + g(Lx) + h(x) \, ,\]

where \(f\), \(g\), and \(h\) are proper, lower semi-continuous, convex functions, \(h\) has a \(\gamma\)-Lipschitz gradient, and \(L\) is a linear operator from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{k}\).

RFBPD iteration then reads:

\[\begin{split}p_{n} &= \mathrm{prox}_{\rho f}\left(x_{n}-\rho(\nabla h(x_{n})+\sigma L^{\top}v_{n})\right)\\ q_{n} &= (\mathrm{Id}-\mathrm{prox}_{\lambda g/\sigma})\left(v_{n}+L(2p_{n}-x_{n})\right)\\ (x_{n+1},v_{n+1}) &= (x_{n},v_{n}) + \lambda_{n}\left((p_{n},q_{n})-(x_{n},v_{n})\right)\end{split}\]

where \(\rho\) and \(\sigma\) are step sizes (strictly positive) on the primal and the dual problem respectively, \(\lambda_{n}\) are inertial parameters, and \(v_{n}\) is the rescaled dual variable.

In this implementation, \(f\) is the indicator function of the set \([\epsilon,1-\epsilon]^n\), \(g\) is the \(\ell_{1}\)-norm multiplied by a (strictly positive) regularization parameter, \(L\) is the discrete gradient operator, and \(h\) is the nonlinear least-squares.

Note that \(\rho\) and \(\sigma\) must satisfy the following inequality to guarantee convergence of the sequence \((x_{n})\) to a solution of the optimization problem: \(\rho^{-1}-\sigma\|L\|_{*}^{2} \geq \gamma/2\).

Parameters:

param (Varprox_Param) – Parameters of the algorithm.

Returns:

Final value of the primal variable.
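For reference, a possible numpy rendering of the iteration above. It is a sketch only: it assumes the prox of \(f\) is clipping onto \([\epsilon, 1-\epsilon]^n\), the prox of the scaled \(\ell_1\)-norm is soft thresholding with regularization weight alpha, and a constant inertial parameter lam; the actual method reads these quantities from its Varprox_Param argument, and \(\rho\) and \(\sigma\) should satisfy the inequality above:

    import numpy as np

    def rfbpd_sketch(x, v, grad_h, L, rho, sigma, alpha, lam=1.0,
                     eps=1e-3, n_iter=100):
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        for _ in range(n_iter):
            # primal step: gradient of h plus rescaled dual, then projection
            p = np.clip(x - rho * (grad_h(x) + sigma * L.T @ v), eps, 1.0 - eps)
            # dual step: (Id - prox of the scaled l1-norm)
            u = v + L @ (2.0 * p - x)
            q = u - soft(u, alpha / sigma)
            # inertial averaging
            x, v = x + lam * (p - x), v + lam * (q - v)
        return x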

The Parameters class

class varprox._parameters.Parameters(gtol=0.0001, maxit=1000, verbose=True, reg=RegParam(name=None, weight=0, order=1), bounds_x=(-inf, inf), bounds_y=(-inf, inf), solver_param=SolverParam(gtol=0.001, maxit=1000))

This class handles the parameters of the optimization.

Parameters:
  • maxit (int. Default = 1000.) – Maximum number of iterations.

  • gtol (float. Default = 1e-4.) – Tolerance for the stopping criterion.

  • lbound_x (float. Default = -inf.) – Lower bound for the non-linear variable x.

  • ubound_x (float. Default = +inf.) – Upper bound for the non-linear variable x.

  • lbound_y (float. Default = -inf.) – Lower bound for the linear variable y.

  • ubound_y (float. Default = +inf.) – Upper bound for the linear variable y.

  • verbose (bool. Default = True.) – Verbose if True.

  • alpha (float.) – Regularization weight on the linear variable y.

  • itermax_neg (int)

  • reg_name (str.) – Type of regularization on the non-linear variable x (None = no regularization, “tv-1d” = TV regularization).

  • reg_param (float.) – Regularization weight on the non-linear variable x.

  • order (int.) – Order of the derivatives to be penalized.

  • tol (float.) – Tolerance for the sub-optimization on the linear variable y.

  • maxit – Maximum number of iterations for the sub-optimization on the linear variable y.
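Going by the signature above (and assuming RegParam and SolverParam can be imported from the same module), a TV-regularized configuration might be built as:

    from varprox._parameters import Parameters, RegParam, SolverParam

    param = Parameters(
        gtol=1e-4,
        maxit=500,
        verbose=False,
        reg=RegParam(name="tv-1d", weight=0.1, order=1),
        bounds_x=(0.0, 1.0),
        solver_param=SolverParam(gtol=1e-3, maxit=1000),
    )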

load(filename)

Load the parameters from a file.

Parameters:

filename (str) – Name of the file containing the parameters

save(filename, filetype)

Save the parameters.

Parameters:
  • filename (str) – Name of the output file

  • filetype (str) – Format of the output file. Must be ‘yaml’ or ‘init’.
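A typical round trip (the file name is arbitrary):

    param.save("params.yaml", "yaml")   # write the configuration to disk
    param.load("params.yaml")           # read it back into param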

The TV class

class varprox._minimize.TV(dim, order=1)

generate_discrete_grad_mat()

Generate the discrete gradient matrix, i.e. the matrix with 1 on its diagonal and -1 on its first sub-diagonal.

Returns:

The discrete gradient matrix.
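For example, for dim = 4 and order = 1, the matrix described above can be reproduced in plain numpy as:

    import numpy as np

    D = np.eye(4) - np.eye(4, k=-1)
    # [[ 1.  0.  0.  0.]
    #  [-1.  1.  0.  0.]
    #  [ 0. -1.  1.  0.]
    #  [ 0.  0. -1.  1.]]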

value(x)

This function computes the 1-dimensional discrete total variation of its input vector:

\[TV(x) = \sum_{n=1}^{N-1} |x_{n+1} - x_{n}|.\]
Parameters:

x (numpy.ndarray of size (N,)) – input vector of length \(N\).

Returns:

1-dimensional discrete total variation of the vector \(x\).
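Equivalently, in plain numpy:

    import numpy as np

    x = np.array([0.0, 1.0, 0.5, 0.5])
    tv = np.sum(np.abs(np.diff(x)))   # |1-0| + |0.5-1| + |0.5-0.5| = 1.5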

prox_l1(data, reg_param)

This function implements the proximal operator of the \(\ell_1\)-norm (a.k.a. soft thresholding).

Parameters:
  • data (numpy.ndarray of size (N,)) – input vector of length \(N\).

  • reg_param (float) – parameter of the operator (strictly positive).

Returns:

The proximal operator of the \(\ell_1\)-norm evaluated at the given point.
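Soft thresholding shrinks each entry toward zero by reg_param and sets entries whose magnitude is at most reg_param to exactly zero; a plain numpy equivalent:

    import numpy as np

    def soft_threshold(data, reg_param):
        return np.sign(data) * np.maximum(np.abs(data) - reg_param, 0.0)

    soft_threshold(np.array([-2.0, 0.3, 1.5]), 0.5)
    # -> array([-1.5,  0. ,  1. ])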