Optimizers

Algorithms to solve optimization problems. Currently the L-BFGS and Gauss-Newton algorithms are implemented, both with optional box constraints.

Public API

InverseAlgos.Optimizers.lmbfgs — Function
lmbfgs(f::Function, ∇f::Function, args...)

An implementation of the L-BFGS algorithm following Nocedal & Wright (2006), with the addition of box constraints. This method accepts two separate functions as input: one computing the objective function and one computing its gradient.

Arguments

  • f: a function (::Function) returning the misfit
  • ∇f: a function (::Function) returning the gradient
  • x0: the starting model (initial guess)
  • mem: the number of past iterations kept in memory by the limited-memory update
  • maxiter: maximum number of iterations
  • bounds (optional): a two-column array whose first column contains the lower bounds and whose second column contains the upper bounds
  • target_update (optional): initial step length for the line search
  • outfile (optional): name of the output file in which to save the results
  • τgrad (optional): gradient threshold below which the algorithm stops
  • overwriteoutput (optional): if true, overwrite the output file if it already exists
  • maxiterwolfe (optional): maximum number of iterations for the line-search function
  • maxiterzoom (optional): maximum number of iterations for the zoom function
  • c1 and c2 (optional): parameters for the strong Wolfe conditions
  • saveres (optional): whether to save the results; defaults to true

Returns

  • x: a vector containing the solution at each iteration
  • misf: a vector containing the misfit value for each iteration
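As an illustrative sketch, the two-function form can be called on a simple quadratic misfit. The positional order of x0, mem, and maxiter is assumed from the argument list above and should be checked against the signature:

```julia
using InverseAlgos.Optimizers

# Quadratic misfit f(x) = ||x - c||² / 2, minimized at x = c
c = [1.0, -2.0, 3.0]
f(x)  = 0.5 * sum(abs2, x .- c)   # objective (misfit)
∇f(x) = x .- c                    # its gradient

x0 = zeros(3)                     # starting model
# mem = 5 memory pairs, at most 50 iterations (order assumed from the docs)
x, misf = lmbfgs(f, ∇f, x0, 5, 50)
```

Here x holds the model at each iteration and misf the corresponding misfit values, so x[end] is the final solution.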
lmbfgs(
    fh!::Function,
    x0::Vector{<:Real};
    bounds,
    mem,
    maxiter,
    target_update,
    outfile,
    τgrad,
    overwriteoutput,
    maxiterwolfe,
    maxiterzoom,
    c1,
    c2,
    saveres
)

An implementation of the L-BFGS algorithm following Nocedal & Wright (2006), with the addition of box constraints. This method takes a single function as input, which computes both the objective function and its gradient.

Arguments

  • fh!: a function (::Function) returning the misfit to be minimized and computing its gradient in place, e.g., misf = fh!(grad, x)
  • x0: the starting model (initial guess)
  • mem: the number of past iterations kept in memory by the limited-memory update
  • maxiter: maximum number of iterations
  • bounds (optional): a two-column array whose first column contains the lower bounds and whose second column contains the upper bounds
  • target_update (optional): initial step length for the line search
  • outfile (optional): name of the output file in which to save the results
  • τgrad (optional): gradient threshold below which the algorithm stops
  • overwriteoutput (optional): if true, overwrite the output file if it already exists
  • maxiterwolfe (optional): maximum number of iterations for the line-search function
  • maxiterzoom (optional): maximum number of iterations for the zoom function
  • c1 and c2 (optional): parameters for the strong Wolfe conditions
  • saveres (optional): whether to save the results; defaults to true

Returns

  • x: a vector containing the solution at each iteration
  • misf: a vector containing the misfit value for each iteration
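A minimal sketch of the single-function form with box constraints, following the signature above (the specific keyword values are illustrative, not defaults):

```julia
using InverseAlgos.Optimizers

# Misfit returned, gradient written in place: misf = fh!(grad, x)
c = [1.0, -2.0, 3.0]
function fh!(grad, x)
    grad .= x .- c                # in-place gradient of the quadratic misfit
    return 0.5 * sum(abs2, x .- c)
end

x0 = zeros(3)
lb = fill(-1.0, 3)                # lower bounds (first column of `bounds`)
ub = fill( 2.5, 3)                # upper bounds (second column)

x, misf = lmbfgs(fh!, x0;
                 mem = 5, maxiter = 50,
                 bounds = hcat(lb, ub),
                 saveres = false)
```

Because the unconstrained minimizer c = [1, -2, 3] has a component below the lower bound, the constrained solution is expected to sit on that bound.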
InverseAlgos.Optimizers.gaussnewton — Function
gaussnewton(
    calcfwd!::Function,
    calcjac!::Function;
    obsdata,
    invCd,
    invCm,
    xprior,
    x0,
    maxiter,
    target_update,
    bounds,
    outfile,
    τgrad,
    overwriteoutput,
    maxiterwolfe,
    maxiterzoom,
    c1,
    c2,
    saveres
)

An implementation of the Gauss-Newton algorithm, with the addition of box constraints, for solving non-linear least-squares problems. The algorithm assumes that both the likelihood function and the prior are Gaussian.

Arguments

  • calcfwd!: a function (::Function) solving the forward problem. It must have the signature calcfwd!(u, x), where u is the vector of calculated data and x a vector of model parameters.
  • calcjac!: a function (::Function) computing the Jacobian of the forward problem. It must have the signature o = calcjac!(jac, x), where jac is the Jacobian matrix of the forward problem (the partial derivatives) and x a vector of model parameters.
  • obsdata: the vector of observed data
  • invCd: the inverse of the covariance matrix of the observed data
  • invCm: the inverse of the covariance matrix of the prior model
  • xprior: the prior model (a vector)
  • x0 (optional): the starting model (initial guess)
  • maxiter: maximum number of iterations
  • target_update (optional): initial step length for the line search
  • bounds (optional): a two-column array whose first column contains the lower bounds and whose second column contains the upper bounds
  • outfile (optional): name of the output file in which to save the results
  • τgrad (optional): gradient threshold below which the algorithm stops
  • overwriteoutput (optional): if true, overwrite the output file if it already exists
  • maxiterwolfe (optional): maximum number of iterations for the line-search function
  • maxiterzoom (optional): maximum number of iterations for the zoom function
  • c1 and c2 (optional): parameters for the strong Wolfe conditions
  • saveres (optional): whether to save the results; defaults to true

Returns

  • x: a vector containing the solution at each iteration
  • misf: a vector containing the misfit value for each iteration
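A hedged sketch of a gaussnewton call on a linear forward problem u = G x, for which the Jacobian is simply the constant matrix G (a weak prior is imposed via a small invCm; all values are illustrative):

```julia
using InverseAlgos.Optimizers
using LinearAlgebra

# Linear forward problem: 3 data points, 2 model parameters
G = [1.0 0.0;
     0.0 2.0;
     1.0 1.0]
xtrue   = [0.5, -0.25]
obsdata = G * xtrue                       # noise-free synthetic observations

calcfwd!(u, x)   = (u .= G * x)           # calculated data, in place
calcjac!(jac, x) = (jac .= G)             # constant Jacobian of a linear forward

x, misf = gaussnewton(calcfwd!, calcjac!;
                      obsdata = obsdata,
                      invCd  = Matrix(1.0I, 3, 3),        # unit data weights
                      invCm  = 0.01 .* Matrix(1.0I, 2, 2), # weak prior
                      xprior = zeros(2),
                      x0     = zeros(2),
                      maxiter = 20,
                      saveres = false)
```

For a linear forward problem Gauss-Newton converges in essentially one iteration, so x[end] should lie close to xtrue up to the (weak) pull of the prior.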