Optimizers
Algorithms to solve optimization problems. Currently the L-BFGS and Gauss-Newton algorithms are implemented, both with optional box constraints.
Public API
InverseAlgos.Optimizers — Module

Optimizers
A module collecting a set of deterministic inversion algorithms. The main targets are geophysical inverse problems.
Exports
InverseAlgos.Optimizers.lmbfgs — Function

lmbfgs(f::Function, ∇f::Function, args...)
An implementation of the L-BFGS algorithm following Nocedal & Wright, 2006 with the addition of box constraints. This method accepts separate functions for computing the objective function and its gradient as input.
Arguments
f: a function returning the misfit (a ::Function)
∇f: a function returning the gradient (a ::Function)
x0: the starting model/initial guess
mem: the length (number of iterations) used for the memory variables
maxiter: maximum number of iterations
bounds (optional): a two-column array where the first column contains the lower bounds (constraints) and the second the upper bounds
target_update (optional): initial step length for the line search
outfile (optional): the name of the output file where the results are saved
τgrad (optional): minimum value of the gradient at which to stop the algorithm
overwriteoutput (optional): if true, overwrite the output file if it already exists
maxiterwolfe (optional): maximum number of iterations for the line search function
maxiterzoom (optional): maximum number of iterations for the zoom function
c1 and c2 (optional): constants for the strong Wolfe conditions
saveres (optional): save results? Defaults to true
Returns
x: a vector containing the solution for each iteration
misf: a vector containing the misfit value for each iteration
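The following is a minimal usage sketch for this two-function method, not taken from the package itself: it assumes the module is loaded with using InverseAlgos.Optimizers, that x0 is passed positionally, and that the remaining settings are keyword arguments (mirroring the single-function method documented next). The quadratic misfit, its gradient, and all option values are purely illustrative.

using InverseAlgos.Optimizers   # assumed import; lmbfgs is listed under Exports
using LinearAlgebra

xtrue = [1.0, -2.0, 0.5]                 # illustrative target model

f(x)  = 0.5 * norm(x .- xtrue)^2         # misfit to be minimized
∇f(x) = x .- xtrue                       # gradient of the misfit

x0     = zeros(3)                        # starting model / initial guess
bounds = [fill(-5.0, 3) fill(5.0, 3)]    # column 1: lower bounds, column 2: upper bounds

# Assumed call pattern: x0 positional, options as keywords.
xs, misf = lmbfgs(f, ∇f, x0;
                  mem = 5,
                  maxiter = 50,
                  bounds = bounds,
                  saveres = false)

# Per the docstring, xs holds the solution at each iteration and misf the misfit values.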
lmbfgs(
fh!::Function,
x0::Array{<:Real, 1};
bounds,
mem,
maxiter,
target_update,
outfile,
τgrad,
overwriteoutput,
maxiterwolfe,
maxiterzoom,
c1,
c2,
saveres
)
An implementation of the L-BFGS algorithm following Nocedal & Wright, 2006 with the addition of box constraints. This method takes a single function for computing the objective function and its gradient as input.
Arguments
fh!: a function (::Function) returning the misfit to be minimized and computing its gradient in place, e.g., misf = fh!(grad, x)
x0: the starting model/initial guess
mem: the length (number of iterations) used for the memory variables
maxiter: maximum number of iterations
bounds (optional): a two-column array where the first column contains the lower bounds (constraints) and the second the upper bounds
target_update (optional): initial step length for the line search
outfile (optional): the name of the output file where the results are saved
τgrad (optional): minimum value of the gradient at which to stop the algorithm
overwriteoutput (optional): if true, overwrite the output file if it already exists
maxiterwolfe (optional): maximum number of iterations for the line search function
maxiterzoom (optional): maximum number of iterations for the zoom function
c1 and c2 (optional): constants for the strong Wolfe conditions
saveres (optional): save results? Defaults to true
Returns
x: a vector containing the solution for each iteration
misf: a vector containing the misfit value for each iteration
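A minimal sketch of the single-function variant, following the keyword signature printed above; the import line and the in-place misfit function fh! below are illustrative assumptions, with fh! writing the gradient into its first argument as in the documented convention misf = fh!(grad, x).

using InverseAlgos.Optimizers   # assumed import; lmbfgs is listed under Exports
using LinearAlgebra

xtrue = [1.0, -2.0, 0.5]        # illustrative target model

# Returns the misfit and fills `grad` in place, i.e. misf = fh!(grad, x).
function fh!(grad, x)
    grad .= x .- xtrue
    return 0.5 * norm(x .- xtrue)^2
end

x0     = zeros(3)                        # starting model / initial guess
bounds = [fill(-5.0, 3) fill(5.0, 3)]    # column 1: lower bounds, column 2: upper bounds

xs, misf = lmbfgs(fh!, x0;
                  mem = 5,
                  maxiter = 50,
                  bounds = bounds,
                  τgrad = 1e-8,
                  saveres = false)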
InverseAlgos.Optimizers.gaussnewton — Function

gaussnewton(
calcfwd!::Function,
calcjac!::Function;
obsdata,
invCd,
invCm,
xprior,
x0,
maxiter,
target_update,
bounds,
outfile,
τgrad,
overwriteoutput,
maxiterwolfe,
maxiterzoom,
c1,
c2,
saveres
)
An implementation of the Gauss-Newton algorithm with the addition of box constraints to solve non-linear least squares problems. This algorithm assumes both the likelihood function and the prior to be Gaussian.
Arguments
calcfwd!: a function (::Function) solving the forward problem. It must have the following signature: calcfwd!(u, x), where u is the vector of calculated data and x is a vector of the input model parameters.
calcjac!: a function (::Function) computing the Jacobian of the forward problem. It must have the following signature: o = calcjac!(jac, x), where jac is the Jacobian matrix of the forward problem (partial derivatives) and x is a vector of the input model parameters.
obsdata: the vector containing the observed data
invCd: the inverse of the covariance of the observed data
invCm: the inverse of the covariance of the prior model
xprior: the prior model (a vector)
x0 (optional): the starting model/initial guess
maxiter: maximum number of iterations
target_update (optional): initial step length for the line search
bounds (optional): a two-column array where the first column contains the lower bounds (constraints) and the second the upper bounds
outfile (optional): the name of the output file where the results are saved
τgrad (optional): minimum value of the gradient at which to stop the algorithm
overwriteoutput (optional): if true, overwrite the output file if it already exists
maxiterwolfe (optional): maximum number of iterations for the line search function
maxiterzoom (optional): maximum number of iterations for the zoom function
c1 and c2 (optional): constants for the strong Wolfe conditions
saveres (optional): save results? Defaults to true
Returns
x: a vector containing the solution for each iteration
misf: a vector containing the misfit value for each iteration
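The sketch below shows one way to call gaussnewton on a toy linear forward problem, for which the Jacobian is constant. The import line, the synthetic data, the covariances, and the bounds are illustrative assumptions; only the keyword names follow the signature printed above.

using InverseAlgos.Optimizers   # assumed import; gaussnewton is listed under Exports
using LinearAlgebra

# Toy linear forward problem u = G * x, so the Jacobian is simply G.
G      = [1.0 2.0; 3.0 4.0; 5.0 6.0]
xtrue  = [0.5, -1.0]
obsdat = G * xtrue                            # synthetic "observed" data

calcfwd!(u, x)   = (u .= G * x; u)            # fills the calculated data in place
calcjac!(jac, x) = (jac .= G; jac)            # fills the Jacobian in place

invCd  = Matrix{Float64}(I, 3, 3) / 0.01^2    # inverse data covariance (σ_d = 0.01)
invCm  = Matrix{Float64}(I, 2, 2)             # inverse prior covariance (σ_m = 1)
xprior = zeros(2)                             # prior model

xs, misf = gaussnewton(calcfwd!, calcjac!;
                       obsdata = obsdat,
                       invCd = invCd,
                       invCm = invCm,
                       xprior = xprior,
                       x0 = copy(xprior),
                       maxiter = 20,
                       bounds = [fill(-10.0, 2) fill(10.0, 2)],
                       saveres = false)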