API for Solvers
This page documents the public API of RegularizedLeastSquares.jl. In the Julia REPL, the same documentation can be accessed by entering help mode with ?.
solve!
RegularizedLeastSquares.solve! — Method

solve!(solver::AbstractLinearSolver, b; x0 = 0, callbacks = (_, _) -> nothing)

Solves an inverse problem for the data vector b using solver.
Required Arguments
- solver::AbstractLinearSolver - linear solver (e.g., ADMM or FISTA), containing forward/normal operator and regularizer
- b::AbstractVector - data vector if A was supplied to the solver, back-projection of the data otherwise
Optional Keyword Arguments
- x0::AbstractVector - initial guess for the solution; default is zero
- callbacks - (optionally a vector of) function or callable struct that takes the two arguments callback(solver, iteration) and, e.g., stores, prints, or plots the intermediate solutions or convergence parameters. Be sure not to modify solver or iteration in the callback function, as this would jeopardize convergence. The default does nothing.
Examples
The optimization problem
\[\mathop{\mathrm{argmin}}_x \|Ax - b\|_2^2 + \lambda \|x\|_1\]
can be solved with the following lines of code:
julia> using RegularizedLeastSquares
julia> A = [0.831658 0.96717
0.383056 0.39043
0.820692 0.08118];
julia> x = [0.5932234523399985; 0.2697534345340015];
julia> b = A * x;
julia> S = ADMM(A, reg = L1Regularization(0.0001));
julia> x_approx = solve!(S, b)
2-element Vector{Float64}:
0.5932171509222105
0.26971370566079866
Here, we use L1Regularization. All regularization options can be found in API for Regularizers.
The following example solves the same problem, but stores the solution x of each iteration in tr:
julia> tr = Dict[]
Dict[]
julia> store_trace!(tr, solver, iteration) = push!(tr, Dict("iteration" => iteration, "x" => solver.x, "beta" => solver.β))
store_trace! (generic function with 1 method)
julia> x_approx = solve!(S, b; callbacks=(solver, iteration) -> store_trace!(tr, solver, iteration))
2-element Vector{Float64}:
0.5932234523399984
0.26975343453400163
julia> tr[3]
Dict{String, Any} with 3 entries:
"iteration" => 2
"x" => [0.593223, 0.269753]
"beta" => [1.23152, 0.927611]
The last example demonstrates how to plot the solution at every 10th iteration and store the solver's convergence metrics:
julia> using Plots
julia> conv = StoreConvergenceCallback()
julia> function plot_trace(solver, iteration)
if iteration % 10 == 0
display(scatter(solver.x))
end
end
plot_trace (generic function with 1 method)
julia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);
The keyword callbacks allows you to pass a (vector of) callable objects that take the arguments solver and iteration and print, store, or plot intermediate results.
See also StoreSolutionCallback, StoreConvergenceCallback, and CompareSolutionCallback for a number of provided callback options.
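For instance, the intermediate solutions can be collected with the provided StoreSolutionCallback. A minimal sketch reusing S and b from the examples above; the element type Float64 is an assumption and should match the solver's solution type:
julia> cb = StoreSolutionCallback(Float64); # accumulates one solution per iteration
julia> x_approx = solve!(S, b; callbacks = cb);
julia> cb.solutions; # vector of the stored intermediate solutions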
RegularizedLeastSquares.init! — Method

init!(solver::AbstractLinearSolver, b; kwargs...)

Prepare the solver for iteration based on the given data vector b and kwargs.

RegularizedLeastSquares.init! — Method

init!(solver::AbstractLinearSolver, state::AbstractSolverState, b::AbstractMatrix; scheduler = SequentialState, kwargs...)

Initialize the solver with each column of b and pass the corresponding states to the scheduler.
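After init!, a solver can also be advanced manually. The following sketch assumes the solvers implement Julia's iteration protocol (one solver iteration per step) and uses solversolution, documented under Miscellaneous below, to read the current iterate; S and b are as in the solve! examples above:
julia> init!(S, b)
julia> for (iteration, _) in enumerate(S)
           # one iteration has been performed; inspect it, e.g., via solversolution(S)
       end
julia> x_approx = solversolution(S);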
ADMM
RegularizedLeastSquares.ADMM — Type

ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
ADMM( ; AHA = , precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates an ADMM object for the forward operator A or normal operator AHA.
Required Arguments
- A - forward operator
OR
- AHA - normal operator (as a keyword argument)
Optional Keyword Arguments
- AHA - normal operator; optional if A is supplied
- precon - preconditioner for the internal CG algorithm
- reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
- regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- rho::Real - penalty of the augmented Lagrangian
- vary_rho::Symbol - vary rho to balance primal and dual feasibility; options are :none, :balance, :PnP
- iterations::Int - maximum number of (outer) ADMM iterations
- iterationsCG::Int - maximum number of (inner) CG iterations
- absTol::Real - absolute tolerance for stopping criterion
- relTol::Real - relative tolerance for stopping criterion
- tolInner::Real - relative tolerance for CG stopping criterion
- verbose::Bool - print residual in each iteration
ADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface, which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).
See also createLinearSolver, solve!.
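A sketch of the reg/regTrafo pattern described above for a TV-like penalty; the operator, image dimensions, data, and regularization weight are placeholder values:
julia> using RegularizedLeastSquares
julia> Nx, Ny, Nz = 8, 8, 4; # placeholder image dimensions
julia> A = rand(300, Nx * Ny * Nz); # placeholder forward operator
julia> b = A * rand(Nx * Ny * Nz); # placeholder data
julia> S = ADMM(A; reg = L1Regularization(1e-3), regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = (Nx, Ny, Nz)), iterations = 20);
julia> x_approx = solve!(S, b);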
CGNR
RegularizedLeastSquares.CGNR — Type

CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 10, relTol = eps(real(eltype(AHA))))
CGNR( ; AHA = , reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 10, relTol = eps(real(eltype(AHA))))

Creates a CGNR object for the forward operator A or normal operator AHA.
Required Arguments
- A - forward operator
OR
- AHA - normal operator (as a keyword argument)
Optional Keyword Arguments
- AHA - normal operator; optional if A is supplied
- reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- iterations::Int - maximum number of iterations
- relTol::Real - tolerance for stopping criterion
See also createLinearSolver, solve!.
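A minimal usage sketch; the operator, data, and parameter values are placeholders:
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> b = A * [0.5, 0.3];
julia> S = CGNR(A; reg = L2Regularization(1e-4), iterations = 20);
julia> x_approx = solve!(S, b);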
Kaczmarz
RegularizedLeastSquares.Kaczmarz — Type

Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10)

Creates a Kaczmarz object for the forward operator A.
Required Arguments
- A - forward operator
Optional Keyword Arguments
- reg::AbstractParameterizedRegularization - regularization term
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- randomized::Bool - randomize the Kaczmarz algorithm
- subMatrixFraction::Real - fraction of rows used in the randomized Kaczmarz algorithm
- shuffleRows::Bool - shuffle the order of the rows
- seed::Int - seed for the randomized algorithm
- iterations::Int - number of iterations
See also createLinearSolver, solve!.
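A minimal usage sketch of the randomized variant; the operator, data, and parameter values are placeholders:
julia> using RegularizedLeastSquares
julia> A = rand(6, 2);
julia> b = A * [0.5, 0.3];
julia> S = Kaczmarz(A; randomized = true, subMatrixFraction = 0.5, shuffleRows = true, iterations = 100);
julia> x_approx = solve!(S, b);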
FISTA
RegularizedLeastSquares.FISTA — Type

FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho = 0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))), restart = :none)
FISTA( ; AHA=, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho = 0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))), restart = :none)

Creates a FISTA object for the forward operator A or normal operator AHA.
Required Arguments
- A - forward operator
OR
- AHA - normal operator (as a keyword argument)
Optional Keyword Arguments
- AHA - normal operator; optional if A is supplied
- reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations
- theta::Real - parameter for predictor-corrector step
- relTol::Real - tolerance for stopping criterion
- iterations::Int - maximum number of iterations
- restart::Symbol - options for restarting: :none, :gradient
- verbose::Bool - print residual in each iteration
See also createLinearSolver, solve!.
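A minimal usage sketch; the operator, data, and regularization weight are placeholders:
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> b = A * [0.5, 0.3];
julia> S = FISTA(A; reg = L1Regularization(1e-4), iterations = 50);
julia> x_approx = solve!(S, b);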
OptISTA
RegularizedLeastSquares.OptISTA — Type

OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho=0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))))
OptISTA( ; AHA=, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho=0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))))

Creates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores two extra intermediate variables the size of the image compared to FISTA.
Reference:
- Uijeong Jang, Shuvomoy Das Gupta, Ernest K. Ryu, "Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA," arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]
Required Arguments
- A - forward operator
OR
- AHA - normal operator (as a keyword argument)
Optional Keyword Arguments
- AHA - normal operator; optional if A is supplied
- reg::AbstractParameterizedRegularization - regularization term
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations
- theta::Real - parameter for predictor-corrector step
- relTol::Real - tolerance for stopping criterion
- iterations::Int - maximum number of iterations
- verbose::Bool - print residual in each iteration
See also createLinearSolver, solve!.
POGM
RegularizedLeastSquares.POGM — Type

POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 50, verbose = false, rho = 0.95 / power_iterations(AHA), theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), restart = :none)
POGM( ; AHA = , reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 50, verbose = false, rho = 0.95 / power_iterations(AHA), theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), restart = :none)

Creates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores three extra intermediate variables the size of the image compared to FISTA. Only the gradient restart scheme is implemented for now.
References:
- A.B. Taylor, J.M. Hendrickx, F. Glineur, "Exact worst-case performance of first-order algorithms for composite convex optimization," arXiv:1512.07516, 2015, SIAM J. Opt. 2017, http://doi.org/10.1137/16m108104x
- Kim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. https://doi.org/10.1007/s10957-018-1287-4
Required Arguments
- A - forward operator
OR
- AHA - normal operator (as a keyword argument)
Optional Keyword Arguments
- AHA - normal operator; optional if A is supplied
- reg::AbstractParameterizedRegularization - regularization term
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations
- theta::Real - parameter for predictor-corrector step
- sigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]
- relTol::Real - tolerance for stopping criterion
- iterations::Int - maximum number of iterations
- restart::Symbol - options for restarting: :none, :gradient
- verbose::Bool - print residual in each iteration
See also createLinearSolver, solve!.
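A minimal usage sketch with gradient restart enabled; the operator, data, and regularization weight are placeholders:
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> b = A * [0.5, 0.3];
julia> S = POGM(A; reg = L1Regularization(1e-4), iterations = 50, restart = :gradient);
julia> x_approx = solve!(S, b);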
SplitBregman
RegularizedLeastSquares.SplitBregman — Type

SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterations = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
SplitBregman( ; AHA = , precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterations = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates a SplitBregman object for the forward operator A or normal operator AHA.
Required Arguments
- A - forward operator
OR
- AHA - normal operator (as a keyword argument)
Optional Keyword Arguments
- AHA - normal operator; optional if A is supplied
- precon - preconditioner for the internal CG algorithm
- reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
- regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
- normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
- rho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms
- iterations::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)
- iterationsInner::Int - maximum number of inner iterations
- iterationsCG::Int - maximum number of (inner) CG iterations
- absTol::Real - absolute tolerance for stopping criterion
- relTol::Real - relative tolerance for stopping criterion
- tolInner::Real - relative tolerance for CG stopping criterion
- verbose::Bool - print residual in each iteration
This algorithm solves the constrained problem (Eq. (4.7) in Tom Goldstein and Stanley Osher), i.e., minimize ||R(x)||₁ such that ||Ax - b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e., minimize ||Ax - b||₂² + λ ||R(x)||₁, you can either set iterations=1 or use ADMM instead, which is equivalent (iterations=1 in SplitBregman is implied in ADMM, and the SplitBregman variable iterationsInner is simply called iterations in ADMM).
Like ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface, which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).
See also createLinearSolver, solve!.
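A minimal usage sketch; the operator, data, and parameter values are placeholders (set iterations = 1 for the unconstrained problem, as noted above):
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> b = A * [0.5, 0.3];
julia> S = SplitBregman(A; reg = L1Regularization(1e-4), iterations = 5, iterationsInner = 10);
julia> x_approx = solve!(S, b);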
Miscellaneous
RegularizedLeastSquares.solverstate — Function

solverstate(solver::AbstractLinearSolver)

Return the current state of the solver.

RegularizedLeastSquares.solversolution — Function

solversolution(solver::AbstractLinearSolver)

Return the current solution of the solver.

solversolution(state::AbstractSolverState)

Return the current solution of the solver's state.

RegularizedLeastSquares.solverconvergence — Function

solverconvergence(solver::AbstractLinearSolver)

Return a named tuple of the solver's current convergence metrics.
RegularizedLeastSquares.StoreSolutionCallback — Type

StoreSolutionCallback(T)

Callback that accumulates the solver's solution per iteration. Results are stored in the solutions field.
RegularizedLeastSquares.StoreConvergenceCallback — Type

StoreConvergenceCallback()

Callback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.
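A sketch of reading the accumulated metrics; the problem setup is a placeholder, and the exact contents of convMeas depend on the solver:
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> b = A * [0.5, 0.3];
julia> S = FISTA(A; reg = L1Regularization(1e-4), iterations = 20);
julia> conv = StoreConvergenceCallback();
julia> solve!(S, b; callbacks = conv);
julia> conv.convMeas; # convergence metrics accumulated per iteration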
RegularizedLeastSquares.CompareSolutionCallback — Type

CompareSolutionCallback(ref, cmp)

Callback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.
RegularizedLeastSquares.linearSolverList — Function

Return a list of all available linear solvers.
RegularizedLeastSquares.createLinearSolver — Function

createLinearSolver(solver::AbstractLinearSolver, A; kargs...)

This method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.
TODO: give a hint what solvers are available
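A usage sketch, assuming the solver type is passed as the first argument (equivalent to calling the constructor directly); the operator and keyword values are placeholders:
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> S = createLinearSolver(CGNR, A; iterations = 20); # same as CGNR(A; iterations = 20)
julia> x_approx = solve!(S, A * [0.5, 0.3]);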
RegularizedLeastSquares.applicableSolverList — Function

applicableSolverList(args...)

List all solvers that are applicable to the given arguments. Arguments are the same as for isapplicable without the solver type.
See also isapplicable, linearSolverList.
RegularizedLeastSquares.isapplicable — Function

isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)

Return true if a solver of type solverType is applicable to the system matrix A, data x, and regularization terms reg.
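A sketch combining both functions; the arguments are placeholders, and reg is assumed to be a vector of regularization terms:
julia> using RegularizedLeastSquares
julia> A = rand(3, 2);
julia> x = rand(2);
julia> reg = [L1Regularization(1e-4)];
julia> RegularizedLeastSquares.isapplicable(Kaczmarz, A, x, reg) # check a single solver type
julia> RegularizedLeastSquares.applicableSolverList(A, x, reg)   # list all applicable solvers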