# Plug-and-Play Regularization
Learned plug-and-play (PnP) priors are a group of regularization terms that cannot be written down directly as functions. These terms are based on deep neural networks, which are trained to implement the proximal map corresponding to the regularization term. Such a PnP prior can be used in the same way as any other regularization term.
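To make the expected interface concrete, here is a minimal sketch (not part of the package) of what such a model looks like from the solver's perspective: a callable that maps an input vector to a vector of the same shape. A hand-written soft-thresholding function, which is the proximal map of the L1 norm, stands in for a trained network:

```julia
# Hypothetical stand-in for a trained PnP model: soft-thresholding,
# i.e. the proximal map of the L1 norm. Any callable with this
# vector-in/vector-out behavior plays the same role.
soft_threshold(x) = sign.(x) .* max.(abs.(x) .- 0.1, 0)
```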
The following example shows how to use a PnP prior in the context of the Kaczmarz solver.
```julia
using RegularizedLeastSquares

# Set up a small test problem
A = randn(32, 16)
x = randn(16)
b = A * x;
```
For this documentation example, we will just use the identity function as a placeholder for the PnP prior.
```julia
model = identity
```

```
identity (generic function with 1 method)
```
In practice, you would replace this with a neural network:
```julia
using Flux
model = Flux.loadmodel!(model, ...)
```
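As a rough sketch (the architecture and file name below are made up for illustration), a small dense denoiser matching the signal length of 16 could be built and its trained weights restored like this:

```julia
using Flux

# Hypothetical architecture; use whatever matches your trained prior
model = Chain(Dense(16 => 32, relu), Dense(32 => 16))

# Restore previously trained weights, e.g. a Flux state saved with JLD2:
# using JLD2
# model_state = JLD2.load("denoiser.jld2", "model_state")
# Flux.loadmodel!(model, model_state)
```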
The model can then be used together with the `PnPRegularization` term:
```julia
reg = PnPRegularization(1.0; model = model, shape = [16]);
```
Since models often expect a specific input range, we can use the `MinMaxTransform` to normalize the input:
```julia
reg = PnPRegularization(1.0; model = model, shape = [16], input_transform = RegularizedLeastSquares.MinMaxTransform);
```
Custom input transforms can be implemented by passing any callable object as the `input_transform` keyword argument; a rough sketch is given below. For more details, see the `PnPRegularization` documentation.
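The exact contract expected by `PnPRegularization` is specified in its docstring; the following standalone sketch (all names hypothetical, not the package's API) only illustrates the idea behind a min-max style transform: a forward map that squeezes the model input into [0, 1] and an inverse map that restores the original range afterwards.

```julia
# Hypothetical sketch of a min-max style input transform. Check the
# PnPRegularization docstring for the exact interface the package expects.
struct MyMinMaxTransform{T}
    lo::T
    hi::T
end

# Construct the transform from the current input vector
MyMinMaxTransform(x::AbstractVector) = MyMinMaxTransform(minimum(x), maximum(x))

forward(t::MyMinMaxTransform, x)  = (x .- t.lo) ./ (t.hi - t.lo)  # into [0, 1]
backward(t::MyMinMaxTransform, y) = y .* (t.hi - t.lo) .+ t.lo    # back to original range
```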
The regularization term can then be used in the solver:
```julia
solver = createLinearSolver(Kaczmarz, A; reg = reg, iterations = 32)
x_approx = solve!(solver, b)
```
```
16-element Vector{Float64}:
  0.06431443960884065
 -0.47583770984867213
 -1.395915603743609
  1.218494071436919
  0.20986448998865215
  0.588569233879982
  0.4427758687612322
 -0.12126858045607158
  0.2871526751180413
  1.678287084893874
 -0.7827059407863903
 -0.7899185058375096
 -1.6253234501619687
 -0.4083565796905617
  0.5010867558960668
  0.7611293001404082
```
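Since the identity function stands in for the model here, the PnP term is effectively a no-op and the result should be close to the ground truth. As a quick sanity check (not part of the generated page), we can compute the relative error:

```julia
using LinearAlgebra

# Relative reconstruction error with respect to the ground truth x
norm(x - x_approx) / norm(x)
```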
---

*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*