Plug-and-Play Regularization

A group of regularization terms that cannot be written down directly as a function are learned plug-and-play (PnP) priors. These terms are based on deep neural networks that are trained to implement the proximal map corresponding to the regularization term; in effect, the trained network acts as a learned denoising step that takes the place of an analytic proximal map. Such a PnP prior can be used in the same way as any other regularization term.

The following example shows how to use a PnP prior in the context of the Kaczmarz solver.

using RegularizedLeastSquares
A = randn(32, 16)
x = randn(16)
b = A*x;

For this documentation we will simply use the identity function as a placeholder for the PnP prior.

model = identity
identity (generic function with 1 method)
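
Any callable that maps an array of the given shape to an array of the same shape can serve as a model. As a purely illustrative stand-in (not a trained prior, and not used in the rest of this example), one could also pass a hand-written soft-thresholding "denoiser":

soft_denoiser(x) = sign.(x) .* max.(abs.(x) .- 0.1, 0)  # shrinks small entries towards zero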

In practice, you would replace this with a neural network:

using Flux
model = Flux.loadmodel!(model, ...)
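
Purely for illustration (the rest of this example keeps the identity placeholder), a hypothetical, untrained Flux network with input and output sizes matching the problem could look like this:

using Flux
toy_model = Chain(Dense(16 => 32, relu), Dense(32 => 16))  # toy architecture, chosen arbitrarily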

The model can then be used together with the PnPRegularization term:

reg = PnPRegularization(1.0; model = model, shape = [16]);

Since models often expect a specific input range, we can use the MinMaxTransform to normalize the input:

reg = PnPRegularization(1.0; model = model, shape = [16], input_transform = RegularizedLeastSquares.MinMaxTransform);

Custom input transforms can be implemented by passing any callable object as the input_transform keyword argument. For more details, see the PnPRegularization documentation.

The regularization term can then be used in the solver:

solver = createLinearSolver(Kaczmarz, A; reg = reg, iterations = 32)
x_approx = solve!(solver, b)
16-element Vector{Float64}:
  0.8032433810455986
  0.2499294822749707
  1.6551137137020577
  0.9100496084231173
 -1.2347940443886
  1.3399545268728252
 -1.0778544300010877
  0.9883810624708744
  0.887782235207317
 -1.1295144167930344
  1.9879853030695254
  0.26466952852503467
  0.3247652184793879
 -0.3123305843221159
 -0.34564164791093477
 -0.0035304933310271736
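
As a quick sanity check (not part of the generated output above), the reconstruction can be compared against the ground truth; with the identity placeholder the solution should approach x:

using LinearAlgebra
norm(x_approx - x) / norm(x)  # relative reconstruction error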

This page was generated using Literate.jl.