Plug-and-Play Regularization
Learned plug-and-play (PnP) priors are a group of regularization terms that cannot be written down directly as functions. These terms are based on deep neural networks, which are trained to implement the proximal map corresponding to the regularization term. Such a PnP prior can be used in the same way as any other regularization term.
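For reference, the proximal map of a regularization term R with weight λ is the mapping

```math
\operatorname{prox}_{\lambda R}(x) = \underset{z}{\operatorname{argmin}} \left( \frac{1}{2} \lVert x - z \rVert_2^2 + \lambda R(z) \right),
```

and it is this mapping that the network is trained to approximate.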
The following example shows how to use a PnP prior in the context of the Kaczmarz solver.
```julia
using RegularizedLeastSquares
A = randn(32, 16)
x = randn(16)
b = A*x;
```

For the documentation we will just use the identity function as a placeholder for the PnP prior.
```julia
model = identity
```

```
identity (generic function with 1 method)
```

In practice, you would replace this with a neural network:
```julia
using Flux
model = Flux.loadmodel!(model, ...)
```
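As an illustration, any callable mapping a length-16 vector to a length-16 vector could serve as the model. The following sketch uses a small fully connected Flux network; the architecture is purely hypothetical and not part of the original example:

```julia
using Flux

# Hypothetical denoiser: input and output lengths match the shape
# passed to PnPRegularization below.
model = Chain(
    Dense(16 => 32, relu),
    Dense(32 => 16)
)
```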
The model can then be used together with the PnPRegularization term:

```julia
reg = PnPRegularization(1.0; model = model, shape = [16]);
```

Since models often expect a specific input range, we can use the MinMaxTransform to normalize the input:
```julia
reg = PnPRegularization(1.0; model = model, shape = [16], input_transform = RegularizedLeastSquares.MinMaxTransform);
```

Custom input transforms can be implemented by passing something callable as the input_transform keyword argument. For more details see the PnPRegularization documentation.
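Alternatively, the normalization can be baked into the model callable itself. The following is a hypothetical sketch, not part of the package API; it relies only on the fact that the model is an arbitrary callable:

```julia
# Hypothetical wrapper: min-max normalize the input to [0, 1], apply the
# model, and map the output back to the original value range.
normalized_model = function (x)
    lo, hi = extrema(x)
    scale = max(hi - lo, eps(eltype(x)))
    y = model((x .- lo) ./ scale)  # model sees values in [0, 1]
    return y .* scale .+ lo        # undo the normalization
end
reg = PnPRegularization(1.0; model = normalized_model, shape = [16]);
```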
The regularization term can then be used in the solver:
```julia
solver = createLinearSolver(Kaczmarz, A; reg = reg, iterations = 32)
x_approx = solve!(solver, b)
```

```
16-element Vector{Float64}:
-0.5466586872166126
0.3675118673062081
-0.9358073027238525
0.3851063621850095
0.9351537634366034
-0.03172357786563573
-0.14536047917770523
1.1250271028787369
0.6956725769082861
-2.296955470397778
-1.0054066720370007
-0.09448579208224084
-0.8045737189031641
0.023268299619398025
2.5171914349112408
-0.13951337897153904
```
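As a quick sanity check, one can compute the relative residual of the reconstruction, which should be small if the solver has converged:

```julia
using LinearAlgebra

# Relative residual of the reconstruction.
norm(A * x_approx - b) / norm(b)
```

This page was generated using Literate.jl.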