On the dynamic programming principle for static and dynamic inverse - - PowerPoint PPT Presentation


SLIDE 1

On the dynamic programming principle for static and dynamic inverse problems

Stefan Kindermann, Industrial Mathematics Institute Johannes Kepler University Linz, Austria joint work with Antonio Leitao, University of Santa Catarina, Brazil

Dynamic programming principle

SLIDE 2

Static Inverse Problems

Static inverse problems in Hilbert spaces: Fu = y
F . . . (linear) operator between Hilbert spaces
u . . . unknown solution
y . . . data (possibly noisy)
Examples: parameter identification, image processing, . . .
Regularization: well-established theory

SLIDE 3

Dynamic Inverse Problems

Dynamic inverse problems in Hilbert spaces: F(t)u(t) = y(t)
t . . . (artificial) time parameter
F(t) . . . (linear) time-dependent operator between Hilbert spaces
u(t) . . . unknown time-dependent solution
y(t) . . . time-dependent data (possibly noisy)
Can be handled by standard theory, but standard numerical algorithms do not take the time structure into account.

SLIDE 4

Examples

Parameter identification in elliptic PDEs with time-dependent parameter (moving objects)
Dynamic impedance tomography
Echocardiography
Online identification (Kügler)
. . . just include a t in your favorite inverse problem

SLIDE 5

Regularization for dynamic problems

F(t) : H → L², F(t)u(t) = y(t)
Y = L²([0, T], L²(Ω)) . . . data space
First choice for solution space: X = L²([0, T], H)
e.g. Tikhonov regularization:

u_α = argmin_u 1/2 ∫_0^T ‖F(t)u(t) − y(t)‖² dt + α/2 ∫_0^T ‖u(t)‖²_H dt

Solution: u_α(t) = (F*(t)F(t) + αI)⁻¹ F*(t)y(t)

SLIDE 6

Tikhonov Regularization L2-case

Solution: u_α(t) = (F*(t)F(t) + αI)⁻¹ F*(t)y(t)
The problem is decoupled in time: easy to implement, but the results are not satisfactory; the solution is not continuous in time ⇒ use higher regularization in t.
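Because the L² method decouples, each time step is an independent Tikhonov problem. A minimal sketch (the function name and the list-of-matrices representation of the discretized F(t), y(t) are assumptions of this illustration):

```python
import numpy as np

def tikhonov_decoupled(F_list, y_list, alpha):
    """Solve u_alpha(t_k) = (F_k^T F_k + alpha I)^{-1} F_k^T y_k
    independently at each time step; F_list[k] is the matrix F(t_k),
    y_list[k] the data y(t_k)."""
    us = []
    for F, y in zip(F_list, y_list):
        n = F.shape[1]
        u = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y)
        us.append(u)
    return us
```

Each step costs one n × n solve, but nothing couples u(t_k) to u(t_{k±1}), which is exactly the deficiency noted above.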

SLIDE 7

Tikhonov Regularization-H1

H¹ regularization in t:

u_α = argmin_u 1/2 ∫_0^T ‖F(t)u(t) − y(t)‖² dt + α/2 ∫_0^T ‖u′(t)‖²_H dt

Optimality system:

F*(t)F(t)u_α(t) − α u_α″(t) = F*(t)y(t),  u_α′(0) = u_α′(T) = 0

Hilbert-space-valued boundary value problem. Discretization: if F*F is a matrix of size n × n and [0, T] is discretized into n_T intervals ⇒ full matrix of size (n·n_T) × (n·n_T).
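To see where the (n·n_T) × (n·n_T) system comes from, it can be assembled explicitly. The sketch below is an illustration under stated assumptions, not the method advocated in the talk: it uses a finite-difference second-difference matrix in t (with modified first and last rows for the Neumann conditions u′(0) = u′(T) = 0) and a Kronecker product for the coupling.

```python
import numpy as np
from scipy.linalg import block_diag

def assemble_h1_system(F_list, y_list, alpha, h):
    """Assemble the full (n*nT) x (n*nT) system of the H^1-in-time
    Tikhonov optimality conditions: block-diagonal F_k^T F_k terms
    plus alpha * (discrete -d^2/dt^2) acting across time steps."""
    nT = len(F_list)
    n = F_list[0].shape[1]
    # second-difference matrix in t with Neumann boundary rows
    L = 2 * np.eye(nT) - np.eye(nT, k=1) - np.eye(nT, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    L /= h ** 2
    A = block_diag(*[F.T @ F for F in F_list]) + alpha * np.kron(L, np.eye(n))
    b = np.concatenate([F.T @ y for F, y in zip(F_list, y_list)])
    return A, b
```

The matrix A is block tridiagonal but treated as dense here, which is precisely why the naive approach scales so badly and motivates the methods that follow.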

SLIDE 8

Louis-Schmitt Method

A general numerical method for solving such problems was suggested by [Louis, Schmitt]:
Approximate derivatives by differences (t-discretization)
Decomposition of the matrices
The problem then requires solving a Sylvester matrix equation FF*UR + αU = Y for U, which can be done more efficiently.
Alternative: dynamic programming [S.K., A. Leitao], an iterative method.
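For illustration, a Sylvester equation of this shape can be handed to `scipy.linalg.solve_sylvester`, which solves AX + XB = Q. Right-multiplying FF*UR + αU = Y by R⁻¹ (assuming R is invertible, an assumption of this sketch) brings it into that standard form; the helper below is a hedged sketch, not the Louis-Schmitt implementation itself:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sylvester_step(F, R, Y, alpha):
    """Solve F F^T U R + alpha U = Y for U.
    Rearranged (assuming R invertible) as
    (F F^T) U + U (alpha R^{-1}) = Y R^{-1}, i.e. A U + U B = Q."""
    Rinv = np.linalg.inv(R)
    return solve_sylvester(F @ F.T, alpha * Rinv, Y @ Rinv)
```

The point of the reformulation is cost: a Sylvester solve via Schur decompositions of the two small factors is far cheaper than one dense solve in the full tensor-product space.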

SLIDE 9

Principles of Dynamic Programming

Dynamic Programming: [R. Bellman] Idea: Follow the path of the optimal solution in t ⇒ Evolution equation in t

SLIDE 10

Solve Problem by dynamic programming

Rewrite the problem as a constrained optimization problem:

min_{u,v} 1/2 ∫_0^T ‖F(t)u(t) − y(t)‖² dt + α/2 ∫_0^T ‖v(t)‖²_H dt

subject to u′ = v

Linear-quadratic control problem, v ~ control

SLIDE 11

Solve Problem by dynamic programming

Value function:

V(t, ξ) := min_{u,v} 1/2 ∫_t^T ‖F(s)u(s) − y(s)‖² + α‖v(s)‖² ds,  u(t) = ξ, u′ = v

The value function satisfies a Hamilton-Jacobi equation:

V_t(t, ξ) = −1/2 ‖F(t)ξ − y(t)‖² + 1/(2α) ‖V_ξ(t, ξ)‖²,  V(T, ξ) = 0

From V(t, ξ) we get v: v = −1/α V_ξ(t, u) ⇒ u can be found from the evolution equation

u′(t) = −1/α V_ξ(t, u(t))

SLIDE 12

Hamilton-Jacobi Equation

The Hamilton-Jacobi equation is a PDE in a very high-dimensional space: the dimension equals the number of unknowns in the solution, so direct numerical solution is practically impossible.
If F is linear, V is a quadratic functional in ξ. Ansatz:

V(t, ξ) = 1/2 ⟨ξ, Q(t)ξ⟩ + ⟨b(t), ξ⟩ + g(t)

SLIDE 13

Ansatz

V(t, ξ) = 1/2 ⟨ξ, Q(t)ξ⟩ + ⟨b(t), ξ⟩ + g(t)

Inserting this into the HJ equation ⇒ equations for Q, b, g.

Riccati equation for the operator Q(t):

Q′(t) = 1/α Q(t)*Q(t) − F*(t)F(t),  Q(T) = 0

Evolution equation for b:

b′(t) = 1/α Q(t)b(t) + F*(t)y(t),  b(T) = 0

Equation for the solution:

u′(t) = −1/α (Q(t)u(t) + b(t))

SLIDE 14

Dynamic programming method I

First: apply the dynamic programming principle. Second: discretize the equations.

Algorithm

1. Solve the Riccati equation for Q(t) and the equation for b(t) backwards in time by the explicit Euler method.

2. Solve the equation for u forwards in time by explicit Euler with some initial condition u0.
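A minimal numerical sketch of Method I. The uniform step size h, the dense-matrix representation, and the function name are assumptions of this illustration; the continuous equations being stepped are Q′ = Q²/α − F*F, b′ = F*y + Qb/α (both backwards from zero terminal data) and u′ = −(Qu + b)/α forwards.

```python
import numpy as np

def dp_method1(F_list, y_list, alpha, h, u0):
    """Backward explicit Euler for the Riccati equation and for b,
    then forward explicit Euler for u."""
    N = len(F_list)
    n = F_list[0].shape[1]
    Q = [None] * N
    b = [None] * N
    Q[-1] = np.zeros((n, n))          # Q(T) = 0
    b[-1] = np.zeros(n)               # b(T) = 0
    for k in range(N - 1, 0, -1):     # backward sweep
        F, y = F_list[k], y_list[k]
        Q[k - 1] = Q[k] - h * (Q[k] @ Q[k] / alpha - F.T @ F)
        b[k - 1] = b[k] - h * (F.T @ y + Q[k] @ b[k] / alpha)
    u = [u0]
    for k in range(N - 1):            # forward sweep
        u.append(u[-1] - h / alpha * (Q[k] @ u[-1] + b[k]))
    return u
```

For a toy 1-D problem with constant F and data, the forward sweep drives u from the initial guess toward the data-consistent value, as expected of the regularized trajectory.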

SLIDE 15

Alternative Method

First: discretize the functional, t_k = kT/N. Second: use a discrete version of the dynamic programming principle.

Algorithm: two backward recursions for Q and b, one forward recursion for u:

Q_{k−1} = (I + α⁻¹Q_k)⁻¹Q_k + F*_{k−1}F_{k−1},  k = N+1, . . . , 2

b_{k−1} = (I + α⁻¹Q_k)⁻¹b_k − F*_{k−1}y_{k−1},  k = N+1, . . . , 2

u_k = (Q_k + αI)⁻¹(αu_{k−1} − b_k),  k = 1, . . . , N
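The three recursions can be sketched directly (0-based array indexing with Q_N = 0, b_N = 0, and dense linear algebra are assumptions of this illustration):

```python
import numpy as np

def dp_method2(F_list, y_list, alpha, u0):
    """Discrete dynamic programming: backward recursions for Q_k, b_k,
    forward recursion for u_k."""
    N = len(F_list)
    n = F_list[0].shape[1]
    I = np.eye(n)
    Q = [None] * (N + 1)
    b = [None] * (N + 1)
    Q[N] = np.zeros((n, n))           # terminal data
    b[N] = np.zeros(n)
    for k in range(N, 0, -1):         # backward sweep
        M = np.linalg.inv(I + Q[k] / alpha)
        Q[k - 1] = M @ Q[k] + F_list[k - 1].T @ F_list[k - 1]
        b[k - 1] = M @ b[k] - F_list[k - 1].T @ y_list[k - 1]
    u = [u0]
    for k in range(1, N + 1):         # forward sweep
        u.append(np.linalg.solve(Q[k] + alpha * I, alpha * u[-1] - b[k]))
    return u[1:]
```

Note each step costs one n × n inverse or solve, which is the source of the O(n³T) complexity quoted below.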

SLIDE 16

Regularization Properties

u(t) = ∫_0^T ∫ γ(σ, μ, t) dE_σ F*y(μ) dμ  (E_σ . . . spectral family)

With β := √(λ/α):

γ(λ, μ, t) = 1/(αβ cosh(βT)) · { cosh(β(t − T)) sinh(βμ),  μ ≤ t
                                 sinh(βt) cosh(β(μ − T)),  μ ≥ t }

For fixed λ, γ(λ, ·, ·) is the Green's function of the boundary value problem

Lx := x″ − (λ/α) x,  x(0) = 0,  x′(T) = 0

Sturm-Liouville theory ⇒ convergence results as α → 0

SLIDE 17

Computational Complexity

After discretization let F(t) be an n × N matrix, with T time steps.
Naive approach (solve the optimality conditions directly): 2/3 (NT)³
Louis-Schmitt method (Sylvester matrix equation): 25(n + T)³ + 2T(n(T + N))
Method I (explicit Euler): O(n³T); if Q(t) is precomputed, O(n²T)
Method II: O(n³T)
The complexity is linear in T.

SLIDE 18

Related work: Dynamic programming for static problems

Dynamic programming principle with T as regularization parameter.
Static problem: Fu = y
Introduce an artificial time variable in u: u = u(t)
Approximate the solution by minimizing

J(u) = 1/2 ∫_0^T ‖Fu(t) − y‖² dt + 1/2 ∫_0^T ‖u′(t)‖²_H dt

Use u_T := u(T) as regularized solution.
Questions: Is this a regularization? What is the limit lim_{T→∞} u_T? T acts as regularization parameter.

SLIDE 19

Dynamic programming

As before, apply the dynamic programming principle.

Algorithm

Backward evolution for Q:  Q′(t) = −I + Q(t)FF*Q(t),  Q(T) = 0

Forward evolution for u:  u′(t) = −F*Q(t)(Fu(t) − y)

Approximation of the solution by u_T
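A sketch of the static-problem algorithm. Explicit Euler in both sweeps, the step size, and storing all Q(t) values are choices of this illustration, not prescribed by the slides:

```python
import numpy as np

def static_dp(F, y, T, nsteps, u0):
    """Backward explicit Euler for Q' = -I + Q F F^T Q, Q(T) = 0,
    then forward explicit Euler for u' = -F^T Q (F u - y);
    u(T) is returned as the regularized solution."""
    m = F.shape[0]
    h = T / nsteps
    FFt = F @ F.T
    Qs = [np.zeros((m, m))]           # Q at t = T
    for _ in range(nsteps):           # backward sweep: Q(T-h), Q(T-2h), ...
        Q = Qs[-1]
        Qs.append(Q - h * (-np.eye(m) + Q @ FFt @ Q))
    u = u0
    for k in range(nsteps):           # forward sweep uses Q at t = k*h
        Q = Qs[nsteps - k]
        u = u - h * (F.T @ Q @ (F @ u - y))
    return u
```

For a small diagonal F with exact data, u(T) lands close to the exact solution once T is moderately large, consistent with T playing the role of the regularization parameter.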

SLIDE 20

Regularization Properties

Q′(t) = −I + Q(t)FF*Q(t),  Q(T) = 0

By spectral theory we get

Q(t) = ∫ q(t, λ) dF_λ,  q(t, λ) = 1/√λ tanh(√λ (T − t))

u_T := u(T) = ∫ (1 − 1/cosh(√λ T)) / λ dE_λ F*y + ∫ 1/cosh(√λ T) dE_λ u0

Convergence and convergence-rate results follow from the usual spectral filter theory (as in Engl, Hanke, Neubauer): convergence as T → ∞, convergence rates.
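The filter can be checked numerically: g_T(λ) = (1 − 1/cosh(√λ T))/λ tends to 1/λ, the exact inverse, as T → ∞, and grows monotonically in T. A small sketch (function name is an assumption):

```python
import numpy as np

def dp_filter(lam, T):
    """Spectral filter of the static dynamic-programming method:
    g_T(lambda) = (1 - 1/cosh(sqrt(lambda) * T)) / lambda."""
    return (1.0 - 1.0 / np.cosh(np.sqrt(lam) * T)) / lam
```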

SLIDE 21

Discrete Problems

First: discretization. Second: principle of optimality.
Instead of T, the iteration index N acts as regularization parameter.
Recursions for Q_i and u_i:

u_N = ∫ g_N(λ) dE_λ F*y

g_N(λ) = 1/λ [ 1 − ( √(λ/4 + 1) · T_{2N+1}(√(λ/4 + 1)) )⁻¹ ]

T_n(x): Chebyshev polynomial of the first kind of order n.
Convergence as N → ∞, convergence rates.

SLIDE 22

Examples

Linear integral equation with convolution kernel, stationary case. Error vs. N, T.

[Figure: error vs. N and error vs. T for Landweber, CG, Method I, Method II]

Same order of convergence as CG; parameter tuning in the algorithms.

SLIDE 23

Numerical Results - Dynamic case

Linearized dynamic impedance tomography problem: ∇·(γ(t, ·)∇u) = 0
Identify γ from the Dirichlet-to-Neumann map for the linearized problem, Λ_γ : u ↦ ∂u/∂n

SLIDE 24

Numerical Results - Dynamic case

Nonlinear problem: F_nl : γ(t) ↦ Λ_γ
Linearized problem: F δγ := F′_nl(1) δγ

For the results we use nonlinear data and add random noise.

SLIDE 25

Numerical Results - Exact solution

SLIDE 26

Numerical Results - Reconstruction, 5 % noise

SLIDE 27

Generalizations-Nonlinear Problems

The dynamic programming principle can be generalized to nonlinear problems.
Nonlinear dynamic problems: F(t, u(t)) = y(t), with F a nonlinear operator in u.
Tikhonov functional for nonlinear problems:

J(u) = 1/2 ∫_0^T ‖F(t, u(t)) − y(t)‖² dt + α/2 ∫_0^T ‖u′(t)‖²_H dt

SLIDE 28

Nonlinear Problem Dynamic programming

Hamilton-Jacobi equation for the value function:

V_t(t, ξ) = −1/2 ‖F(t, ξ) − y(t)‖² + 1/(2α) ‖V_ξ(t, ξ)‖²,  V(T, ξ) = 0

u can be found from the nonlinear evolution equation u′(t) = −1/α V_ξ(t, u(t)).
If V is known, the equation for u is rather standard.
Open problem: how to solve the HJ equation?

SLIDE 29

Conclusion

Iterative methods for static and dynamic linear inverse problems
Based on the dynamic programming principle
Regularization theory available
Complexity is linear in T; Q can be precomputed
Generalization to nonlinear problems
