

  1. PDEs from Monge-Kantorovich Mass Transportation Theory. Luca Petrelli, Math & Computer Science Dept., Mount St Mary's University

  2. Outline
• Monge-Kantorovich mass transportation problem
• Gradient Flow formalism
• Time-step discretization of gradient flows
• Application of theory to nonlinear diffusion problems
• Signed measures

  3. Monge's original problem: move a pile of soil from a deposit (+) to an excavation (−) with the minimum amount of work. From "Mémoire sur la théorie des déblais et des remblais" (1781).

  4. Mathematical Model of Monge's Problem
$\mu^+, \mu^-$ nonnegative Radon measures on $\mathbb{R}^d$ with $\mu^+(\mathbb{R}^d) = \mu^-(\mathbb{R}^d) < \infty$.
$s : \mathbb{R}^d \to \mathbb{R}^d$ a one-to-one mapping rearranging $\mu^+$ into $\mu^-$, i.e. $s_\# \mu^+ = \mu^-$ (push-forward), or
$\int_X h(s(x))\, d\mu^+(x) = \int_Y h(y)\, d\mu^-(y)$ for all $h \in C(\mathbb{R}^d; \mathbb{R}^d)$,
where $X = \mathrm{spt}(\mu^+)$ and $Y = \mathrm{spt}(\mu^-)$.

  5. $c(x,y)$: cost of moving a unit mass from $x \in \mathbb{R}^d$ to $y \in \mathbb{R}^d$.
Total cost: $I[s] := \int_{\mathbb{R}^d} c(x, s(x))\, d\mu^+(x)$.
Monge's problem is then to find $s^* \in A$ (the admissible set) such that
$I[s^*] = \min_{s \in A} I[s]$  (M)
with $A = \{\, s \mid s_\#(\mu^+) = \mu^- \,\}$.
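For intuition, a standard one-dimensional special case (not stated on the slide, added here as an illustration): when $d = 1$, $c(x,y) = |x-y|^2$ and $\mu^+$ has no atoms, an optimal Monge map is the monotone rearrangement built from the cumulative distribution functions.

```latex
% One-dimensional illustration (assumptions: d = 1, quadratic cost, mu+ atomless).
% With F(x) = mu^+((-\infty, x]) and G(y) = mu^-((-\infty, y]), the monotone
% rearrangement
\[
  s^*(x) = G^{-1}\bigl(F(x)\bigr)
\]
% pushes mu^+ forward to mu^- and minimizes I[s] over the admissible set A.
```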

  6. The problem is too hard!
• The constraint is highly nonlinear: $\int_X h(s(x))\, d\mu^+(x) = \int_Y h(y)\, d\mu^-(y)$ for all $h \in C(\mathbb{R}^d; \mathbb{R}^d)$.
• It is hard to identify a minimum: given a minimizing sequence $\{s_k\}_{k=1}^\infty \subset A$ with $I[s_k] \to \inf_{s \in A} I[s]$, it is hard to find a subsequence $\{s_{k_j}\}$ such that $s_{k_j} \to s^*$ optimal.
• Classical methods of the Calculus of Variations fail: no terms create compactness for $I[\cdot]$, and $I[\cdot]$ does not involve gradients, hence it cannot be shown coercive on any Sobolev space.

  7. Kantorovich's Relaxation (1940s)
Kantorovich's idea: transform (M) into a linear problem.
Define
$\mathcal{M} := \{\, \mu \text{ prob. measure on } \mathbb{R}^d \times \mathbb{R}^d \mid \mathrm{proj}_x \mu = \mu^+,\ \mathrm{proj}_y \mu = \mu^- \,\}$,
$J[\mu] := \int_{\mathbb{R}^d \times \mathbb{R}^d} c(x,y)\, d\mu(x,y)$.
Find $\mu^* \in \mathcal{M}$ such that $J[\mu^*] = \min_{\mu \in \mathcal{M}} J[\mu]$.  (K)

  8. Motivation
Given $s \in A$ we can define $\mu \in \mathcal{M}$ by
$\mu(E) := \mu^+\bigl(\{\, x \in \mathbb{R}^d \mid (x, s(x)) \in E \,\}\bigr)$ for every Borel set $E \subset \mathbb{R}^d \times \mathbb{R}^d$.
Problem: the solution $\mu^*$ need not be generated by any one-to-one mapping $s \in A$; we only look for "weak" or generalized solutions.

  9. Linear Programming Analogy (finite-dimensional case)
Discretize: $\mu^+ \to \mu^+_i$, $\mu^- \to \mu^-_j$, $\mu(x,y) \to \mu_{i,j}$, $c(x,y) \to c_{i,j}$ ($i = 1, \dots, n$, $j = 1, \dots, m$).
Mass balance condition: $\sum_{i=1}^n \mu^+_i = \sum_{j=1}^m \mu^-_j < \infty$.
Constraints: $\sum_{j=1}^m \mu_{i,j} = \mu^+_i$, $\sum_{i=1}^n \mu_{i,j} = \mu^-_j$, $\mu_{i,j} \ge 0$.
Linear programming problem: minimize $\sum_{i=1}^n \sum_{j=1}^m c_{i,j}\, \mu_{i,j}$.
The dual problem is then: maximize $\sum_{i=1}^n u_i \mu^+_i + \sum_{j=1}^m v_j \mu^-_j$ subject to $u_i + v_j \le c_{i,j}$.
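The finite-dimensional problem above is an ordinary transportation linear program, so it can be handed to any LP solver. Below is a minimal sketch using SciPy's `linprog`; the marginals, support points and cost matrix are illustrative choices, not data from the talk.

```python
# Finite-dimensional Kantorovich problem as a linear program (sketch).
import numpy as np
from scipy.optimize import linprog

mu_plus = np.array([0.5, 0.5])        # source masses mu+_i
mu_minus = np.array([0.3, 0.3, 0.4])  # target masses mu-_j (same total mass)
n, m = len(mu_plus), len(mu_minus)

# Cost c_{i,j}: squared distance between illustrative points on a line.
x = np.array([0.0, 1.0])
y = np.array([0.0, 0.5, 1.0])
C = (x[:, None] - y[None, :]) ** 2

# Marginal constraints: sum_j mu_{i,j} = mu+_i and sum_i mu_{i,j} = mu-_j.
A_eq, b_eq = [], []
for i in range(n):
    row = np.zeros((n, m)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(mu_plus[i])
for j in range(m):
    col = np.zeros((n, m)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(mu_minus[j])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
plan = res.x.reshape(n, m)      # optimal transport plan mu_{i,j}
print(plan.round(3), res.fun)   # res.fun is the minimal total cost J[mu*]
```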

  10. Kantorovich's Dual Problem
Define
$\mathcal{L} := \{\, (u,v) \mid u, v : \mathbb{R}^d \to \mathbb{R}^+ \text{ continuous},\ u(x) + v(y) \le c(x,y) \ \forall x, y \in \mathbb{R}^d \,\}$,
$K(u,v) := \int_{\mathbb{R}^d} u(x)\, d\mu^+(x) + \int_{\mathbb{R}^d} v(y)\, d\mu^-(y)$.
Then the dual problem to (K) is: find $(u^*, v^*)$ such that $K(u^*, v^*) = \max_{(u,v) \in \mathcal{L}} K(u,v)$.

  11. Gradient Flows
To define a gradient flow we need:
• a differentiable manifold $\mathcal{M}$
• a metric tensor $g$ on $\mathcal{M}$ which makes $(\mathcal{M}, g)$ a Riemannian manifold
• a functional $E$ on $\mathcal{M}$.
Then $\frac{du}{dt} = -\mathrm{grad}\, E(u)$ is the gradient flow of $E$ on $(\mathcal{M}, g)$,
where $g(\mathrm{grad}\, E, s) = \mathrm{diff}\, E \cdot s$ for all vector fields $s$ on $\mathcal{M}$.
Then $g_u\bigl(\tfrac{du}{dt}, s\bigr) + \mathrm{diff}\, E|_u \cdot s = 0$ for all vector fields $s$ along $u$.
Main property of gradient flows: the energy of the system is decreasing along trajectories, i.e.
$\frac{d}{dt} E(u) = \mathrm{diff}\, E|_u \cdot \frac{du}{dt} = -g_u\bigl(\tfrac{du}{dt}, \tfrac{du}{dt}\bigr)$.
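For orientation, the familiar finite-dimensional special case (added here as a sanity check, not on the slide): take $\mathcal{M} = \mathbb{R}^n$ with the Euclidean metric $g(s_1, s_2) = s_1 \cdot s_2$.

```latex
% Finite-dimensional special case: M = R^n with the Euclidean inner product.
% Then grad E = \nabla E, the gradient flow is continuous-time gradient descent,
% and the energy-decrease property is the usual chain-rule computation:
\[
  \frac{du}{dt} = -\nabla E(u),
  \qquad
  \frac{d}{dt} E\bigl(u(t)\bigr)
  = \nabla E(u) \cdot \frac{du}{dt}
  = -\bigl|\nabla E(u)\bigr|^{2} \le 0 .
\]
```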

  12. Partial Differential Equations as Gradient Flows
Let $\mathcal{M} := \{\, u \ge 0 \text{ measurable, with } \int u\, dx = 1 \,\}$ and define the tangent space to $\mathcal{M}$ as
$T_u \mathcal{M} := \{\, s \text{ measurable, with } \int s\, dx = 0 \,\}$,
and identify it with $\{\, p \text{ measurable} \,\}/\!\sim$ via the elliptic equation $-\nabla \cdot (u \nabla p) = s$.

  13. Define
$g_u(s_1, s_2) = \int u\, \nabla p_1 \cdot \nabla p_2\, dx \equiv \int s_1\, p_2\, dx$
and $E(u) = \int e(u)\, dx$.
Then
$g_u\bigl(\tfrac{du}{dt}, s\bigr) + \mathrm{diff}\, E|_u \cdot s = \int \Bigl[ \frac{\partial u}{\partial t}\, p - \nabla \cdot (u \nabla p)\, e'(u) \Bigr] dx$
$= \int \Bigl[ \frac{\partial u}{\partial t}\, p + \nabla p \cdot \bigl(u \nabla e'(u)\bigr) \Bigr] dx = \int \Bigl[ \frac{\partial u}{\partial t} - \nabla \cdot \bigl(u \nabla e'(u)\bigr) \Bigr] p\, dx = 0$
$\Longrightarrow\; \frac{\partial u}{\partial t} = \nabla \cdot \bigl(u \nabla e'(u)\bigr)$.

  14. Examples of PDEs that can be obtained as Gradient Flows
Heat Equation: $e(u) = u \log u$, $\frac{\partial u}{\partial t} = \Delta u$.
Fokker-Planck Equation: $e(u) = u \log u + u V$, $\frac{\partial u}{\partial t} = \Delta u + \nabla \cdot (u \nabla V)$.
Porous Medium Equation: $e(u) = \frac{1}{m-1} u^m$, $\frac{\partial u}{\partial t} = \Delta u^m$.
Note: the equations are only solved in a weak or generalized way.
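A quick verification of the first row of this list, using the general equation $\partial_t u = \nabla \cdot (u \nabla e'(u))$ derived on the previous slide:

```latex
% Heat equation: e(u) = u log u gives e'(u) = log u + 1, so
% u \nabla e'(u) = u (\nabla u / u) = \nabla u, and therefore
\[
  \frac{\partial u}{\partial t}
  = \nabla \cdot \bigl( u \, \nabla e'(u) \bigr)
  = \nabla \cdot \nabla u
  = \Delta u .
\]
% The Fokker-Planck and porous medium rows follow from the same computation
% with e(u) = u \log u + uV and e(u) = u^m/(m-1), respectively.
```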

  15. Important fact!
The gradient flow can be implemented without making explicit use of the gradient operator, through time-discretization and then passing to the limit as the time step goes to 0.
Jordan, Kinderlehrer and Otto (1998): $\frac{\partial u(x,t)}{\partial t} - \mathrm{div}\bigl( u \nabla \psi(x) \bigr) - \Delta u = 0$
Otto (1998): $\frac{\partial u(x,t)}{\partial t} - \Delta u^2 = 0$
Kinderlehrer and Walkington (1999): $\frac{\partial u(x,t)}{\partial t} - \frac{\partial}{\partial x}\bigl( u \nabla \psi(x) + K(u)_x \bigr) = g(x,t)$
Agueh (2002): $\frac{\partial u(x,t)}{\partial t} - \mathrm{div}\bigl( u \nabla c^*\bigl[\nabla\bigl(F'(u) + V(x)\bigr)\bigr] \bigr) = 0$
Petrelli and Tudorascu (2004): $\frac{\partial u(x,t)}{\partial t} - \nabla \cdot \bigl(u \nabla \Psi(x,t)\bigr) - \Delta f(t,u) = g(x,t,u)$

  16. Time-discretized Gradient Flows
1. Set up the variational principle.
Let $h > 0$ be the time step. Define the sequence $\{u^h_k\}_{k \ge 0}$ recursively as follows: $u^h_0$ is the initial datum $u_0$; given $u^h_{k-1}$, define $u^h_k$ as the solution of the minimization problem
$\min_{u \in \mathcal{M}} \Bigl\{ \frac{1}{2h}\, d(u^h_{k-1}, u)^2 + E(u) \Bigr\}$  (P)
where $d$, the Wasserstein metric, is defined as
$d(\mu^+, \mu^-)^2 := \inf_{\mu \in \mathcal{M}} \int_{\mathbb{R}^d \times \mathbb{R}^d} |x - y|^2\, d\mu(x,y)$,
i.e. $d$ is the least cost of Monge-Kantorovich mass reallocation of $\mu^+$ to $\mu^-$ for $c(x,y) = |x - y|^2$.
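For concreteness, a small numerical sketch of the Wasserstein metric $d$ appearing in (P), restricted to the special case of two one-dimensional empirical measures with the same number of equally weighted atoms, where the optimal plan is the monotone (sorted) pairing; the Gaussian samples are illustrative, not part of the talk.

```python
# Sketch: d(mu+, mu-)^2 for c(x, y) = |x - y|^2 in 1D, with two empirical
# measures made of N equally weighted atoms each.  In this special case the
# optimal Kantorovich plan pairs the sorted atoms, so the squared Wasserstein
# distance is the mean squared difference of the sorted samples.
import numpy as np

def wasserstein2_squared_1d(xs, ys):
    xs, ys = np.sort(xs), np.sort(ys)
    assert xs.shape == ys.shape, "equal numbers of equal-weight atoms assumed"
    return float(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(0)
mu_plus = rng.normal(0.0, 1.0, size=1000)   # samples representing mu+
mu_minus = rng.normal(2.0, 1.0, size=1000)  # samples representing mu-
print(wasserstein2_squared_1d(mu_plus, mu_minus))  # close to (2 - 0)^2 = 4
```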

  17. 2. Euler-Lagrange Equations
Use the Variation of Domain method to recover the Euler-Lagrange equations:
$\int_{\mathbb{R}^d \times \mathbb{R}^d} (y - x) \cdot \xi(y)\, d\mu(x,y) - h \int_{\mathbb{R}^d} \phi(u^h_k)\, \nabla \cdot \xi\, dx = 0$,
where $\phi(s) := e'(s)\, s - e(s)$, or, in gradient flow terms,
$\frac{u^h_k - u^h_{k-1}}{h} = -\mathrm{grad}\, E(u^h_k)$.
Then recover the approximate Euler-Lagrange equations, i.e.
$\Bigl| \int_{\mathbb{R}^d} \Bigl[ \frac{1}{h}\bigl(u^h_k - u^h_{k-1}\bigr)\, \zeta - \phi(u^h_k)\, \Delta \zeta \Bigr] dx \Bigr| \le \frac{1}{2h}\, \|\nabla^2 \zeta\|_\infty\, d(u^h_k, u^h_{k-1})^2$.

  18. 3. Linear Time Interpolation
Define $u^h(x,t) := u^h_k(x)$ if $kh \le t < (k+1)h$.
After integration over time in each interval we obtain
$\Bigl| \int_{[0,T] \times \mathbb{R}^d} \Bigl[ \frac{u^h(x, t+\tau) - u^h(x,t)}{h}\, \zeta - \phi(u^h)\, \Delta \zeta \Bigr] dx\, dt \Bigr| \le C \sum_{k=1}^{n} d(u^h_k, u^h_{k-1})^2$.
Necessary inequality: $\sum_{k=1}^{n} d(u^h_k, u^h_{k-1})^2 \le C h$.
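A minimal sketch of the time interpolation defined above, assuming the discrete minimizers $u^h_k$ are stored as arrays on a spatial grid; the names and data are illustrative, not the talk's implementation.

```python
# Piecewise-constant-in-time interpolation: u^h(., t) = u^h_k for kh <= t < (k+1)h.
import numpy as np

def interpolate_in_time(u_steps, h, t):
    """Return u^h(., t), where u_steps[k] is the array for u^h_k."""
    k = min(int(t // h), len(u_steps) - 1)  # clamp at the final available step
    return u_steps[k]

# Example: three hypothetical discrete profiles on a 4-point spatial grid.
u_steps = [np.full(4, v) for v in (1.0, 0.8, 0.6)]
print(interpolate_in_time(u_steps, h=0.1, t=0.15))  # picks the k = 1 profile
```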

  19. 4. Convergence Result as the Time Step h Goes to 0
Linear case: through a Dunford-Pettis-type criterion, show the existence of a function $u$ such that, up to a subsequence, $u^h \rightharpoonup u$ in some $L^p$ space.
Nonlinear case: stronger convergence is needed, obtained through a precompactness result in $L^1$. Also needed is a discrete maximum principle: $u_0$ bounded $\Rightarrow$ $u^h$ bounded.
Then, passing to the limit in the general Euler-Lagrange equation shows that $u$ is a "weak" solution of
$\frac{\partial u}{\partial t} = \nabla \cdot \bigl( u \nabla e'(u) \bigr) \equiv \Delta \phi(u)$.

  20. Nonlinear Diffusion Problems
(NP):
$u_t - \nabla \cdot \bigl(u \nabla \Psi(x,t)\bigr) - \Delta f(t,u) = g(x,t,u)$ in $\Omega \times (0,T)$,
$\bigl( u \nabla \Psi + \nabla f(t,u) \bigr) \cdot \nu_x = 0$ on $\partial \Omega \times (0,T)$,
$u(\cdot, 0) = u_0 \ge 0$ in $\Omega$.
Theorem 4. Assume (f1)-(f3), (g1)-(g4) and (Ψ). Then the problem (NP) admits a nonnegative essentially bounded weak solution provided that $\Omega$ is bounded and convex and the initial data $u_0$ is nonnegative and essentially bounded.

  21. Hypotheses
(f1) $(u - v)\bigl( f(t,u) - f(t,v) \bigr) \ge c\, |u - v|^\omega$ for all $u, v \ge 0$
(f2) $f(\cdot, s)$ is Lipschitz continuous for $s$ in bounded sets
(f3) $f(t, \cdot)$ is differentiable, with $\frac{\partial f}{\partial s}$ positive and monotone in time
(g1) $g(x, \cdot, \cdot)$ is nonnegative on $[0, \infty) \times [0, \infty)$ for all $x \in \mathbb{R}^d$
(g2) $g(x,t,u) \le C(1 + u)$ locally uniformly w.r.t. $(x,t)$, $t \ge 0$
(g3) $g(x,t,\cdot)$ is continuous on $[0, \infty)$
(g4) $\{ g(x, \cdot, u) \}_{(x,u)}$ is equicontinuous on $[0, \infty)$ w.r.t. $(x,u)$
(Ψ) $\Psi : \mathbb{R}^d \times [0, \infty) \to \mathbb{R}$ is differentiable and locally Lipschitz in $x \in \mathbb{R}^d$

  22. Novelties
• Time-dependent potential $\Psi(\cdot, t)$ and diffusion coefficient $f(t, \cdot)$
• Non-homogeneous forcing term $g(x,t,u)$
• Averaging in time for $\Psi$, $f$ and $g$, e.g. $\Psi_k := \frac{1}{h} \int_{kh}^{(k+1)h} \Psi(\cdot, t)\, dt$ (a numerical sketch follows below)
• New variational principle for $v_{k-1} := u_{k-1} + \int_{(k-1)h}^{kh} g(\cdot, t, u_{k-1})\, dt$:
$\min_{u \in \mathcal{M}} \Bigl\{ \frac{1}{2h}\, d(v_{k-1}, u)^2 + E(u) \Bigr\}$  (P′)
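As referenced in the averaging bullet above, here is a minimal sketch of computing $\Psi_k$ by numerical quadrature; the potential $\Psi$ and the midpoint rule are illustrative assumptions, not the talk's implementation.

```python
# Sketch of the time-averaging step Psi_k := (1/h) * int_{kh}^{(k+1)h} Psi(., t) dt,
# approximated here with a simple midpoint quadrature rule.
import numpy as np

def averaged_potential(Psi, x, k, h, n_quad=16):
    """Approximate Psi_k(x) for a time-dependent potential Psi(x, t)."""
    ts = k * h + (np.arange(n_quad) + 0.5) * (h / n_quad)  # midpoint nodes in [kh, (k+1)h]
    return np.mean([Psi(x, t) for t in ts], axis=0)        # mean of samples ~ (1/h) * integral

# Example with a hypothetical potential Psi(x, t) = (1 + t) * x^2 / 2.
Psi = lambda x, t: (1.0 + t) * x**2 / 2.0
x = np.linspace(-1.0, 1.0, 5)
print(averaged_potential(Psi, x, k=0, h=0.1))
```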
