
Bayesian Probabilistic Numerical Methods: Numerical Disintegration and Pipelines - PowerPoint PPT Presentation



  1. Bayesian Probabilistic Numerical Methods: Numerical Disintegration and Pipelines. Jon Cockayne, June 6, 2017

  2. (Re)introduction

  3. (Re)introduction. “Prior”: u ∼ µ. “Data” (the “Information Equation”): A(u) = a. “Posterior”: the disintegration µ^a of µ, defined for A_#µ-almost-all a.

  4. Q1: How can we access µ^a?

  5. (Re)introduction. Unless probabilistic numerical methods “agree” about what their uncertainty means, they cannot be composed coherently.

  6. Modelling Electro-Mechanics in the Heart

  7. Modelling Electro-Mechanics in the Heart. [Figure: flow chart of a calcium-handling model-fitting pipeline in which Ca transients recorded under caffeine and field stimulation, with the calculated fluxes through NCX, I_CaL and SERCA subtracted at each stage, are used to fit the NCX, I_CaL, SERCA and RyR sub-models in turn.]

  8. Q2: When is it “legal” to compose Bayesian PNM in pipelines?

  9. Numerical Disintegration

  10. Numerical Disintegration. Recall the issue: X^a = { u ∈ X : A(u) = a } and µ(X^a) = 0, which means…

  11. Numerical Disintegration. Recall the issue: X^a = { u ∈ X : A(u) = a } and µ(X^a) = 0, which means ∄ dµ^a/dµ: the posterior has no density with respect to the prior.

  12. Our Approach. Design an algorithm for approximately sampling µ^a. Two sources of error: • Intractability of µ^a (“numerical disintegration”) • Intractability of non-Gaussian priors (“prior truncation”)

  13. Our Approach. Design an algorithm for approximately sampling µ^a. Two sources of error: • Intractability of µ^a (“numerical disintegration”) • Intractability of non-Gaussian priors (“prior truncation”)

  14. Our Approach. Design an algorithm for approximately sampling µ^a. Two sources of error: • Intractability of µ^a (“numerical disintegration”) • Intractability of non-Gaussian priors (“prior truncation”)

  15. Three Considerations: Numerical Disintegration, Prior Truncation

  16. Three Considerations: Numerical Disintegration, Prior Truncation, Sampler Convergence

  17. Three Considerations: Numerical Disintegration, Prior Truncation, Sampler Convergence

  18. Numerical Disintegration. Introduce the δ-relaxed measure µ^a_δ: dµ^a_δ/dµ ∝ φ_δ(∥A(u) − a∥_A), where φ : R+ → R+ is a relaxation function chosen so that: • φ(0) = 1 • φ(r) → 0 as r → ∞.

  19. Numerical Disintegration. Introduce the δ-relaxed measure µ^a_δ: dµ^a_δ/dµ ∝ φ_δ(∥A(u) − a∥_A), where φ : R+ → R+ is a relaxation function chosen so that: • φ(0) = 1 • φ(r) → 0 as r → ∞.
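
To make the relaxation concrete, here is a minimal sketch (not from the slides) of the unnormalised weight that µ^a_δ assigns to a prior draw. It assumes the common scaling φ_δ(r) = φ(r/δ) and the Gaussian relaxation φ(r) = exp(−r²); the function names are illustrative only.

```python
import numpy as np

def log_phi_gaussian(r):
    """log of the example relaxation function phi(r) = exp(-r^2): phi(0) = 1, phi(r) -> 0."""
    return -np.asarray(r, dtype=float) ** 2

def relaxed_log_weight(residual_norm, delta, log_phi=log_phi_gaussian):
    """Unnormalised log-density of the delta-relaxed measure mu^a_delta with respect to the
    prior mu, i.e. log phi_delta(||A(u) - a||_A), assuming phi_delta(r) = phi(r / delta)."""
    return log_phi(residual_norm / delta)

# A draw whose information residual has norm 0.3 loses weight rapidly as delta shrinks.
print(relaxed_log_weight(0.3, delta=1.0))    # -0.09
print(relaxed_log_weight(0.3, delta=0.01))   # -900.0
```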

  20. Numerical Disintegration: Intuition. The “ideal” Radon–Nikodym derivative: “dµ^a/dµ ∝ I(u ∈ X^a)”.

  21. Example Relaxation Functions: φ(r) = I(r < 1) and φ(r) = exp(−r²). [Figure: the two relaxation functions plotted around the data a.]

  22. Example Relaxation Functions: φ(r) = I(r < 1), corresponding to uniform noise over B_δ(a); φ(r) = exp(−r²), corresponding to Gaussian noise with s.d. ∝ δ. [Figure: the two relaxation functions plotted around the data a.]
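
A rough way to read the noise interpretations above, assuming the scaling φ_δ(r) = φ(r/δ): the indicator relaxation matches observing a drawn uniformly over the ball B_δ(A(u)), and the Gaussian relaxation matches additive Gaussian noise with standard deviation proportional to δ. The sketch below, with illustrative helper names, simply draws such pseudo-observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def observe_uniform_ball(Au, delta):
    """phi(r) = I(r < 1): observe a drawn uniformly from the ball B_delta(A(u))."""
    direction = rng.standard_normal(Au.shape)
    direction /= np.linalg.norm(direction)
    radius = delta * rng.uniform() ** (1.0 / Au.size)  # uniform over the ball, not just its surface
    return Au + radius * direction

def observe_gaussian(Au, delta):
    """phi(r) = exp(-r^2): observe a with Gaussian noise whose s.d. is proportional to delta."""
    return Au + delta * rng.standard_normal(Au.shape) / np.sqrt(2.0)

Au = np.array([0.2, -0.1])
print(observe_uniform_ball(Au, delta=0.05))
print(observe_gaussian(Au, delta=0.05))
```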

  23. Tempering for Sampling µ^a_δ. To sample µ^a_δ we take inspiration from rare-event simulation and use tempering schemes to sample the posterior. Set δ_0 > δ_1 > ... > δ_N and consider µ^a_{δ_0}, µ^a_{δ_1}, ..., µ^a_{δ_N}. • µ^a_{δ_0} is the prior and easy to sample. • µ^a_{δ_N} has δ_N close to zero and is hard to sample. • Intermediate distributions define a “ladder” which takes us from prior to posterior.

  24. Tempering for Sampling µ^a_δ. To sample µ^a_δ we take inspiration from rare-event simulation and use tempering schemes to sample the posterior. Set δ_0 > δ_1 > ... > δ_N and consider µ^a_{δ_0}, µ^a_{δ_1}, ..., µ^a_{δ_N}. • µ^a_{δ_0} is the prior and easy to sample. • µ^a_{δ_N} has δ_N close to zero and is hard to sample. • Intermediate distributions define a “ladder” which takes us from prior to posterior.

  25. Tempering for Sampling µ^a_δ. To sample µ^a_δ we take inspiration from rare-event simulation and use tempering schemes to sample the posterior. Set δ_0 > δ_1 > ... > δ_N and consider µ^a_{δ_0}, µ^a_{δ_1}, ..., µ^a_{δ_N}. • µ^a_{δ_0} is the prior and easy to sample. • µ^a_{δ_N} has δ_N close to zero and is hard to sample. • Intermediate distributions define a “ladder” which takes us from prior to posterior.
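
As a concrete, heavily simplified illustration of such a ladder, here is a toy SMC-style tempering sketch. The standard Gaussian prior on a finite parameterisation θ of u, the Gaussian relaxation with φ_δ(r) = φ(r/δ), the reweight/resample/move structure, and all names are illustrative assumptions, not the sampler used in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def smc_tempering(residual, deltas, n_particles=1000, dim=2, n_moves=5):
    """Toy SMC-style sketch of the ladder mu^a_{delta_0} -> ... -> mu^a_{delta_N}."""
    theta = rng.standard_normal((n_particles, dim))        # delta_0 is large, so start from the prior
    log_phi = lambda th, d: -(residual(th) / d) ** 2       # log phi_delta(||A(u) - a||_A)

    for d_prev, d_next in zip(deltas[:-1], deltas[1:]):
        # 1. Re-weight particles from rung delta_prev to rung delta_next.
        log_w = log_phi(theta, d_next) - log_phi(theta, d_prev)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # 2. Resample according to the weights.
        theta = theta[rng.choice(n_particles, n_particles, p=w)]
        # 3. Rejuvenate with a few random-walk Metropolis steps targeting mu^a_{delta_next}.
        scale = 0.5 * min(d_next, 1.0)
        for _ in range(n_moves):
            prop = theta + scale * rng.standard_normal(theta.shape)
            log_acc = (log_phi(prop, d_next) - log_phi(theta, d_next)
                       - 0.5 * (prop ** 2).sum(axis=1) + 0.5 * (theta ** 2).sum(axis=1))
            accept = np.log(rng.uniform(size=n_particles)) < log_acc
            theta[accept] = prop[accept]
    return theta

# Toy information operator A(u_theta) = theta_0 + theta_1 with data a = 1.
residual = lambda th: np.abs(th[:, 0] + th[:, 1] - 1.0)
samples = smc_tempering(residual, deltas=[10.0, 3.0, 1.0, 0.3, 0.1])
print(samples.mean(axis=0))   # both coordinates should be roughly 0.5
```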

  26. Example: Poisson's Equation. Consider −d²u/dx² = sin(2πx) for x ∈ (0, 1), with u(x) = 0 at x = 0 and x = 1. • Use a Gaussian prior on u(x). • Impose boundary conditions explicitly. • Impose interior conditions at x = 1/3, x = 2/3. • Construct the posterior using ND with δ ∈ {1.0, 10⁻², 10⁻⁴}. • Use φ(r) = exp(−r²).

  27. Example: Poisson's Equation. Consider −d²u/dx² = sin(2πx) for x ∈ (0, 1), with u(x) = 0 at x = 0 and x = 1. • Use a Gaussian prior on u(x). • Impose boundary conditions explicitly. • Impose interior conditions at x = 1/3, x = 2/3. • Construct the posterior using ND with δ ∈ {1.0, 10⁻², 10⁻⁴}. • Use φ(r) = exp(−r²).

  28. Example: Poisson's Equation. Consider −d²u/dx² = sin(2πx) for x ∈ (0, 1), with u(x) = 0 at x = 0 and x = 1. • Use a Gaussian prior on u(x). • Impose boundary conditions explicitly. • Impose interior conditions at x = 1/3, x = 2/3. • Construct the posterior using ND with δ ∈ {1.0, 10⁻², 10⁻⁴}. • Use φ(r) = exp(−r²).

  29. Example: Poisson's Equation. Consider −d²u/dx² = sin(2πx) for x ∈ (0, 1), with u(x) = 0 at x = 0 and x = 1. • Use a Gaussian prior on u(x). • Impose boundary conditions explicitly. • Impose interior conditions at x = 1/3, x = 2/3. • Construct the posterior using ND with δ ∈ {1.0, 10⁻², 10⁻⁴}. • Use φ(r) = exp(−r²).
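
For concreteness, here is a toy finite-dimensional sketch of this setup (not the construction used in the talk): a truncated sine basis, which satisfies the boundary conditions by construction, an illustrative Gaussian prior on the coefficients, the design points x = 1/3, 2/3, and the Gaussian relaxation. All function names and the coefficient scaling are assumptions.

```python
import numpy as np

N = 10                                      # number of basis functions (an illustrative choice)
idx = np.arange(1, N + 1)

def u(c, x):
    """u(x) = sum_i c_i sin(i pi x); each basis function already satisfies u(0) = u(1) = 0."""
    return np.sin(np.outer(x, idx * np.pi)) @ c

def A(c):
    """Information operator: -u'' evaluated at the interior design points x = 1/3, 2/3."""
    x = np.array([1.0 / 3.0, 2.0 / 3.0])
    return np.sin(np.outer(x, idx * np.pi)) @ (c * (idx * np.pi) ** 2)

a = np.sin(2.0 * np.pi * np.array([1.0 / 3.0, 2.0 / 3.0]))  # right-hand side sin(2 pi x) at the design points

def relaxed_log_weight(c, delta):
    """log phi_delta(||A(u) - a||_A) with phi(r) = exp(-r^2), as in the ND relaxation."""
    return -(np.linalg.norm(A(c) - a) / delta) ** 2

rng = np.random.default_rng(2)
c = rng.standard_normal(N) / idx ** 2       # one draw from an illustrative Gaussian prior
print(u(c, np.array([0.5])))                # the draw evaluated at x = 0.5
print(relaxed_log_weight(c, delta=1.0), relaxed_log_weight(c, delta=1e-2))
```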

  30. Example: Poisson's Equation. In what follows, on the left are samples from the posterior µ^a_δ in X-space; on the right are contours of φ_δ(∥A(u) − a∥_A) in A-space.

  31. Example: Poisson's Equation. [Figure, δ = 1.0: posterior samples µ^a_δ in X-space (left) and contours of φ_δ(∥A(u) − a∥_A) in A-space (right).]

  32. Example: Poisson's Equation. [Figure, δ = 0.01: posterior samples µ^a_δ in X-space (left) and contours of φ_δ(∥A(u) − a∥_A) in A-space (right).]

  33. Example: Poisson's Equation. [Figure, δ = 0.0001: posterior samples µ^a_δ in X-space (left) and contours of φ_δ(∥A(u) − a∥_A) in A-space (right).]

  34. Three Considerations: Numerical Disintegration, Prior Truncation, Sampler Convergence

  35. Prior Construction. Assume X has a countable basis {φ_i}, i = 0, ..., ∞. Then for any u ∈ X, u(x) = Σ_{i=0}^∞ u_i φ_i(x).

  36. Prior Construction. Assume X has a countable basis {φ_i}, i = 0, ..., ∞. Then for any u ∈ X, u(x) = Σ_{i=0}^∞ γ_i ξ_i φ_i(x). Different ξ_i require different γ for almost-sure convergence: • ξ_i IID Uniform: γ ∈ ℓ¹ • ξ_i IID Gaussian: γ ∈ ℓ² • ξ_i IID Cauchy: γ ∈ ℓ².

  37. Prior Construction. Assume X has a countable basis {φ_i}, i = 0, ..., ∞. Then for any u ∈ X, u(x) = Σ_{i=0}^∞ γ_i ξ_i φ_i(x). Different ξ_i require different γ for almost-sure convergence: • ξ_i IID Uniform: γ ∈ ℓ¹ • ξ_i IID Gaussian: γ ∈ ℓ² • ξ_i IID Cauchy: γ ∈ ℓ². For practical computation we truncate to N terms: u_N(x) = Σ_{i=0}^N γ_i ξ_i φ_i(x).
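
A short sketch of drawing such truncated series priors. The cosine basis, the grid, and the specific γ decay rates are illustrative assumptions; the slides only fix the structure of the expansion and the summability class required of γ.

```python
import numpy as np

rng = np.random.default_rng(3)

def truncated_series_prior(x, N, xi_sampler, gamma):
    """Draw u_N(x) = sum_{i=0}^{N} gamma_i xi_i phi_i(x) for an illustrative cosine basis phi_i."""
    i = np.arange(N + 1)
    xi = xi_sampler(N + 1)                      # the i.i.d. coefficients xi_i
    phi = np.cos(np.outer(x, i * np.pi))        # phi_i(x) = cos(i pi x), an assumption for the sketch
    return phi @ (gamma(i) * xi)

x = np.linspace(0.0, 1.0, 101)
# Uniform xi with gamma in l^1, e.g. gamma_i = (1 + i)^(-2).
u_unif = truncated_series_prior(x, 50, lambda n: rng.uniform(0.0, 1.0, n), lambda i: (1.0 + i) ** -2.0)
# Gaussian xi with gamma in l^2, e.g. gamma_i = (1 + i)^(-1).
u_gauss = truncated_series_prior(x, 50, lambda n: rng.standard_normal(n), lambda i: (1.0 + i) ** -1.0)
print(u_unif[:3], u_gauss[:3])
```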

  38. Three Considerations: Numerical Disintegration, Prior Truncation, Sampler Convergence

  39. Convergence, but in what metric? All results show weak convergence, framed in terms of an abstract integral probability metric (Müller, 1997): d_F(ν, ν′) = sup_{∥f∥_F ≤ 1} |ν(f) − ν′(f)|. Examples: Total Variation, Wasserstein. Results are generic to A(u), µ.

  40. Convergence, but in what metric? All results show weak convergence, framed in terms of an abstract integral probability metric (Müller, 1997): d_F(ν, ν′) = sup_{∥f∥_F ≤ 1} |ν(f) − ν′(f)|. Examples: Total Variation, Wasserstein. Results are generic to A(u), µ.

  41. Convergence, but in what metric? All results show weak convergence, framed in terms of an abstract integral probability metric (Müller, 1997): d_F(ν, ν′) = sup_{∥f∥_F ≤ 1} |ν(f) − ν′(f)|. Examples: Total Variation, Wasserstein. Results are generic to A(u), µ.
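
The two examples named on the slide are standard instances of this metric; the block below spells them out (total variation up to the usual factor-of-two convention, and 1-Wasserstein via Kantorovich-Rubinstein duality).

```latex
\[
  F = \{ f : \|f\|_\infty \le 1 \}
  \quad\Longrightarrow\quad
  d_F(\nu, \nu') = \sup_{\|f\|_\infty \le 1} \bigl| \nu(f) - \nu'(f) \bigr|
  \quad \text{(total variation)},
\]
\[
  F = \{ f : \operatorname{Lip}(f) \le 1 \}
  \quad\Longrightarrow\quad
  d_F(\nu, \nu') = \sup_{\operatorname{Lip}(f) \le 1} \bigl| \nu(f) - \nu'(f) \bigr|
  \quad \text{(1-Wasserstein)}.
\]
```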

  42. Convergence of µ^a_δ. Theorem: Assume that d_F(µ^a, µ^{a′}) ≤ C_µ ∥a − a′∥^α for some constants C_µ, α and A_#µ-almost-all a, a′ ∈ A. Then, for small δ, d_F(µ^a_δ, µ^a) ≤ C_µ (1 + C_φ) δ^α for A_#µ-almost-all a ∈ A.

  43. Convergence of µ^a_δ. Theorem: Assume that d_F(µ^a, µ^{a′}) ≤ C_µ ∥a − a′∥^α for some constants C_µ, α and A_#µ-almost-all a, a′ ∈ A. Then, for small δ, d_F(µ^a_δ, µ^a) ≤ C_µ (1 + C_φ) δ^α for A_#µ-almost-all a ∈ A.

  44. Total Error. Denote by µ^a_{δ,N} the posterior distribution given by dµ^a_{δ,N}/dµ ∝ φ_δ(∥A ∘ P_N(u) − a∥_A), where P_N denotes projection onto the first N basis functions.
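
The transcript stops here. One natural way to organise the total error, stated as a triangle-inequality sketch rather than a result from the talk, is the split below: a prior-truncation term plus the relaxation term bounded by the theorem above.

```latex
\[
  d_F\bigl(\mu^{a}_{\delta,N}, \mu^{a}\bigr)
  \;\le\;
  \underbrace{d_F\bigl(\mu^{a}_{\delta,N}, \mu^{a}_{\delta}\bigr)}_{\text{prior truncation}}
  \;+\;
  \underbrace{d_F\bigl(\mu^{a}_{\delta}, \mu^{a}\bigr)}_{\text{relaxation, } \le\, C_\mu (1 + C_\phi)\, \delta^{\alpha}}.
\]
```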
