Conditional Sampling for Max-Stable Random Fields - Yizao Wang - PowerPoint PPT Presentation


  1. Conditional Sampling for Max-Stable Random Fields. Yizao Wang, Department of Statistics, University of Michigan. June 28th, 2011, 7th EVA Conference, Lyon, France. Joint work with Stilian A. Stoev.

  2. An Example. A sample from the Smith model (Smith 1990): a stationary (moving maxima) max-stable random field. [Figure: contour plot of one realization of the Smith model; red crosses mark the observed locations.] If only partial observations (red crosses) are provided...

  3. What We Can Do... Four (fast) conditional samplings from the Smith model: [Figure: four conditionally sampled realizations of the Smith model, each consistent with the same partial observations.]
◮ Potential applications in prediction.
◮ Analytical formula impossible in general.
◮ Need models with special structures.

  4. Max-Linear Models. Definition:
◮ $Z_1, \dots, Z_p$: iid $\alpha$-Fréchet: $P(Z \le t) = \exp\{-\sigma^\alpha t^{-\alpha}\}$, $t > 0$.
◮ $A = (a_{i,j}) \in \mathbb{R}_+^{n \times p}$ given.
◮ Let $X = A \odot Z$ denote $X_i = \bigvee_{j=1}^{p} a_{i,j} Z_j$, $i = 1, \dots, n$.
Remark:
◮ $\{X_i\}_{1 \le i \le n}$ is a spectrally discrete $\alpha$-Fréchet process.
◮ Approximates an arbitrary max-stable process (random field) when $p$ is sufficiently large.
◮ Results remain valid for non-negative $Z_j$ with densities $f_{Z_j}$.
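
To make the definition concrete, here is a minimal NumPy sketch, not from the talk (the function names `rfrechet` and `max_linear` are illustrative), simulating $Z$ and applying the max-linear product $A \odot Z$:

```python
import numpy as np

def rfrechet(p, alpha=1.0, sigma=1.0, rng=None):
    """iid alpha-Frechet draws by inverse transform: solving
    exp(-sigma^alpha * t^-alpha) = U gives t = sigma * (-log U)^(-1/alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    return sigma * (-np.log(rng.uniform(size=p))) ** (-1.0 / alpha)

def max_linear(A, Z):
    """Max-linear product X = A (.) Z, i.e. X_i = max_j a_ij * Z_j."""
    return (A * Z[None, :]).max(axis=1)

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])   # the toy matrix used later in the talk
X = max_linear(A, rfrechet(3))
```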

  5. Conditional Sampling for Max-Linear Models. Our contribution (Wang and Stoev 2011):
◮ An explicit formula for the conditional distribution of max-linear models (including a large class of max-stable random fields).
◮ An efficient algorithm (R package maxLinear) for large-scale conditional sampling.
◮ Potential applications in prediction for extremal phenomena, e.g., environmental and financial problems.
Two applications:
◮ MARMA processes (Davis and Resnick 1989, 1993).
◮ The Smith model (Smith 1990).

  6. MARMA Processes (Davis and Resnick 1989, 1993). The max-autoregressive moving average process MARMA($m$, $q$):
$$X_t = \phi_1 X_{t-1} \vee \cdots \vee \phi_m X_{t-m} \vee Z_t \vee \theta_1 Z_{t-1} \vee \cdots \vee \theta_q Z_{t-q},$$
with $t \in \mathbb{Z}$, $\phi_i \ge 0$, $\theta_j \ge 0$, and $\{Z_t\}_{t \in \mathbb{Z}}$ iid 1-Fréchet. Or, equivalently (iff $\phi^* = \bigvee_{i=1}^{m} \phi_i < 1$),
$$X_t = \bigvee_{j=0}^{\infty} \psi_j Z_{t-j}, \quad t \in \mathbb{N},$$
with $\psi_j \ge 0$ and $\sum_{j=0}^{\infty} \psi_j < \infty$ (explicit formula for the $\psi_j$).
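
A MARMA path can be simulated directly from the defining max-recursion. A small sketch, not from the talk: it starts from zero initial values and discards a burn-in to approximate the stationary regime (the `marma` name and the burn-in choice are mine):

```python
import numpy as np

def marma(T, phi, theta=(), burn=200, rng=None):
    """Simulate a MARMA(m, q) path from the max-recursion with 1-Frechet noise."""
    rng = np.random.default_rng() if rng is None else rng
    m, q = len(phi), len(theta)
    n = T + burn
    Z = (-np.log(rng.uniform(size=n + q))) ** (-1.0)   # iid standard 1-Frechet
    X = np.zeros(n)
    for t in range(n):
        cands = [Z[q + t]]                                           # the Z_t term
        cands += [phi[i] * X[t - 1 - i] for i in range(m) if t - 1 - i >= 0]
        cands += [theta[j] * Z[q + t - 1 - j] for j in range(q)]     # theta_j Z_{t-j}
        X[t] = max(cands)
    return X[burn:]

# Setting of the next slide: MARMA(3, 0) with phi = (0.7, 0.5, 0.3).
x = marma(150, phi=(0.7, 0.5, 0.3))
```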

  7. Prediction of MARMA(3,0) Processes. [Figure: sample path of the MARMA process ($X$ vs. $t$, $0 \le t \le 150$), with the conditional 95% quantile, the conditional median, and the projection predictor overlaid.] Prediction of a MARMA(3,0) process with $\phi_1 = 0.7$, $\phi_2 = 0.5$ and $\phi_3 = 0.3$, based on the observation of the first 100 values of the process.

  8. Smith Model. A stationary max-stable random field model (Smith 1990):
$$X_t = \int^{e}_{\mathbb{R}^2} \phi(t - u)\, M_\alpha(du), \quad t = (t_1, t_2) \in \mathbb{R}^2,$$
with
$$\phi(t_1, t_2) := \frac{\beta_1 \beta_2}{2\pi\sqrt{1 - \rho^2}} \exp\left\{ -\frac{\beta_1^2 t_1^2 - 2\rho \beta_1 \beta_2 t_1 t_2 + \beta_2^2 t_2^2}{2(1 - \rho^2)} \right\}.$$
◮ Consistent estimators for $\rho, \beta_1, \beta_2$ (de Haan and Pereira 2006).
◮ A discretized version:
$$X_t = \bigvee_{-q \le j_1, j_2 \le q-1} h^{2/\alpha}\, \phi(t - u_{j_1 j_2})\, Z_{j_1 j_2}, \quad t \in \{-q, \dots, q\}^2,$$
with $u_{j_1 j_2} = ((j_1 + 1/2)h, (j_2 + 1/2)h)$ and the $Z_{j_1 j_2}$ iid 1-Fréchet.
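
The discretization is exactly a max-linear model $X = A \odot Z$: rows of $A$ index the grid of locations $t$, columns index the shifted grid $u_{j_1 j_2}$. A sketch of building $A$ (function names are mine, not the maxLinear package API):

```python
import numpy as np

def smith_kernel(d1, d2, beta1=1.0, beta2=1.0, rho=0.0):
    """Bivariate Gaussian kernel phi of the Smith model, at displacements (d1, d2)."""
    quad = beta1**2 * d1**2 - 2*rho*beta1*beta2*d1*d2 + beta2**2 * d2**2
    return beta1 * beta2 / (2*np.pi*np.sqrt(1 - rho**2)) \
        * np.exp(-quad / (2*(1 - rho**2)))

def smith_matrix(q, h, alpha=1.0, **kernel_params):
    """A[i, j] = h^(2/alpha) * phi(t_i - u_j), with rows over t in {-q,...,q}^2
    and columns over u_{j1 j2} = ((j1+1/2)h, (j2+1/2)h), -q <= j1, j2 <= q-1."""
    ts = np.array([(a, b) for a in range(-q, q + 1) for b in range(-q, q + 1)], float)
    us = np.array([((a + 0.5) * h, (b + 0.5) * h)
                   for a in range(-q, q) for b in range(-q, q)], float)
    d = ts[:, None, :] - us[None, :, :]
    return h**(2.0 / alpha) * smith_kernel(d[..., 0], d[..., 1], **kernel_params)
```

With `A = smith_matrix(10, 0.2, rho=-0.8, beta1=1.5, beta2=0.7)`, one unconditional draw of the field is `max_linear(A, rfrechet(A.shape[1]))`, reusing the sketch from slide 4.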

  9. Simulations. [Figure: four conditional samplings from the Smith model, shown as contour plots over the same domain and conditioning observations.] Parameters: $\rho = 0$, $\beta_1 = 1$, $\beta_2 = 1$ (isotropic).

  10. Simulations. [Figure: 95%-quantile of the conditional marginal deviation, shown as a contour plot.] Parameters: $\rho = -0.8$, $\beta_1 = 1.5$, $\beta_2 = 0.7$.

  11. This Talk: an Inverse Problem and an Efficient Solution. [Figure: the four conditional samplings from the isotropic simulation, repeated as motivation.]

  12. Formulation of the Problem.
◮ Random structures (models) $X = A \odot Z$ and $Y = B \odot Z$.
◮ Predict $P(Y \in \cdot \mid X = x)$.
◮ $A \in \mathbb{R}_+^{n \times p}$ and $B \in \mathbb{R}_+^{n_B \times p}$ known.
◮ Key: the conditional distribution $P(Z \in E \mid X = x) = ?$
Remark:
◮ Theoretical issue: $P(Z \in E \mid X = x)$ is not well defined pointwise, since $\{X = x\}$ has probability zero; the rigorous treatment uses regular conditional probability.
◮ Computational issue: $n$ small, $p$ and $n_B$ large.
(A prediction sketch follows below.)
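
Once conditional draws of $Z$ are available (the sampler is developed on the following slides), prediction of $Y = B \odot Z$ is just a push-forward through the max-linear product. A small sketch (the name `predict_Y` is mine):

```python
import numpy as np

def predict_Y(B, Z_draws):
    """Monte Carlo draws of Y | X = x: push conditional draws of Z through Y = B (.) Z."""
    return np.stack([(B * Z[None, :]).max(axis=1) for Z in Z_draws])
```

Empirical quantiles of the stacked draws give conditional summaries such as the median and 95% quantile shown on slide 7.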

  13. A Toy Example.
$$X_1 = Z_1 \vee Z_2 \vee Z_3, \quad X_2 = Z_1, \qquad A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}.$$
(In)equality constraints on $Z$:
$$Z_1 = X_2 =: \hat z_1, \quad Z_2 \le X_1 =: \hat z_2, \quad Z_3 \le X_1 =: \hat z_3.$$
Case 1 (red): if $X_1 = X_2 = x$, then $Z_1 = \hat z_1$, $Z_2 \le \hat z_2$, $Z_3 \le \hat z_3$. In this case,
$$P(Z \in E \mid \text{case 1}) = \delta_{\hat z_1}(\pi_1(E)) \prod_{j=2}^{3} P\{Z_j \in \pi_j(E) \mid Z_j < \hat z_j\}.$$

  14. A Toy Example (continued). Same model and constraints as on the previous slide. Case 2 (blue): if $x_2 = X_2 < X_1 = x_1$, then $Z_1 = \hat z_1$, and either
$$Z_1 = \hat z_1, \ Z_2 = \hat z_2, \ Z_3 \le \hat z_3 \quad \text{or} \quad Z_1 = \hat z_1, \ Z_2 \le \hat z_2, \ Z_3 = \hat z_3.$$
In the first case (similarly for the second),
$$P(Z \in E \mid \text{case 2, first}) = \prod_{j=1}^{2} \delta_{\hat z_j}(\pi_j(E))\, P\{Z_3 \in \pi_3(E) \mid Z_3 < \hat z_3\}.$$

  15. Observations from the Toy Example. We have seen...
◮ There are subcases in which the conditional distribution is simple.
◮ The subcases are determined by the equality constraints.
It turns out...
◮ The desired conditional distribution is a mixture of the subcase distributions.
An inverse problem:
◮ Find all combinations of equality constraints that yield valid subcases.
◮ Each subcase is referred to as a hitting scenario $J \subset \{1, \dots, p\}$.

  16. A Toy Example (revisited). $X_1 = Z_1 \vee Z_2 \vee Z_3$, $X_2 = Z_1$.
Case 1 (red): if $X_1 = X_2 = x$, then $Z_1 = \hat z_1$, $Z_2 < \hat z_2$, $Z_3 < \hat z_3$, and $J = \{1\}$.
Case 2 (blue): if $x_2 = X_2 < X_1 = x_1$, then $Z_1 = \hat z_1, Z_2 = \hat z_2, Z_3 < \hat z_3$ or $Z_1 = \hat z_1, Z_2 < \hat z_2, Z_3 = \hat z_3$, and correspondingly $J = \{1, 2\}$ or $J = \{1, 3\}$.
$J = \{3\}$ is not a hitting scenario (why? $X_2 = Z_1$ forces the equality $Z_1 = \hat z_1$, so every hitting scenario must contain the index 1).

  17. Hitting Scenarios. $J \subset \{1, \dots, p\}$ is a hitting scenario if
$$C_J(A, x) \equiv \{ z \in \mathbb{R}_+^p : z_j = \hat z_j,\ j \in J,\ z_k < \hat z_k,\ k \in J^c,\ A \odot z = x \}$$
is non-empty.
◮ Not every $J \subset \{1, \dots, p\}$ is a valid hitting scenario.
◮ Restricted to each $C_J(A, x)$,
$$\nu_J(x, E) \equiv P(Z \in E \mid Z \in C_J(A, x)) = \prod_{j \in J} \delta_{\hat z_j}(\pi_j(E)) \prod_{j \in J^c} P\{Z_j \in \pi_j(E) \mid Z_j < \hat z_j\}.$$
◮ The sets $\{C_J(A, x)\}_{J \in \mathcal{J}}$ are mutually disjoint.
(A brute-force enumeration sketch follows below.)
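
For intuition only, a brute-force enumeration sketch, exponential in $p$ (unlike the talk's algorithm). The helper names are mine, and keeping only minimal covers is an illustrative stand-in for the paper's precise notion of relevant scenario; it reproduces both cases of the toy example. It uses $\hat z_j = \min\{x_i / a_{i,j} : a_{i,j} > 0\}$ and the sets $H_i$ of indices that can attain $x_i$:

```python
import itertools
import numpy as np

def hat_z(A, x):
    """Componentwise bounds: zh_j = min over rows i with a_ij > 0 of x_i / a_ij.
    Assumes every column of A has at least one positive entry."""
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = np.where(A > 0, x[:, None] / A, np.inf)
    return ratios.min(axis=0)

def hitting_scenarios(A, x):
    """Minimal hitting scenarios by exhaustive search: H_i = {j : a_ij * zh_j = x_i}
    lists the indices that can attain x_i; J must meet every H_i, and only
    minimal such J are kept (indices outside J stay strictly below their bounds,
    so they can never attain x_i)."""
    zh = hat_z(A, x)
    H = [set(np.flatnonzero(np.isclose(A[i] * zh, x[i]))) for i in range(A.shape[0])]
    found = []
    for r in range(1, A.shape[1] + 1):
        for J in itertools.combinations(range(A.shape[1]), r):
            Js = set(J)
            if all(Js & Hi for Hi in H) and not any(f < Js for f in found):
                found.append(Js)
    return found

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])
# Indices are 0-based, so {0} is the talk's J = {1}.
print(hitting_scenarios(A, np.array([1.0, 1.0])))  # case 1: [{0}]
print(hitting_scenarios(A, np.array([2.0, 1.0])))  # case 2: [{0, 1}, {0, 2}]
```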

  18. Conditional Distribution for Max-Linear Models.
Theorem (Wang and Stoev 2011). $P(Z \in E \mid X = x)$ is a mixture of the $\nu_J$, indexed by $J \in \mathcal{J}_r(A, x)$:
$$P(Z \in E \mid X = x) = \sum_{J \in \mathcal{J}_r(A, x)} p_J(A, x)\, \nu_J(x, E), \quad E \in \mathcal{B}(\mathbb{R}_+^p),$$
where
$$p_J(A, x) \propto w_J := \prod_{j \in J} \hat z_j f_{Z_j}(\hat z_j) \prod_{j \in J^c} F_{Z_j}(\hat z_j), \qquad \sum_{J \in \mathcal{J}_r(A, x)} p_J = 1,$$
and $\mathcal{J}_r(A, x)$ denotes the set of all relevant hitting scenarios.
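
A direct transcription of the weight formula into code, reusing `hat_z` and `hitting_scenarios` from the sketch above (illustrative names; for standard 1-Fréchet noise, $F(t) = e^{-1/t}$ and $f(t) = t^{-2} e^{-1/t}$):

```python
import numpy as np

def mixture_weights(A, x, f, F):
    """Normalized weights p_J ~ w_J = prod_{j in J} zh_j f(zh_j) * prod_{k in J^c} F(zh_k)."""
    zh = hat_z(A, x)
    Js = hitting_scenarios(A, x)
    p = len(zh)
    w = np.array([
        np.prod([zh[j] * f(zh[j]) for j in J]) *
        np.prod([F(zh[k]) for k in range(p) if k not in J])
        for J in Js
    ])
    return Js, w / w.sum()

F1 = lambda t: np.exp(-1.0 / t)             # standard 1-Frechet cdf
f1 = lambda t: t**-2.0 * np.exp(-1.0 / t)   # ... and its density
```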

  19. Algorithm for Conditional Sampling $Z \mid X = x$. Algorithm I: (1) compute $\hat z$, $\mathcal{J}_r(A, x)$ and $p_J(A, x)$; (2) sample $Z \sim P(Z \in \cdot \mid X = x)$.
Not the end of the story!
◮ We have not yet discussed how to solve the inverse problem of identifying $\mathcal{J}_r(A, x)$, which is closely related to the NP-hard set cover problem.
◮ Fortunately, thanks to the max-linear structure, with probability 1, $\mathcal{J}_r(A, x)$ has a nice structure, leading to an efficient algorithm.
(A sampling sketch follows below.)
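
Putting the pieces together, a sketch of Algorithm I built from the illustrative helpers above. `Finv` is the quantile function of the noise, so the truncated draws $Z_k \mid Z_k < \hat z_k$ use inverse transform sampling with $U$ rescaled to $(0, F(\hat z_k))$:

```python
import numpy as np

def conditional_sample(A, x, f, F, Finv, rng=None):
    """One draw Z ~ P(Z in . | X = x) for the max-linear model X = A (.) Z:
    pick a hitting scenario J with probability p_J, set Z_j = zh_j for j in J,
    and draw Z_k | Z_k < zh_k for k outside J as Z_k = Finv(U * F(zh_k))."""
    rng = np.random.default_rng() if rng is None else rng
    zh = hat_z(A, x)
    Js, pJ = mixture_weights(A, x, f, F)
    J = Js[rng.choice(len(Js), p=pJ)]
    Z = Finv(rng.uniform(size=len(zh)) * F(zh))  # all coordinates truncated below zh
    Z[list(J)] = zh[list(J)]                     # equality constraints of J
    return Z

Finv1 = lambda v: -1.0 / np.log(v)               # standard 1-Frechet quantile
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])
Z = conditional_sample(A, np.array([2.0, 1.0]), f1, F1, Finv1)
```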

  20. An Efficient Algorithm in a Nutshell. A standard algorithm takes exponential time to compute $\mathcal{J}_r(A, x)$ (and $|\mathcal{J}_r(A, x)|$ can be very large). But with probability 1:
◮ $\{1, \dots, p\} = \bigcup_{s=1}^{r} J^{(s)}$ (disjoint).
◮ $\mathcal{J}_r(A, x)$ is determined by $\{J^{(s)}\}_{s=1,\dots,r}$: $\mathcal{J}_r(A, x) = \{ J \subset \{1, \dots, p\} : |J \cap J^{(s)}| = 1,\ s = 1, \dots, r \}$.
◮ $Z$ can be divided into conditionally independent sub-vectors $\{Z_j\}_{j \in J^{(1)}}, \dots, \{Z_j\}_{j \in J^{(r)}}$.
◮ The partition $\{J^{(s)}\}_{s=1,\dots,r}$, which depends on $(A, x)$, can be computed efficiently.
(See the sampling sketch below.)
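
Why the partition helps: the weights factorize as $w_J = \big(\prod_{k=1}^{p} F_{Z_k}(\hat z_k)\big) \prod_{j \in J} \rho_j$ with $\rho_j = \hat z_j f_{Z_j}(\hat z_j) / F_{Z_j}(\hat z_j)$, so the hitting index of each block can be drawn independently, with no enumeration of $\mathcal{J}_r(A, x)$. A sketch assuming the partition `blocks` has already been computed (the talk's efficient procedure for that step is not reproduced here):

```python
import numpy as np

def conditional_sample_fast(blocks, zh, f, F, Finv, rng=None):
    """Partition-based draw of Z | X = x: each block J^(s) is handled
    independently, choosing its single 'hitting' index with probability
    proportional to rho_j = zh_j * f(zh_j) / F(zh_j)."""
    rng = np.random.default_rng() if rng is None else rng
    Z = Finv(rng.uniform(size=len(zh)) * F(zh))      # all coordinates truncated
    for block in blocks:
        b = np.asarray(block)
        rho = zh[b] * f(zh[b]) / F(zh[b])
        j = b[rng.choice(len(b), p=rho / rho.sum())]
        Z[j] = zh[j]                                  # the block's equality index
    return Z
```

For case 2 of the toy example, `blocks = [[0], [1, 2]]` (0-based) reproduces the mixture over $J = \{1, 2\}$ and $J = \{1, 3\}$, with odds proportional to the $\rho_j$ of the two candidate indices.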
