Markov chain Monte Carlo (MCMC) methods: Gibbs Sampler Examples 12 and 13 (Matlab)


  1. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) Consider again deblurring of the constellation image (b) in case (ii) of Example 6, in which the $\ell_1$-norm based prior $\pi_{pr}(x) \propto \exp\big(-\alpha \sum_{i=1}^n |x_i|\big)$ was used, resulting in a posterior of the form
$$\pi(x \mid y) \propto \exp\Big(-\frac{1}{2\sigma^2}\,\|y - Ax\|^2 - \alpha \sum_{i=1}^n |x_i|\Big).$$
Find, using the Gibbs sampler, a conditional mean (CM) estimate with $\sigma = 0.005$ and $\alpha = 20$, and compare the result to the MAP estimate computed in Example 6. In order to speed up the sampling procedure, first restrict the problem to a region of interest (ROI), referring here to the set of pixels that a priori seem to be nonzero based on the blurred image.
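
As a minimal sketch of this objective (assuming the blur matrix A and blurred data y are already in the workspace; the variable names are illustrative, not from the original code), the unnormalized negative log-posterior can be written in Matlab as:

    % Negative log-posterior (up to an additive constant) for the l1-prior model.
    sigma = 0.005; alpha = 20;
    neglogpost = @(x) norm(y - A*x)^2/(2*sigma^2) + alpha*sum(abs(x));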

  2. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued Solution The ROI consisted of seven 7 × 7 pixel sets concentric with the stars, one set for each star. With the ROI, the dimension of the inverse problem was $7^3 = 343$; without it, $64^2 = 4096$. (Figures: Exact, Blurred, ROI.)

  3. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued The challenge in implementing the Gibbs sampler was mainly in localizing the "essential" part of the conditional posterior density
$$\pi(x_i \mid y, x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) = \pi(x_i \mid \tilde y_i) \propto \exp\Big(-\frac{1}{2\sigma^2}\,\|\tilde y_i - a_i x_i\|^2 - \alpha x_i\Big),$$
where $a_i$ denotes the $i$-th column of $A = (a_1, a_2, \ldots, a_n)$, $\tilde y_i = y - \sum_{j \ne i} a_j x_j$, and $x_i = |x_i|$, since pixel values must a priori be nonnegative, $x_i \ge 0$, $i = 1, 2, \ldots, n$. The maximum is located at the point satisfying
$$0 = \frac{d}{dx_i}\Big(-\frac{1}{2\sigma^2}\,\|\tilde y_i - a_i x_i\|^2 - \alpha x_i\Big) = \frac{d}{dx_i}\Big(-\frac{1}{2\sigma^2}\, a_i^T a_i\, x_i^2 + \frac{1}{\sigma^2}\,\tilde y_i^T a_i\, x_i - \alpha x_i - \frac{1}{2\sigma^2}\,\tilde y_i^T \tilde y_i\Big)$$

  4. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued
$$0 = -\alpha + \frac{1}{\sigma^2}\,\tilde y_i^T a_i - \frac{1}{\sigma^2}\, a_i^T a_i\, x_i, \quad \text{i.e.,} \quad x_i = \frac{\tilde y_i^T a_i - \alpha\sigma^2}{a_i^T a_i},$$
if $(\tilde y_i^T a_i - \alpha\sigma^2)/(a_i^T a_i) > 0$, and otherwise the maximizer is $x_i = 0$. Denoting by $M$ the maximum value of
$$p_i(x_i) = -\frac{1}{2\sigma^2}\, a_i^T a_i\, x_i^2 + \frac{1}{\sigma^2}\,\tilde y_i^T a_i\, x_i - \alpha x_i - \frac{1}{2\sigma^2}\,\tilde y_i^T \tilde y_i, \qquad x_i \ge 0,$$
the essential part of the conditional posterior density can be defined as the interval in which $p_i(x_i) \ge M - c$ with some suitably chosen constant $c > 0$. Since $p_i(x_i)$ is a second-degree polynomial with the second-degree coefficient
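
A hedged Matlab sketch of this maximizer computation (ai, ytilde and xstar are assumed names; x holds the current state of the chain):

    % Constrained maximizer of the conditional log-density for pixel i.
    ai = A(:, i);                          % i-th column of A
    ytilde = y - A*x + ai*x(i);            % ytilde_i = y - sum_{j~=i} a_j x_j
    xstar = (ytilde'*ai - alpha*sigma^2) / (ai'*ai);
    xstar = max(xstar, 0);                 % positivity constraint x_i >= 0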

  5. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued
$-a_i^T a_i/(2\sigma^2)$ and maximizer $\max\{0,\ (\tilde y_i^T a_i - \alpha\sigma^2)/(a_i^T a_i)\}$, the interval in which $p_i(x_i) \ge M - c$ and $x_i \ge 0$ is given by
$$I_c = \left[\max\left\{0,\ \frac{\tilde y_i^T a_i - \alpha\sigma^2}{a_i^T a_i} - \sigma\sqrt{\frac{2c}{a_i^T a_i}}\right\},\ \frac{\tilde y_i^T a_i - \alpha\sigma^2}{a_i^T a_i} + \sigma\sqrt{\frac{2c}{a_i^T a_i}}\right]$$
if $(\tilde y_i^T a_i - \alpha\sigma^2)/(a_i^T a_i) > 0$, and otherwise this interval is
$$I_c = \left[0,\ \frac{\tilde y_i^T a_i - \alpha\sigma^2}{a_i^T a_i} + \sqrt{\left(\frac{\tilde y_i^T a_i - \alpha\sigma^2}{a_i^T a_i}\right)^2 + \frac{2\sigma^2 c}{a_i^T a_i}}\right].$$
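
Under the same assumptions, the interval could be computed as, for instance:

    % Essential interval I_c of the conditional density of pixel i.
    t0 = (ytilde'*ai - alpha*sigma^2) / (ai'*ai);   % unconstrained maximizer
    w  = sigma*sqrt(2*c/(ai'*ai));                  % half-width with drop exp(-c)
    if t0 > 0
        Ic = [max(0, t0 - w), t0 + w];
    else
        Ic = [0, t0 + sqrt(t0^2 + 2*sigma^2*c/(ai'*ai))];
    end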

  6. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued The interval $I_c$ chosen as shown on the previous page contains the maximum of the conditional posterior density, and all values on the interval are greater than or equal to $\exp(-c)$ times the maximum. If the constant $c$ is chosen appropriately, the Gibbs sampler will both be numerically stable and produce unbiased results. Here, the choice was $c = 7$, and the conditional posterior was evaluated on $I_c$ at 20 equally spaced points. (Figure: example evaluated conditional densities.)
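
The original code is not shown, but one common way to draw from a density tabulated on such a grid is inverse-CDF sampling; the sketch below (pfun, tgrid and xnew are assumed names, and the constant term of $p_i$ is dropped since it cancels in the normalization) illustrates the idea:

    % Draw from the conditional density tabulated at 20 points on I_c.
    pfun  = @(t) -(ai'*ai)*t.^2/(2*sigma^2) + ((ytilde'*ai)/sigma^2 - alpha)*t;
    tgrid = linspace(Ic(1), Ic(2), 20);        % 20 equally spaced points
    p = exp(pfun(tgrid) - max(pfun(tgrid)));   % stabilized unnormalized density
    F = cumsum(p); F = F/F(end);               % discrete CDF on the grid
    xnew = interp1([0 F], [tgrid(1) tgrid], rand);   % inverse-CDF draw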

  7. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued In the sampling process, a total of 10000 sample points were generated, the first 1000 of which were labeled as the burn-in sequence and excluded from the eventual sample ensemble yielding the CM estimate. (Figures: MAP (Example 6), Exact, CM.)
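
In code, the CM estimate is simply the mean over the post-burn-in samples; a sketch, assuming X stores one sample per column:

    % CM estimate: average of the samples remaining after burn-in.
    nburn = 1000;
    x_cm = mean(X(:, nburn+1:end), 2);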

  8. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 12 (Matlab) continued The sampling history and the sample-based marginal density were analyzed for two individual pixels, (A) and (B). (Figures: history, burn-in, and marginal plots for pixels (A) and (B); purple = exact, green = conditional mean, black = 95 % credibility.)

  9. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) Consider deblurring of the constellation image in the case of the conditionally Gaussian (hierarchical) prior model $\pi_{pr}(x, \theta) = \pi(x \mid \theta)\,\pi_{hyper}(\theta)$ of Examples 7 and 8. Use the Gibbs sampler to find a conditional mean (CM) estimate corresponding to both the gamma and the inverse gamma hyperprior $\pi_{hyper}(\theta)$. Use the noise standard deviation $\sigma = 0.005$ and the shape and scaling parameter values $\beta = 1.5$ and $\theta_0 = 0.00125$ for the gamma and inverse gamma densities. Compare the CM estimates obtained to the corresponding MAP estimates (see Examples 6, 7 and 8).

  10. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) continued Solution The sampling procedure was based on utilizing the conditional posterior densities of the form $\pi(x \mid y, \theta)$ and $\pi(\theta \mid y, x)$ according to the following algorithm (a Matlab sketch follows the list):
1. Choose $x^{(0)}$ and $\theta^{(0)}$ and set $k = 1$.
2. Pick $x^{(k)}$ from $\pi(x \mid y, \theta^{(k-1)})$.
3. Pick $\theta^{(k)}$ from $\pi(\theta \mid y, x^{(k)})$.
4. If $k$ is smaller than the desired number of sampling points, set $k = k + 1$ and go back to the second step.
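
A minimal Matlab sketch of this loop, where draw_x and draw_theta are hypothetical stand-ins for the update formulas derived on the following slides:

    % Two-block Gibbs sampler skeleton for the hierarchical model.
    K = 10000;                               % desired number of sampling points
    x = x_init; theta = th_init;             % step 1: x^(0) and theta^(0)
    X = zeros(numel(x), K); TH = zeros(numel(theta), K);
    for k = 1:K
        x = draw_x(A, y, sigma, theta);      % step 2: x^(k) ~ pi(x | y, theta^(k-1))
        theta = draw_theta(x, beta, theta0); % step 3: theta^(k) ~ pi(theta | y, x^(k))
        X(:,k) = x; TH(:,k) = theta;         % store the chain
    end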

  11. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) continued The conditional density of the second step is of the form
$$\pi(x \mid y, \theta^{(k-1)}) \propto \exp\Big(-\frac{1}{2\sigma^2}\,\|y - Ax\|^2 - \frac{1}{2}\, x^T D^{-1}_{\theta^{(k-1)}} x\Big) = \exp\left(-\frac{1}{2}\left\|\begin{pmatrix}\sigma^{-1}A \\ D^{-1/2}_{\theta^{(k-1)}}\end{pmatrix} x - \begin{pmatrix}\sigma^{-1}y \\ 0\end{pmatrix}\right\|^2\right)$$
with $D_{\theta^{(k-1)}} = \mathrm{diag}(\theta_1^{(k-1)}, \theta_2^{(k-1)}, \ldots, \theta_n^{(k-1)})$. By defining
$$M_\theta = \begin{pmatrix}\sigma^{-1}A \\ D_\theta^{-1/2}\end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix}\sigma^{-1}y \\ 0\end{pmatrix},$$
we have $\pi(x \mid y, \theta) \propto \exp(-\frac{1}{2}\|M_\theta x - b\|^2)$, the mean and covariance matrix of which are given by $x_\theta = (M_\theta^T M_\theta)^{-1} M_\theta^T b$ and $\Gamma_\theta = (M_\theta^T M_\theta)^{-1}$, respectively.
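
In Matlab terms, the stacked system and conditional mean could be formed as follows (a sketch; Mth, b and x_th are assumed names):

    % Stacked least-squares form of pi(x | y, theta).
    n = numel(theta);
    Mth = [A/sigma; diag(1./sqrt(theta))];   % M_theta = [sigma^-1 A; D_theta^-1/2]
    b = [y/sigma; zeros(n,1)];
    x_th = (Mth'*Mth) \ (Mth'*b);            % conditional mean x_theta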

  12. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) continued Let now $X$ satisfy the equation $M_\theta(X - x_\theta) = Z$ in the least-squares sense, that is, $M_\theta^T M_\theta (X - x_\theta) = M_\theta^T Z$. If $Z$ is zero-mean Gaussian white noise, $Z \sim \mathcal N(0, I)$, i.e. $E[ZZ^T] = I$, then $X = (M_\theta^T M_\theta)^{-1} M_\theta^T Z + x_\theta$ is also a Gaussian random variable with the mean $x_\theta$ and the covariance matrix
$$E[(X - x_\theta)(X - x_\theta)^T] = E\Big[\big((M_\theta^T M_\theta)^{-1} M_\theta^T Z\big)\big((M_\theta^T M_\theta)^{-1} M_\theta^T Z\big)^T\Big] = E\Big[(M_\theta^T M_\theta)^{-1} M_\theta^T Z Z^T M_\theta (M_\theta^T M_\theta)^{-1}\Big]$$
$$= (M_\theta^T M_\theta)^{-1} M_\theta^T E[ZZ^T] M_\theta (M_\theta^T M_\theta)^{-1} = (M_\theta^T M_\theta)^{-1} (M_\theta^T M_\theta)(M_\theta^T M_\theta)^{-1} = (M_\theta^T M_\theta)^{-1} = \Gamma_\theta,$$
that is, $X \sim \mathcal N(x_\theta, \Gamma_\theta)$.

  13. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) continued Consequently, if $z$ is picked from the zero-mean Gaussian white noise distribution $\mathcal N(0, I)$, then
$$x = (M_\theta^T M_\theta)^{-1} M_\theta^T z + x_\theta = (M_\theta^T M_\theta)^{-1} M_\theta^T (z + b)$$
is distributed according to $\pi(x \mid y, \theta)$. If now $z = (z_1, z_2)$ is partitioned conformally with $M_\theta$, then expanding the above equation for $\theta^{(k-1)}$ gives the formula
$$x = \big(\sigma^{-2} A^T A + D^{-1}_{\theta^{(k-1)}}\big)^{-1}\big(\sigma^{-2} A^T y + \sigma^{-1} A^T z_1 + D^{-1/2}_{\theta^{(k-1)}} z_2\big) = \big(A^T A + \sigma^2 D^{-1}_{\theta^{(k-1)}}\big)^{-1}\big(A^T y + \sigma A^T z_1 + \sigma^2 D^{-1/2}_{\theta^{(k-1)}} z_2\big),$$
which was used to pick $x^{(k)}$ from $\pi(x \mid y, \theta^{(k-1)})$ in the second step of the sampling procedure.
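
A hedged implementation of this draw, essentially the body of the hypothetical draw_x above:

    % Draw x ~ pi(x | y, theta) via the perturbed least-squares formula.
    [m, n] = size(A);
    z1 = randn(m,1); z2 = randn(n,1);        % white-noise blocks z = (z1, z2)
    rhs = A'*y + sigma*(A'*z1) + sigma^2*(z2./sqrt(theta));
    x = (A'*A + sigma^2*diag(1./theta)) \ rhs;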

  14. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) continued The Gibbs sampler was used in the third step to pick $\theta^{(k)}$ from $\pi(\theta \mid y, x^{(k)})$. If the gamma hyperprior is in question, then
$$\pi(\theta \mid y, x^{(k)}) \propto \exp\left(-\sum_{i=1}^n \left[\frac{(x_i^{(k)})^2}{2\theta_i} + \frac{\theta_i}{\theta_0} - \Big(\beta - \frac{3}{2}\Big)\log\theta_i\right]\right),$$
and, again, if the inverse gamma is used, then
$$\pi(\theta \mid y, x^{(k)}) \propto \exp\left(-\sum_{i=1}^n \left[\frac{(x_i^{(k)})^2 + 2\theta_0}{2\theta_i} + \Big(\beta + \frac{3}{2}\Big)\log\theta_i\right]\right).$$
In both cases, each one-dimensional conditional density is of the form $\pi(\theta_i \mid y, x^{(k)}, \theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_n) \propto \exp[-f_i(\theta_i)]$. The essential part of $\exp[-f_i(\theta_i)]$ was sought by first finding
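
For concreteness, the two negative log-conditionals can be sketched as anonymous functions (f_gam and f_inv are assumed names; x(i), theta0 and beta as defined in the example):

    % One-dimensional negative log-conditionals f_i(theta_i) for pixel i.
    xi2 = x(i)^2;
    f_gam = @(t) xi2./(2*t) + t/theta0 - (beta - 3/2)*log(t);     % gamma
    f_inv = @(t) (xi2 + 2*theta0)./(2*t) + (beta + 3/2)*log(t);   % inverse gamma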

  15. Markov chain Monte Carlo (MCMC) methods Gibbs Sampler Example 13 (Matlab) continued the maximizer $\theta_i^{\max}$ satisfying $f_i'(\theta_i^{\max}) = 0$. After that, the quadratic approximation
$$f_i(t) \approx f_i(\theta_i^{\max}) + f_i'(\theta_i^{\max})(t - \theta_i^{\max}) + \tfrac{1}{2}\, f_i''(\theta_i^{\max})(t - \theta_i^{\max})^2$$
was utilized to estimate the solution of $f_i(t) = f_i(\theta_i^{\max}) + c$ by solving the equation
$$f_i'(\theta_i^{\max})(t - \theta_i^{\max}) + \tfrac{1}{2}\, f_i''(\theta_i^{\max})(t - \theta_i^{\max})^2 - c = 0.$$
The solution $t$ was then chosen as the initial iterate $t^{[0]}$ of Newton's iteration
$$t^{[\ell]} = t^{[\ell-1]} - \big[f_i(t^{[\ell-1]}) - f_i(\theta_i^{\max}) - c\big] / f_i'(t^{[\ell-1]}).$$
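
A sketch of this endpoint search (f, fp and fpp are assumed handles for $f_i$, $f_i'$ and $f_i''$, and tmax stands for $\theta_i^{\max}$; the iteration count is an arbitrary illustrative cap):

    % Right endpoint of the essential interval: quadratic guess, then Newton.
    t = tmax + sqrt(2*c/fpp(tmax));            % f'(tmax) = 0 at the maximizer
    for l = 1:20
        t = t - (f(t) - f(tmax) - c) / fp(t);  % Newton on f(t)-f(tmax)-c = 0
    end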
