

  1. A Non-parametric Approach for Uncertainty Quantification in Elastodynamics. S Adhikari, Department of Aerospace Engineering, University of Bristol, Bristol, U.K. Email: S.Adhikari@bristol.ac.uk. Uncertainty Quantification – p.1/43. Newport, RI, May 1, 2006

  2. Stochastic structural dynamics. The equation of motion: $M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = p(t)$. Due to the presence of uncertainty, $M$, $C$ and $K$ become random matrices. The main objectives are: to quantify uncertainties in the system matrices, and to predict the variability in the response vector $x$.
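Before any uncertainty enters, the deterministic equation of motion can be probed numerically. The sketch below uses a hypothetical 2-DOF spring-mass-damper chain (all numerical values are illustrative, not from the talk) and solves for the steady-state harmonic response amplitudes:

```python
import numpy as np

# A hypothetical 2-DOF spring-mass-damper chain (all values illustrative).
m1, m2 = 1.0, 2.0        # masses (kg)
k1, k2 = 100.0, 150.0    # spring stiffnesses (N/m)
c1, c2 = 0.5, 0.8        # damping coefficients (Ns/m)

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
C = np.array([[c1 + c2, -c2],
              [-c2,      c2]])

# For harmonic forcing p(t) = Re{P e^{iwt}}, the steady-state response
# x(t) = Re{X e^{iwt}} satisfies D(w) X = P, where
# D(w) = -w^2 M + i w C + K is the dynamic stiffness matrix.
w = 5.0                     # driving frequency (rad/s)
P = np.array([1.0, 0.0])    # unit force on the first mass
D = -w**2 * M + 1j * w * C + K
X = np.linalg.solve(D, P)   # complex response amplitudes
```

Damping makes $D(\omega)$ invertible even at the undamped natural frequencies, which is why the solve is safe across the frequency range.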

  3. Current Methods. Two different approaches are currently available. Low frequency: the Stochastic Finite Element Method (SFEM), which considers parametric uncertainties in detail. High frequency: Statistical Energy Analysis (SEA), which does not consider parametric uncertainties in detail. Work needs to be done on a hybrid method: some kind of 'combination' of the above two.

  4. Random Matrix Method (RMM). The objective: a unified method that works across the frequency range. The methodology: derive the matrix variate probability density functions of $M$, $C$ and $K$, then propagate the uncertainty (using Monte Carlo simulation or analytical methods) to obtain the response statistics (or pdf).
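The Monte Carlo propagation step can be sketched as follows. This is a toy illustration, not the talk's example: only $K$ is randomized (with a Wishart model whose mean is $\bar{K}$), the system values are hypothetical, and stiffness-proportional damping is assumed for simplicity:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)

# Mean system matrices for a toy 2-DOF system (hypothetical values);
# damping is taken stiffness-proportional purely for illustration.
M_bar = np.diag([1.0, 2.0])
K_bar = np.array([[300.0, -150.0],
                  [-150.0, 200.0]])
C_bar = 0.01 * K_bar

# Random-matrix model for K only (for brevity): Wishart with E[K] = K_bar.
n, nu = 2, 2
p = 2 * nu + n + 1        # shape parameter
Sigma = K_bar / p         # scale chosen so the mean equals K_bar

w = 5.0                       # driving frequency (rad/s)
force = np.array([1.0, 0.0])  # harmonic unit force on the first mass

samples = []
for _ in range(2000):
    K = wishart.rvs(df=p, scale=Sigma, random_state=rng)
    D = -w**2 * M_bar + 1j * w * C_bar + K   # dynamic stiffness
    x = np.linalg.solve(D, force)
    samples.append(abs(x[0]))
samples = np.asarray(samples)
```

The array `samples` then yields the response statistics (mean, standard deviation, or an empirical pdf) at the chosen frequency; sweeping `w` gives the statistics across the frequency range.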

  5. Outline of the presentation. In what follows, I will discuss: introduction to matrix variate distributions; maximum entropy distribution; optimal Wishart distribution; numerical examples; open problems & discussions.

  6. Matrix variate distributions. The probability density function of a random matrix can be defined in a manner similar to that of a random variable. If $A$ is an $n \times m$ real random matrix, the matrix variate probability density function of $A \in \mathbb{R}^{n,m}$, denoted as $p_A(A)$, is a mapping from the space of $n \times m$ real matrices to the real line, i.e., $p_A(A): \mathbb{R}^{n,m} \to \mathbb{R}$.

  7. Gaussian random matrix. The random matrix $X \in \mathbb{R}^{n,p}$ is said to have a matrix variate Gaussian distribution with mean matrix $M \in \mathbb{R}^{n,p}$ and covariance matrix $\Sigma \otimes \Psi$, where $\Sigma \in \mathbb{R}_n^+$ and $\Psi \in \mathbb{R}_p^+$, provided the pdf of $X$ is given by
$$p_X(X) = (2\pi)^{-np/2}\, |\Sigma|^{-p/2}\, |\Psi|^{-n/2}\, \mathrm{etr}\left\{ -\tfrac{1}{2}\, \Sigma^{-1} (X - M)\, \Psi^{-1} (X - M)^T \right\} \quad (1)$$
This distribution is usually denoted as $X \sim N_{n,p}(M, \Sigma \otimes \Psi)$.
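Sampling from this distribution follows from the standard construction $X = M + A Z B^T$, where $\Sigma = AA^T$, $\Psi = BB^T$, and $Z$ has iid standard normal entries. A sketch with arbitrary illustrative matrices, checked against the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
Mmat = rng.standard_normal((n, p))          # arbitrary mean matrix
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])         # row covariance (positive definite)
Psi = np.array([[1.0, 0.3],
                [0.3, 0.5]])                # column covariance (positive definite)

A = np.linalg.cholesky(Sigma)
B = np.linalg.cholesky(Psi)

def sample_matrix_normal():
    # If Z has iid N(0,1) entries, M + A Z B^T ~ N_{n,p}(M, Sigma ⊗ Psi).
    Z = rng.standard_normal((n, p))
    return Mmat + A @ Z @ B.T

# Empirical mean over many samples should approach Mmat.
X_bar = np.mean([sample_matrix_normal() for _ in range(20000)], axis=0)
```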

  8. Gaussian orthogonal ensembles. A random matrix $H \in \mathbb{R}^{n,n}$ belongs to the Gaussian orthogonal ensemble (GOE) provided its pdf is given by
$$p_H(H) = \exp\left\{ -\theta_2\, \mathrm{Trace}(H^2) + \theta_1\, \mathrm{Trace}(H) + \theta_0 \right\}$$
where $\theta_2$ is real and positive and $\theta_1$ and $\theta_0$ are real.

  9. Wishart matrix. An $n \times n$ random symmetric positive definite matrix $S$ is said to have a Wishart distribution with parameters $p \ge n$ and $\Sigma \in \mathbb{R}_n^+$, if its pdf is given by
$$p_S(S) = \left\{ 2^{\frac{1}{2}np}\, \Gamma_n\!\left(\tfrac{1}{2}p\right) |\Sigma|^{\frac{1}{2}p} \right\}^{-1} |S|^{\frac{1}{2}(p-n-1)}\, \mathrm{etr}\left\{ -\tfrac{1}{2}\, \Sigma^{-1} S \right\} \quad (2)$$
This distribution is usually denoted as $S \sim W_n(p, \Sigma)$. Note: If $p = n + 1$, then the matrix is non-negative definite.
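A quick numerical check of this distribution (using `scipy.stats.wishart`, whose `df`/`scale` parameterization matches the pdf above, so that $E[S] = p\Sigma$ and samples are positive definite):

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(2)
n, p = 3, 7              # p >= n, so samples are positive definite a.s.
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 0.8, 0.1],
                  [0.0, 0.1, 0.5]])   # illustrative positive definite scale

S = wishart.rvs(df=p, scale=Sigma, size=20000, random_state=rng)
S_mean = S.mean(axis=0)  # should approach E[S] = p * Sigma
```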

  10. Matrix variate Gamma distribution. An $n \times n$ random symmetric positive definite matrix $W$ is said to have a matrix variate gamma distribution with parameters $a$ and $\Psi \in \mathbb{R}_n^+$, if its pdf is given by
$$p_W(W) = \left\{ \Gamma_n(a)\, |\Psi|^{-a} \right\}^{-1} |W|^{a - \frac{1}{2}(n+1)}\, \mathrm{etr}\{ -\Psi W \}; \quad \Re(a) > (n-1)/2 \quad (3)$$
This distribution is usually denoted as $W \sim G_n(a, \Psi)$. Here the multivariate gamma function:
$$\Gamma_n(a) = \pi^{\frac{1}{4}n(n-1)} \prod_{k=1}^{n} \Gamma\!\left( a - \tfrac{1}{2}(k-1) \right); \quad \text{for } \Re(a) > (n-1)/2 \quad (4)$$

  11. Distribution of the system matrices. The distribution of the random system matrices $M$, $C$ and $K$ should be such that they are symmetric positive-definite, and the moments (at least the first two) of the inverse of the dynamic stiffness matrix $D(\omega) = -\omega^2 M + i\omega C + K$ should exist $\forall\, \omega$.

  12. Distribution of the system matrices. The exact application of the last constraint requires the derivation of the joint probability density function of $M$, $C$ and $K$, which is quite difficult to obtain. We consider a simpler problem where it is required that the inverse moments of each of the system matrices $M$, $C$ and $K$ must exist. Provided the system is damped, this will guarantee the existence of the moments of the frequency response function matrix.

  13. Maximum Entropy Distribution. Suppose that the mean values of $M$, $C$ and $K$ are given by $\bar{M}$, $\bar{C}$ and $\bar{K}$ respectively. Using the notation $G$ (which stands for any one of the system matrices), the matrix variate density function of $G \in \mathbb{R}_n^+$ is given by $p_G(G): \mathbb{R}_n^+ \to \mathbb{R}$. We have the following constraints to obtain $p_G(G)$:
$$\int_{G > 0} p_G(G)\, dG = 1 \quad \text{(normalization)} \quad (5)$$
and
$$\int_{G > 0} G\, p_G(G)\, dG = \bar{G} \quad \text{(the mean matrix)} \quad (6)$$

  14. Further constraints. Suppose the inverse moments (say up to order $\nu$) of the system matrix exist. This implies that $E\left[ \| G^{-1} \|_F^{\nu} \right]$ should be finite. Here the Frobenius norm of matrix $A$ is given by $\|A\|_F = \left\{ \mathrm{Trace}\left( A A^T \right) \right\}^{1/2}$. Taking the logarithm for convenience, the condition for the existence of the inverse moments can be expressed as
$$E\left[ \ln |G|^{-\nu} \right] < \infty$$

  15. MEnt Distribution - 1. The Lagrangian becomes:
$$\mathcal{L}\left(p_G\right) = -\int_{G>0} p_G(G) \ln p_G(G)\, dG + (\lambda_0 - 1)\left( \int_{G>0} p_G(G)\, dG - 1 \right) - \nu \int_{G>0} \ln |G|\, p_G\, dG + \mathrm{Trace}\left( \Lambda_1 \left( \int_{G>0} G\, p_G(G)\, dG - \bar{G} \right) \right) \quad (7)$$
Note: $\nu$ cannot be obtained uniquely!

  16. MEnt Distribution - 2. Using the calculus of variations, $\partial \mathcal{L}\left(p_G\right) / \partial p_G = 0$, or
$$-\ln p_G(G) = \lambda_0 + \mathrm{Trace}\left( \Lambda_1 G \right) - \ln |G|^{\nu}$$
or
$$p_G(G) = \exp\{-\lambda_0\}\, |G|^{\nu}\, \mathrm{etr}\left\{ -\Lambda_1 G \right\}$$

  17. MEnt Distribution - 3. Using the matrix variate Laplace transform ($T \in \mathbb{R}^{n,n}$, $S \in \mathbb{C}^{n,n}$, $a > (n+1)/2$)
$$\int_{T>0} \mathrm{etr}\{-ST\}\, |T|^{a - (n+1)/2}\, dT = \Gamma_n(a)\, |S|^{-a}$$
and substituting $p_G(G)$ into the constraint equations, it can be shown that
$$p_G(G) = r^{nr}\, \{\Gamma_n(r)\}^{-1}\, |\bar{G}|^{-r}\, |G|^{\nu}\, \mathrm{etr}\left\{ -r\, \bar{G}^{-1} G \right\} \quad (8)$$
where $r = \nu + (n+1)/2$.

  18. MEnt Distribution - 4. Comparing this with the Wishart distribution we have: Theorem 1. If the $\nu$-th order inverse moment of a system matrix $G \equiv \{M, C, K\}$ exists and only the mean of $G$ is available, say $\bar{G}$, then the maximum-entropy pdf of $G$ follows the Wishart distribution with parameters $p = (2\nu + n + 1)$ and $\Sigma = \bar{G}/(2\nu + n + 1)$, that is $G \sim W_n\left( 2\nu + n + 1,\ \bar{G}/(2\nu + n + 1) \right)$.

  19. Properties of the Distribution. Covariance tensor of $G$:
$$\mathrm{cov}\left( G_{ij}, G_{kl} \right) = \frac{1}{2\nu + n + 1} \left( \bar{G}_{ik} \bar{G}_{jl} + \bar{G}_{il} \bar{G}_{jk} \right)$$
Normalized standard deviation matrix:
$$\delta_G^2 = \frac{E\left[ \| G - E[G] \|_F^2 \right]}{\| E[G] \|_F^2} = \frac{1}{2\nu + n + 1} \left\{ 1 + \frac{\{\mathrm{Trace}(\bar{G})\}^2}{\mathrm{Trace}\left(\bar{G}^2\right)} \right\}$$
Also
$$\delta_G^2 \le \frac{1 + n}{2\nu + n + 1}$$
and $\nu \uparrow \;\Rightarrow\; \delta_G^2 \downarrow$.
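The normalized standard deviation of this Wishart model can be checked by Monte Carlo for a small illustrative $\bar{G}$ (values chosen arbitrarily; any symmetric positive definite matrix would do):

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(3)
n, nu = 3, 2
p = 2 * nu + n + 1            # Wishart shape parameter, here 8
G_bar = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.5, 0.2],
                  [0.0, 0.2, 1.0]])   # illustrative mean matrix

G = wishart.rvs(df=p, scale=G_bar / p, size=20000, random_state=rng)

# Monte Carlo estimate of delta_G^2 = E||G - E[G]||_F^2 / ||E[G]||_F^2
delta2_mc = np.sum((G - G_bar)**2, axis=(1, 2)).mean() / np.sum(G_bar**2)

# Closed form: (1/p) * (1 + Trace(G_bar)^2 / Trace(G_bar^2))
delta2 = (1 + np.trace(G_bar)**2 / np.trace(G_bar @ G_bar)) / p
```

Increasing `nu` (hence `p`) shrinks both quantities, consistent with $\nu \uparrow \Rightarrow \delta_G^2 \downarrow$.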

  20. Distribution of the inverse - 1. If $G$ is $W_n(p, \Sigma)$ then $V = G^{-1}$ has the inverted Wishart distribution:
$$p_V(V) = \frac{ 2^{-(m-n-1)n/2}\, |\Psi|^{(m-n-1)/2} }{ \Gamma_n\!\left[ (m-n-1)/2 \right]\, |V|^{m/2} }\, \mathrm{etr}\left\{ -\tfrac{1}{2}\, V^{-1} \Psi \right\}$$
where $m = n + p + 1$ and $\Psi = \Sigma^{-1}$ (recall that $p = 2\nu + n + 1$ and $\Sigma = \bar{G}/p$).

  21. Distribution of the inverse - 2. Mean:
$$E\left[ G^{-1} \right] = \frac{p}{p - n - 1}\, \bar{G}^{-1}$$
Covariance:
$$\mathrm{cov}\left( G^{-1}_{ij}, G^{-1}_{kl} \right) = \frac{(2\nu + n + 1)^2 \left( \nu^{-1}\, \bar{G}^{-1}_{ij} \bar{G}^{-1}_{kl} + \bar{G}^{-1}_{ik} \bar{G}^{-1}_{jl} + \bar{G}^{-1}_{il} \bar{G}^{-1}_{kj} \right)}{2\nu\, (2\nu + 1)(2\nu - 2)}$$

  22. Distribution of the inverse - 3. Suppose $n = 101$ and $\nu = 2$. So $p = 2\nu + n + 1 = 106$ and $p - n - 1 = 4$. Therefore, $E[G] = \bar{G}$ but
$$E\left[ G^{-1} \right] = \frac{106}{4}\, \bar{G}^{-1} = 26.5\, \bar{G}^{-1}\,!$$
From a practical point of view we do not expect them to be so far apart. One way to reduce the gap is to increase $p$, but this implies a reduction of the variance.
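This gap between $E[G^{-1}]$ and $\bar{G}^{-1}$ is easy to reproduce numerically. The sketch below uses a smaller $n$ than the slide's 101, purely for speed; the factor $p/(p - n - 1)$ behaves the same way and grows with $n$ for fixed $\nu$:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(4)
n, nu = 5, 2                   # smaller n than the slide's 101, for speed
p = 2 * nu + n + 1             # = 10, so p - n - 1 = 4
G_bar = np.eye(n) + 0.2 * np.ones((n, n))   # illustrative mean matrix

G = wishart.rvs(df=p, scale=G_bar / p, size=40000, random_state=rng)
inv_mean = np.linalg.inv(G).mean(axis=0)    # Monte Carlo E[G^{-1}]

# E[G^{-1}] = p/(p - n - 1) * inv(G_bar): a factor of 2.5 here,
# and 26.5 for the slide's n = 101, nu = 2.
factor = p / (p - n - 1)
```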
