

  1. Uncertainty Quantification and Propagation in Structural Mechanics: A Random Matrix Approach. Sondipon Adhikari, Department of Aerospace Engineering, University of Bristol, Bristol, U.K. Email: S.Adhikari@bristol.ac.uk. URL: http://www.aer.bris.ac.uk/contact/academic/adhikari/home.html (Random Matrix Method, p.1/63, B E College, India, January 2007)

  2. Overview of Predictive Methods in Engineering. There are five key steps:
  - Physics (mechanics) model building
  - Uncertainty Quantification (UQ)
  - Uncertainty Propagation (UP)
  - Model Verification & Validation (V & V)
  - Prediction
  Tools are available for each of these steps. In this talk we will focus mainly on UQ and UP in linear dynamical systems.

  3. Bristol Aerospace

  4. Structural dynamics. The equation of motion:

  M ẍ(t) + C ẋ(t) + K x(t) = p(t)

  Due to the presence of uncertainty, M, C and K become random matrices. The main objectives in the 'forward problem' are:
  - to quantify uncertainties in the system matrices
  - to predict the variability in the response vector x
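As a purely illustrative instance of this equation of motion, the sketch below computes the steady-state harmonic response of a small nominal system; the matrices, damping model and load values are assumptions for demonstration, not data from the talk:

```python
import numpy as np

# Illustrative 2-DOF spring-mass system (values are assumed, not from the talk):
# the nominal (mean) matrices that become random under uncertainty.
M = np.diag([1.0, 1.0])                      # mass matrix
K = np.array([[2.0, -1.0], [-1.0, 2.0]])     # stiffness matrix
C = 0.01 * M + 0.02 * K                      # proportional (Rayleigh) damping

omega = 1.5                                  # excitation frequency [rad/s]
p = np.array([1.0, 0.0])                     # harmonic load amplitude

# Dynamic stiffness D(omega) = -omega^2 M + i omega C + K;
# the steady-state response amplitude is x = D(omega)^{-1} p.
D = -omega**2 * M + 1j * omega * C + K
x = np.linalg.solve(D, p)
print(np.abs(x))
```

In the random-matrix setting, M, C and K would be sampled from matrix variate distributions and this solve repeated per sample.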

  5. Current Methods. Two different approaches are currently available:
  - Low frequency: the Stochastic Finite Element Method (SFEM), which assumes that the stochastic fields describing parametric uncertainties are known in detail
  - High frequency: Statistical Energy Analysis (SEA), which does not consider parametric uncertainties in detail

  6. Random Matrix Method (RMM). The objective: to have a unified method which will work across the frequency range. The methodology:
  - Derive the matrix variate probability density functions of M, C and K
  - Propagate the uncertainty (using Monte Carlo simulation or analytical methods) to obtain the response statistics (or pdf)
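The two methodology steps above can be sketched as a minimal Monte Carlo propagation loop. All sizes and values are assumed for illustration; the stiffness matrix is modeled as a Wishart random matrix via scipy.stats.wishart, and a static response is propagated for simplicity:

```python
import numpy as np
from scipy.stats import wishart

n = 4                                            # number of DOFs (illustrative)
# Assumed mean stiffness matrix: a tridiagonal (discrete spring-chain) matrix.
Kbar = np.diag([2.0] * n) - np.diag([1.0] * (n - 1), 1) - np.diag([1.0] * (n - 1), -1)

p_dof = 20                                       # Wishart shape parameter (assumed)
Sigma = Kbar / p_dof                             # chosen so that E[K] = p_dof * Sigma = Kbar

f = np.ones(n)                                   # static load (assumed)
samples = wishart(df=p_dof, scale=Sigma).rvs(size=500, random_state=42)

# Propagate uncertainty: solve K u = f for each random stiffness sample,
# then summarize the response statistics.
u = np.array([np.linalg.solve(K, f) for K in samples])
print(u.mean(axis=0), u.std(axis=0))
```

The same loop applies to the dynamic case by sampling M, C and K jointly and solving with the dynamic stiffness matrix instead.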

  7. Outline of the presentation. In what follows, I will discuss:
  - Introduction to matrix variate distributions
  - Matrix factorization approach
  - Optimal Wishart distribution
  - Some examples
  - Open problems & discussions

  8. Matrix variate distributions. The probability density function of a random matrix can be defined in a manner similar to that of a random variable. If A is an n × m real random matrix, the matrix variate probability density function of A ∈ ℝ^{n,m}, denoted as p_A(A), is a mapping from the space of n × m real matrices to the real line, i.e., p_A(A): ℝ^{n,m} → ℝ.

  9. Gaussian random matrix. The random matrix X ∈ ℝ^{n,p} is said to have a matrix variate Gaussian distribution with mean matrix M ∈ ℝ^{n,p} and covariance matrix Σ ⊗ Ψ, where Σ ∈ ℝₙ⁺ and Ψ ∈ ℝₚ⁺, provided the pdf of X is given by

  p_X(X) = (2π)^{−np/2} |Σ|^{−p/2} |Ψ|^{−n/2} etr{ −(1/2) Σ⁻¹ (X − M) Ψ⁻¹ (X − M)ᵀ }    (1)

  This distribution is usually denoted as X ∼ N_{n,p}(M, Σ ⊗ Ψ).

  10. Wishart matrix. An n × n symmetric positive definite random matrix S is said to have a Wishart distribution with parameters p ≥ n and Σ ∈ ℝₙ⁺ if its pdf is given by

  p_S(S) = { 2^{np/2} Γₙ(p/2) |Σ|^{p/2} }⁻¹ |S|^{(p−n−1)/2} etr{ −(1/2) Σ⁻¹ S }    (2)

  This distribution is usually denoted as S ∼ Wₙ(p, Σ). Note: if p = n + 1, then the matrix is non-negative definite.
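A quick numerical sanity check of this distribution: the first moment of Wₙ(p, Σ) is E[S] = p Σ, which can be verified by sampling. The dimensions and the scale matrix below are assumed for illustration; the sketch uses scipy.stats.wishart:

```python
import numpy as np
from scipy.stats import wishart

n, p = 3, 10                                  # illustrative dimensions (p >= n)
Sigma = np.array([[2.0, 0.5, 0.0],            # assumed positive definite scale matrix
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])

# Draw many Wishart samples and compare the sample mean against E[S] = p * Sigma.
S = wishart(df=p, scale=Sigma).rvs(size=20000, random_state=1)
print(np.max(np.abs(S.mean(axis=0) - p * Sigma)))
```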

  11. Matrix variate Gamma distribution. An n × n symmetric positive definite random matrix W is said to have a matrix variate gamma distribution with parameters a and Ψ ∈ ℝₙ⁺ if its pdf is given by

  p_W(W) = { Γₙ(a) |Ψ|^{−a} }⁻¹ |W|^{a − (n+1)/2} etr{ −Ψ W };  ℜ(a) > (n − 1)/2    (3)

  This distribution is usually denoted as W ∼ Gₙ(a, Ψ). Here Γₙ(·) is the multivariate gamma function:

  Γₙ(a) = π^{n(n−1)/4} ∏_{k=1}^{n} Γ( a − (k − 1)/2 );  for ℜ(a) > (n − 1)/2    (4)

  12. Distribution of the system matrices. The distribution of the random system matrices M, C and K should be such that:
  - they are symmetric and positive definite, and
  - the moments (at least the first two) of the inverse of the dynamic stiffness matrix D(ω) = −ω² M + iω C + K exist for all ω.

  13. Distribution of the system matrices. The exact application of the last constraint requires deriving the joint probability density function of M, C and K, which is quite difficult to obtain. We consider a simpler problem, requiring instead that the inverse moments of each of the system matrices M, C and K exist. Provided the system is damped, this guarantees the existence of the moments of the frequency response function matrix.

  14. Maximum Entropy Distribution. Soize (2000, 2006) used this approach and obtained the matrix variate Gamma distribution. Since the Gamma and Wishart distributions are similar, we have:

  Theorem 1. If the ν-th order inverse moment of a system matrix G ∈ {M, C, K} exists and only the mean of G is available, say Ḡ, then the maximum-entropy pdf of G follows the Wishart distribution with parameters p = 2ν + n + 1 and Σ = Ḡ/(2ν + n + 1), that is

  G ∼ Wₙ( 2ν + n + 1, Ḡ/(2ν + n + 1) ).
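By construction, Theorem 1's parameter choice makes the mean of the random matrix equal to the given mean: E[G] = p Σ = Ḡ. A minimal sketch verifying this by sampling, with an assumed mean matrix Ḡ and inverse-moment order ν:

```python
import numpy as np
from scipy.stats import wishart

n, nu = 3, 2                                   # illustrative size and inverse-moment order
Gbar = np.array([[4.0, 1.0, 0.0],              # assumed mean system matrix
                 [1.0, 3.0, 0.5],
                 [0.0, 0.5, 2.0]])

p = 2 * nu + n + 1                             # Theorem 1: p = 2*nu + n + 1
Sigma = Gbar / p                               # Theorem 1: Sigma = Gbar / p

# Sample G ~ W_n(p, Sigma) and confirm the sample mean recovers Gbar.
G = wishart(df=p, scale=Sigma).rvs(size=50000, random_state=2)
print(np.max(np.abs(G.mean(axis=0) - Gbar)))
```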

  15. Properties of the Distribution. Covariance tensor of G:

  cov(G_ij, G_kl) = ( Ḡ_ik Ḡ_jl + Ḡ_il Ḡ_jk ) / (2ν + n + 1)

  Normalized standard deviation:

  δ_G² = E[ ‖G − E[G]‖_F² ] / ‖E[G]‖_F² = (1/(2ν + n + 1)) { 1 + (Trace Ḡ)² / Trace(Ḡ²) } ≤ (1 + n)/(2ν + n + 1)

  and ν ↑ ⇒ δ_G² ↓.
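The covariance formula above can be checked numerically against samples drawn with Theorem 1's parameters (p = 2ν + n + 1, Σ = Ḡ/p). The mean matrix and indices below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import wishart

n, nu = 3, 2
Gbar = np.array([[4.0, 1.0, 0.0],              # assumed mean system matrix
                 [1.0, 3.0, 0.5],
                 [0.0, 0.5, 2.0]])
p = 2 * nu + n + 1
G = wishart(df=p, scale=Gbar / p).rvs(size=200000, random_state=3)

# Slide formula: cov(G_ij, G_kl) = (Gbar_ik Gbar_jl + Gbar_il Gbar_jk) / (2 nu + n + 1)
i, j, k, l = 0, 1, 0, 1                        # arbitrary index choice
theory = (Gbar[i, k] * Gbar[j, l] + Gbar[i, l] * Gbar[j, k]) / p
sample = np.cov(G[:, i, j], G[:, k, l])[0, 1]
print(theory, sample)
```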

  16. Distribution of the inverse - 1. If G is Wₙ(p, Σ), then V = G⁻¹ has the inverted Wishart distribution:

  p_V(V) = { 2^{(m−n−1)n/2} Γₙ[(m − n − 1)/2] }⁻¹ |Ψ|^{(m−n−1)/2} |V|^{−m/2} etr{ −(1/2) V⁻¹ Ψ }

  where m = n + p + 1 and Ψ = Σ⁻¹ (recall that p = 2ν + n + 1 and Σ = Ḡ/p).

  17. Distribution of the inverse - 2.

  Mean:  E[G⁻¹] = ( p/(p − n − 1) ) Ḡ⁻¹

  Covariance:

  cov( (G⁻¹)_ij, (G⁻¹)_kl ) = ( (2ν + n + 1)² / [ 2ν (2ν + 1)(2ν − 2) ] ) ( ν⁻¹ (Ḡ⁻¹)_ij (Ḡ⁻¹)_kl + (Ḡ⁻¹)_ik (Ḡ⁻¹)_jl + (Ḡ⁻¹)_il (Ḡ⁻¹)_kj )

  18. Application. Suppose n = 101 and ν = 2. Then p = 2ν + n + 1 = 106 and p − n − 1 = 4. Therefore E[G] = Ḡ but

  E[G⁻¹] = (106/4) Ḡ⁻¹ = 26.5 Ḡ⁻¹ !

  From a practical point of view we do not expect them to be so far apart! One way to reduce the gap is to increase p, but this implies a reduction of the variance.
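The slide's amplification factor is just p/(p − n − 1), and the relation E[G⁻¹] = (p/(p − n − 1)) Ḡ⁻¹ can also be checked by Monte Carlo on a small example (the small-case sizes and the diagonal Ḡ below are assumptions chosen to keep the sampling cheap):

```python
import numpy as np
from scipy.stats import wishart

# The slide's numbers: the inverse mean is amplified by p / (p - n - 1).
n, nu = 101, 2
p = 2 * nu + n + 1
print(p / (p - n - 1))                          # 106 / 4 = 26.5

# Monte Carlo check of E[G^{-1}] = (p/(p-n-1)) Gbar^{-1} on a small assumed example.
n_small = 3
p_small = 2 * nu + n_small + 1                  # = 8, so p_small - n_small - 1 = 4
Gbar = np.diag([2.0, 3.0, 4.0])                 # assumed mean matrix
G = wishart(df=p_small, scale=Gbar / p_small).rvs(size=50000, random_state=4)
inv_mean = np.linalg.inv(G).mean(axis=0)
print(np.diag(inv_mean) * np.diag(Gbar))        # each entry near p_small/(p_small-n_small-1) = 2
```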

  19. Matrix Factorization Approach (MFA). Because G is a symmetric and positive-definite random matrix, it can always be factorized as

  G = X Xᵀ    (5)

  where X ∈ ℝ^{n×p}, p ≥ n, is in general a rectangular matrix. The simplest case is when X is a Gaussian random matrix with mean O ∈ ℝ^{n×p} and covariance tensor Σ ⊗ I_p ∈ ℝ^{np×np}, where Σ ∈ ℝₙ⁺.
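This factorization is also a direct way to simulate the distribution: draw X with independent N(0, Σ) columns and form G = X Xᵀ, which then has mean p Σ. A minimal sketch with assumed sizes and scale matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 3, 8                                   # illustrative sizes (p >= n)
Sigma = np.array([[2.0, 0.3, 0.0],            # assumed positive definite Sigma
                  [0.3, 1.0, 0.1],
                  [0.0, 0.1, 1.5]])
L = np.linalg.cholesky(Sigma)

# X has zero mean and covariance Sigma ⊗ I_p, i.e. p i.i.d. columns ~ N(0, Sigma);
# then G = X X^T should be Wishart with E[G] = p * Sigma.
N = 20000
X = L @ rng.standard_normal((N, n, p))        # each column is L @ N(0, I) ~ N(0, Sigma)
G = X @ np.transpose(X, (0, 2, 1))
print(np.max(np.abs(G.mean(axis=0) - p * Sigma)))
```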

  20. Wishart Pdf. After some algebra it can be shown that G is a Wₙ(p, Σ) Wishart random matrix, whose pdf is given by

  p_G(G) = { 2^{np/2} Γₙ(p/2) |Σ|^{p/2} }⁻¹ |G|^{(p−n−1)/2} etr{ −(1/2) Σ⁻¹ G }    (6)

  21. Parameter Estimation of the Wishart Distribution. The distribution of G must be such that E[G] and E[G⁻¹] are closest to Ḡ and Ḡ⁻¹, respectively. Since G ∼ Wₙ(p, Σ), there are two unknown parameters in this distribution, namely p and Σ; in total 1 + n(n + 1)/2 unknowns. We define and subsequently minimize the 'normalized errors':

  ε₁ = ‖ Ḡ − E[G] ‖_F / ‖ Ḡ ‖_F
  ε₂ = ‖ Ḡ⁻¹ − E[G⁻¹] ‖_F / ‖ Ḡ⁻¹ ‖_F

  22. MFA Distribution. Solving the optimization problem we have:

  Theorem 2. If the ν-th order inverse moment of a system matrix G ∈ {M, C, K} exists and only the mean of G is available, say Ḡ, then the distribution of G follows the Wishart distribution with parameters p = 2ν + n + 1 and Σ = Ḡ/√(2ν(2ν + n + 1)), that is

  G ∼ Wₙ( 2ν + n + 1, Ḡ/√(2ν(2ν + n + 1)) ).
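The effect of Theorem 2's scaling can be seen in closed form: for Σ = Ḡ/α one has E[G] = (p/α) Ḡ and E[G⁻¹] = (α/(p − n − 1)) Ḡ⁻¹, so both Frobenius-norm errors reduce to scalar factors independent of Ḡ. The helper function below is an illustration of this reduction, not code from the talk; it compares the maximum-entropy choice α = p (ε₁ = 0 but ε₂ huge, as in slide 18) with Theorem 2's α = √(2νp), which balances the two errors:

```python
import numpy as np

def errors(p, alpha, n):
    """Closed-form normalized errors for G ~ W_n(p, Gbar/alpha):
    E[G] = (p/alpha) Gbar and E[G^{-1}] = (alpha/(p-n-1)) Gbar^{-1},
    so eps1 = |p/alpha - 1| and eps2 = |alpha/(p-n-1) - 1| for any Gbar."""
    return abs(p / alpha - 1.0), abs(alpha / (p - n - 1.0) - 1.0)

n, nu = 101, 2                                     # the numbers from slide 18
p = 2 * nu + n + 1

eps_maxent = errors(p, p, n)                       # Theorem 1: Sigma = Gbar / p
eps_mfa = errors(p, np.sqrt(2 * nu * p), n)        # Theorem 2: Sigma = Gbar / sqrt(2 nu p)
print(eps_maxent)                                  # (0.0, 25.5): all error pushed onto the inverse
print(eps_mfa)                                     # balanced: both errors equal sqrt(p/(2 nu)) - 1
```

This makes the trade-off behind Theorem 2 explicit: it spreads the unavoidable mismatch evenly between the mean and the mean of the inverse.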
