

  1. Introduction to General and Generalized Linear Models
General Linear Models - part I

Henrik Madsen, Poul Thyregod
Informatics and Mathematical Modelling
Technical University of Denmark
DK-2800 Kgs. Lyngby
October 2010

  2. Today

- The general linear model - intro
- The multivariate normal distribution
- Deviance
- Likelihood, score function and information matrix
- The general linear model - definition
- Estimation
- Fitted values
- Residuals
- Partitioning of variation
- Likelihood ratio tests
- The coefficient of determination

  3. The general linear model - intro

We will use the term classical GLM for the general linear model, to distinguish it from GLM, which is used for the generalized linear model. The classical GLM provides a unified way of describing the variation in experiments with a continuous response. The classical GLMs include:

- Regression analysis
- Analysis of variance (ANOVA)
- Analysis of covariance (ANCOVA)

In the classical GLM, the residuals are assumed to follow a multivariate normal distribution.

  4. The general linear model - intro

Classical GLMs are naturally studied in the framework of the multivariate normal distribution. We will consider the set of n observations as a sample from an n-dimensional normal distribution.

Under the normal distribution model, maximum-likelihood estimation of the mean value parameters may be interpreted geometrically as a projection onto an appropriate subspace, and the likelihood-ratio test statistics for model reduction may be expressed in terms of norms of these projections.

  5. The multivariate normal distribution

Let Y = (Y_1, Y_2, ..., Y_n)^T be a random vector with Y_1, Y_2, ..., Y_n independent identically distributed (iid) N(0, 1) random variables. Note that E[Y] = 0 and the variance-covariance matrix is Var[Y] = I.

Definition (Multivariate normal distribution)
Z has a k-dimensional multivariate normal distribution if Z has the same distribution as AY + b for some n, some k × n matrix A, and some k-dimensional vector b. We indicate the multivariate normal distribution by writing Z ∼ N(b, AA^T).

Since A and b are fixed, we have E[Z] = b and Var[Z] = AA^T.
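
To make the definition concrete, here is a minimal simulation sketch in Python/NumPy (the particular A and b are arbitrary illustrative choices, not from the slides): drawing many iid standard normal vectors Y and forming Z = AY + b, the sample mean and covariance approach b and AA^T.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative choices: k = 2, n = 3.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 2.0]])   # k x n matrix
b = np.array([1.0, -2.0])        # k-vector

# Each column of Y is one iid N(0, I) draw; form Z = A Y + b.
Y = rng.standard_normal((3, 100_000))
Z = A @ Y + b[:, None]

print(Z.mean(axis=1))   # approx E[Z] = b
print(np.cov(Z))        # approx Var[Z] = A A^T
print(A @ A.T)
```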

  6. The multivariate normal distribution

Let us assume that the variance-covariance matrix is known apart from a constant factor, σ², i.e. Var[Z] = σ²Σ. The density for the k-dimensional random vector Z with mean µ and covariance σ²Σ is:

$$ f_Z(z) = \frac{1}{(2\pi)^{k/2}\,\sigma^{k}\,\sqrt{\det \Sigma}} \exp\left( -\frac{1}{2\sigma^{2}} (z - \mu)^T \Sigma^{-1} (z - \mu) \right) $$

where Σ is seen to be (a) symmetric and (b) positive semi-definite. We write Z ∼ N_k(µ, σ²Σ).
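
A small sketch (assuming SciPy is available; all numbers are arbitrary) evaluates this density directly and compares it against scipy.stats.multivariate_normal with covariance σ²Σ:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_density(z, mu, sigma2, Sigma):
    """Density of N_k(mu, sigma^2 Sigma), written as on the slide."""
    k = len(mu)
    d = z - mu
    quad = d @ np.linalg.solve(Sigma, d)   # (z - mu)^T Sigma^{-1} (z - mu)
    const = (2 * np.pi) ** (k / 2) * sigma2 ** (k / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-quad / (2 * sigma2)) / const

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
sigma2 = 0.5
z = np.array([0.2, 0.8])

print(mvn_density(z, mu, sigma2, Sigma))
print(multivariate_normal(mean=mu, cov=sigma2 * Sigma).pdf(z))  # should match
```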

  7. The normal density as a statistical model

Consider now the n observations Y = (Y_1, Y_2, ..., Y_n)^T, and assume that the statistical model is Y ∼ N_n(µ, σ²Σ) for y ∈ R^n.

The variance-covariance matrix for the observations is called the dispersion matrix, denoted D[Y], i.e. the dispersion matrix for Y is D[Y] = σ²Σ.

  8. Inner product and norm

Definition (Inner product and norm)
The bilinear form

$$ \delta_{\Sigma}(y_1, y_2) = y_1^T \Sigma^{-1} y_2 $$

defines an inner product in R^n. Corresponding to this inner product we can define orthogonality, which is obtained when the inner product is zero. A norm is defined by

$$ \|y\|_{\Sigma} = \sqrt{\delta_{\Sigma}(y, y)}. $$
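
In code, the inner product, the induced norm, and the orthogonality criterion are one-liners (a minimal sketch; the helper names are my own):

```python
import numpy as np

def delta(y1, y2, Sigma):
    """Inner product delta_Sigma(y1, y2) = y1^T Sigma^{-1} y2."""
    return y1 @ np.linalg.solve(Sigma, y2)

def norm(y, Sigma):
    """Norm induced by the inner product."""
    return np.sqrt(delta(y, y, Sigma))

# Two vectors are orthogonal wrt delta_Sigma when their inner product is zero.
Sigma = np.diag([1.0, 4.0])
y1 = np.array([2.0, 0.0])
y2 = np.array([0.0, 3.0])
print(delta(y1, y2, Sigma))   # 0.0: orthogonal under this inner product
print(norm(y1, Sigma))        # 2.0
```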

  9. Deviance for normally distributed variables

Definition (Deviance for normally distributed variables)
Let us introduce the notation

$$ \mathrm{D}(y; \mu) = \delta_{\Sigma}(y - \mu, y - \mu) = (y - \mu)^T \Sigma^{-1} (y - \mu) $$

to denote the quadratic norm of the vector (y - µ) corresponding to the inner product defined by Σ^{-1}.

For a normal distribution with Σ = I, the deviance is just the residual sum of squares (RSS).
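
A minimal sketch of the deviance, checking that Σ = I reduces it to the residual sum of squares (the data values are arbitrary):

```python
import numpy as np

def deviance(y, mu, Sigma):
    """D(y; mu) = (y - mu)^T Sigma^{-1} (y - mu)."""
    d = y - mu
    return d @ np.linalg.solve(Sigma, d)

y = np.array([1.2, 0.7, 2.1])
mu = np.array([1.0, 1.0, 2.0])

# With Sigma = I the deviance equals the RSS.
print(deviance(y, mu, np.eye(3)))
print(np.sum((y - mu) ** 2))   # same value
```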

  10. Deviance for normally distributed variables

Using this notation, the normal density can be expressed as a density defined on any finite-dimensional vector space equipped with the inner product δ_Σ:

$$ f(y; \mu, \sigma^2) = \frac{1}{(\sqrt{2\pi})^{n}\,\sigma^{n}\,\sqrt{\det(\Sigma)}} \exp\left( -\frac{1}{2\sigma^{2}} \mathrm{D}(y; \mu) \right). $$

  11. The likelihood and log-likelihood function

The likelihood function is:

$$ L(\mu, \sigma^2; y) = \frac{1}{(\sqrt{2\pi})^{n}\,\sigma^{n}\,\sqrt{\det(\Sigma)}} \exp\left( -\frac{1}{2\sigma^{2}} \mathrm{D}(y; \mu) \right) $$

The log-likelihood function is (apart from an additive constant):

$$ \ell(\mu, \sigma^2; y) = -\frac{n}{2} \log(\sigma^2) - \frac{1}{2\sigma^{2}} (y - \mu)^T \Sigma^{-1} (y - \mu) = -\frac{n}{2} \log(\sigma^2) - \frac{1}{2\sigma^{2}} \mathrm{D}(y; \mu). $$
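
The log-likelihood translates directly into code (a sketch, reusing the deviance computation inline; the test values are arbitrary). Note that for fixed σ², the log-likelihood is maximal at µ = y, where the deviance vanishes:

```python
import numpy as np

def log_likelihood(mu, sigma2, y, Sigma):
    """Log-likelihood of N_n(mu, sigma^2 Sigma), up to an additive constant."""
    d = y - mu
    D = d @ np.linalg.solve(Sigma, d)   # deviance D(y; mu)
    n = len(y)
    return -(n / 2) * np.log(sigma2) - D / (2 * sigma2)

y = np.array([1.0, 2.0, 0.5])
Sigma = np.eye(3)
print(log_likelihood(y, 1.0, y, Sigma))             # 0.0: maximal at mu = y
print(log_likelihood(np.zeros(3), 1.0, y, Sigma))   # strictly smaller
```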

  12. The score function, observed and expected information for µ

The score function with respect to µ is

$$ \frac{\partial}{\partial \mu} \ell(\mu, \sigma^2; y) = \frac{1}{\sigma^{2}} \Sigma^{-1} (y - \mu) = \frac{1}{\sigma^{2}} \left( \Sigma^{-1} y - \Sigma^{-1} \mu \right) $$

The observed information (with respect to µ) is

$$ j(\mu; y) = \frac{1}{\sigma^{2}} \Sigma^{-1}. $$

It is seen that the observed information does not depend on the observations y. Hence the expected information is

$$ i(\mu) = \frac{1}{\sigma^{2}} \Sigma^{-1}. $$
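
One way to convince oneself of the score expression is a finite-difference check against the log-likelihood (a sketch; all values are arbitrary):

```python
import numpy as np

def loglik(mu, sigma2, y, Sigma):
    d = y - mu
    return -(len(y) / 2) * np.log(sigma2) - d @ np.linalg.solve(Sigma, d) / (2 * sigma2)

def score(mu, sigma2, y, Sigma):
    """Analytic score wrt mu: (1/sigma^2) Sigma^{-1} (y - mu)."""
    return np.linalg.solve(Sigma, y - mu) / sigma2

y = np.array([1.0, 2.0, 0.5])
mu = np.array([0.8, 1.5, 1.0])
Sigma = np.diag([1.0, 2.0, 0.5])
sigma2 = 0.7

# Central finite differences of the log-likelihood wrt each component of mu.
eps = 1e-6
I = np.eye(3)
numeric = np.array([(loglik(mu + eps * I[i], sigma2, y, Sigma)
                     - loglik(mu - eps * I[i], sigma2, y, Sigma)) / (2 * eps)
                    for i in range(3)])
print(score(mu, sigma2, y, Sigma))   # analytic score
print(numeric)                       # should agree closely
```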

  13. The general linear model

In the case of a normal density, the observation Y_i is most often written as

$$ Y_i = \mu_i + \varepsilon_i $$

which for all n observations (Y_1, Y_2, ..., Y_n) can be written in matrix form as

$$ Y = \mu + \varepsilon $$

where Y ∼ N_n(µ, σ²Σ) for y ∈ R^n.

  14. General Linear Models

In the linear model it is assumed that µ belongs to a linear (or affine) subspace Ω_0 of R^n.

The full model is a model with Ω_full = R^n, and hence each observation fits the model perfectly, i.e. µ̂ = y.

The most restricted model is the null model with Ω_null = R. It describes the variation of the observations only by a common mean value for all observations.

In practice, one often starts by formulating a rather comprehensive model with Ω = R^k, where k < n. We will call such a model a sufficient model.

  15. The General Linear Model

Definition (The general linear model)
Assume that Y_1, Y_2, ..., Y_n are normally distributed as described before. A general linear model for Y_1, Y_2, ..., Y_n is a model where an affine hypothesis is formulated for µ. The hypothesis is of the form

$$ H_0: \mu - \mu_0 \in \Omega_0, $$

where Ω_0 is a linear subspace of R^n of dimension k, and where µ_0 denotes a vector of known offset values.

Definition (Dimension of the general linear model)
The dimension of the subspace Ω_0 is the dimension of the model.

  16. The design matrix

Definition (Design matrix for the classical GLM)
Assume that the linear subspace Ω_0 = span{x_1, ..., x_k}, i.e. the subspace is spanned by k vectors (k < n). Consider a general linear model where the hypothesis can be written as

$$ H_0: \mu - \mu_0 = X\beta, \quad \beta \in R^k, $$

where X has full rank. The n × k matrix X of known deterministic coefficients is called the design matrix. The i-th row of the design matrix is given by the model vector

$$ x_i = (x_{i1}, x_{i2}, \ldots, x_{ik})^T $$

for the i-th observation.
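
For example, for a simple linear regression µ_i = β_1 + β_2 t_i, the model vector for observation i is x_i = (1, t_i)^T, and the design matrix stacks these as rows (a sketch with made-up covariate values):

```python
import numpy as np

# Hypothetical covariate values for n = 5 observations.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Design matrix with rows x_i^T = (1, t_i): an intercept column plus t itself.
X = np.column_stack([np.ones_like(t), t])
print(X.shape)   # (5, 2): n x k with k = 2
print(X)
```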

  17. Estimation of mean value parameters

Under the hypothesis H_0: µ ∈ Ω_0, the maximum likelihood estimate of µ is found as the orthogonal projection (with respect to δ_Σ), p_0(y), of y onto the linear subspace Ω_0.

Theorem (ML estimates of mean value parameters)
For a hypothesis of the form H_0: µ(β) = Xβ, the maximum likelihood estimate of β is found as a solution to the normal equations

$$ X^T \Sigma^{-1} y = X^T \Sigma^{-1} X \hat{\beta}. $$

If X has full rank, the solution is uniquely given by

$$ \hat{\beta} = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} y. $$
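
A minimal sketch of the estimator in code (this is generalized least squares; the function name and data are my own):

```python
import numpy as np

def ml_beta(X, y, Sigma):
    """Solve the normal equations X^T Sigma^{-1} X beta = X^T Sigma^{-1} y."""
    Si_X = np.linalg.solve(Sigma, X)   # Sigma^{-1} X
    Si_y = np.linalg.solve(Sigma, y)   # Sigma^{-1} y
    return np.linalg.solve(X.T @ Si_X, X.T @ Si_y)

# Tiny illustration with arbitrary data and Sigma = I (ordinary least squares).
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([np.ones_like(t), t])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
print(ml_beta(X, y, np.eye(5)))
```

Solving the two linear systems directly, rather than forming (X^T Σ^{-1} X)^{-1} explicitly, is the numerically safer route.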

  18. Properties of the ML estimator

Theorem (Properties of the ML estimator)
For the ML estimator we have

$$ \hat{\beta} \sim N_k\left( \beta, \; \sigma^2 (X^T \Sigma^{-1} X)^{-1} \right). $$

Unknown Σ: Notice that it has been assumed that Σ is known. If Σ is unknown, one possibility is to use the relaxation algorithm described in Madsen, H. (2008), Time Series Analysis, Chapman & Hall.
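
The theorem can be checked by simulation (a sketch, with Σ = I and arbitrary β and σ): re-estimating β over many simulated data sets, the empirical covariance of the estimates should approach σ²(X^T X)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n = 50 observations, k = 2 parameters, Sigma = I.
n = 50
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t])
beta = np.array([1.0, 2.0])
sigma = 0.5

# Simulate y = X beta + eps repeatedly and re-estimate beta each time.
estimates = []
for _ in range(20_000):
    y = X @ beta + sigma * rng.standard_normal(n)
    estimates.append(np.linalg.solve(X.T @ X, X.T @ y))
estimates = np.asarray(estimates)

print(np.cov(estimates.T))                  # empirical covariance of beta-hat
print(sigma ** 2 * np.linalg.inv(X.T @ X))  # theoretical sigma^2 (X^T X)^{-1}
```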
