
Quantitative Genomics and Genetics BTRY 4830/6830; PBSB.5201.01



  1. Quantitative Genomics and Genetics BTRY 4830/6830; PBSB.5201.01 Lecture 18: Logistic regression II, basics of GLMs Jason Mezey jgm45@cornell.edu April 11, 2019 (Th) 10:10-11:25

  2. Announcements Quantitative Genomics and Genetics - Spring 2019 BTRY 4830/6830; PBSB 5201.01 Midterm - available Fri., April 12; exam due before 11:59PM, Sun., April 14. PLEASE NOTE THE FOLLOWING INSTRUCTIONS: 1. You are to complete this exam alone. The exam is open book, so you are allowed to use any books or information available online, your own notes, your previously constructed code, etc. HOWEVER, YOU ARE NOT ALLOWED TO COMMUNICATE OR IN ANY WAY ASK ANYONE FOR ASSISTANCE WITH THIS EXAM IN ANY FORM (the only exceptions are Olivia, Scott, and Dr. Mezey). As a non-exhaustive list, this includes asking classmates or ANYONE else for advice or where to look for answers concerning problems; you are not allowed to ask anyone for access to their notes or to even look at their code, whether constructed before the exam or not; etc. You are therefore only allowed to look at your own materials and materials you can access on your own. In short, work on your own! Please note that you will be violating Cornell's honor code if you act otherwise.

  3. Announcements 2. Please pay attention to instructions and complete ALL requirements for ALL questions, e.g. some questions ask for R code, plots, AND written answers. We will give partial credit, so it is to your advantage to attempt every part of every question. 3. A complete answer to this exam will include R code answers in Rmarkdown, where you will submit your .Rmd script and associated .pdf file. Note there will be penalties for scripts that fail to compile (!!). Also, as always, you do not need to repeat code for each part (i.e., if you write a single block of code that generates the answers for some or all of the parts, that is fine, but please do label the output that answers each question!!). You should include all of your plots and written answers in this same .Rmd script with your R code. 4. The exam must be uploaded on CMS before 11:59PM, Sun., April 14. It is your responsibility to make sure that it is uploaded by then and no excuses will be accepted (power outages, computer problems, Cornell's internet slowed to a crawl, etc.). Remember: you are welcome to upload early! We will deduct points for exams received after this deadline (even if it is by minutes!!).

  4. Review: Case / Control Phenotypes • While a linear regression may provide a reasonable model for many phenotypes, we are commonly interested in analyzing phenotypes where this is NOT a good model • As an example, we are often in situations where we are interested in identifying causal polymorphisms (loci) that contribute to the risk for developing a disease, e.g. heart disease, diabetes, etc. • In this case, the phenotype we are measuring is often “has disease” or “does not have disease” or more precisely “case” or “control” • Recall that such phenotypes are properties of measured individuals and therefore elements of a sample space, such that we can define a random variable such as Y (case) = 1 and Y (control) = 0

  5. Review: linear vs. logistic • Recall that for a linear regression, the regression function was a line and the error term accounted for the difference between each point and the expected value (the linear regression line), which we assume follows a normal distribution • For a logistic regression, we use the logistic function, and the error term makes up the difference between the expected value and either 0 or 1 (the only values Y can take) [Figure: Y plotted against Xa for a linear regression (left) and a logistic regression (right)] • Recall that in both of these cases we are plotting Y versus just Xa (because the picture is easier to understand), but the real regression (linear or logistic) is a multiple regression, so the "real" picture is Y vs. both the Xa and Xd variables
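To make this concrete, here is a minimal R sketch that evaluates the logistic regression function at the three genotype codings and plots it against Xa; the beta values are illustrative stand-ins (reusing the example parameters that appear later in this lecture), not course data:

```r
# Logistic regression function E(Y|X) at the three genotypes, plotted vs. Xa
logistic <- function(z) exp(z) / (1 + exp(z))

beta.mu <- 0.2; beta.a <- 2.2; beta.d <- 0.2  # illustrative parameter values

xa <- c(-1, 0, 1)         # A1A1, A1A2, A2A2
xd <- 1 - 2 * abs(xa)     # dominance coding: -1, 1, -1
p  <- logistic(beta.mu + xa * beta.a + xd * beta.d)

plot(xa, p, ylim = c(0, 1), xlab = "Xa", ylab = "E(Y|X)")
```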

  6. Review: calculating the components of an individual I • Recall that an individual with phenotype Yi is described by the following equation: $$Y_i = E(Y_i|X_i) + \epsilon_i = \gamma^{-1}(\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d) + \epsilon_i = \frac{e^{\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d}}{1 + e^{\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d}} + \epsilon_i$$ • To understand how an individual with a phenotype Yi and a genotype Xi breaks down in this equation, we need to consider the expected (predicted!) part and the error term (we will do this separately)

  7. Review: calculating the components of an individual II • For example, say we have an individual i that has genotype A1A1 and phenotype Yi = 0 • We know Xa = -1 and Xd = -1 • Say we also know that for the population, the true parameters (which we will not know in practice! We need to infer them!) are: $\beta_\mu = 0.2$, $\beta_a = 2.2$, $\beta_d = 0.2$ • We can then calculate E(Yi|Xi) and the error term for i: $$Y_i = \frac{e^{\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d}}{1 + e^{\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d}} + \epsilon_i$$ $$0 = \frac{e^{0.2 + (-1)2.2 + (-1)0.2}}{1 + e^{0.2 + (-1)2.2 + (-1)0.2}} + \epsilon_i$$ $$0 = 0.1 - 0.1$$ so $\epsilon_i = -0.1$
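The same calculation can be checked numerically in R (a small sketch; logistic() is a helper defined here for convenience, not course code):

```r
# E(Yi|Xi) and the error term for individual i with genotype A1A1
# (Xa = Xd = -1), phenotype Yi = 0, and the true parameters from the slide
logistic <- function(z) exp(z) / (1 + exp(z))

beta.mu <- 0.2; beta.a <- 2.2; beta.d <- 0.2
xa <- -1; xd <- -1; yi <- 0

EY  <- logistic(beta.mu + xa * beta.a + xd * beta.d)  # ~0.1
eps <- yi - EY                                        # ~-0.1
```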

  8. Review: the error term • Recall that the error term is either the negative of E(Yi|Xi) when Yi is zero, or 1 - E(Yi|Xi) when Yi is one: $$\epsilon_i | (Y_i = 0) = -E(Y_i|X_i)$$ $$\epsilon_i | (Y_i = 1) = 1 - E(Y_i|X_i)$$ • For the entire distribution of the population, recall that $$\epsilon_i = Z - E(Y_i|X_i), \quad Pr(Z) \sim bern(p), \quad p = E(Y|X)$$ For example, with $p = 0.1$: $\epsilon_i = 0.9$ (when $Y_i = 1$) or $\epsilon_i = -0.1$ (when $Y_i = 0$)
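A quick simulation (my own illustration of the Bernoulli structure described above, not from the slides) confirms the two possible error values and their frequencies:

```r
# Simulate the error distribution for p = E(Yi|Xi) = 0.1:
# epsilon = 1 - p = 0.9 with probability p, and -p = -0.1 with probability 1 - p
set.seed(1)
p   <- 0.1
z   <- rbinom(1e4, size = 1, prob = p)  # Z ~ bern(p)
eps <- z - p                            # takes values 0.9 and -0.1
table(eps) / length(eps)                # ~0.9 of draws are -0.1, ~0.1 are 0.9
```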

  9. Review: Notation • Remember that while we are plotting this versus just Xa, the true plot is versus BOTH Xa and Xd (harder to see what is going on) • For an entire sample, we can use matrix notation as follows: $$E(\mathbf{Y}|\mathbf{X}) = \gamma^{-1}(\mathbf{X}\beta) = \frac{e^{\mathbf{X}\beta}}{1 + e^{\mathbf{X}\beta}} = \frac{1}{1 + e^{-\mathbf{X}\beta}}$$ $$E(\mathbf{y}|\mathbf{x}) = \gamma^{-1}(\mathbf{x}\beta) = \begin{bmatrix} \frac{e^{\beta_\mu + x_{1,a}\beta_a + x_{1,d}\beta_d}}{1 + e^{\beta_\mu + x_{1,a}\beta_a + x_{1,d}\beta_d}} \\ \vdots \\ \frac{e^{\beta_\mu + x_{n,a}\beta_a + x_{n,d}\beta_d}}{1 + e^{\beta_\mu + x_{n,a}\beta_a + x_{n,d}\beta_d}} \end{bmatrix}$$
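In R, the matrix form is a one-liner once the design matrix X = [1, Xa, Xd] is built (the genotypes below are simulated purely for illustration):

```r
# E(Y|X) for an entire sample via the matrix form gamma^-1(X beta)
set.seed(2)
n  <- 10
xa <- sample(c(-1, 0, 1), n, replace = TRUE)
X  <- cbind(1, xa, 1 - 2 * abs(xa))   # columns: intercept, Xa, Xd
beta <- c(0.2, 2.2, 0.2)              # illustrative parameter values

EY <- 1 / (1 + exp(-(X %*% beta)))    # n x 1 vector of E(Yi|Xi)
```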

  10. Inference • Recall that our goal with using logistic regression was to model the probability distribution of a case / control phenotype when there is a causal polymorphism • To use this for a GWAS, we need to test the null hypothesis that a genotype is not a causal polymorphism (or more accurately that the genetic marker we are testing is not in LD with a causal polymorphism!), i.e. $\beta_\mu = c$, $\beta_a = 0$, $\beta_d = 0$: $$H_0: \beta_a = 0 \cap \beta_d = 0$$ • To assess this null hypothesis, we will use the same approach as in linear regression, i.e. we will construct a LRT = likelihood ratio test (recall that an F-test is an LRT!) • We will need MLEs for the parameters of the logistic regression for the LRT
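As a hedged sketch of how this LRT would be computed in R, assuming l1 and l0 are the maximized log-likelihoods of the full and null models (obtained, e.g., via the IRLS algorithm below): under the null the statistic is asymptotically chi-squared with 2 degrees of freedom, one for each constrained parameter:

```r
# Likelihood ratio test p-value for H0: beta.a = 0 and beta.d = 0
lrt.pvalue <- function(l1, l0) {
  LRT <- 2 * (l1 - l0)                      # -2 ln(Lambda)
  pchisq(LRT, df = 2, lower.tail = FALSE)   # chi-squared with 2 df under H0
}
```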

  11. MLE of logistic regression parameters • Recall that an MLE is simply a statistic (a function that takes the sample as an input and outputs the estimate of the parameters)! • In this case, we want to construct the following MLE: $$MLE(\hat{\beta}) = MLE(\hat{\beta}_\mu, \hat{\beta}_a, \hat{\beta}_d)$$ • To do this, we need to maximize the log-likelihood function for the logistic regression, which has the following form (sample size n): $$l(\beta) = \sum_{i=1}^{n} \left[ y_i \ln\left(\gamma^{-1}(\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d)\right) + (1 - y_i) \ln\left(1 - \gamma^{-1}(\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d)\right) \right]$$ • Unlike the case of linear regression, where we had a "closed-form" equation that allows us to plug in the Y's and X's and returns the beta values that maximize the log-likelihood, there is no such simple equation for a logistic regression • We will therefore need an algorithm to calculate the MLE
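This log-likelihood translates directly into R (a straightforward transcription of the formula above, with the inverse logit written out explicitly):

```r
# Log-likelihood of a logistic regression with parameters
# beta = c(beta.mu, beta.a, beta.d), 0/1 phenotypes y, genotype codes xa, xd
logl.logistic <- function(beta, y, xa, xd) {
  eta <- beta[1] + xa * beta[2] + xd * beta[3]
  p   <- exp(eta) / (1 + exp(eta))   # gamma^-1, the inverse logit
  sum(y * log(p) + (1 - y) * log(1 - p))
}
```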

  12. Algorithm Basics • algorithm - a sequence of instructions for taking an input and producing an output • We often use algorithms in estimation of parameters where the structure of the estimation equation (e.g., the log-likelihood) is so complicated that we cannot derive a simple (closed-form) equation for the estimator, and cannot easily determine the value the estimator should take by other means (e.g., by graphical visualization) • We will use algorithms to "search" for the parameter values that correspond to the estimator of interest • Algorithms are not guaranteed to produce the correct value of the estimator (!!), because the algorithm may "converge" to (=return) the wrong answer (e.g., it converges to a "local" maximum or does not converge!) and because the compute time to converge to exactly the correct answer may be impractical for applications

  13. IRLS algorithm I • For logistic regression (and GLMs in general!) we will construct an algorithm to find the parameters that correspond to the maximum of the log-likelihood: $$l(\beta) = \sum_{i=1}^{n} \left[ y_i \ln\left(\gamma^{-1}(\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d)\right) + (1 - y_i) \ln\left(1 - \gamma^{-1}(\beta_\mu + x_{i,a}\beta_a + x_{i,d}\beta_d)\right) \right]$$ • Specifically, we will construct an Iterative Re-weighted Least Squares (IRLS) algorithm, which has the following structure (a code sketch follows below): 1. Choose starting values for the β's. Since we have a vector of three β's in our case, we assign these numbers and call the resulting vector β^[0]. 2. Using the re-weighting equation (described next slide), update the β^[t] vector. 3. At each step t > 0, check if β^[t+1] ≈ β^[t] (i.e. if these are approximately equal) using an appropriate function. If the difference is below a defined threshold, stop. If not, repeat steps 2 and 3.
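Since the exact re-weighting equation is deferred to the next slide, the following is only an illustrative sketch of the IRLS structure, using the standard Newton-Raphson / Fisher-scoring update for logistic regression (a stand-in, not necessarily the exact form given in lecture):

```r
# IRLS sketch for logistic regression: X is the n x 3 design matrix
# (intercept, Xa, Xd), y is the 0/1 phenotype vector
irls.logistic <- function(X, y, tol = 1e-8, max.iter = 100) {
  beta <- rep(0, ncol(X))                 # step 1: starting values beta^[0]
  for (t in 1:max.iter) {
    eta <- X %*% beta
    mu  <- exp(eta) / (1 + exp(eta))      # current E(Y|X)
    W   <- as.vector(mu * (1 - mu))       # Bernoulli variances act as weights
    # step 2: re-weighted update (Newton-Raphson / Fisher scoring step)
    beta.new <- beta + solve(t(X) %*% (X * W), t(X) %*% (y - mu))
    # step 3: stop if beta^[t+1] is approximately beta^[t]
    if (sum(abs(beta.new - beta)) < tol) { beta <- beta.new; break }
    beta <- beta.new
  }
  as.vector(beta)
}
```

On simulated data, the result can be checked against R's built-in glm(y ~ xa + xd, family = binomial), which fits the same model by IRLS.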
