

  1. Application of Artificial Intelligence: opportunities and limitations through life & life sciences examples. Clovis Galiez, Grenoble. Statistiques pour les sciences du Vivant et de l'Homme. March 31, 2020.

  2. Disclaimer. Form teams of 2 on Teide. Answer the questions in the template at https://clovisg.github.io/teaching/asdia/ctd2/quote2.Rmd and post it on Teide. You can use the Riot channel https://riot.ensimag.fr/#/room/#ASDIA:ensimag.fr ; I'll be present to answer live questions during the lecture slots. Do not hesitate to post your understandings and misunderstandings outside the time slots: I won't judge them, only your involvement and curiosity. You can send me emails (clovis.galiez@grenoble-inp.fr) for specific questions, and I'll answer publicly on the Riot channel.

  3. Goals:
  - Have a critical understanding of the place of AI in society.
  - Discover and practice machine learning (ML) techniques: linear regression, logistic regression.
  - Experiment with some limitations: curse of dimensionality, hidden overfitting, sampling bias.
  - Work towards autonomy with ML techniques: design experiments, organize the data, evaluate performances.

  4. Today's outline:
  - Short summary of the last lecture
  - Lasso regularization
  - Experiment with the curse of dimensionality
  - Logistic regression

  11. Last lecture. What do you remember from the last lecture?
  - Fantasies and opportunities of AI.
  - Microbiomes: diverse, with still a lot to discover; they play key roles in global geochemical cycles and in human health.
  - Curse of dimensionality: overfitting can stem from too many features (the capacity of description increases exponentially); more data helps.
  - Restricting the parameter space: regularization (ridge).

  12. Ridge regularization example. Let's come back to the model $Y = \sum_{i=0}^{3} \beta_i x^i + \epsilon$. The maximum likelihood fit with 4 points gives a $\hat{\beta}$ fitting the points perfectly. Maximum likelihood coefficients: $\beta_0 = 5.169$, $\beta_1 = -54.388$, $\beta_2 = 155.755$, $\beta_3 = -114.487$.

  13. Ridge regularization example. With a prior $\mathcal{N}(0, \eta^2)$ on the coefficients, the maximum a posteriori estimate of $\vec{\beta}$ corresponds to (blue curve in the slide's figure): $\beta_0 = -0.1279$, $\beta_1 = 2.2561$, $\beta_2 = -1.5779$, $\beta_3 = 0.3180$.
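For concreteness, here is a minimal R sketch contrasting the two fits. The four data points and the ratio $\sigma^2/\eta^2$ are invented for illustration (the slide's actual data is not shown), so the coefficients will differ from the tables above.

```r
# Minimal sketch: degree-3 polynomial fit to 4 points, maximum likelihood
# vs. ridge/MAP. Data and lambda = sigma^2/eta^2 are invented for illustration.
set.seed(1)
x <- c(0.1, 0.4, 0.6, 0.9)                  # hypothetical inputs
y <- sin(2 * pi * x) + rnorm(4, sd = 0.1)   # hypothetical noisy targets

X <- outer(x, 0:3, `^`)    # design matrix with columns x^0, x^1, x^2, x^3

# Maximum likelihood (least squares): 4 points, 4 parameters,
# so the curve passes exactly through the points (large coefficients).
beta_ml <- solve(t(X) %*% X, t(X) %*% y)

# Maximum a posteriori with Gaussian prior N(0, eta^2): the ridge solution
lambda <- 1                                  # sigma^2 / eta^2, assumed
beta_map <- solve(t(X) %*% X + lambda * diag(4), t(X) %*% y)

cbind(ML = as.vector(beta_ml), MAP = as.vector(beta_map))  # MAP is much smaller
```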

  14. Ridge regularization. Consider the linear model $Y = \vec{\beta} \cdot \vec{X} + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2)$. Facts:
  1. The maximum likelihood solution is the same as the solution of the optimization problem $\min_{\vec{\beta}} \sum_{i=1}^{N} (y_i - \vec{\beta} \cdot \vec{x}_i)^2$.
  2. Putting a Gaussian prior $\beta_i \sim \mathcal{N}(0, \eta^2)$ on the parameters is the same as solving the following optimization problem (ridge regularization): $\min_{\vec{\beta}} \sum_{i=1}^{N} (y_i - \vec{\beta} \cdot \vec{x}_i)^2 + \frac{\sigma^2}{\eta^2} \|\vec{\beta}\|_2^2$.
  3. This tells the model to avoid high values for the parameters. It is equivalent to introducing fake data at coordinates $\vec{x} = (\sigma/\eta, \ldots, \sigma/\eta)$, $y = 0$.
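Fact 3 can be checked numerically. In the standard form of this equivalence there is one fake observation per coordinate, the j-th equal to $\sigma/\eta$ in coordinate j and 0 elsewhere. A self-contained R sketch, where the data, dimensions, and $\lambda = \sigma^2/\eta^2$ are all assumptions:

```r
# Ridge as ordinary least squares on augmented ("fake") data.
# Data and lambda = sigma^2/eta^2 are invented for illustration.
set.seed(2)
n <- 10; m <- 4
X <- matrix(rnorm(n * m), n, m)
y <- X %*% c(1, -2, 0, 0.5) + rnorm(n)
lambda <- 0.7

# Closed-form ridge solution: (X'X + lambda I) beta = X'y
beta_ridge <- solve(t(X) %*% X + lambda * diag(m), t(X) %*% y)

# Append m fake rows, the j-th equal to sqrt(lambda) = sigma/eta in
# coordinate j, with target 0, then solve plain least squares.
X_aug <- rbind(X, sqrt(lambda) * diag(m))
y_aug <- c(y, rep(0, m))
beta_aug <- solve(t(X_aug) %*% X_aug, t(X_aug) %*% y_aug)

all.equal(beta_ridge, beta_aug)   # TRUE: the two solutions coincide
```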

  15. From ridge to lasso. Suppose you model a variable Y depending on some explanatory variables $\vec{x}$ with a linear model: $Y = \beta_0 + \vec{\beta} \cdot \vec{x} + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2)$. Imagine now that you know that only a few variables actually explain your target variable. Question: Gaussian priors on the $\beta_i$ centered on 0 avoid high values of $\beta_i$; will they push the coefficients of the non-explanatory variables down to exactly 0? Think individually (5'), then vote.

  17. Lasso penalization. What should the shape of the prior distribution around 0 be if we want to use fewer parameters? Something like the Laplace distribution: $f(x) = \frac{\lambda}{2} e^{-\lambda |x|}$. Exercise: work out the formula to see which criterion is minimized when maximizing the posterior probability of the parameters.
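For reference, a sketch of the computation the exercise asks for, under the linear model above (Gaussian noise, Laplace prior on each coefficient):

```latex
% MAP estimation with Gaussian likelihood and Laplace prior (sketch)
\begin{align*}
\hat{\vec{\beta}}_{\mathrm{MAP}}
  &= \arg\max_{\vec{\beta}}\;
     \prod_{i=1}^{N} e^{-\frac{(y_i - \vec{\beta}\cdot\vec{x}_i)^2}{2\sigma^2}}
     \prod_{j} \frac{\lambda}{2}\, e^{-\lambda |\beta_j|} \\
  &= \arg\min_{\vec{\beta}}\;
     \sum_{i=1}^{N} (y_i - \vec{\beta}\cdot\vec{x}_i)^2
     + 2\sigma^2 \lambda\, \|\vec{\beta}\|_1
\end{align*}
% i.e. least squares plus an L1 penalty: the lasso. Unlike the L2 penalty,
% the L1 term has a kink at 0, which pushes small coefficients exactly to 0.
```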

  18. Show that the curse of dimensionality happens! Design a simple experiment showing the curse of dimensionality in the linear regression setting. Individual reflection (5'), then we decide on a common experimental plan.

  22. Experimental plan:
  - Simulate in R a dependence between a vector $\vec{X}$ and an output variable $y$.
  - Find the maximum likelihood estimate of the parameters of a linear regression.
  - Add components to $\vec{X}$ that are not related to the output variable. Are their coefficients near 0?
  - Add regularization and check whether the correct coefficients are recovered (see the R sketch below).
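A possible R implementation of this plan. It is a sketch: sample sizes, coefficients, and the choice of the glmnet package for the lasso are assumptions, not the course's prescribed setup.

```r
# Sketch of the experimental plan above; all numbers are assumptions.
library(glmnet)

set.seed(42)
n <- 50                                   # observations
x_true <- rnorm(n)                        # the one truly explanatory variable
y <- 3 * x_true + rnorm(n)                # y depends only on x_true

# Add many components unrelated to y: the curse of dimensionality
p_noise <- 40
X <- cbind(x_true, matrix(rnorm(n * p_noise), n, p_noise))

# Maximum likelihood (ordinary least squares): the noise coefficients are
# not 0, and the in-sample fit looks deceptively good (hidden overfitting).
fit_ml <- lm(y ~ X)
summary(fit_ml)$r.squared                 # close to 1 despite 40 junk features

# Lasso regularization, lambda chosen by cross-validation
fit_lasso <- cv.glmnet(X, y, alpha = 1)   # alpha = 1 selects the lasso penalty
coef(fit_lasso, s = "lambda.min")         # most noise coefficients are exactly 0
```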

  23. Logistic regression (classification).

  27. Classification. Let $\vec{X}$ be an M-dimensional random variable and Z a binary (0/1) random variable; $\vec{X}$ and Z are linked by some unknown joint distribution. A predictor $f : \mathbb{R}_+^M \to [0, 1]$ is a function chosen to minimize some loss, in order to have $f(\vec{x}) \approx z$ for realizations $\vec{x}, z$ of $\vec{X}, Z$. Which loss?
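A natural answer is the cross-entropy, i.e. the negative Bernoulli log-likelihood, which is exactly what logistic regression minimizes. Below is a small self-contained R sketch; the simulated data are an assumption for illustration, not the lecture's official answer.

```r
# Logistic regression in R via glm(); data are simulated for illustration.
set.seed(1)
n <- 200
x <- rnorm(n)
p <- 1 / (1 + exp(-(2 * x - 0.5)))         # assumed true P(Z = 1 | x)
z <- rbinom(n, size = 1, prob = p)

# A binomial-family glm maximizes the Bernoulli likelihood,
# i.e. minimizes the cross-entropy loss.
fit <- glm(z ~ x, family = binomial)
f_hat <- predict(fit, type = "response")   # predictor values f(x) in [0, 1]

# Cross-entropy (average negative log-likelihood)
-mean(z * log(f_hat) + (1 - z) * log(1 - f_hat))
```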
