  1. Practical Advances in Machine Learning: A Computer Science Perspective. Scott Neal Reilly & Jeff Druce, Charles River Analytics. Prepared for the 2017 Workshop on Data Science and String Theory, November 30 – December 1, 2017.

  2. Objectives of this breakout session
     • Quick review of machine learning “from a CS perspective”
     • Review of some of the latest advances in machine learning
     • Tips for using ML
     • Discussion of academic/industrial collaboration opportunities and challenges
     • Discussion about all of the above

  3. Introductions
     • Charles River Analytics
       - 160 people, 30-year history
       - Mostly government contract R&D
       - AI, ML, robotics, computer vision, human sensing, computational social science, human factors
     • Scott Neal Reilly
       - PhD, Computer Science, Carnegie Mellon University
       - Senior Vice President & Principal Scientist, Charles River Analytics
       - Focus on ensemble machine learning and causal learning
     • Jeff Druce
       - PhD, Civil Engineering, University of Minnesota
       - BS, Applied Math and Physics, University of Michigan
       - Scientist, Charles River Analytics
       - Focus on deep learning, GANs, signal processing + ML

  4. Question: What can machine learning do for me?

  5. Simple Definition
     Machine learning is about getting computers to perform tasks that I don’t want to, or don’t know how to, tell them how to do.
     What kinds of tasks? How do they learn if I don’t tell them?

  6. Dimensions of a Machine Learning Problem
     • Dimension #1: Data
       - What kind of data do I have?
       - What are the properties of the data?
     • Dimension #2: Objective/Task
       - What is it that is being learned?
       - What are the computational/time constraints on learning and execution?
     • These tend to suggest particular techniques

  7. Dimension #1: Data
     • Sub-Dimension #1: What kind of data do I have?
       - Labeled: supervised

  8. Dimension #1: Data
     • Sub-Dimension #1: What kind of data do I have?
       - Labeled: supervised
       - Unlabeled: unsupervised

  9. Dimension #1: Data
     • Sub-Dimension #1: What kind of data do I have?
       - Labeled: supervised
       - Unlabeled: unsupervised
       - Partially labeled: semi-supervised

  10. Dimension #1: Data
      • Sub-Dimension #1: What kind of data do I have?
        - Labeled: supervised
        - Unlabeled: unsupervised
        - Partially labeled: semi-supervised

  11. Dimension #1: Data
      • Sub-Dimension #1: What kind of data do I have?
        - Labeled: supervised
        - Unlabeled: unsupervised
        - Partially labeled: semi-supervised
        - An environment that can label data for you: exploratory (active learning, reinforcement learning)

  12. Dimension #1: Data
      • Sub-Dimension #1: What kind of data do I have?
        - Labeled: supervised
        - Unlabeled: unsupervised
        - Partially labeled: semi-supervised
        - An environment that can label data for you: exploratory (active learning, reinforcement learning)

  13. Dimension #1: Data
      • Sub-Dimension #2: What are the properties of the data?
        - How much is there?
        - How noisy is it?
        - How many features are there?

  14. Dimension #2: What is the learning task?
      • Classification
        - Given features of X, what is X?
        - Supervised, unsupervised, semi-supervised, etc.
      • Regression
        - Given features of X, what is the value of feature Y?
        - Linear regression, symbolic regression/genetic programming, etc.
      • Dimensionality reduction
        - Given features of X, can I describe X with fewer features that are comparably descriptive?
        - Principal component analysis, latent Dirichlet allocation, etc.
      • Anomaly detection
        - Given features of X, is X unusual compared with other X’s?
        - Principal component analysis, support vector machines, etc.
      • Process learning
        - Given task T, how do I decide what action A (or plan P) will accomplish T?
        - Reinforcement learning, genetic programming, RNNs, etc.
      • Structure learning
        - Given variables V, how do they relate to each other?
        - Statistical relational learning, etc.
      • Model learning
        - Discriminative vs. generative models: learn p(class | features) or p(features | class), respectively.
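To make these task types concrete, here is a minimal scikit-learn sketch on a synthetic dataset (the dataset, estimators, and parameters are illustrative assumptions, not anything from the slides): classification, regression, dimensionality reduction, and anomaly detection each reduce to a fit/predict call.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.svm import OneClassSVM

# Synthetic stand-in data: 200 rows, 20 features, binary labels.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)   # classification: features of X -> class of X
reg = LinearRegression().fit(X[:, 1:], X[:, 0])     # regression: other features -> value of feature 0
X_small = PCA(n_components=5).fit_transform(X)      # dimensionality reduction: 20 features -> 5
detector = OneClassSVM(nu=0.05).fit(X)              # anomaly detection: is this X unusual?

print(clf.score(X, y), X_small.shape, (detector.predict(X) == -1).sum())
```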

  15. Some Approaches to ML
      • Given what data is available and the task, pick from…
        - Neural nets / deep learning
        - Bayesian learning
        - Statistical relational learning
        - Symbolic/rule learning
        - Reinforcement learning
        - Genetic programming
      • Other approaches
        - kNN, SVM, logistic regression, decision trees/forests
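The “other approaches” at the end of that list are often the right first thing to try. A hedged sketch, assuming scikit-learn and a toy two-moons dataset, that compares several of them with cross-validation before anything deep is attempted:

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy nonlinear dataset; swap in your own features/labels.
X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

baselines = [
    ("kNN", KNeighborsClassifier()),
    ("SVM (RBF)", SVC()),
    ("logistic regression", LogisticRegression()),
    ("random forest", RandomForestClassifier(random_state=0)),
]
for name, model in baselines:
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```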

  16. Question: What are some of the interesting recent advances in machine learning?

  17. Advance #1: Deep Learning
      • Convolutional Neural Networks
      • Deep Reinforcement Learning
      • Generative Adversarial Networks

  18. Convolutional Neural Networks
      • In traditional image/signal processing and learning problems, human-crafted features are used to transform the images into a more informative space.
      • However, using human-designed features does not leverage the computational power of modern-day computers/GPUs!
      • To perform better classification, we let a deep neural network learn optimal features that can best separate the data.

  19. Convolutional Neural Networks
      • In traditional image/signal processing and learning problems, human-crafted features are used to transform the images into a more informative space.
      • However, using human-designed features does not leverage the computational power of modern-day computers/GPUs!
      • To perform better classification, we let a deep neural network learn optimal features that can best separate the data.
      (Figure: Raw Input → Classification)

  20. Convolutional Neural Networks
      • In traditional image/signal processing and learning problems, human-crafted features are used to transform the images into a more informative space.
      • However, using human-designed features does not leverage the computational power of modern-day computers/GPUs!
      • To perform better classification, we let a deep neural network learn optimal features that can best separate the data.
      (Figure: Raw Input → Automated Feature Extraction (CNN) → Classification)
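A minimal sketch of the pipeline in the figure, assuming PyTorch, MNIST-sized 1×28×28 inputs, and 10 classes (all assumptions for illustration): the convolutional layers act as the automated feature extractor, and a final linear layer performs the classification.

```python
import torch
import torch.nn as nn

# Convolutional feature extractor + linear classification head.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # learned local filters, 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # deeper features, 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                                                # classification from learned features
)

x = torch.randn(8, 1, 28, 28)   # random stand-in batch of "raw input" images
logits = model(x)               # shape (8, 10): one score per class
```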

  21. Fully Convolutional Networks for Segmentation

  22. CNNs for non-image problems
      • Natural language processing: text classification

  23. CNNs for non-image problems
      • Natural language processing: text classification
      • Signal processing: stereotypical motor movement detection in autism
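The same idea carries over to text by convolving in one dimension over adjacent word embeddings rather than adjacent pixels. A PyTorch sketch with made-up vocabulary size, sequence length, and label count:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, seq_len, n_classes = 5000, 64, 40, 2   # hypothetical sizes

embed = nn.Embedding(vocab_size, embed_dim)
conv = nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1)     # each filter spans 3 adjacent tokens
head = nn.Linear(32, n_classes)

tokens = torch.randint(0, vocab_size, (16, seq_len))          # random stand-in token ids
h = embed(tokens).transpose(1, 2)                             # (batch, embed_dim, seq_len) for Conv1d
h = torch.relu(conv(h)).max(dim=2).values                     # max-pool over positions
logits = head(h)                                              # (16, n_classes)
```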

  24. CNNs: Tools for Local Structure Mining
      • What do all the problems where CNNs are effective have in common?
      • CNNs mine high-dimensional data in which proximal input features have structure that can be exploited to achieve the task.

  25. CNNs: Tools for Local Structure Mining
      • What do all the problems where CNNs are effective have in common?
      • CNNs mine high-dimensional data in which proximal input features have structure that can be exploited to achieve the task.
      • Lots of proximal structure!
      • What problems are you facing where subtle, complex, embedded local structures could potentially be exploited?

  26. Advance #1: Deep Learning
      • Convolutional Neural Networks
      • Deep Reinforcement Learning
      • Generative Adversarial Networks

  27. Reinforcement Learning
      (Figure: Agent, Observed State, Goal)

  28. Reinforcement Learning
      (Figure: Agent, Observed State, Policy, Goal)

  29. Reinforcement Learning
      (Figure: Agent, Observed State, Policy, Goal)
      How can we learn an optimal policy to achieve the goal?
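To make the agent/environment loop concrete, here is a random-policy rollout using the gymnasium package and its CartPole environment (both are illustrative assumptions; the slides do not name a toolkit). Learning an optimal policy means replacing the random action choice with one that maximizes expected reward.

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)        # observed state
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()                          # a learned policy would choose here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
env.close()
```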

  30. Deep Reinforcement Learning
      (Figure: training episodes)

  31. Deep Reinforcement Learning
      • Learn the best policy through a series of training episodes.
      • Training uses an action-value function (aka Q function): the expected return for taking an action and then following some policy.

  32. Deep Reinforcement Learning (Q-learning)
      • Traditionally, a linear function approximator was used; DRL uses a deep net to approximate Q.
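The usual Q-learning target is r + γ · max over a′ of Q(s′, a′); deep Q-learning fits a neural network to that target instead of a table or a linear function. A minimal PyTorch sketch of one update step on random stand-in transitions (state size, action count, and hyperparameters are assumptions; a practical implementation adds a separate target network and a replay buffer):

```python
import torch
import torch.nn as nn

gamma = 0.99
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))   # Q(s, ·) for 4-dim states, 2 actions
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Random stand-in batch of transitions (s, a, r, s', done).
s = torch.randn(32, 4)
a = torch.randint(0, 2, (32, 1))
r = torch.randn(32)
s_next = torch.randn(32, 4)
done = torch.zeros(32)               # 1.0 where the episode ended

q_sa = q_net(s).gather(1, a).squeeze(1)                                 # Q(s, a) for the actions taken
with torch.no_grad():
    target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values   # r + γ max_a' Q(s', a')
loss = nn.functional.mse_loss(q_sa, target)

opt.zero_grad()
loss.backward()
opt.step()
```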

  33. DRL Successes
      Bots are now the world champion in a variety of Atari games (Mnih et al., DeepMind), Go (AlphaZero, DeepMind), and Dota 2 (OpenAI).

  34. DRL Successes
      Bots are now the world champion in a variety of Atari games (Mnih et al., DeepMind), Go (AlphaZero, DeepMind), and Dota 2 (OpenAI).
      Is DRL only good for games?

  35. DRL – What can it do?
      • Natural language processing: text generation; machine translation (He et al., 2016a)
      • Intelligent transportation systems (Bojarski et al., 2017)
      • Understanding deep learning (Daniely et al., 2016)
      • Deep probabilistic programming (Tran et al., 2017)
      • Building compact networks

  36. DRL – What can it do?
      • Natural language processing: text generation; machine translation (He et al., 2016a)
      • Intelligent transportation systems (Bojarski et al., 2017)
      • Understanding deep learning (Daniely et al., 2016)
      • Deep probabilistic programming (Tran et al., 2017)
      • Building compact networks
      DRL can be used where a large, diverse state space makes it difficult to explore all possible strategies, and where actions may have latent effects that only later become important for achieving the task.

  37. Advance #1: Deep Learning
      • Convolutional Neural Networks
      • Deep Reinforcement Learning
      • Generative Adversarial Networks

  38. Generative Adversarial Networks (GANs)
      (Figure: generator G vs. discriminator D, with G’s output judged by D)
      The standard example: we can think of G as a counterfeiter attempting to produce fake money that cannot be detected by D, a discriminative agent trained to detect counterfeit currency.
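A minimal training loop under that counterfeiter analogy, sketched in PyTorch with toy 2-D “real” data and placeholder network sizes and hyperparameters: D is trained to score real samples as genuine and G’s output as fake, while G is trained to make D call its output genuine.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator: noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # discriminator: sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 2) * 0.5 + 2.0           # toy stand-in for "real money"
for step in range(200):
    # Train D: real -> 1, fake -> 0.
    fake = G(torch.randn(128, 16)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G: make D label its fakes as real.
    g_loss = bce(D(G(torch.randn(128, 16))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```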
