Introduction to Deep Learning - Prof. Kuan-Ting Lai - 2019/7/2



SLIDE 1

Introduction to Deep Learning

  • Prof. Kuan-Ting Lai

2019/7/2

SLIDE 2

Deep Learning – a new Buzzword

SLIDE 3

AI Papers

SLIDE 4


Registrations at NIPS

SLIDE 5

AI/ML Investment

SLIDE 6

Source: Sand Hill Econometrics

SLIDE 7

SLIDE 8

AlphaGo

SLIDE 9

So, what is Deep Learning?

SLIDE 10

SLIDE 11

Machine Learning

SLIDE 12

SLIDE 13

SLIDE 14

Learning Representation

  • Objective: Classify white & black
  • Input: (x, y)
  • Output: Black or White

SLIDE 15

The Master Algorithm – Pedro Domingos

SLIDE 16

Five Tribes of Machine Learning

  • Evolutionaries (genetic and evolutionary algorithms)
  • Connectionists (artificial neural networks)
  • Symbolists (logical induction)
  • Bayesians (Bayesian probability)
  • Analogizers (reasoning by analogy)

SLIDE 17

Five Tribes of Machine Learning

  • Symbolists: Decision Trees, Random Forest
  • Bayesians: Naïve Bayesians
  • Analogizers: SVM, k-NN
  • Evolutionaries: Genetic algorithms
  • Connectionists: Deep Learning

SLIDE 18

All Algorithms can be Reduced to 3 Operations

SLIDE 19

XOR

SLIDE 20

OK, machine learning is cool. But what is Deep Learning?

SLIDE 21

SLIDE 22

Neuron

SLIDE 23

Frank Rosenblatt’s Perceptron (1957)

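Rosenblatt's learning rule fits in a few lines of plain Python. The AND task, unit learning rate, and epoch count below are illustrative assumptions, not from the slides:

```python
# A perceptron: weighted sum -> threshold, trained with Rosenblatt's rule
# on logical AND (an assumed toy task; XOR would NOT be learnable this way).
def step(z):
    return 1 if z >= 0 else 0

def train(samples, lr=1, epochs=20):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = target - y          # 0 if correct, else +/-1
            w[0] += lr * err * x1     # Rosenblatt's update rule
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
```

After training, `preds` reproduces the AND truth table; the same loop never converges on XOR, which is the limitation that later motivated multi-layer networks.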
SLIDE 24

SLIDE 25

SLIDE 26

SLIDE 27

Deep Learning

SLIDE 28

SLIDE 29

Learning XOR (Geoffrey Hinton, 1986)

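A bare-bones sketch of the 1986 result: a two-layer sigmoid network trained with backpropagation can learn XOR, which no single-layer perceptron can represent. The 2-2-1 architecture, random seed, learning rate, and epoch count are illustrative assumptions:

```python
import math
import random

# 2-2-1 sigmoid network trained on XOR by backpropagation
# (assumed hyperparameters: seed 0, learning rate 0.5, 5000 epochs).
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]                       # XOR truth table

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T))

lr = 0.5
loss_before = total_loss()
for _ in range(5000):
    for x, t in zip(X, T):
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)                          # chain rule at the output
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                                     # gradient-descent updates
            W2[j] -= lr * d_out * h[j]
            W1[j][0] -= lr * d_hid[j] * x[0]
            W1[j][1] -= lr * d_hid[j] * x[1]
            b1[j] -= lr * d_hid[j]
        b2 -= lr * d_out
loss_after = total_loss()
```

With most seeds the loss drops close to zero and the rounded outputs match the XOR table; the point is that one hidden layer makes XOR learnable at all.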
SLIDE 30

Backpropagation

SLIDE 31

Chain Rule

SLIDE 32

Computation Graph

c = a + b
d = b + 1
e = c * d

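With assumed inputs a = 2 and b = 1, this graph can be evaluated forward and then differentiated backward with the chain rule:

```python
# Forward pass through the graph: c = a + b, d = b + 1, e = c * d
a, b = 2.0, 1.0
c = a + b            # 3
d = b + 1            # 2
e = c * d            # 6

# Backward pass (chain rule), accumulating de/d(node) from output to inputs
de_dc = d            # e = c * d  =>  de/dc = d
de_dd = c            #            =>  de/dd = c
de_da = de_dc * 1.0  # c = a + b  =>  dc/da = 1
# b influences e through both c and d, so the two path gradients add up
de_db = de_dc * 1.0 + de_dd * 1.0
```

Here `de_da` is 2 and `de_db` is 5; summing gradients over every path through the graph is exactly what backpropagation automates.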
SLIDE 33

MNIST database of Handwritten Digits

SLIDE 34

SLIDE 35

SLIDE 36

SLIDE 37

SLIDE 38

SLIDE 39

SLIDE 40

SLIDE 41

Gradient Descent

SLIDE 42


https://hackernoon.com/gradient-descent-aynk-7cbe95a778da

SLIDE 43

Cost Function

  • Mean-Squared Error


$$J(\theta) = \frac{1}{N} \sum_{i=1}^{N} \big( f_\theta(x_i) - y_i \big)^2$$

SLIDE 44

Gradient Descent of MSE

  • Gradient of MSE
  • Update
  • Repeat until Convergence


$$\frac{\partial J(\theta)}{\partial \theta} = \frac{2}{N} \sum_{i=1}^{N} \big( f_\theta(x_i) - y_i \big) \, f_\theta'(x_i)$$

$$\theta_k \leftarrow \theta_k - \alpha \frac{\partial J(\theta)}{\partial \theta_k}$$
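The gradient-and-update loop on this slide amounts to a few lines of code. The sketch below fits a one-parameter linear model f_theta(x) = theta * x to toy data drawn from y = 3x; the data, step size, and iteration count are illustrative assumptions:

```python
# Gradient descent on the mean-squared error for f_theta(x) = theta * x.
# Toy data comes from y = 3x, so theta should converge to 3.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3.0 * x for x in xs]
N = len(xs)

theta, alpha = 0.0, 0.05
for _ in range(200):
    # dJ/dtheta = (2/N) * sum_i (f(x_i) - y_i) * x_i,
    # since d f_theta(x_i) / d theta = x_i for a linear model
    grad = (2.0 / N) * sum((theta * x - y) * x for x, y in zip(xs, ys))
    theta -= alpha * grad   # update: theta <- theta - alpha * dJ/dtheta
```

Each iteration shrinks the error (theta - 3) by a constant factor here, so the loop converges geometrically; repeating until the gradient is near zero is the "repeat until convergence" step.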

SLIDE 45

SLIDE 46

SLIDE 47

Convolutional Neural Network (LeNet-5)

  • https://medium.com/@sh.tsang/paper-brief-review-of-lenet-1-lenet-4-lenet-5-boosted-lenet-4-image-classification-1f5f809dbf17

SLIDE 48

SLIDE 49

ImageNet Large Scale Visual Object Recognition Challenge (ILSVRC)

  • 1000 categories
  • For ILSVRC 2017

− Training images per category range from 732 to 1,300
− 50,000 validation images and 100,000 test images

  • Total number of images in ILSVRC 2017 is around 1,150,000

SLIDE 50

Convolutional Neural Network

  • Alex Krizhevsky, Geoffrey Hinton et al., 2012

SLIDE 51

Previous Winners of ILSVRC

SLIDE 52

Deep Reinforcement Learning

SLIDE 53

Reinforcement Learning

SLIDE 54

SLIDE 55

AlphaGo

SLIDE 56

The Complexity of Go vs Chess

SLIDE 57

Reinforcement Learning

  • An agent learns to take actions a_t that maximize its cumulative reward R
  • Policy π(a_t|s_t): the agent’s behavior function
  • Value function V: evaluates the quality of each action/state
  • Model: the agent’s representation of the environment

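These pieces (policy, value function, reward) can be seen working together in a toy tabular Q-learning sketch. This is not AlphaGo's algorithm; the 5-state chain task and the alpha, gamma, epsilon, and episode settings are all illustrative assumptions:

```python
import random

# Toy chain MDP: states 0..4, the agent starts at state 0,
# action 0 moves left, action 1 moves right,
# and only reaching state 4 pays reward 1 (episode ends there).
random.seed(0)
N_STATES, LEFT, RIGHT = 5, 0, 1
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[s][a]: estimated value of action a in state s

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for _ in range(1000):
    s = 0
    for _ in range(100):                     # cap episode length
        # epsilon-greedy policy pi(a|s), breaking exact ties randomly
        if random.random() < epsilon or Q[s][LEFT] == Q[s][RIGHT]:
            a = random.randint(LEFT, RIGHT)
        else:
            a = LEFT if Q[s][LEFT] > Q[s][RIGHT] else RIGHT
        s2 = max(s - 1, 0) if a == LEFT else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # value-function update toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == N_STATES - 1:                # terminal state reached
            break

# The learned greedy policy: the best action in each non-terminal state
greedy = [max(range(2), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
```

After training, the greedy policy heads right in every state: the value function has propagated the terminal reward backward, discounted by gamma per step.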

SLIDE 58

Learn to Play Atari Games

  • Mnih et al., “Human-Level Control through Deep Reinforcement Learning,” Nature, 2015

SLIDE 59

DRL in Atari

SLIDE 60

AlphaGo Zero

SLIDE 61

SLIDE 62

SLIDE 63

Virtual-to-real Learning

  • Inspired by DeepMind’s “Human-Level Control through Deep Reinforcement Learning” (Mnih et al., Nature, 2015)

  • Applied to computer vision applications

− Image segmentation: Armeni et al. (2016), Qiu et al. (2017)
− Indoor navigation: Brodeur et al. (2017), Gupta et al. (2017), Savva et al. (2017), Wu et al. (2018)
− Autonomous vehicles: Martinez et al. (2017), Muller et al. (2018), Pan et al. (2017), Shah et al. (2018)


UnrealCV CAD2Real

SLIDE 64

VIVID: Semantic Segmentation, Depth Prediction, Autonomous Navigation, Action Recognition

SLIDE 65

Simulate Real-life Events

SLIDE 66

Searching for the Shooter

SLIDE 67

DeepDrive

SLIDE 68

Limits of Deep Learning

SLIDE 69

No Idea of Real World

SLIDE 70

Adversarial Attack

SLIDE 71

Number of Connections in the Brain

  • Neurons (adult brain): about 10^11 (100 billion)
  • Synapses (assuming ~1,000 per neuron): about 10^14 (100 trillion)

SLIDE 72

Generative Adversarial Networks (GAN)

SLIDE 73

Generative Adversarial Networks (GAN)

  • Proposed by Ian Goodfellow et al. (2014)

SLIDE 74

Painting like Van Gogh

SLIDE 75

Super Resolution

SLIDE 76

DeepFake: Is this you?

SLIDE 77

Google’s AutoML

  • Learning neural network cells automatically


https://ai.googleblog.com/2017/11/automl-for-large-scale-image.html

SLIDE 78

AutoML on ImageNet

SLIDE 79

EfficientNet (May 2019)

SLIDE 80

SLIDE 81

References

  • Francois Chollet, “Deep Learning with Python,” Chapter 1
  • “What is backpropagation really doing?” (3Blue1Brown): https://www.youtube.com/watch?v=Ilg3gGewQ5U

  • http://www.andreykurenkov.com/writing/ai/a-brief-history-of-neural-nets-and-deep-learning/

  • https://pmirla.github.io/2016/08/16/AI-Winter.html
  • https://tw.saowen.com/a/6cdc2f1279016e566832bb1234e06d321992dd1fabcdf4a2e0a3e16fc0dc09dc

  • https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html
  • https://hackernoon.com/gradient-descent-aynk-7cbe95a778da
  • http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf
