15-780 Graduate Artificial Intelligence: Convolutional and recurrent networks - PowerPoint PPT Presentation
SLIDE 1

15-780 Graduate Artificial Intelligence: Convolutional and recurrent networks

  • J. Zico Kolter (this lecture) and Nihar Shah

Carnegie Mellon University Spring 2020

SLIDE 2

Online course logistics: main points

Course online at zoom: https://cmu.zoom.us/j/165246573

  • All lectures recorded over Zoom; you are encouraged but not required to watch in real time (polls will remain open)
  • All homework deadlines extended by 2 weeks (the final project cannot be extended)
  • 24-hour take-home exam instead of an in-class final
  • See Diderot for more information, and please let course staff know if anything comes up that hampers your ability to participate in the course

SLIDE 3

Outline

  • Convolutional neural networks
  • Applications of convolutional networks
  • Recurrent networks
  • Applications of recurrent networks

SLIDE 4

Outline

  • Convolutional neural networks
  • Applications of convolutional networks
  • Recurrent networks
  • Applications of recurrent networks

SLIDE 5

The problem with fully-connected networks

A 256x256 (RGB) image ⟹ a ~200K-dimensional input x. A fully-connected network would need a very large number of parameters, and would be very likely to overfit the data. A generic deep network also does not capture the "natural" invariances we expect in images (translation, scale).

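The parameter blow-up above is easy to check with quick arithmetic; this sketch assumes a hypothetical single dense hidden layer of 1000 units (the hidden size is illustrative, not from the slides):

```python
# Back-of-envelope check of the slide's claim: a 256x256 RGB image
# flattened into a vector, fed to one dense hidden layer.
input_dim = 256 * 256 * 3          # ~197K input dimensions
hidden_dim = 1000                  # hypothetical hidden layer size
dense_params = input_dim * hidden_dim + hidden_dim  # weights + biases

print(input_dim)      # 196608
print(dense_params)   # 196609000 parameters for a single layer
```

Nearly 200 million parameters for one layer makes overfitting all but inevitable without further structure.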

SLIDE 6

Convolutional neural networks

To create architectures that can handle large images, we restrict the weights in two ways:

  • 1. Require that activations between layers occur only in a "local" manner
  • 2. Require that all activations share the same weights

These lead to an architecture known as a convolutional neural network

SLIDE 7

Convolutions

Convolutions are a basic primitive in many computer vision and image processing algorithms. The idea is to "slide" the weights w (called a filter) over the image z to produce a new image, written y = z * w

[Figure: a 3x3 filter w slides over a 5x5 image z; each filter position produces one entry of the 3x3 output y = z * w]

y11 = z11w11 + z12w12 + z13w13 + z21w21 + . . .

SLIDE 8

Convolutions

Convolutions are a basic primitive in many computer vision and image processing algorithms. The idea is to "slide" the weights w (called a filter) over the image z to produce a new image, written y = z * w

[Figure: the same 3x3 filter, shifted one position to the right over the image]

y12 = z12w11 + z13w12 + z14w13 + z22w21 + . . .

SLIDE 9

Convolutions

Convolutions are a basic primitive in many computer vision and image processing algorithms. The idea is to "slide" the weights w (called a filter) over the image z to produce a new image, written y = z * w

[Figure: the filter shifted one more position to the right]

y13 = z13w11 + z14w12 + z15w13 + z23w21 + . . .

SLIDE 10

Convolutions

Convolutions are a basic primitive in many computer vision and image processing algorithms. The idea is to "slide" the weights w (called a filter) over the image z to produce a new image, written y = z * w

[Figure: the filter moved down to the second row of the image]

y21 = z21w11 + z22w12 + z23w13 + z31w21 + . . .
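The sliding-window computation traced out on these slides can be sketched directly. This is a minimal NumPy version of the slides' "valid" correlation; the array values are illustrative:

```python
import numpy as np

def correlate2d_valid(z, w):
    """'Valid' 2D correlation: slide the filter w over the image z
    (what the slides call convolution), no flipping, no padding."""
    H, W = z.shape
    h, w_ = w.shape
    out = np.empty((H - h + 1, W - w_ + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # one filter position -> one output entry, as on the slides
            out[i, j] = np.sum(z[i:i+h, j:j+w_] * w)
    return out

z = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
w = np.ones((3, 3))                            # toy 3x3 filter
y = correlate2d_valid(z, w)                    # 3x3 output
# y[0, 0] = z11*w11 + z12*w12 + ... over the top-left 3x3 patch
```

A 5x5 input with a 3x3 filter yields a 3x3 output, matching the figure.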

SLIDE 11

Convolutions in image processing

Convolutions (typically with prespecified filters) are a common operation in many computer vision applications

[Figure: original image z, a Gaussian-blurred version, and the image gradient magnitude]

Gaussian blur: z * (1/273) [1 4 7 4 1; 4 16 26 16 4; 7 26 41 26 7; 4 16 26 16 4; 1 4 7 4 1]

Image gradient: ((z * [-1 0 1; -2 0 2; -1 0 1])^2 + (z * [-1 -2 -1; 0 0 0; 1 2 1])^2)^(1/2)
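The two classic filters above can be written down and sanity-checked in NumPy. This sketch assumes the standard 5x5 Gaussian kernel (raw weights summing to 273) and the standard Sobel filters, as reconstructed from the slide:

```python
import numpy as np

# 5x5 Gaussian blur kernel; raw weights sum to 273, so dividing
# normalizes the filter to preserve overall image brightness
gauss = np.array([[1,  4,  7,  4, 1],
                  [4, 16, 26, 16, 4],
                  [7, 26, 41, 26, 7],
                  [4, 16, 26, 16, 4],
                  [1,  4,  7,  4, 1]]) / 273.0

# Sobel filters for horizontal and vertical image gradients
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = sobel_x.T

# On a patch that ramps left-to-right, sobel_x responds strongly
patch = np.tile(np.arange(3.0), (3, 1))   # [[0,1,2],[0,1,2],[0,1,2]]
resp = np.sum(patch * sobel_x)            # correlation at one position
```

Note the gradient filters sum to zero, so they respond only to intensity changes, not to flat regions.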

SLIDE 12

Convolutional neural networks

The idea of a convolutional neural network, in some sense, is to let the network "learn" the right filters for the specified task. In practice, we actually use "3D" convolutions, which apply a separate 2D convolution to each channel of the image, then add the results together

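A minimal sketch of the "3D" convolution described above: each input channel gets its own 2D filter, and the per-channel results are summed into one output channel (function and array names are illustrative):

```python
import numpy as np

def conv_multichannel(z, w):
    """Correlate each input channel with its own 2D filter and sum
    the results -- one output channel of a '3D' convolution.
    z: (C, H, W) image; w: (C, h, w) filter bank for one output channel."""
    C, H, W = z.shape
    _, h, w_ = w.shape
    out = np.zeros((H - h + 1, W - w_ + 1))
    for c in range(C):                       # sum over input channels
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] += np.sum(z[c, i:i+h, j:j+w_] * w[c])
    return out

z = np.ones((3, 5, 5))        # 3-channel toy image (e.g. RGB)
w = np.ones((3, 3, 3))        # one 3x3 filter per input channel
y = conv_multichannel(z, w)   # each entry sums 3*3*3 = 27 ones
```

A full convolutional layer repeats this with a separate filter bank per output channel.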

SLIDE 13

Additional notes on convolutions

For anyone with a signal processing background: this is actually not what is called a convolution, but a correlation (convolution with the filter flipped upside-down and left-to-right). It is common to "zero pad" the input image so that the resulting image is the same size. It is also common to use a max-pooling operation that shrinks the image by taking the max over a region (also common: strided convolutions)

[Figure: max-pooling maps layer z_i to a smaller layer z_{i+1} by taking the max over each region]
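Max-pooling over non-overlapping 2x2 regions can be sketched as follows (illustrative NumPy, not the course's code):

```python
import numpy as np

def maxpool2x2(z):
    """Non-overlapping 2x2 max-pooling: halves each spatial dimension
    by keeping only the largest activation in each 2x2 block."""
    H, W = z.shape
    return z[:H//2*2, :W//2*2].reshape(H//2, 2, W//2, 2).max(axis=(1, 3))

z = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]], dtype=float)
p = maxpool2x2(z)   # [[4, 8], [9, 7]]
```

Each output entry is the max of one 2x2 block, so a 4x4 input becomes 2x2.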

SLIDE 14

Poll: Number of parameters

Consider a convolutional network that takes as input color (RGB) 32x32 images, and uses the following layers (all convolutional layers use zero-padding):

  • 1. 5x5x64 convolution
  • 2. 2x2 max-pooling
  • 3. 3x3x128 convolution
  • 4. 2x2 max-pooling
  • 5. Fully-connected to 10-dimensional output

How many parameters does this network have?

  • 1. ≈ 10^3
  • 2. ≈ 10^4
  • 3. ≈ 10^5
  • 4. ≈ 10^6

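One way to tally the parameters, assuming each convolutional and fully-connected layer includes bias terms (a sketch of the counting, not an official answer key):

```python
# Tally parameters for the poll's architecture (assuming bias terms).
conv1 = 5 * 5 * 3 * 64 + 64     # 5x5 filters over 3 input channels, 64 outputs
# 2x2 maxpool: no parameters; spatial size 32x32 -> 16x16
conv2 = 3 * 3 * 64 * 128 + 128  # 3x3 filters over 64 channels, 128 outputs
# 2x2 maxpool: no parameters; 16x16 -> 8x8
fc = 8 * 8 * 128 * 10 + 10      # flatten 8*8*128 features, connect to 10 outputs
total = conv1 + conv2 + fc
print(total)   # 160650 -> on the order of 10^5
```

The pooling layers contribute no parameters; they only shrink the spatial size that feeds the final fully-connected layer.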

SLIDE 15

Learning with convolutions

How do we apply backpropagation to neural networks with convolutions?

z_{i+1} = g_i(z_i * w_i + b_i)

Remember that for a dense layer z_{i+1} = g_i(W_i z_i + b_i), the forward pass required multiplication by W_i and the backward pass required multiplication by W_i^T

We're going to show that convolution is a type of (highly structured) matrix multiplication, and show how to compute the multiplication by its transpose

SLIDE 16

Convolutions as matrix multiplication

Consider initially a 1D convolution z_i * w_i for a filter w_i ∈ R^3 and input z_i ∈ R^6

Then z_i * w_i = W_i z_i for the banded matrix

W_i = [ w1 w2 w3  0  0  0
         0 w1 w2 w3  0  0
         0  0 w1 w2 w3  0
         0  0  0 w1 w2 w3 ]

So how do we multiply by W_i^T?

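The banded matrix on this slide can be built and checked against NumPy's own correlation routine (a sketch; recall that under the slides' convention "convolution" means sliding correlation):

```python
import numpy as np

def conv_matrix(w, n):
    """Matrix W such that W @ z equals the 'valid' correlation of a
    length-n signal z with filter w, as on the slide."""
    k = len(w)
    W = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        W[i, i:i+k] = w          # filter copied along the diagonal band
    return W

w = np.array([1.0, 2.0, 3.0])
z = np.arange(6.0)
W = conv_matrix(w, 6)            # 4x6 banded (Toeplitz-structured) matrix
assert np.allclose(W @ z, np.correlate(z, w, mode='valid'))
```

The highly structured zeros are exactly the "locality" restriction, and the repeated w entries are the weight sharing.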

SLIDE 17

Convolutions as matrix multiplication, cont

Multiplication by the transpose is just

W_i^T h_{i+1} = [ w1  0  0  0
                  w2 w1  0  0
                  w3 w2 w1  0
                   0 w3 w2 w1
                   0  0 w3 w2
                   0  0  0 w3 ] h_{i+1} = h_{i+1} * w̃_i

where w̃_i is just the flipped version of w_i

In other words, the transpose of convolution is just (zero-padded) convolution by the flipped filter (an actual convolution, for signal processing people). The property holds for 2D convolutions as well; backprop just flips the convolutions

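The transpose claim can be verified numerically (a sketch: h is an arbitrary vector standing in for the backward-pass signal from the next layer):

```python
import numpy as np

# Verify the slide's claim: multiplying by W^T equals zero-padded
# correlation of h with the flipped filter.
w = np.array([1.0, 2.0, 3.0])
n, k = 6, len(w)
W = np.zeros((n - k + 1, n))
for i in range(n - k + 1):
    W[i, i:i+k] = w                     # same banded matrix as on the slide

h = np.array([1.0, -2.0, 0.5, 3.0])     # stand-in backward-pass vector
h_padded = np.pad(h, (k - 1, k - 1))    # zero-pad by k-1 on each side
flipped = np.correlate(h_padded, w[::-1], mode='valid')
assert np.allclose(W.T @ h, flipped)    # transpose == flipped, padded filter
```

So backpropagation through a convolutional layer never needs the explicit matrix: it is just another convolution with the filter reversed.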

SLIDE 18

Outline

  • Convolutional neural networks
  • Applications of convolutional networks
  • Recurrent networks
  • Applications of recurrent networks

SLIDE 19

LeNet network, digit classification

The network that started it all (and then stopped for ~14 years)

[Figure: LeNet-5 architecture. INPUT 32x32 → C1: feature maps 6@28x28 (convolutions) → S2: f. maps 6@14x14 (subsampling) → C3: f. maps 16@10x10 (convolutions) → S4: f. maps 16@5x5 (subsampling) → C5: layer 120 → F6: layer 84 (full connection) → OUTPUT 10 (Gaussian connections)]

LeNet-5 (LeCun et al., 1998) architecture, achieves 1% error in MNIST digit classification

SLIDE 20

Image classification

Recent ImageNet classification challenges

SLIDE 21

Using intermediate layers as features

Increasingly common to use later-stage layers of pre-trained image classification networks as features for new image classification tasks. Example: classify dogs/cats based upon 2000 images (1000 of each class):

  • Approach 1: Convolution network from scratch: 80%
  • Approach 2: Final-layer from VGG network -> dense net: 90%
  • Approach 3: Also fine-tune last convolution features: 94%

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

SLIDE 22

Neural style

Adjust input image to make feature activations (really, inner products of feature activations), match target (art) images (Gatys et al., 2016)

SLIDE 23

Detecting cancerous cells in images

https://research.googleblog.com/2017/03/assisting-pathologists-in-detecting.html

SLIDE 24

Outline

  • Convolutional neural networks
  • Applications of convolutional networks
  • Recurrent networks
  • Applications of recurrent networks

SLIDE 25

Predicting temporal data

So far, the models we have discussed apply to independent inputs x(1), ..., x(m). In practice, we often want to predict a sequence of outputs given a sequence of inputs (predicting each output independently would miss correlations). Examples: time series forecasting, sentence labeling, speech to text, etc.

[Figure: a sequence of inputs x(1), x(2), x(3), ... with corresponding outputs y(1), y(2), y(3), ...]

SLIDE 26

Recurrent neural networks

Maintain a hidden state over time; the hidden state is a function of the current input and the previous hidden state

[Figure: an RNN unrolled over time. Each input x(t) feeds hidden state z(t) through Wxz, hidden states are chained through Wzz, and each z(t) produces an output ŷ(t) through Wzy]

z(t) = g_z(Wxz x(t) + Wzz z(t-1) + b_z)
ŷ(t) = g_y(Wzy z(t) + b_y)
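The two equations above translate directly into code. This is a minimal NumPy sketch with hypothetical dimensions, tanh standing in for g_z, and the identity for g_y:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_z, n_y = 4, 8, 3            # hypothetical input/hidden/output sizes

# Parameters shared across ALL time steps
Wxz = rng.normal(0, 0.1, (n_z, n_x))
Wzz = rng.normal(0, 0.1, (n_z, n_z))
Wzy = rng.normal(0, 0.1, (n_y, n_z))
bz, by = np.zeros(n_z), np.zeros(n_y)

def rnn_step(x_t, z_prev):
    """One step of the slide's recurrence."""
    z_t = np.tanh(Wxz @ x_t + Wzz @ z_prev + bz)   # hidden update, g_z = tanh
    y_t = Wzy @ z_t + by                           # output, g_y = identity here
    return z_t, y_t

xs = [rng.normal(size=n_x) for _ in range(5)]      # a length-5 input sequence
z = np.zeros(n_z)                                  # initial hidden state: zeros
ys = []
for x_t in xs:
    z, y_t = rnn_step(x_t, z)
    ys.append(y_t)
```

Note that the same weight matrices are reused at every step; only the hidden state carries information forward in time.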

SLIDE 27

Training recurrent networks

The most common training approach is to "unroll" the RNN on some dataset and minimize the loss function

minimize_{Wxz, Wzz, Wzy}  Σ_{t=1}^{m} ℓ(ŷ(t), y(t))

Note that the network will have the "same" parameters in a lot of places (e.g., the same Wzz matrix occurs in each step); an advantage of the computation graph approach is that it is easy to compute these complex gradients

Some issues: initializing the first hidden layer (just set it to all zeros), and how long a sequence to unroll (pick something big, like >100)
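The unrolled objective is just a sum of per-step losses. A tiny sketch, with squared error standing in for ℓ (the loss choice is illustrative, not from the slides):

```python
import numpy as np

def sequence_loss(y_hats, ys):
    """Unrolled training objective: the sum of per-step losses.
    The same weight matrices produced every y_hat, so gradients from
    all steps accumulate into the same shared parameters."""
    return sum(float(np.sum((y_hat - y) ** 2)) for y_hat, y in zip(y_hats, ys))

y_hats = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # predictions ŷ(t)
ys     = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]  # targets y(t)
loss = sequence_loss(y_hats, ys)   # 0 + 1 = 1.0
```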


SLIDE 28

LSTM networks

The trouble with plain RNNs is that it is difficult to capture long-term dependencies (e.g., if we see a "(" character, we expect a ")" to follow at some point). The problem has to do with vanishing gradients: for many activations like sigmoid and tanh, gradients get smaller and smaller over subsequent layers (and ReLUs have their own problems)

One solution, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), has a more complex structure that specifically encodes memory and pass-through features, and is able to model long-term dependencies
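A minimal sketch of one LSTM step, using the standard gate equations (the gate layout and random initialization here are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h, c, params):
    """One step of a standard LSTM cell. The forget/input/output gates
    control what the cell state keeps, writes, and exposes -- the
    additive 'pass-through' that eases vanishing gradients."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    v = np.concatenate([x, h])                 # current input + previous hidden
    f = sigmoid(Wf @ v + bf)                   # forget gate
    i = sigmoid(Wi @ v + bi)                   # input gate
    o = sigmoid(Wo @ v + bo)                   # output gate
    c_new = f * c + i * np.tanh(Wc @ v + bc)   # cell state: additive update
    h_new = o * np.tanh(c_new)                 # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
n_x, n_h = 3, 5
shape = (n_h, n_x + n_h)
params = [rng.normal(0, 0.1, shape) for _ in range(4)] + [np.zeros(n_h)] * 4
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.normal(size=n_x), h, c, params)
```

When the forget gate saturates near 1, the cell state passes through nearly unchanged, letting gradients flow across many time steps.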


SLIDE 29

Outline

  • Convolutional neural networks
  • Applications of convolutional networks
  • Recurrent networks
  • Applications of recurrent networks

SLIDE 30

Char-RNN

Excellent tutorial available at: http://karpathy.github.io/2015/05/21/rnn-effectiveness/

The basic idea is to build an RNN (using stacked LSTMs) that predicts the next character of some text given the previous characters

SLIDE 31

Sample code from Char-RNN

Char-RNN trained on code of Linux kernel


/* * Increment the size file of the new incorrect UI_FILTER group information * of the size generatively. */ static int indicate_policy(void) { int error; if (fd == MARN_EPT) { /* * The kernel blank will coeld it to userspace. */ if (ss->segment < mem_total) unblock_graph_and_set_blocked(); else ret = 1; goto bail; } segaddr = in_SB(in.addr); selector = seg / 16; setup_works = true; …

SLIDE 32

Sample Latex from Char-RNN

Char-RNN trained on the LaTeX source of a textbook on algebraic geometry

SLIDE 33

Sequence to sequence models

Idea: use an LSTM without outputs on the "input" sequence, then an auto-regressive LSTM on the output sequence (Sutskever et al., 2014)

SLIDE 34

Machine translation

A scale-up of sequence to sequence learning, now underlying much of Google's machine translation methods (Wu et al., 2016)
