
Convolutional Neural Nets (CS447 Natural Language Processing, J. Hockenmaier)



  1. Lecture 8: Convolutional Neural Nets
 CS447 Natural Language Processing (J. Hockenmaier)
 https://courses.grainger.illinois.edu/cs447/

  2. Convolutional Neural Nets (ConvNets, CNNs)
 [Figure: sparse networks with shared parameters (CNNs) vs. dense (fully-connected) networks from last lecture. One sparse variant applies 4 parameters 3 times to non-overlapping inputs; another applies 3 parameters 4 times to overlapping inputs.]
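To make the parameter-sharing contrast concrete, here is a minimal NumPy sketch (our own illustration with made-up sizes, not code from the lecture): a dense layer needs one weight per input/output pair, while a convolution reuses a single small filter at every position.

```python
import numpy as np

x = np.arange(6.0)                      # toy 1D input with 6 values

# Dense (fully-connected) layer with 4 outputs: 6 * 4 = 24 parameters.
W_dense = np.random.randn(4, 6)
dense_out = W_dense @ x                 # shape (4,)

# Convolution: one size-3 filter, stride 1. Only 3 parameters,
# applied 4 times to overlapping windows of the input.
w_conv = np.random.randn(3)
conv_out = np.array([w_conv @ x[i:i+3] for i in range(4)])  # shape (4,)

print(dense_out.shape, conv_out.shape)  # (4,) (4,)
```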

  3. Convolutional Neural Nets
 2D CNNs are a standard architecture for image data. Neocognitron (Fukushima, 1980): a CNN with convolutional and downsampling (pooling) layers.
 CNNs are inspired by receptive fields in the visual cortex: individual neurons respond to small regions (patches) of the visual field, and neurons in deeper layers respond to larger regions.
 Neurons in the same layer share the same weights. This parameter tying allows CNNs to handle variable-size inputs with a fixed number of parameters. CNNs can be used as input to fully connected nets. In NLP, CNNs are mainly used for classification.

  4. A toy example
 A 3x4 black-and-white image is a 3x4 matrix of pixels:
 [ a b c d
   e f g h
   i j k l ]

  5. Applying a 2x2 filter
 Filter: [ w x        Image: [ a b c d
           y z ]               e f g h
                               i j k l ]
 Output: [ aw+bx+ey+fz   bw+cx+fy+gz   cw+dx+gy+hz
           ew+fx+iy+jz   fw+gx+jy+kz   gw+hx+ky+lz ]
 A filter is an N×N matrix that can be applied to N×N-size patches of the input image. This operation is called convolution, but it works just like a dot product of vectors.
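A minimal NumPy sketch of this operation, assuming stride 1 and no padding (the helper conv2d and the numeric stand-ins for a…l and w…z are our own, not course code):

```python
import numpy as np

# Stand-ins for the slide's symbols: image = [[a b c d],[e f g h],[i j k l]],
# filter = [[w x],[y z]].
image = np.arange(12.0).reshape(3, 4)
filt = np.arange(4.0).reshape(2, 2)

def conv2d(img, f):
    fh, fw = f.shape
    oh, ow = img.shape[0] - fh + 1, img.shape[1] - fw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # each output entry is the dot product of the filter with one patch
            out[i, j] = np.sum(img[i:i+fh, j:j+fw] * f)
    return out

print(conv2d(image, filt))  # 2x3 output; entry [0,0] corresponds to aw+bx+ey+fz
```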

  6. Applying a 2x2 filter
 We can apply the same filter to all N×N-size patches of the input image. We obtain another matrix (the next layer in our network). The elements of the filter are the parameters of this layer.

  7.–10. [Animation steps repeating the computation from slide 6, highlighting one output entry at a time.]

  11. Applying a 2x2 filter
 We've turned a 3x4 matrix into a 2x3 matrix, so our image has shrunk. Can we preserve the size of the input?

  12. Zero padding
 Pad the image with 0s before applying the filter:
 [ 0 0 0 0 0
   0 a b c d
   0 e f g h
   0 i j k l ]
 Output: [ 0w+0x+0y+az   0w+0x+ay+bz   0w+0x+by+cz   0w+0x+cy+dz
           0w+ax+0y+ez   aw+bx+ey+fz   bw+cx+fy+gz   cw+dx+gy+hz
           0w+ex+0y+iz   ew+fx+iy+jz   fw+gx+jy+kz   gw+hx+ky+lz ]
 If we pad each matrix with 0s, we can maintain the same size throughout the network.
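A sketch of the same idea with np.pad. One row of zeros on top and one column on the left is an assumed padding layout, chosen because it yields exactly the 3x4 output shown above:

```python
import numpy as np

image = np.arange(12.0).reshape(3, 4)
filt = np.arange(4.0).reshape(2, 2)

# One row of zeros above and one column to the left is enough for a
# 2x2 filter to produce a 3x4 output (matching the slide's matrix).
padded = np.pad(image, ((1, 0), (1, 0)))

out = np.array([[np.sum(padded[i:i+2, j:j+2] * filt) for j in range(4)]
                for i in range(3)])
print(out.shape)  # (3, 4): same size as the input
```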

  13. After the nonlinear activation function
 [ g(az)      g(ay+bz)         g(by+cz)         g(cy+dz)
   g(ax+ez)   g(aw+bx+ey+fz)   g(bw+cx+fy+gz)   g(cw+dx+gy+hz)
   g(ex+iz)   g(ew+fx+iy+jz)   g(fw+gx+jy+kz)   g(gw+hx+ky+lz) ]
 NB: Convolutional layers are typically followed by ReLUs.
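Assuming g is a ReLU (the slide leaves g generic), the activation is just an elementwise maximum with zero:

```python
import numpy as np

def relu(t):
    # elementwise max(0, t)
    return np.maximum(t, 0.0)

conv_out = np.array([[-1.0, 2.0], [3.0, -4.0]])  # stand-in convolution output
print(relu(conv_out))  # [[0. 2.] [3. 0.]]
```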

  14. Going from layer to layer…
 [Figure: the input data is convolved (with zero padding) by a first 2x2 filter [w x / y z] to give the first hidden layer (elements a1…l1), which is convolved by a second 2x2 filter [w1 x1 / y1 z1] to give the second hidden layer (elements a2…l2).]
 One element in the 2nd hidden layer corresponds to a 3x3 patch in the input: the "receptive field" gets larger in each layer.
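A sketch of this stacking, reusing the assumed top-left zero padding from above; after two 2x2 convolutions, each second-layer unit depends on up to a 3x3 input patch:

```python
import numpy as np

def conv2x2_padded(img, f):
    # zero-pad (top and left) so the output keeps the input's size
    p = np.pad(img, ((1, 0), (1, 0)))
    return np.array([[np.sum(p[i:i+2, j:j+2] * f)
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

image = np.random.randn(3, 4)
f1 = np.random.randn(2, 2)
f2 = np.random.randn(2, 2)

h1 = np.maximum(conv2x2_padded(image, f1), 0)  # first hidden layer
h2 = np.maximum(conv2x2_padded(h1, f2), 0)     # second hidden layer
print(h2.shape)  # (3, 4); each h2 unit sees up to a 3x3 patch of the input
```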

  15. Changing the stride
 Stride = the step size for sliding across the image.
 Stride = 1: consider all patches [see previous example]
 Stride = 2: skip one element between patches
 Stride = 3: skip two elements between patches, …
 A larger stride yields a smaller output image.
 Input: the 3x4 image from before; filter: [ w x / y z ]. [Note that different zero-padding may be required with a different stride.]
 Stride = 2 output:
 [ 0w+0x+ay+bz   0w+0x+cy+dz
   ew+fx+iy+jz   gw+hx+ky+lz ]
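A sketch of the stride-2 case; padding with a single top row of zeros is an assumption chosen to reproduce the four entries above:

```python
import numpy as np

image = np.arange(12.0).reshape(3, 4)
filt = np.arange(4.0).reshape(2, 2)

# One top row of zeros reproduces the slide's stride-2 output
# (different strides may need different zero-padding, as noted above).
padded = np.pad(image, ((1, 0), (0, 0)))  # shape (4, 4)

stride = 2
out = np.array([[np.sum(padded[i:i+2, j:j+2] * filt)
                 for j in range(0, padded.shape[1] - 1, stride)]
                for i in range(0, padded.shape[0] - 1, stride)])
print(out.shape)  # (2, 2): a larger stride shrinks the output
```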

  16. Handling color images: channels
 Color images have a number of color channels. Each pixel in an RGB image is a (red, green, blue) triplet: ■ = (255, 0, 0) or ■ = (120, 5, 155).
 An N×M RGB image is an N×M×3 tensor: height × width × #channels. The #channels is the depth of the image.
 Convolutional filters are applied to all channels of the input. We still specify filter size in terms of the image patch, because the #channels is a function of the data (not a parameter we control): we still talk about 2×2 or 3×3 etc. filters, although with C channels they apply to an N×N×C region (and have N×N×C weights).
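A sketch of a "2x2" filter on an RGB image: the filter is really 2x2x3, and the dot product runs over the channels as well, producing one number per patch (sizes are our own choices):

```python
import numpy as np

rgb = np.random.randint(0, 256, size=(3, 4, 3)).astype(float)  # H x W x C
filt = np.random.randn(2, 2, 3)                                # depth matches C = 3

# The sum runs over height, width, AND channels: one number per patch.
out = np.array([[np.sum(rgb[i:i+2, j:j+2, :] * filt) for j in range(3)]
                for i in range(2)])
print(out.shape)  # (2, 3): a single-channel output despite 3 channels in
```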

  17. Channels in internal layers
 So far, we have just applied a single N×N filter to get to the next layer. But we could run K different N×N filters (with K different sets of weights) to define a layer with K channels. (If we initialize their weights randomly, they will learn different properties of the input.)
 The hidden layers of CNNs often have a large number of channels. (Useful trick: 1x1 convolutions increase or decrease the number of channels without affecting the size of the visual field.)
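A sketch of both ideas: K filters producing a K-channel hidden layer, and a 1x1 convolution remixing channels (the sizes K = 8 and C_out = 4 are our own choices):

```python
import numpy as np

x = np.random.randn(3, 4, 3)            # H x W x C_in input
K = 8
filters = np.random.randn(K, 2, 2, 3)   # K filters, each 2x2 x C_in

# Running all K filters yields a K-channel hidden layer (no padding here).
hidden = np.array([[[np.sum(x[i:i+2, j:j+2, :] * filters[k])
                     for j in range(3)]
                    for i in range(2)]
                   for k in range(K)]).transpose(1, 2, 0)  # 2 x 3 x K

# A 1x1 convolution is a per-position linear map across channels:
# it changes the depth (K -> 4 here) but not the height or width.
w_1x1 = np.random.randn(K, 4)
print((hidden @ w_1x1).shape)  # (2, 3, 4)
```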

  18. Pooling Layers
 Pooling layers reduce the size of the representation, and are often used following a pair of conv+ReLU layers. Each pooling layer returns a 3D tensor of the same depth as its input (but with smaller height and width) and is defined by
 — a filter size (what region gets reduced to a single value)
 — a stride (step size for sliding the window across the input)
 — a pooling function (max pooling, avg pooling, min pooling, …)
 Pooling units don't have weights, but simply return the maximum/minimum/average value of their inputs. Typically, pooling layers only receive input from a single channel, so they don't reduce the depth (#channels).
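A sketch of 2x2 max pooling with stride 2 on an assumed 4x6x8 feature map: height and width halve, depth is untouched, and no weights are involved:

```python
import numpy as np

x = np.random.randn(4, 6, 8)  # H x W x C feature map

# 2x2 max pooling, stride 2, applied independently to each channel.
pooled = np.array([[[x[i:i+2, j:j+2, c].max()
                     for j in range(0, 6, 2)]
                    for i in range(0, 4, 2)]
                   for c in range(8)]).transpose(1, 2, 0)
print(pooled.shape)  # (2, 3, 8): half the height and width, same depth
```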
