A Convolutional Neural Network for Modelling Sentences

  1. A Convolutional Neural Network for Modelling Sentences
     Nal Kalchbrenner, Edward Grefenstette, Phil Blunsom
     Department of Computer Science, Oxford University

  2. Overview of Model
     Represent sentences by extracting increasingly abstract features.
     - Input: sequence of word embeddings
     - Output: classification probabilities
     Each layer involves:
     1. Convolution
     2. Dynamic k-max pooling
     3. A non-linearity (tanh)

  3. One-Dimensional Convolution
     1. The filter m ∈ R^m
     2. The sequence s ∈ R^s
     Returns the sequence c ∈ R^(s−m+1):
     c_j = m^T s_{j−m+1:j},  j = 1, ..., s − m + 1
     i.e. a dot product between each length-m subsequence of s and the filter m.
     Wide convolution pads s with m − 1 zeros on each side, giving c ∈ R^(s+m−1).
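A minimal numpy sketch of the two convolution types described on this slide; the names narrow_conv1d and wide_conv1d are illustrative, not from the paper. It implements the slide's dot-product form c_j = m^T s_{j−m+1:j} directly, without filter flipping.

```python
import numpy as np

def narrow_conv1d(m, s):
    """Narrow convolution: dot product of the filter m with every
    fully overlapping length-len(m) subsequence of s.
    Output length: len(s) - len(m) + 1."""
    m_len, s_len = len(m), len(s)
    return np.array([m @ s[j:j + m_len] for j in range(s_len - m_len + 1)])

def wide_conv1d(m, s):
    """Wide convolution: pad s with len(m) - 1 zeros on each side so
    every filter position with at least one overlap contributes.
    Output length: len(s) + len(m) - 1."""
    pad = np.zeros(len(m) - 1)
    return narrow_conv1d(m, np.concatenate([pad, s, pad]))
```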

  4. Convolution with Word Embeddings
     Assume word embeddings of dimension d.
     - The filter m will be in R^(d×m)
     - The sentence matrix s will be in R^(d×s)
     Each row of m is convolved with the corresponding row of s.
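A sketch of this row-wise convolution under the same assumptions, reusing wide_conv1d from the block above; M and S are hypothetical names for the d×m filter and d×s sentence matrix.

```python
import numpy as np

def rowwise_wide_conv(M, S):
    """Convolve row i of the filter M (d x m) with row i of the
    sentence matrix S (d x s), one embedding dimension at a time;
    the result is a d x (s + m - 1) feature map."""
    return np.stack([wide_conv1d(M[i], S[i]) for i in range(M.shape[0])])
```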

  5. k-Max Pooling (LeCun et al.)
     Given k and a sequence p ∈ R^p with p ≥ k:
     1. Return the k largest elements of p
     2. Keep the elements in their original order
     The result is denoted p^k_max ∈ R^k.
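A short numpy sketch of k-max pooling as defined above (k_max_pool is an illustrative name): take the indices of the k largest values, then sort those indices so the selected values keep their order in the sequence.

```python
import numpy as np

def k_max_pool(p, k):
    """Return the k largest values of p in their original order."""
    assert len(p) >= k
    top_idx = np.sort(np.argsort(p)[-k:])  # k largest, back in sequence order
    return p[top_idx]

# Example: the 3 largest values of [1, 7, 2, 9, 4, 8], order preserved.
print(k_max_pool(np.array([1, 7, 2, 9, 4, 8]), 3))  # [7 9 8]
```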

  6. Dynamic k-Max Pooling
     “Smooth extraction of higher-order features”
     k_l = max( k_top, ⌈ ((L − l) / L) · s ⌉ )
     - k_top is a fixed parameter
     - l is the current layer
     - L is the total number of layers
     - s is the sentence length
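The reconstructed formula as a one-line function (dynamic_k is an illustrative name). For example, with k_top = 3, L = 3 layers, and a sentence of length s = 18, this gives k_1 = 12, k_2 = 6, and k_3 = 3.

```python
import math

def dynamic_k(k_top, l, L, s):
    """Pooling parameter at layer l of L for a sentence of length s:
    shrinks linearly with depth, but never below the fixed k_top."""
    return max(k_top, math.ceil((L - l) / L * s))
```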

  7. Folding
     Elementwise sum of pairs of rows of a matrix:
     f : R^(d×n) → R^(d/2×n)
     f(M) = N, where N[i, j] = M[2i, j] + M[2i+1, j],
     i = 0, ..., d/2 − 1,  j = 0, ..., n − 1
     - Introduces dependencies between different feature rows
     - No added parameters
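Folding in numpy, a direct transcription of the definition above (fold is an illustrative name):

```python
import numpy as np

def fold(M):
    """Sum adjacent pairs of rows: maps a d x n matrix to d/2 x n.
    Introduces row dependencies without adding parameters."""
    assert M.shape[0] % 2 == 0, "d must be even"
    return M[0::2] + M[1::2]
```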

  8. Size of Network

                    First layer       Second layer
     Model          Width  Filters    Width  Filters    k-top
     Binary           7      6          5      14         4
     Multi-class     10      6          7      12         5

  9. Training
     - The top layer is a softmax non-linearity that predicts a probability distribution over classes
     - L2 regularization of the parameters in the objective function
     - Parameters are the word embeddings, the filter weights, and the fully connected layers
     - Trained using Adagrad with mini-batches
     - “Processes multiple millions of sentences per hour on one GPU”
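The slide names Adagrad as the optimizer; below is a minimal sketch of a single Adagrad update. The learning rate and epsilon values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def adagrad_step(param, grad, cache, lr=0.05, eps=1e-8):
    """One Adagrad update: each parameter gets a step size scaled by
    the inverse square root of its accumulated squared gradients."""
    cache += grad ** 2                           # running sum of squared gradients
    param -= lr * grad / (np.sqrt(cache) + eps)  # per-parameter adaptive step
    return param, cache
```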

  10. Experiments
      1. Predicting sentiment of movie reviews, binary (Socher et al. 2013)
      2. Predicting sentiment of movie reviews, multi-class (Socher et al. 2013)
      3. Categorization of questions (Li and Roth 2002)
      4. Sentiment of tweets, with labels derived from emoticons (Go et al. 2009)
      Feature embedding dimensionality is chosen based on the size of the dataset.

  11. Movie review accuracy (results figure)

  12. First layer feature-detectors

  13. TREC 6-way classification accuracy

  14. Twitter sentiment

  15. Conclusion
      Dynamic Convolutional Neural Networks:
      - Convolutions apply a function to n-grams
      - Dynamic k-max pooling extracts the most active features, choosing k based on the layer and the sentence length
      - Composing these two operations can be seen as feature detection (a combined sketch follows below)
      - Outperformed or stayed competitive with other neural approaches, baseline models, and state-of-the-art approaches, without needing handcrafted features
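As a way of seeing how the pieces compose, here is a simplified single-feature-map sketch of one DCNN layer, reusing the functions from the earlier blocks. The full model uses multiple feature maps per layer, and in the paper folding is applied before a pooling layer rather than necessarily in every layer; this is an illustration, not the authors' implementation.

```python
import numpy as np

def dcnn_layer(S, M, k):
    """One simplified layer: wide convolution of each embedding row,
    folding of adjacent rows, k-max pooling along the sequence,
    then a tanh non-linearity."""
    C = rowwise_wide_conv(M, S)                       # d x (s + m - 1)
    F = fold(C)                                       # d/2 x (s + m - 1)
    P = np.stack([k_max_pool(row, k) for row in F])   # d/2 x k
    return np.tanh(P)
```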
