Neural Network Part 4: Recurrent Neural Networks


  1. Neural Network Part 4: Recurrent Neural Networks Yingyu Liang Computer Sciences 760 Fall 2017 http://pages.cs.wisc.edu/~yliang/cs760/ Some of the slides in these lectures have been adapted/borrowed from materials developed by Mark Craven, David Page, Jude Shavlik, Tom Mitchell, Nina Balcan, Matt Gormley, Elad Hazan, Tom Dietterich, Pedro Domingos, and Geoffrey Hinton.

  2. Goals for the lecture: you should understand the following concepts • sequential data • computational graph • recurrent neural networks (RNNs) and their advantages • training recurrent neural networks • bidirectional RNNs • encoder-decoder RNNs

  3. Introduction

  4. Recurrent neural networks • Date back to (Rumelhart et al., 1986) • A family of neural networks for handling sequential data, which may involve variable-length inputs or outputs • Especially useful for natural language processing (NLP)

  5. Sequential data • Each data point: a sequence of vectors x^(t), for 1 ≤ t ≤ τ • Batch data: many sequences with different lengths τ • Label: can be a scalar, a vector, or even a sequence • Examples • Sentiment analysis • Machine translation

  6. Example: machine translation Figure from: devblogs.nvidia.com

  7. More complicated sequential data • Data point: a two-dimensional sequence, such as an image • Label: a different type of sequence, such as a text sentence • Example: image captioning

  8. Image captioning Figure from the paper “DenseCap: Fully Convolutional Localization Networks for Dense Captioning”, by Justin Johnson, Andrej Karpathy, and Li Fei-Fei

  9. Computational graphs

  10. A typical dynamical system: s^(t+1) = f(s^(t); θ). Figure from Deep Learning, Goodfellow, Bengio, and Courville

  11. A system driven by external data: s^(t+1) = f(s^(t), x^(t+1); θ). Figure from Deep Learning, Goodfellow, Bengio, and Courville

  12. Compact view: s^(t+1) = f(s^(t), x^(t+1); θ). Figure from Deep Learning, Goodfellow, Bengio, and Courville

  13. Compact view: s^(t+1) = f(s^(t), x^(t+1); θ) • Square: one-step time delay • Key: the same f and θ are used at all time steps. Figure from Deep Learning, Goodfellow, Bengio, and Courville

  14. Recurrent neural networks (RNN)

  15. Recurrent neural networks • Use the same computational function and parameters across different time steps of the sequence • Each time step: takes the input entry and the previous hidden state to compute the output entry • Loss: typically computed at every time step
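To make the weight sharing concrete, here is a minimal NumPy sketch of a vanilla RNN forward pass (my own illustration, not the lecture's code; the names U, W, V, b, c and all dimensions are assumptions): the same parameters and the same update rule are applied at every time step, and the hidden state carries a summary of the past forward.

```python
import numpy as np

def rnn_forward(X, U, W, V, b, c):
    """Vanilla RNN forward pass over a sequence.

    X: (T, n_in) input sequence; U, W, V: input-to-hidden,
    hidden-to-hidden, hidden-to-output weights; b, c: biases.
    """
    h = np.zeros(W.shape[0])                 # initial hidden state h^(0)
    hs, os = [], []
    for t in range(X.shape[0]):              # same function and parameters every step
        h = np.tanh(b + U @ X[t] + W @ h)    # new hidden state h^(t)
        os.append(c + V @ h)                 # output o^(t)
        hs.append(h)
    return np.array(hs), np.array(os)

# toy usage with random parameters (shapes are illustrative)
rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 3, 5, 2, 4
U = rng.normal(size=(n_hid, n_in))
W = rng.normal(size=(n_hid, n_hid))
V = rng.normal(size=(n_out, n_hid))
b, c = np.zeros(n_hid), np.zeros(n_out)
hs, os = rnn_forward(rng.normal(size=(T, n_in)), U, W, V, b, c)
print(hs.shape, os.shape)                    # (4, 5) (4, 2)
```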

  16. Recurrent neural networks: computational graph with input, state, output, label, and loss at each time step. Figure from Deep Learning, by Goodfellow, Bengio, and Courville

  17. Recurrent neural networks Math formula: Figure from Deep Learning, Goodfellow, Bengio, and Courville
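The formula referenced here lives in the slide's figure; as a hedged reconstruction in the notation of Goodfellow, Bengio, and Courville (assuming tanh hidden units and a softmax output layer), the per-step computation is:

```latex
\begin{aligned}
a^{(t)} &= b + W h^{(t-1)} + U x^{(t)}, &
h^{(t)} &= \tanh\big(a^{(t)}\big), \\
o^{(t)} &= c + V h^{(t)}, &
\hat{y}^{(t)} &= \operatorname{softmax}\big(o^{(t)}\big),
\end{aligned}
\qquad
L = \sum_{t} L^{(t)}\big(\hat{y}^{(t)}, y^{(t)}\big).
```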

  18. Advantage • Hidden state: a lossy summary of the past • Shared functions and parameters: greatly reduce model capacity, which is good for generalization • Explicitly use the prior knowledge that sequential data can be processed in the same way at different time steps (e.g., in NLP)

  19. Advantage • Hidden state: a lossy summary of the past • Shared functions and parameters: greatly reduce model capacity, which is good for generalization • Explicitly use the prior knowledge that sequential data can be processed in the same way at different time steps (e.g., in NLP) • Yet still powerful (actually universal): any function computable by a Turing machine can be computed by such a recurrent network of finite size (see, e.g., Siegelmann and Sontag (1995))

  20. Training RNN • Principle: unfold the computational graph, and use backpropagation • Called the back-propagation through time (BPTT) algorithm • Can then apply any general-purpose gradient-based technique

  21. Training RNN • Principle: unfold the computational graph, and use backpropagation • Called the back-propagation through time (BPTT) algorithm • Can then apply any general-purpose gradient-based technique • Conceptually: first compute the gradients of the internal nodes, then compute the gradients of the parameters
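As an illustration of the unfold-then-backpropagate principle, below is a minimal NumPy BPTT sketch for the tanh RNN sketched earlier, using a squared-error loss at every step purely for simplicity (the lecture's figures use softmax outputs); all names are assumptions, not the lecture's code.

```python
import numpy as np

def bptt(X, Y, U, W, V, b, c):
    """Gradients of sum_t 0.5*||o^(t) - Y[t]||^2 for a tanh RNN."""
    T = X.shape[0]
    hs, os = [np.zeros(W.shape[0])], []      # hs[0] is h^(0)
    for t in range(T):                       # forward pass: unfold the graph
        hs.append(np.tanh(b + U @ X[t] + W @ hs[-1]))
        os.append(c + V @ hs[-1])

    dU, dW, dV = np.zeros_like(U), np.zeros_like(W), np.zeros_like(V)
    db, dc = np.zeros_like(b), np.zeros_like(c)
    dh_rec = np.zeros(W.shape[0])            # gradient flowing back from step t+1
    for t in reversed(range(T)):             # internal nodes first, then parameters
        do = os[t] - Y[t]                    # dL/do^(t) for squared error
        dh = V.T @ do + dh_rec               # dL/dh^(t)
        da = (1.0 - hs[t + 1] ** 2) * dh     # back through tanh (hs[t+1] is h^(t))
        dV += np.outer(do, hs[t + 1]); dc += do
        dU += np.outer(da, X[t]);      db += da
        dW += np.outer(da, hs[t])            # hs[t] is h^(t-1)
        dh_rec = W.T @ da                    # contribution to dL/dh^(t-1)
    return dU, dW, dV, db, dc
```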

  22. Recurrent neural networks Math formula: Figure from Deep Learning, Goodfellow, Bengio, and Courville

  23. Recurrent neural networks Gradient at L^(t): (the total loss is the sum of the losses at the different time steps) Figure from Deep Learning, Goodfellow, Bengio, and Courville
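The expression itself is in the figure; reconstructed from the textbook's derivation, it is simply

```latex
L = \sum_{t} L^{(t)}, \qquad \frac{\partial L}{\partial L^{(t)}} = 1.
```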

  24. Recurrent neural networks Gradient at o^(t): Figure from Deep Learning, Goodfellow, Bengio, and Courville
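Reconstructed from the textbook, assuming softmax outputs with negative log-likelihood loss, the gradient at the output is

```latex
\big(\nabla_{o^{(t)}} L\big)_i = \frac{\partial L}{\partial o^{(t)}_i}
= \hat{y}^{(t)}_i - \mathbf{1}_{\,i = y^{(t)}}.
```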

  25. Recurrent neural networks Gradient at h^(τ): Figure from Deep Learning, Goodfellow, Bengio, and Courville
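At the final time step τ the hidden state only feeds the output o^(τ), so (again reconstructing the figure's formula from the textbook)

```latex
\nabla_{h^{(\tau)}} L = V^{\top} \nabla_{o^{(\tau)}} L.
```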

  26. Recurrent neural networks Gradient at h^(t): Figure from Deep Learning, Goodfellow, Bengio, and Courville
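For t < τ the hidden state feeds both o^(t) and h^(t+1); assuming tanh hidden units, the textbook's back-recursion is

```latex
\nabla_{h^{(t)}} L
= W^{\top} \operatorname{diag}\!\big(1 - (h^{(t+1)})^{2}\big)\, \nabla_{h^{(t+1)}} L
+ V^{\top} \nabla_{o^{(t)}} L.
```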

  27. Recurrent neural networks Gradient at the parameter W: Figure from Deep Learning, Goodfellow, Bengio, and Courville
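Once the gradients at the internal nodes are available, the gradient at the recurrent weight matrix sums contributions from every time step (reconstructed from the textbook, for tanh units):

```latex
\nabla_{W} L = \sum_{t}
\operatorname{diag}\!\big(1 - (h^{(t)})^{2}\big)\,
\big(\nabla_{h^{(t)}} L\big)\, h^{(t-1)\top}.
```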

  28. The problem of exploding/vanishing gradients • What happens to the magnitude of the gradients as we backpropagate through many layers? – If the weights are small, the gradients shrink exponentially. – If the weights are big, the gradients grow exponentially. • Typical feed-forward neural nets can cope with these exponential effects because they only have a few hidden layers. • In an RNN trained on long sequences (e.g., 100 time steps) the gradients can easily explode or vanish. – We can avoid this by initializing the weights very carefully. • Even with good initial weights, it is very hard to detect that the current target output depends on an input from many time steps ago. – So RNNs have difficulty dealing with long-range dependencies.
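A tiny numerical illustration of this point (my own sketch, not from the slides): in a linear scalar recurrence h_t = w·h_{t−1}, the gradient of h^(T) with respect to h^(0) is w^T, so backpropagating through many steps shrinks or blows up the gradient exponentially.

```python
# d h^(T) / d h^(0) = w ** T for the scalar recurrence h_t = w * h_{t-1}
for w in (0.9, 1.1):
    print(w, ["%.3g" % (w ** T) for T in (10, 50, 100)])
# 0.9 ['0.349', '0.00515', '2.66e-05']   -> gradient vanishes
# 1.1 ['2.59', '117', '1.38e+04']        -> gradient explodes
```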

  29. The Popular LSTM Cell • Gates (each weight matrix acts on the concatenation of x_t and h_{t−1}): f_t = σ(W_f [x_t; h_{t−1}] + b_f), and similarly for the input gate i_t and the output gate o_t with W_i and W_o • Cell state: c_t = f_t ⊗ c_{t−1} + i_t ⊗ tanh(W [x_t; h_{t−1}]) • Hidden state: h_t = o_t ⊗ tanh(c_t) (⊗ denotes element-wise multiplication) • Dashed lines indicate a time lag
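To make the gate equations concrete, here is a minimal NumPy sketch of a single LSTM step matching the formulas above (a bias is included on every gate and on the cell candidate as a common convention; all names and shapes are illustrative assumptions, not the lecture's code).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wf, Wi, Wo, Wc, bf, bi, bo, bc):
    """One LSTM step; each weight matrix acts on [x_t; h_prev]."""
    z = np.concatenate([x_t, h_prev])
    f_t = sigmoid(Wf @ z + bf)                        # forget gate
    i_t = sigmoid(Wi @ z + bi)                        # input gate
    o_t = sigmoid(Wo @ z + bo)                        # output gate
    c_t = f_t * c_prev + i_t * np.tanh(Wc @ z + bc)   # new cell state
    h_t = o_t * np.tanh(c_t)                          # new hidden state
    return h_t, c_t
```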

  30. Some Other Variants of RNN

  31. RNN • Use the same computational function and parameters across different time steps of the sequence • Each time step: takes the input entry and the previous hidden state to compute the output entry • Loss: typically computed at every time step • Many variants • Information about the past can be passed along in other forms • Output only at the end of the sequence

  32. Example: use the output at the previous step Figure from Deep Learning, Goodfellow, Bengio, and Courville

  33. Example: only output at the end Figure from Deep Learning, Goodfellow, Bengio, and Courville

  34. Bidirectional RNNs • Many applications: the output at time t may depend on the whole input sequence • Example in speech recognition: the correct interpretation of the current sound may depend on the next few phonemes, potentially even the next few words • Bidirectional RNNs are introduced to address this

  35. BiRNNs Figure from Deep Learning, Goodfellow, Bengio, and Courville
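A minimal sketch of the bidirectional idea, reusing the hypothetical rnn_forward helper from the earlier sketch: run one RNN left-to-right and a second, independent RNN right-to-left over the same input, then concatenate their hidden states so the representation at time t depends on both the past and the future.

```python
import numpy as np

def birnn_forward(X, fwd_params, bwd_params):
    """fwd_params / bwd_params: (U, W, V, b, c) for two independent RNNs."""
    h_f, _ = rnn_forward(X, *fwd_params)              # states from reading x^(1..T)
    h_b, _ = rnn_forward(X[::-1], *bwd_params)        # states from reading x^(T..1)
    return np.concatenate([h_f, h_b[::-1]], axis=1)   # (T, 2 * n_hidden)
```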

  36. Encoder-decoder RNNs • RNNs so far: can map a sequence to one vector, or to a sequence of the same length • What about mapping a sequence to a sequence of a different length? • Examples: speech recognition, machine translation, question answering, etc.

  37. Encoder-decoder (sequence-to-sequence) architecture. Figure from Deep Learning, Goodfellow, Bengio, and Courville
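A minimal sketch of the encoder-decoder idea under the same assumptions (my own illustration, not the book's or the lecture's code): the encoder RNN compresses the whole input sequence into a fixed-size context vector (its final hidden state), and a decoder RNN seeded with that context emits an output sequence whose length T_out need not match the input length.

```python
import numpy as np

def encoder_decoder(X, T_out, enc, dec):
    """enc/dec are dicts of illustrative parameters; shapes are assumptions."""
    # Encoder: read the whole input, keep only the final hidden state.
    h = np.zeros(enc["W"].shape[0])
    for t in range(X.shape[0]):
        h = np.tanh(enc["b"] + enc["U"] @ X[t] + enc["W"] @ h)
    context = h                                    # fixed-size summary of X

    # Decoder: start from the context and run for T_out steps.
    s = np.tanh(dec["W_init"] @ context)           # initialize decoder state
    outputs = []
    for _ in range(T_out):
        s = np.tanh(dec["b"] + dec["W"] @ s)       # (no input feeding, for brevity)
        outputs.append(dec["c"] + dec["V"] @ s)    # output o^(t)
    return np.array(outputs)                       # (T_out, n_out)
```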
