  1. Recurrent Neural Network
     Xiaogang Wang, xgwang@ee.cuhk.edu.hk
     February 26, 2019

  2. Outline
     1. Recurrent neural networks: recurrent neural networks, BP on RNN, variants of RNN
     2. Long Short-Term Memory recurrent networks: challenge of long-term dependency, combine short and long paths, long short-term memory net
     3. Applications

  3. Sequential data
     - Sequence of words in an English sentence
     - Acoustic features at successive time frames in speech recognition
     - Successive frames in video classification
     - Rainfall measurements on successive days in Hong Kong
     - Daily values of a currency exchange rate
     - Nucleotide base pairs in a strand of DNA
     Instead of making independent predictions on individual samples, assume dependency among samples and make a sequence of decisions for sequential samples.

  4. Modeling sequential data
     - Sample data sequences from a certain distribution $P(x_1, \ldots, x_T)$
     - Generate natural sentences to describe an image: $P(y_1, \ldots, y_T \mid I)$
     - Activity recognition from a video sequence: $P(y \mid x_1, \ldots, x_T)$

  5. Modeling sequential data
     - Speech recognition: $P(y_1, \ldots, y_T \mid x_1, \ldots, x_T)$
     - Object tracking: $P(y_1, \ldots, y_T \mid x_1, \ldots, x_T)$

  6. Modeling sequential data
     - Generate natural sentences to describe a video: $P(y_1, \ldots, y_{T'} \mid x_1, \ldots, x_T)$
     - Language translation: $P(y_1, \ldots, y_{T'} \mid x_1, \ldots, x_T)$

  7. Modeling sequential data
     - Use the chain rule to express the joint distribution for a sequence of observations:
       $$p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})$$
     - It is impractical to model the general dependence of future observations on all previous observations $p(x_t \mid x_{t-1}, \ldots, x_1)$: the complexity would grow without limit as the number of observations increases.
     - Recent observations are expected to be more informative than distant historical observations in predicting future values.
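A small worked instance of this factorization (my own illustration, not on the slides), for $T = 3$:

```latex
p(x_1, x_2, x_3) = p(x_1)\, p(x_2 \mid x_1)\, p(x_3 \mid x_1, x_2)
```

A first-order Markov model (next slide) truncates each conditioning set to the single most recent observation.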

  8. Markov models
     - Markov models assume dependence on the most recent observations.
     - First-order Markov model:
       $$p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_{t-1})$$
     - Second-order Markov model:
       $$p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_{t-1}, x_{t-2})$$

  9. Hidden Markov Model (HMM)
     - A classical way to model sequential data.
     - Sequence pairs $h_1, h_2, \ldots, h_T$ (hidden variables) and $x_1, x_2, \ldots, x_T$ (observations) are generated by the following process:
       - Pick $h_1$ at random from the distribution $P(h_1)$. Pick $x_1$ from the distribution $p(x_1 \mid h_1)$.
       - For $t = 2$ to $T$:
         - Choose $h_t$ at random from the distribution $p(h_t \mid h_{t-1})$
         - Choose $x_t$ at random from the distribution $p(x_t \mid h_t)$
     - The joint distribution is
       $$p(x_1, \ldots, x_T, h_1, \ldots, h_T, \theta) = P(h_1) \prod_{t=2}^{T} P(h_t \mid h_{t-1}) \prod_{t=1}^{T} p(x_t \mid h_t)$$
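A minimal sketch of the sampling process above (my own illustration, assuming a discrete HMM with initial distribution `pi`, transition matrix `A`, and emission matrix `B`):

```python
import numpy as np

def sample_hmm(pi, A, B, T, rng=np.random.default_rng()):
    """Sample (h_1..h_T, x_1..x_T) from a discrete HMM.

    pi: (K,)   initial state distribution P(h_1)
    A:  (K, K) transition matrix, A[i, j] = P(h_t = j | h_{t-1} = i)
    B:  (K, V) emission matrix,   B[i, v] = P(x_t = v | h_t = i)
    """
    h = np.empty(T, dtype=int)
    x = np.empty(T, dtype=int)
    h[0] = rng.choice(len(pi), p=pi)              # pick h_1 from P(h_1)
    x[0] = rng.choice(B.shape[1], p=B[h[0]])      # pick x_1 from p(x_1 | h_1)
    for t in range(1, T):
        h[t] = rng.choice(A.shape[1], p=A[h[t - 1]])  # h_t ~ p(h_t | h_{t-1})
        x[t] = rng.choice(B.shape[1], p=B[h[t]])      # x_t ~ p(x_t | h_t)
    return h, x
```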

  10. Recurrent neural networks (RNN)
     - While HMM is a generative model, RNN is a discriminative model.
     - Model a dynamic system driven by an external signal $x_t$:
       $$h_t = F_\theta(h_{t-1}, x_t)$$
     - $h_t$ contains information about the whole past sequence. The equation above implicitly defines a function which maps the whole past sequence $(x_t, \ldots, x_1)$ to the current state:
       $$h_t = G_t(x_t, \ldots, x_1)$$
     - Figure: left, the physical implementation of the RNN, seen as a circuit; the black square indicates a delay of one time step. Right, the same network seen as an unfolded flow graph, where each node is associated with one particular time instance.
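A generic sketch (my own, not from the slides) of unfolding this recurrence: the same step function, with the same parameters $\theta$, is reused at every time step, for sequences of any length.

```python
def unroll(step, h0, xs):
    """Unfold h_t = F_theta(h_{t-1}, x_t) over an input sequence xs.
    `step` is any function implementing F_theta; it is applied with the
    same parameters at every time step."""
    h, states = h0, []
    for x in xs:
        h = step(h, x)
        states.append(h)
    return states
```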

  11. Recurrent neural networks (RNN)
     - The summary is lossy, since it maps an arbitrary-length sequence $(x_t, \ldots, x_1)$ to a fixed-length vector $h_t$. Depending on the training criterion, $h_t$ keeps some important aspects of the past sequence.
     - Sharing parameters: the same weights are used for different instances of the artificial neurons at different time steps.
     - This shares a similar idea with CNNs: replacing a fully connected network with local connections and parameter sharing.
     - It allows the network to be applied to input sequences of different lengths and to predict sequences of different lengths.

  12. Recurrent neural networks (RNN)
     - Sharing parameters for any sequence length gives better generalization properties. If we had to define a different function $G_t$ for each possible sequence length, each with its own parameters, we would not get any generalization to sequences of a size not seen in the training set.
     - One would also need many more training examples, because a separate model would have to be trained for each sequence length.

  13. A vanilla RNN to predict sequences from input
     - Models $P(y_1, \ldots, y_T \mid x_1, \ldots, x_T)$.
     - Forward propagation equations, assuming hyperbolic tangent non-linearities in the hidden units and a softmax output for classification problems:
       $$h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$
       $$z_t = \mathrm{softmax}(W_{hz} h_t + b_z)$$
       $$p(y_t = c) = z_{t,c}$$
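A minimal NumPy sketch of these forward equations (my own illustration; the parameter names follow the slide's notation, and the `softmax` helper is mine):

```python
import numpy as np

def softmax(a):
    a = a - a.max()          # subtract max for numerical stability
    e = np.exp(a)
    return e / e.sum()

def rnn_forward(xs, W_xh, W_hh, W_hz, b_h, b_z, h0):
    """Forward pass of the vanilla RNN: returns hidden states h_1..h_T
    and output distributions z_1..z_T for an input sequence x_1..x_T."""
    h, hs, zs = h0, [], []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)   # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h)
        z = softmax(W_hz @ h + b_z)              # z_t = softmax(W_hz h_t + b_z)
        hs.append(h)
        zs.append(z)
    return hs, zs
```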

  14. Cost function
     - The total loss for a given input/target sequence pair $(x, y)$, measured in cross entropy:
       $$L(x, y) = \sum_t L_t = \sum_t -\log z_{t, y_t}$$
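Continuing the sketch above, the sequence loss could be computed as follows (an illustration, not the slides' code; `ys` holds the target class indices $y_t$):

```python
def sequence_loss(zs, ys):
    """Cross entropy over the sequence: L = sum_t -log z_{t, y_t}."""
    return sum(-np.log(z[y]) for z, y in zip(zs, ys))
```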

  15. Backpropagation on RNN
     - Review of BP on a flow graph.

  16. Gradients on $W_{hz}$ and $b_z$
     $$\frac{\partial L}{\partial z_t} = \frac{\partial L}{\partial L_t} \frac{\partial L_t}{\partial z_t} = \frac{\partial L_t}{\partial z_t}, \quad \text{since } \frac{\partial L}{\partial L_t} = 1$$
     $$\frac{\partial L}{\partial W_{hz}} = \sum_t \frac{\partial L}{\partial z_t} \frac{\partial z_t}{\partial W_{hz}}, \qquad \frac{\partial L}{\partial b_z} = \sum_t \frac{\partial L}{\partial z_t} \frac{\partial z_t}{\partial b_z}$$

  17. Gradients on $W_{hh}$ and $W_{xh}$
     $$\frac{\partial L}{\partial W_{hh}} = \sum_t \frac{\partial L}{\partial h_t} \frac{\partial h_t}{\partial W_{hh}}$$
     $$\frac{\partial L}{\partial h_t} = \frac{\partial L}{\partial h_{t+1}} \frac{\partial h_{t+1}}{\partial h_t} + \frac{\partial L}{\partial z_t} \frac{\partial z_t}{\partial h_t}$$
     The gradient on $W_{xh}$ is obtained in the same way, with $\partial h_t / \partial W_{xh}$ in place of $\partial h_t / \partial W_{hh}$.
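A NumPy sketch of backpropagation through time implementing these recurrences (my own illustration, continuing the forward-pass sketch above and assuming the softmax output and cross-entropy loss of slides 13 and 14):

```python
def rnn_backward(xs, ys, hs, zs, W_xh, W_hh, W_hz, h0):
    """BPTT for the vanilla RNN with softmax output and cross-entropy loss.
    Returns gradients of L = sum_t -log z_{t, y_t} w.r.t. the parameters."""
    dW_xh, dW_hh, dW_hz = np.zeros_like(W_xh), np.zeros_like(W_hh), np.zeros_like(W_hz)
    db_h, db_z = np.zeros(W_hh.shape[0]), np.zeros(W_hz.shape[0])
    dh_next = np.zeros(W_hh.shape[0])        # contribution of dL/dh_{t+1}, zero at t = T
    for t in reversed(range(len(xs))):
        dz = zs[t].copy()
        dz[ys[t]] -= 1.0                     # dL_t w.r.t. the pre-softmax activation
        dW_hz += np.outer(dz, hs[t])         # dL/dW_hz accumulates over t
        db_z += dz
        dh = W_hz.T @ dz + dh_next           # dL/dh_t = output path + path through h_{t+1}
        da = (1.0 - hs[t] ** 2) * dh         # backprop through tanh
        dW_xh += np.outer(da, xs[t])
        dW_hh += np.outer(da, hs[t - 1] if t > 0 else h0)
        db_h += da
        dh_next = W_hh.T @ da                # pass gradient back to h_{t-1}
    return dW_xh, dW_hh, dW_hz, db_h, db_z
```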

  18. Predict a single output at the end of the sequence
     - Such a network can be used to summarize a sequence and produce a fixed-size representation used as input for further processing.
     - There might be a target right at the end, or the gradient on the output $z_t$ can be obtained by backpropagation from further downstream modules.
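A small sketch (my own) of using only the final hidden state as the fixed-size summary of the sequence:

```python
def encode_sequence(xs, W_xh, W_hh, b_h, h0):
    """Run the recurrence and keep only the final hidden state h_T,
    e.g. to feed it to a downstream classifier."""
    h = h0
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h
```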

  19. Network with output recurrence
     - Memory comes from the prediction of the previous target, which limits the network's expressive power but makes it easier to train.

  20. Generative RNN modeling $P(x_1, \ldots, x_T)$
     - It can generate sequences from this distribution.
     - At the training stage, each $x_t$ of the observed sequence serves both as input (for the current time step) and as target (for the previous time step).
     - The output $z_t$ encodes the parameters of a conditional distribution $P(x_{t+1} \mid x_1, \ldots, x_t) = P(x_{t+1} \mid z_t)$ for $x_{t+1}$ given the past sequence $x_1, \ldots, x_t$.
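A tiny sketch (my own, not from the slides) of how each observed $x_t$ plays both roles during training:

```python
def inputs_and_targets(x):
    """x: observed sequence x_1..x_T (list of tokens).
    x_t is the input at step t and the target of the prediction made at
    step t-1, so inputs are x_1..x_{T-1} and targets are x_2..x_T."""
    return x[:-1], x[1:]
```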

  21. Generative RNN modeling $P(x_1, \ldots, x_T)$
     - Cost function: the negative log-likelihood of $x$, $L = \sum_t L_t$, with
       $$P(x) = P(x_1, \ldots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_{t-1}, \ldots, x_1)$$
       $$L_t = -\log P(x_t \mid x_{t-1}, \ldots, x_1)$$
     - In generative mode, $x_{t+1}$ is sampled from the conditional distribution $P(x_{t+1} \mid x_1, \ldots, x_t) = P(x_{t+1} \mid z_t)$ (dashed arrows in the figure), and the generated sample $x_{t+1}$ is then fed back as input for computing the next state $h_{t+1}$.

  22. Generative RNN modeling $P(x_1, \ldots, x_T)$
     - If the RNN is used to generate sequences, one must also incorporate in the output information that allows the model to stochastically decide when to stop generating new output elements.
     - When the output is a symbol taken from a vocabulary, one can add a special symbol corresponding to the end of a sequence.
     - One could also directly model the length $T$ of the sequence through some parametric distribution, decomposing $P(x_1, \ldots, x_T)$ as
       $$P(x_1, \ldots, x_T) = P(x_1, \ldots, x_T \mid T)\, P(T)$$
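A sketch of generative-mode sampling with a special end-of-sequence symbol (my own illustration; `x_start`, `eos_id`, and `max_len` are assumed conventions, and `softmax` is the helper from the earlier sketch):

```python
def generate(W_xh, W_hh, W_hz, b_h, b_z, h0, x_start, eos_id, max_len=100,
             rng=np.random.default_rng()):
    """Sample x_{t+1} from P(x_{t+1} | z_t) and feed it back as the next
    input, stopping when the end-of-sequence symbol is drawn."""
    h, x, out = h0, x_start, []          # x_start: one-hot vector of a start symbol
    for _ in range(max_len):
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        z = softmax(W_hz @ h + b_z)
        idx = rng.choice(len(z), p=z)    # sample the next symbol from z_t
        if idx == eos_id:                # special end-of-sequence symbol
            break
        out.append(idx)
        x = np.zeros_like(x)             # feed the generated sample back
        x[idx] = 1.0                     # as a one-hot input
    return out
```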

  23. RNNs to represent conditional distributions $P(y \mid x)$
     - If $x$ is a fixed-size vector, we can simply make it an extra input of the RNN that generates the $y$ sequence.
     - Some common ways of providing the extra input (see the sketch after this list):
       - as an extra input at each time step, or
       - as the initial state $h_0$, or
       - both
     - Example: generate a caption for an image.
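A minimal sketch of the first option, conditioning on a fixed-size image feature at every time step (my own illustration; `W_ih` and `img_feat` are hypothetical names, not from the slides):

```python
def caption_step(img_feat, x, h, W_xh, W_hh, W_ih, W_hz, b_h, b_z):
    """One decoder step conditioned on a fixed-size vector: the extra input
    enters the hidden update at every step via W_ih. Alternatively (or in
    addition), h0 could be initialized from img_feat."""
    h = np.tanh(W_xh @ x + W_hh @ h + W_ih @ img_feat + b_h)
    z = softmax(W_hz @ h + b_z)
    return h, z
```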
