Lecture 5: Neural models for NLP


  1. CS546: Machine Learning in NLP (Spring 2018) http://courses.engr.illinois.edu/cs546/ Lecture 5 Neural models for NLP Julia Hockenmaier juliahmr@illinois.edu 3324 Siebel Center Office hours: Tue/Thu 2pm-3pm

  2. Logistics

  3. Schedule
  Week 1–Week 4: Lectures
  Paper presentations (Lectures 9–28):
    1. Word embeddings
    2. Language models
    3. More on RNNs for NLP
    4. CNNs for NLP
    5. Multitask learning for NLP
    6. Syntactic parsing
    7. Information extraction
    8. Semantic parsing
    9. Coreference resolution
    10. Machine translation I
    11. Machine translation II
    12. Generation
    13. Discourse
    14. Dialog I
    15. Dialog II
    16. Multimodal NLP
    17. Question answering
    18. Entailment recognition
    19. Reading comprehension
    20. Knowledge graph modeling

  4. Machine learning fundamentals

  5. Learning scenarios
  Supervised learning: learning to predict labels/structures from correctly annotated data
  Unsupervised learning: learning to find hidden structure (e.g. clusters) in unannotated input data
  Semi-supervised learning: learning to predict labels/structures from (a little) annotated and (a lot of) unannotated data
  Reinforcement learning: learning to act through feedback for actions (rewards/punishments) from the environment

  6. Input x ∈ X → System y = f(x) → Output y ∈ Y
  An item x drawn from an input space X; an item y drawn from an output space Y.
  In (supervised) machine learning, we deal with systems whose f(x) is learned from (labeled) examples.

  7. Supervised learning
  Input: an item x drawn from an instance space X. Output: an item y drawn from a label space Y.
  Target function: y = f(x). Learned model: y = g(x).
  You often see f̂(x) instead of g(x), but PowerPoint can’t really typeset that, so g(x) will have to do.

  8. Supervised learning
  Regression: Y is continuous
  Classification: Y is discrete (and finite)
    Binary classification: Y = {0,1} or {+1, -1}
    Multiclass classification: Y = {1,…,K} (with K > 2)
  Structured prediction: Y consists of structured objects; Y often has some sort of compositional structure and may be infinite

  9. Supervised learning: Training
  Labeled training data D_train = {(x_1, y_1), (x_2, y_2), …, (x_N, y_N)} → Learning algorithm → Learned model g(x)
  Give the learner the examples in D_train; the learner returns a model g(x).

  10. Supervised learning: Testing
  Labeled test data D_test = {(x’_1, y’_1), (x’_2, y’_2), …, (x’_M, y’_M)}
  Reserve some labeled data for testing.

  11. Supervised learning: Testing
  Split the labeled test data D_test = {(x’_1, y’_1), …, (x’_M, y’_M)} into the raw test data X_test = (x’_1, x’_2, …, x’_M) and the test labels Y_test = (y’_1, y’_2, …, y’_M).

  12. Supervised learning: Testing
  Apply the model to the raw test data: the learned model g(x) maps X_test = (x’_1, …, x’_M) to the predicted labels g(X_test) = (g(x’_1), …, g(x’_M)); the test labels Y_test = (y’_1, …, y’_M) are held out.

  13. Supervised learning: Testing
  Evaluate the model by comparing the predicted labels g(X_test) = (g(x’_1), …, g(x’_M)) against the test labels Y_test = (y’_1, …, y’_M).
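  A minimal sketch of this evaluation step in Python (the model g, the list X_test of test items, and the list Y_test of gold labels are assumed to be given; accuracy is used as the comparison metric here, though the slides do not commit to a particular metric):

    def evaluate(g, X_test, Y_test):
        """Compare the predicted labels g(x') against the held-out test labels y'."""
        predictions = [g(x) for x in X_test]   # predicted labels g(X_test)
        correct = sum(1 for y_hat, y in zip(predictions, Y_test) if y_hat == y)
        return correct / len(Y_test)           # fraction of test items labeled correctly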

  14. Design decisions
  – What data do you use to train/test your system? Do you have enough training data? How noisy is it?
  – What evaluation metrics do you use to test your system? Do they correlate with what you want to measure?
  – What features do you use to represent your data X? (Feature engineering used to be really important.)
  – What kind of a model do you want to use? What network architecture do you want to use?
  – What learning algorithm do you use to train your system? How do you set the hyperparameters of the algorithm?

  15. Linear classifiers: f(x) = w_0 + w·x
  [Figure: the hyperplane f(x) = 0 separates the region f(x) > 0 from the region f(x) < 0 in the (x_1, x_2) plane.]
  Linear classifiers are defined over vector spaces.
  Every hypothesis f(x) is a hyperplane: f(x) = w_0 + w·x
  f(x) is also called the decision boundary:
  – Assign ŷ = +1 to all x where f(x) > 0
  – Assign ŷ = -1 to all x where f(x) < 0
  i.e. ŷ = sgn(f(x))
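  As a sketch of this decision rule (numpy is used for the dot product; the weight vector w, bias w_0, and input vector x are assumed to be given — this is illustrative, not code from the lecture):

    import numpy as np

    def predict(w0, w, x):
        """Linear classifier: ŷ = sgn(f(x)) with f(x) = w_0 + w·x."""
        f = w0 + np.dot(w, x)       # f(x) = 0 is the decision boundary
        return 1 if f > 0 else -1   # ŷ = +1 if f(x) > 0, ŷ = -1 otherwise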

  16. Learning a linear classifier
  [Figure: training data plotted in the sample space X = R², with points labeled y_i = +1 and y_i = -1, and a decision boundary f(x) = 0 separating them.]
  Input: labeled training data D = {(x_1, y_1), …, (x_D, y_D)}
  Output: a decision boundary f(x) = 0 that separates the training data, i.e. y_i·f(x_i) > 0 for all i.

  17. Which model should we pick?
  We need a metric (aka an objective function).
  We would like to minimize the probability of misclassifying unseen examples, but we can’t measure that probability. Instead: minimize the number of misclassified training examples.

  18. Which model should we pick?
  We need a more specific metric: there may be many models that are consistent with the training data. Loss functions provide such metrics.

  19. y·f(x) > 0: Correct classification
  [Figure: the hyperplane f(x) = 0 separating the regions f(x) > 0 and f(x) < 0.]
  An example (x, y) is correctly classified by f(x) if and only if y·f(x) > 0:
  Case 1 (y = +1 = ŷ): f(x) > 0 ⇒ y·f(x) > 0
  Case 2 (y = -1 = ŷ): f(x) < 0 ⇒ y·f(x) > 0
  Case 3 (y = +1 ≠ ŷ = -1): f(x) < 0 ⇒ y·f(x) < 0
  Case 4 (y = -1 ≠ ŷ = +1): f(x) > 0 ⇒ y·f(x) < 0

  20. Loss functions for classification
  Loss = what penalty do we incur if we misclassify x?
  L(y, f(x)) is the loss (aka cost) of classifier f on example x when the true label of x is y.
  We assign the label ŷ = sgn(f(x)) to x.
  Plots of L(y, f(x)): the x-axis is typically y·f(x).
  Today: 0-1 loss and square loss (more loss functions later).

  21. 0-1 Loss
  L(y, f(x)) = 0 iff y = ŷ, and 1 iff y ≠ ŷ
  Equivalently, as a function of the margin:
  L(y·f(x)) = 0 iff y·f(x) > 0 (correctly classified), and 1 iff y·f(x) < 0 (misclassified)

  22. Square Loss: (y − f(x))²
  L(y, f(x)) = (y − f(x))²
  Note: L(-1, f(x)) = (-1 − f(x))² = (1 + f(x))² = L(1, -f(x))
  (the loss when y = -1 [red curve] is the mirror image of the loss when y = +1 [blue curve])
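  Both losses can be written down directly; a small sketch, assuming labels y ∈ {+1, -1} and a real-valued classifier score f(x) passed in as fx:

    def zero_one_loss(y, fx):
        """0-1 loss: 0 if y·f(x) > 0 (correctly classified), 1 otherwise."""
        return 0.0 if y * fx > 0 else 1.0

    def square_loss(y, fx):
        """Square loss (y − f(x))²; note square_loss(-1, fx) == square_loss(1, -fx)."""
        return (y - fx) ** 2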

  23. The square loss is a convex upper bound on the 0-1 loss.

  24. Batch learning: Gradient Descent for Least Mean Squares (LMS)

  25. Gradient Descent
  Iterative batch learning algorithm:
  – The learner updates the hypothesis based on the entire training data.
  – The learner has to go multiple times over the training data.
  Goal: minimize the training error/loss.
  – At each step: move w in the direction of steepest descent along the error/loss surface.

  26. Gradient Descent
  Error(w): error of w on the training data; w_i: weight vector at iteration i
  [Figure: the error surface Error(w) plotted over w, with successive iterates w_1, w_2, w_3, w_4 moving downhill toward the minimum.]

  27. Least Mean Square Error
  Err(w) = ½ ∑_{d ∈ D} (y_d − ŷ_d)²
  LMS error: the sum of the square loss over all training items (multiplied by ½ for convenience).
  D is fixed, so there is no need to divide by its size.
  Goal of learning: find w* = argmin_w Err(w)
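  A minimal numpy sketch of this objective, assuming the training data is given as a matrix X with one row x_d per item, a label vector y, and f(x_d) = w·x_d (these variable names are illustrative):

    import numpy as np

    def lms_error(w, X, y):
        """Err(w) = ½ ∑_{d ∈ D} (y_d − w·x_d)²."""
        residuals = y - X @ w               # y_d − f(x_d) for every training item d
        return 0.5 * np.sum(residuals ** 2)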

  28. Iterative batch learning
  Initialization: initialize w_0 (the initial weight vector)
  For each iteration i = 0…T:
  – Determine by how much to change w based on the entire data set D: Δw = computeDelta(D, w_i)
  – Update w: w_{i+1} = update(w_i, Δw)

  29. Gradient Descent: Update
  1. Compute ∇Err(w_i), the gradient of the training error at w_i. This requires going over the entire training data:
     ∇Err(w) = ( ∂Err(w)/∂w_0, ∂Err(w)/∂w_1, …, ∂Err(w)/∂w_N )ᵀ
  2. Update w: w_{i+1} = w_i − α·∇Err(w_i), where α > 0 is the learning rate.

  30. What’s a gradient?
  ∇Err(w) = ( ∂Err(w)/∂w_0, ∂Err(w)/∂w_1, …, ∂Err(w)/∂w_N )ᵀ
  The gradient is a vector of partial derivatives.
  It indicates the direction of steepest increase in Err(w); hence the minus sign in the update rule: w_{i+1} = w_i − α·∇Err(w_i)

  31. Computing ∇Err(w)
  For each component w_i of w:
  ∂Err(w)/∂w_i = ∂/∂w_i [ ½ ∑_{d ∈ D} (y_d − f(x_d))² ]
              = ½ ∑_{d ∈ D} ∂/∂w_i (y_d − f(x_d))²
              = ½ ∑_{d ∈ D} 2·(y_d − f(x_d)) · ∂/∂w_i (y_d − w·x_d)
              = − ∑_{d ∈ D} (y_d − f(x_d))·x_di
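  Under the same assumptions as before (design matrix X with rows x_d, label vector y), the last line of this derivation becomes a single vectorized expression; this is a sketch, not the lecture's reference implementation:

    import numpy as np

    def lms_gradient(w, X, y):
        """∇Err(w): component i is −∑_{d ∈ D} (y_d − w·x_d)·x_di."""
        residuals = y - X @ w     # y_d − f(x_d)
        return -X.T @ residuals   # one partial derivative per weight component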

  32. Gradient descent (batch)
  Initialize w_0 randomly
  for i = 0…T:
      Δw = (0, …, 0)
      for every training item d = 1…D:
          f(x_d) = w_i · x_d
          for every component j = 0…N of w:
              Δw_j += α·(y_d − f(x_d))·x_dj
      w_{i+1} = w_i + Δw
  return w_{i+1} when it has converged
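  A runnable numpy version of this loop, under the same assumptions as above (X is a design matrix whose columns include a constant feature so that w_0 is learned like any other component; the learning rate, iteration cap, and convergence test on ‖Δw‖ are illustrative choices, not taken from the slide):

    import numpy as np

    def batch_gradient_descent(X, y, alpha=0.01, max_iters=1000, tol=1e-6):
        """Batch LMS gradient descent: Δw_j = α ∑_d (y_d − w·x_d)·x_dj."""
        rng = np.random.default_rng(0)
        w = rng.normal(size=X.shape[1])                  # initialize w_0 randomly
        for _ in range(max_iters):                       # for i = 0…T
            predictions = X @ w                          # f(x_d) = w·x_d for every item d
            delta_w = alpha * (X.T @ (y - predictions))  # accumulate α·(y_d − f(x_d))·x_d over D
            w = w + delta_w                              # w_{i+1} = w_i + Δw
            if np.linalg.norm(delta_w) < tol:            # stop once the updates have (nearly) converged
                break
        return w

  Under these assumptions, calling batch_gradient_descent(X, y) returns a weight vector that approximately minimizes Err(w) on the training data, matching the batch update rule on the next slide.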

  33. The batch update rule for each component of w:
  Δw_i = α ∑_{d=1…D} (y_d − w·x_d)·x_di
  Implementing gradient descent: as you go through the training data, you can just accumulate the change in each component w_i of w.
