

  1. Word2Vec Michael Collins, Columbia University

  2. Motivation
     ◮ We can easily collect very large amounts of unlabeled text data
     ◮ Can we learn useful representations (e.g., word embeddings) from unlabeled data?

  3. Bigrams from Unlabeled Data
     ◮ Given a corpus, extract a training set {x^(i), y^(i)} for i = 1 … n, where each x^(i) ∈ V, y^(i) ∈ V, and V is the vocabulary
     ◮ For example, given the sentence
          Hispaniola quickly became an important base from which Spain expanded its empire into the rest of the Western Hemisphere .
       and a window size of ±3, for x = base we get the pairs
          (base, became), (base, an), (base, important), (base, from), (base, which), (base, Spain)
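The window extraction described on this slide can be sketched in a few lines of Python; the function name extract_pairs and the window argument are illustrative, not part of the original slides.

```python
# Sketch: extract (x, y) training pairs from a tokenized corpus
# using a symmetric context window (here +/- 3 words).

def extract_pairs(tokens, window=3):
    """Return (center, context) pairs for every position in the corpus."""
    pairs = []
    for i, x in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((x, tokens[j]))
    return pairs

sentence = ("Hispaniola quickly became an important base from which Spain "
            "expanded its empire into the rest of the Western Hemisphere .")
pairs = extract_pairs(sentence.split())
print([p for p in pairs if p[0] == "base"])
# [('base', 'became'), ('base', 'an'), ('base', 'important'),
#  ('base', 'from'), ('base', 'which'), ('base', 'Spain')]
```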

  4. Learning Word Embeddings
     ◮ Given a corpus, extract a training set {x^(i), y^(i)} for i = 1 … n, where each x^(i) ∈ V, y^(i) ∈ V, and V is the vocabulary
     ◮ For each word w ∈ V, define word embeddings θ′(w) ∈ ℝ^d and θ(w) ∈ ℝ^d
     ◮ Define Θ′, Θ to be the two matrices of embedding parameters
     ◮ Can then define
          p(y^(i) | x^(i); Θ, Θ′) = exp{θ′(x^(i)) · θ(y^(i))} / Z(x^(i); Θ, Θ′)
       where
          Z(x^(i); Θ, Θ′) = Σ_{y ∈ V} exp{θ′(x^(i)) · θ(y)}
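A minimal numpy sketch of the model defined on this slide, assuming small random d-dimensional embeddings; the names Theta, Theta_prime, and conditional_prob are illustrative, not from the slides.

```python
import numpy as np

# Two embedding matrices, each |V| x d: row w of Theta is theta(w),
# row w of Theta_prime is theta'(w). p(y | x) is a softmax over V.

rng = np.random.default_rng(0)
V, d = 10_000, 100                                 # vocabulary size, embedding dimension
Theta = rng.normal(scale=0.1, size=(V, d))         # theta(w) for each word w
Theta_prime = rng.normal(scale=0.1, size=(V, d))   # theta'(w) for each word w

def conditional_prob(x, y):
    """p(y | x; Theta, Theta') = exp{theta'(x) . theta(y)} / Z(x; Theta, Theta')."""
    scores = Theta @ Theta_prime[x]          # theta'(x) . theta(y') for every y' in V
    scores -= scores.max()                   # shift for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores[y] / exp_scores.sum()  # divide by Z(x; Theta, Theta')

print(conditional_prob(x=5, y=42))           # x, y are word indices into V
```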

  5. Learning Word Embeddings (Continued)
     ◮ Can define
          p(y^(i) | x^(i); Θ, Θ′) = exp{θ′(x^(i)) · θ(y^(i))} / Z(x^(i); Θ, Θ′)
       where
          Z(x^(i); Θ, Θ′) = Σ_{y ∈ V} exp{θ′(x^(i)) · θ(y)}
     ◮ A first objective function that can be maximized using stochastic gradient:
          L(Θ, Θ′) = Σ_{i=1}^n log p(y^(i) | x^(i); Θ, Θ′)
                   = Σ_{i=1}^n [ θ′(x^(i)) · θ(y^(i)) − log Σ_{y ∈ V} exp{θ′(x^(i)) · θ(y)} ]
       where the second term, log Σ_{y ∈ V} exp{θ′(x^(i)) · θ(y)}, is Expensive!
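A numpy sketch of this first objective, reusing the same toy setup as above; it makes the expensive part explicit, since every term needs a log-sum over the entire vocabulary. The variable names and the toy index pairs are illustrative assumptions.

```python
import numpy as np

# Full-softmax log-likelihood L(Theta, Theta') over a toy training set of
# (x^(i), y^(i)) word-index pairs. The log Z term touches all |V| words.

rng = np.random.default_rng(0)
V, d = 10_000, 100
Theta = rng.normal(scale=0.1, size=(V, d))
Theta_prime = rng.normal(scale=0.1, size=(V, d))
pairs = [(5, 42), (5, 17), (99, 3)]              # toy (x^(i), y^(i)) pairs

def log_likelihood(pairs):
    total = 0.0
    for x, y in pairs:
        scores = Theta @ Theta_prime[x]          # theta'(x) . theta(y') for every y'
        m = scores.max()                         # for a stable log-sum-exp
        log_Z = m + np.log(np.exp(scores - m).sum())   # log Z(x; Theta, Theta')
        total += scores[y] - log_Z               # theta'(x).theta(y) - log Z
    return total

print(log_likelihood(pairs))   # maximize with stochastic gradient ascent
```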

  6. An Alternative: Negative Sampling
     ◮ Given a corpus, extract a training set {x^(i), y^(i)} for i = 1 … n, where each x^(i) ∈ V, y^(i) ∈ V, and V is the vocabulary
     ◮ In addition, for each i sample y^(i,k) for k = 1 … K from a “noise” distribution p_n(y). E.g., p_n(y) is the unigram distribution over words y
     ◮ A new loss function:
          L(Θ′, Θ) = Σ_{i=1}^n log [ exp{θ′(x^(i)) · θ(y^(i))} / (1 + exp{θ′(x^(i)) · θ(y^(i))}) ]
                   + Σ_{i=1}^n Σ_{k=1}^K log [ 1 / (1 + exp{θ′(x^(i)) · θ(y^(i,k))}) ]
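A numpy sketch of the negative-sampling loss, written with the sigmoid σ(u) = exp(u)/(1 + exp(u)) = 1/(1 + exp(−u)); the uniform stand-in for the unigram noise distribution, the toy pairs, and the variable names are illustrative assumptions, not from the slides.

```python
import numpy as np

# Negative-sampling objective: one "positive" term per training pair plus
# K "noise" terms drawn from p_n(y). Each pair touches only K + 1 words,
# so there is no sum over the whole vocabulary.

rng = np.random.default_rng(0)
V, d, K = 10_000, 100, 5
Theta = rng.normal(scale=0.1, size=(V, d))
Theta_prime = rng.normal(scale=0.1, size=(V, d))
unigram = np.ones(V) / V                  # stand-in for the unigram distribution p_n(y)
pairs = [(5, 42), (5, 17), (99, 3)]       # toy (x^(i), y^(i)) word-index pairs

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))       # exp(u) / (1 + exp(u))

def negative_sampling_loss(pairs):
    total = 0.0
    for x, y in pairs:
        # positive term: log [ exp{theta'(x).theta(y)} / (1 + exp{theta'(x).theta(y)}) ]
        total += np.log(sigmoid(Theta_prime[x] @ Theta[y]))
        # K noise terms: log [ 1 / (1 + exp{theta'(x).theta(y_k)}) ]
        noise = rng.choice(V, size=K, p=unigram)
        for y_k in noise:
            total += np.log(1.0 - sigmoid(Theta_prime[x] @ Theta[y_k]))
    return total

print(negative_sampling_loss(pairs))      # maximize with stochastic gradient
```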
