
4CSLL5 IBM Translation Models
Martin Emms
October 29, 2020



1. 4CSLL5 IBM Translation Models. Martin Emms. October 29, 2020.

2. Parameter learning (efficient); How to sum alignments efficiently; Efficient EM via p((j, i) ∈ a | o, s)

3. Avoiding Exponential Cost

4. Outline: Parameter learning (efficient); How to sum alignments efficiently; Efficient EM via p((j, i) ∈ a | o, s)

5-10. But what about exponential cost?
◮ the learnability of translation probabilities in an unsupervised fashion, from just a corpus of sentence pairs, is a remarkable thing
◮ however, as we have formulated it, each possible alignment has to be considered in turn, and each contributes increments to expected counts
◮ it was already noted that the number of possible alignments is $(\ell_s + 1)^{\ell_o}$, i.e. exponential in the length of $o$. For $\ell_s + 1 = \ell_o = 10$, this is $10^{10}$, or 10,000 million (see the enumeration sketch after this list)
◮ so unless a way can be found to make the EM process on this model much more efficient, its learnability in principle would just be an interesting curiosity
◮ it turns out that by studying a little more closely the formula where alignments are summed over, and doing some conversions of sums-of-products to products-of-sums, it is indeed possible to make the EM process on this model much more efficient.
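To make the cost concrete, here is a minimal Python sketch (names such as all_alignments are chosen here for illustration, not taken from the slides) that enumerates every alignment the brute-force E-step would have to visit; the number of iterations is exactly $(\ell_s + 1)^{\ell_o}$.

```python
from itertools import product

def all_alignments(len_o, len_s):
    """Yield every alignment: a[j] in {0, ..., len_s} is the source
    position (0 = NULL) that observed word j is aligned to."""
    # one choice out of (len_s + 1) positions for each of the len_o words,
    # so (len_s + 1) ** len_o alignments altogether
    return product(range(len_s + 1), repeat=len_o)

# small example: 4 observed words, 3 source words -> (3 + 1) ** 4 = 256 alignments
print(sum(1 for _ in all_alignments(4, 3)))
# for len_s + 1 = len_o = 10 this would be 10 ** 10 -- far too many to enumerate
```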

11-15. Summing over alignments
Looking at the brute-force EM algorithm, we need to calculate p(a | o, s), which we call $\gamma_d(a)$. It is fairly easy to see that this is²

$$\gamma_d(a) \;=\; \frac{\prod_j p(o_j \mid s_{a(j)})}{\sum_{a'} \prod_j p(o_j \mid s_{a'(j)})} \qquad (11)$$

◮ The numerator is a product.
◮ It turns out the denominator can also be turned into a product of sums.

² in the formula for $p(\langle o, a, \ell_o, s \rangle)$ everything except the translation probabilities is going to cancel out when you take ratios . . .
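As a concrete reading of equation (11), here is a minimal Python sketch, assuming the translation probabilities are stored in a dictionary t keyed by (observed word, source word) pairs; that representation and the function name are illustrative choices, not from the slides. Note the denominator still enumerates all $(I+1)^J$ alignments, which is exactly the exponential cost just discussed.

```python
from itertools import product
from math import prod

def gamma_d(a, o, s, t):
    """Posterior p(a | o, s) of one alignment a, as in equation (11).
    s[0] is taken to be the NULL word; a[j] indexes into s for observed
    word o[j]; t[(x, y)] holds the translation probability t(x | y)."""
    numer = prod(t[(o[j], s[a[j]])] for j in range(len(o)))
    denom = sum(prod(t[(o[j], s[a2[j]])] for j in range(len(o)))
                for a2 in product(range(len(s)), repeat=len(o)))
    return numer / denom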

16-20. Summing over alignments (contd)
Each j can be aligned to any i between 0 and I, hence

$$\sum_a \prod_{j=1}^{J} t(o_j \mid s_{a(j)}) \;=\; \sum_{a(1)=0}^{I} \cdots \sum_{a(J)=0}^{I} \; \prod_{j=1}^{J} t(o_j \mid s_{a(j)})$$

$$=\; \sum_{a(1)=0}^{I} \cdots \sum_{a(J)=0}^{I} \; \big[\, t(o_1 \mid s_{a(1)}) \cdots t(o_J \mid s_{a(J)}) \,\big]$$

Each $\sum_{a(j)=0}^{I}(\cdot)$ affects just one $t(o_j \mid s_{a(j)})$ term, and this means we can use a sum-of-products to product-of-sums conversion, hence

$$=\; \prod_{j=1}^{J} \Big[\, \sum_{a(j)=0}^{I} t(o_j \mid s_{a(j)}) \,\Big] \;=\; \prod_{j=1}^{J} \Big[\, \sum_{i=0}^{I} t(o_j \mid s_i) \,\Big]$$
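The product-of-sums form is what makes the computation cheap: roughly $J \cdot (I+1)$ multiplications and additions instead of $(I+1)^J$ terms. A minimal sketch, using the same illustrative t / o / s representation as in the sketch above:

```python
def sum_over_alignments(o, s, t):
    """Sum over all alignments of prod_j t(o[j] | s[a[j]]), computed as the
    product over j of the sum over i of t(o[j] | s[i]) -- no enumeration."""
    total = 1.0
    for o_j in o:                                   # J factors ...
        total *= sum(t[(o_j, s_i)] for s_i in s)    # ... each a sum of I + 1 terms
    return total
```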

21-22. Pause: did you believe that?
The key step above was a conversion from a sum-of-products to a product-of-sums. For the case of o and s both having length 2 (so $I = 2$, with $s_0$ the NULL word) we can verify it relatively easily by brute force:

$$\sum_{a(1)=0}^{2} \sum_{a(2)=0}^{2} \prod_{j=1}^{2} t(o_j \mid s_{a(j)})$$
$$=\; t(o_1 \mid s_0)\,t(o_2 \mid s_0) + t(o_1 \mid s_0)\,t(o_2 \mid s_1) + t(o_1 \mid s_0)\,t(o_2 \mid s_2)$$
$$+\; t(o_1 \mid s_1)\,t(o_2 \mid s_0) + t(o_1 \mid s_1)\,t(o_2 \mid s_1) + t(o_1 \mid s_1)\,t(o_2 \mid s_2)$$
$$+\; t(o_1 \mid s_2)\,t(o_2 \mid s_0) + t(o_1 \mid s_2)\,t(o_2 \mid s_1) + t(o_1 \mid s_2)\,t(o_2 \mid s_2)$$
$$=\; \big[\, t(o_1 \mid s_0) + t(o_1 \mid s_1) + t(o_1 \mid s_2) \,\big]\;\big[\, t(o_2 \mid s_0) + t(o_2 \mid s_1) + t(o_2 \mid s_2) \,\big]$$
$$=\; \prod_{j=1}^{2} \sum_{i=0}^{2} t(o_j \mid s_i)$$
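The same check can also be run numerically. This small sketch, with arbitrary made-up values for t (the words and probabilities below are invented purely for the check), confirms that the nine-term brute-force sum and the product of sums agree:

```python
import random
from itertools import product
from math import prod, isclose

random.seed(0)
o = ["o1", "o2"]
s = ["NULL", "s1", "s2"]                                   # I = 2, position 0 is NULL
t = {(oj, si): random.random() for oj in o for si in s}    # arbitrary positive values

# brute force: all 3 ** 2 = 9 alignments, sum of products
brute = sum(prod(t[(o[j], s[a[j]])] for j in range(len(o)))
            for a in product(range(len(s)), repeat=len(o)))

# product of sums
cheap = prod(sum(t[(oj, si)] for si in s) for oj in o)

assert isclose(brute, cheap)
print(brute, cheap)
```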
