Plug and Play Autoencoders for Conditional Text Generation
  1. Plug and Play Autoencoders for Conditional Text Generation. Florian Mai †♠, Nikolaos Pappas ♣, Ivan Montero ♣, Noah A. Smith ♣♦, James Henderson †. † Idiap Research Institute, ♠ EPFL, Switzerland; ♣ University of Washington, Seattle, USA; ♦ Allen Institute for Artificial Intelligence, Seattle, USA. Contact: florian.mai@idiap.ch

  2.-9. The Problem with Conditional Text Generation
   - Text lives in a messy, discrete space.
   - Conditional text generation requires mapping a discrete input x to a discrete output y.
   - Usual way: learn a complex, task-specific function from x to y, which is difficult to train in the discrete space.
   - Our way: obtain a continuous space by pretraining an autoencoder, and reduce the task-specific learning to a simple mapping within that continuous space.

  10. Framework Overview: our framework (Emb2Emb) consists of three stages: pretraining, task training, and inference.

  11. Framework Overview - Pretraining: train a model of the form A(x) = Dec(Enc(x)) on a corpus of sentences.
   - Assume a fixed-size continuous embedding z_x := Enc(x) ∈ R^d.
   - Enc and Dec can be any functions trained with any objective, so long as A(x) ≈ x.
   - The training corpus can be any unlabeled corpus ⇒ large-scale pretraining?

  12. Plug and Play: our framework is plug and play because any autoencoder can be used with it.
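Since any autoencoder can be plugged in, the sketch below shows just one possible instantiation of the pretraining stage that satisfies the requirements (a single fixed-size z_x and A(x) = Dec(Enc(x)) ≈ x). The LSTM encoder/decoder, the hidden size d = 256, and the token-level reconstruction loss are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal fixed-size-bottleneck autoencoder (illustrative; any Enc/Dec pair works
# for Emb2Emb as long as z_x = Enc(x) is a single fixed-size vector and A(x) ≈ x).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, d=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, d, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, d, batch_first=True)
        self.out = nn.Linear(d, vocab_size)

    def encode(self, x):                      # x: (batch, seq_len) token ids
        _, (h, _) = self.encoder(self.embed(x))
        return h[-1]                          # z_x in R^d (fixed size)

    def decode(self, z, x_shifted):           # teacher forcing during pretraining
        h0 = z.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(x_shifted), (h0, c0))
        return self.out(out)                  # (batch, seq_len, vocab_size) logits

    def forward(self, x, x_shifted):          # A(x) = Dec(Enc(x))
        return self.decode(self.encode(x), x_shifted)

# Pretraining objective: token-level reconstruction so that A(x) ≈ x, e.g.
# loss = nn.CrossEntropyLoss()(logits.flatten(0, 1), x.flatten())
```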

  13. Framework Overview - Task Training, supervised case: L_task(ẑ_y, z_y) = d(ẑ_y, z_y), where ẑ_y = Φ(z_x) is the predicted output embedding and d is a distance function (cosine distance in our experiments).

  14. Framework Overview - Task Training: the full training objective is L = L_task + λ_adv · L_adv.
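As one possible instantiation of the task-training stage, the sketch below uses a small residual MLP for the mapping Φ (an illustrative choice, not necessarily the paper's architecture) and the cosine distance from the slide as d; Enc and Dec stay frozen, and only Φ is trained.

```python
# Task training embedding-to-embedding (supervised case): learn Phi so that
# Phi(z_x) is close to z_y under the distance d (cosine distance here).
import torch.nn as nn
import torch.nn.functional as F

class Mapping(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, z_x):
        return z_x + self.net(z_x)            # residual: start from the input embedding

def task_loss(z_y_hat, z_y):
    # L_task(ẑ_y, z_y) = d(ẑ_y, z_y) with d = cosine distance
    return (1.0 - F.cosine_similarity(z_y_hat, z_y, dim=-1)).mean()

# One training step (Enc and Dec are frozen; only Phi receives gradients):
# z_x, z_y = autoencoder.encode(x), autoencoder.encode(y)
# z_y_hat  = phi(z_x)
# loss = task_loss(z_y_hat, z_y) + lambda_adv * adv_loss(z_y_hat)   # L = L_task + λ_adv · L_adv
```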

  15. Framework Overview - Inference: compose the inference model as x ↦ Dec(Φ(Enc(x))). But Dec is not involved in task training: can it handle the outputs of Φ? ⇒ Yes, if L_adv is used.
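A sketch of the inference composition Dec(Φ(Enc(x))), reusing the illustrative autoencoder and mapping above; the greedy, token-by-token decoding loop and the BOS token handling are assumptions about the decoder interface, not part of the framework itself.

```python
# Inference: encode the input, map it in embedding space, then decode greedily.
# bos_id and max_len are illustrative; early stopping at an EOS token is omitted.
import torch

@torch.no_grad()
def generate(autoencoder, phi, x, bos_id, max_len=50):
    z_y_hat = phi(autoencoder.encode(x))           # Φ(Enc(x))
    h = z_y_hat.unsqueeze(0)                       # initialise decoder state from the mapped embedding
    c = torch.zeros_like(h)
    token = torch.full((x.size(0), 1), bos_id, dtype=torch.long)
    outputs = []
    for _ in range(max_len):
        out, (h, c) = autoencoder.decoder(autoencoder.embed(token), (h, c))
        token = autoencoder.out(out[:, -1]).argmax(dim=-1, keepdim=True)
        outputs.append(token)
    return torch.cat(outputs, dim=1)               # Dec(Φ(Enc(x))) as token ids
```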

  16.-20. What can happen when learning in the embedding space? (figure: predicted vs. true embedding in a 2D embedding space with origin (0,0))
   - A prediction may end up off the manifold, and by definition the decoder cannot handle off-manifold data well ...
   - ... but the predicted embedding may still have the same angle as the true output embedding, resulting in zero cosine distance loss despite being off the manifold (see the small numeric check below).
   - Similar problems arise for the L2 distance. How do we keep the embeddings on the manifold?
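The small numeric check referenced above (embedding values chosen arbitrarily for illustration): scaling a prediction away from the manifold leaves the cosine distance at zero while its L2 distance to the target grows.

```python
# Cosine distance ignores magnitude: an off-manifold prediction with the right
# angle still incurs zero loss, even though it is far from the true embedding.
import torch
import torch.nn.functional as F

z_y = torch.tensor([1.0, 2.0, 3.0])      # "true" output embedding (illustrative values)
z_y_hat = 10.0 * z_y                     # same direction, but far off the manifold

cos_dist = 1.0 - F.cosine_similarity(z_y_hat, z_y, dim=0)
l2_dist = torch.dist(z_y_hat, z_y)

print(cos_dist.item())   # ≈ 0.0  -> no signal pushing the prediction back to the manifold
print(l2_dist.item())    # ≈ 33.7 -> far from the target, yet L2 has related issues (see above)
```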

  21. Adversarial Loss Term
   - Train a discriminator disc to distinguish between embeddings produced by the encoder and embeddings produced by the mapping:
     max_disc Σ_{i=1}^{N} [ log(disc(z_{ỹ_i})) + log(1 − disc(Φ(z_{x_i}))) ]
   - Following the adversarial learning framework, the mapping acts as the adversary and tries to fool the discriminator (see the code sketch after this slide):
     L_adv(Φ(z_{x_i}); θ) = −log(disc(Φ(z_{x_i}); θ))
   - At convergence, the mapping should only produce embeddings that lie on the manifold.
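A sketch of how these two objectives can be implemented, assuming a small MLP discriminator with a sigmoid output; binary cross-entropy with labels 1/0 reproduces the log terms above, and the architecture itself is an illustrative choice.

```python
# Adversarial loss term: disc tries to tell encoder embeddings ("real") from
# mapped embeddings ("fake"); Phi is trained to make its outputs look real.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).squeeze(-1)        # probability that z lies on the manifold

bce = nn.BCELoss()

def discriminator_loss(disc, z_real, z_mapped):
    # maximise log(disc(z_real)) + log(1 − disc(Φ(z_x)))  <=>  minimise BCE with labels 1 and 0
    real = bce(disc(z_real), torch.ones(z_real.size(0)))
    fake = bce(disc(z_mapped.detach()), torch.zeros(z_mapped.size(0)))
    return real + fake

def adversarial_loss(disc, z_mapped):
    # L_adv = −log(disc(Φ(z_x))): Phi tries to make its outputs look on-manifold
    return bce(disc(z_mapped), torch.ones(z_mapped.size(0)))
```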

  22. Supervised Style Transfer Experiments
   - WikiLarge dataset: transform "normal" English into "simple" English; parallel sentences (input and output) are available.

     Model               | BLEU (relative imp.) | SARI (relative imp.)
     Emb2Emb (no L_adv)  | 15.7 (-)             | 21.1 (-)
     Emb2Emb             | 34.7 (+121%)         | 25.4 (+20.4%)

   - The adversarial loss term L_adv is crucial for embedding-to-embedding training!

  23. Supervised Style Transfer Experiments
   - We conducted controlled experiments with models that have a fixed-size bottleneck.
   - Best Seq2Seq model: the best-performing variant among fixed-size-bottleneck models trained end-to-end with a token-level cross-entropy loss (like Seq2Seq).

     Model               | BLEU (relative imp.) | SARI (relative imp.) | Speedup
     Best Seq2Seq model  | 23.3 (±0%)           | 22.4 (±0%)           | -
     Emb2Emb             | 34.7 (+48.9%)        | 25.4 (+13.4%)        | 2.2×

   - Training models with a fixed-size bottleneck may be easier, faster, and more effective when training embedding-to-embedding!

  24. Unsupervised Task Training: fixed-size-bottleneck autoencoders are commonly used for unsupervised style transfer.

  25. The goal is to change the style of a text but retain its content, e.g. in machine translation, sentence simplification, and sentiment transfer.

  26. Training objective (same form as in the supervised case): L = L_task + λ_adv · L_adv.

  27. The task loss trades off style transfer against content preservation: L_task(ẑ_y, z_x) = λ_sty · L_sty(ẑ_y) + (1 − λ_sty) · L_cont(ẑ_y, z_x).
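As one possible instantiation of this loss, the sketch below assumes that L_sty comes from a style classifier operating directly on embeddings (assumed pretrained) and that L_cont is the cosine distance to the input embedding; both choices and the default λ_sty = 0.5 are illustrative.

```python
# Unsupervised task loss:
# L_task(ẑ_y, z_x) = λ_sty · L_sty(ẑ_y) + (1 − λ_sty) · L_cont(ẑ_y, z_x)
import torch.nn.functional as F

def style_loss(style_clf, z_y_hat, target_style):
    # L_sty: push the mapped embedding toward the target style according to a
    # classifier over embeddings (style_clf is assumed to be pretrained)
    logits = style_clf(z_y_hat)                         # (batch, num_styles)
    return F.cross_entropy(logits, target_style)

def content_loss(z_y_hat, z_x):
    # L_cont: stay close (in angle) to the input embedding to retain content
    return (1.0 - F.cosine_similarity(z_y_hat, z_x, dim=-1)).mean()

def unsupervised_task_loss(style_clf, z_y_hat, z_x, target_style, lambda_sty=0.5):
    return (lambda_sty * style_loss(style_clf, z_y_hat, target_style)
            + (1.0 - lambda_sty) * content_loss(z_y_hat, z_x))
```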
