Breaking the Softmax Bottleneck via Monotonic Functions
  1. Breaking the Softmax Bottleneck via Monotonic Functions Octavian Ganea, Sylvain Gelly, Gary Bécigneul, Aliaksei Severyn

  2-4. Softmax Layer (for Language Models) ● Natural language as conditional distributions: P*(next word | context) ● Parametric distributions & softmax (equation shown as an image on the slides; see the sketch below) ● Challenge: Can we always find θ s.t. for all c: P_θ(· | c) = P*(· | c)? ● No, when embedding size < label cardinality (vocab size)!
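The equations on these slides were images and did not survive the export. As a placeholder, here is the standard softmax parameterization used in the cited Yang et al. (2018) setting, which is presumably the form the slides intend; the symbols h_c, e_w, d, M are this reconstruction's notation.

```latex
% Standard softmax language-model parameterization (reconstruction; the
% slide equations were images). h_c: context embedding, e_w: word embedding.
P_\theta(w \mid c) \;=\; \frac{\exp\!\left(h_c^\top e_w\right)}
                              {\sum_{w'=1}^{M} \exp\!\left(h_c^\top e_{w'}\right)},
\qquad h_c,\, e_w \in \mathbb{R}^d .
% The challenge asks whether some \theta achieves
% P_\theta(\cdot \mid c) = P^{*}(\cdot \mid c) for every context c.
```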

  5-7. What is the Softmax Bottleneck (Yang et al., '18)? ● log-P matrix (label cardinality = vocabulary size; shown as an image on the slides) ● Then: the rank bound reconstructed below follows. Breaking the Softmax Bottleneck: A High-Rank RNN Language Model, Yang et al., ICLR 2018
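The matrix and the rank statement on these slides were images; the argument from Yang et al. (2018) that they refer to is, in reconstructed notation:

```latex
% Softmax bottleneck rank argument (reconstruction, following Yang et al., 2018).
A \in \mathbb{R}^{N \times M},\qquad A_{c,w} = \log P^{*}(w \mid c)
\quad\text{(N contexts, M = vocabulary size).}
% A softmax model with d-dimensional embeddings gives
\log P_\theta(w \mid c) = h_c^\top e_w - \log\!\sum_{w'} \exp\!\left(h_c^\top e_{w'}\right)
\;\;\Rightarrow\;\; \operatorname{rank}\!\left(\log P_\theta\right) \le d + 1 .
% So if \operatorname{rank}(A) > d + 1, no softmax with embedding size d can match P^{*}.
```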

  8-10. Breaking the Softmax Bottleneck [1] ● MoS [1]: Mixture of K Softmaxes (see the sketch below) ● Improves perplexity ● Slower than vanilla softmax: 2 - 6.4x ● GPU memory: N x M x K tensor (vs. N x M for vanilla softmax) [1] Breaking the Softmax Bottleneck: A High-Rank RNN Language Model, Yang et al., ICLR 2018
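A minimal PyTorch-style sketch of a Mixture-of-Softmaxes output layer, to make the N x M x K memory cost concrete. It follows the general recipe of Yang et al. (2018); the class and variable names are this sketch's own, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfSoftmaxes(nn.Module):
    """Sketch of MoS: K softmaxes over the vocabulary, mixed per context."""
    def __init__(self, d, vocab_size, num_mixtures):
        super().__init__()
        self.K = num_mixtures
        self.prior = nn.Linear(d, num_mixtures)       # mixture weights pi_k(c)
        self.latent = nn.Linear(d, num_mixtures * d)  # K context projections per context
        self.decoder = nn.Linear(d, vocab_size)       # shared output word embeddings

    def forward(self, h):                             # h: (N, d) context vectors
        N, d = h.shape
        pi = F.softmax(self.prior(h), dim=-1)               # (N, K)
        hk = torch.tanh(self.latent(h)).view(N, self.K, d)  # (N, K, d)
        p_k = F.softmax(self.decoder(hk), dim=-1)           # (N, K, M)  <- the big tensor
        return torch.einsum("nk,nkm->nm", pi, p_k)          # mixed P(w | c), shape (N, M)
```

The (N, K, M) tensor of per-component softmaxes is exactly the memory blow-up the slide points to.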

  11-13. Breaking the Softmax Bottleneck [2] ● Sig-Softmax [2] (see the sketch below) ● Small improvement over vanilla softmax ● Can we learn the best non-linearity to deform the logits? [2] Sigsoftmax: Reanalysis of the Softmax Bottleneck, S. Kanai et al., NIPS 2018
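For reference, sigsoftmax as defined by Kanai et al. (2018) deforms each logit with exp(z)·sigmoid(z) before normalizing; a minimal sketch:

```python
import torch

def sigsoftmax(z, dim=-1):
    """Sigsoftmax (Kanai et al., 2018): normalize exp(z) * sigmoid(z).

    Note: unlike softmax this is not shift-invariant, so no max-subtraction
    trick is applied in this sketch.
    """
    num = torch.exp(z) * torch.sigmoid(z)
    return num / num.sum(dim=dim, keepdim=True)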

  14-21. Can we do better? ● Our idea - learn a pointwise monotonic function f on top of the logits (see the formulation and sketch below). f should be:
    1. With unbounded image set -- to model sparse distributions
    2. Continuous and (piecewise) differentiable -- for backprop
    3. Non-linear -- to break the softmax bottleneck
    4. Monotonic -- to preserve the ranking of logits
    5. Fast and memory efficient -- comparable w/ vanilla softmax
  ● Theorem: these properties are not restrictive in terms of rank deficiency
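The formula on these slides was an image; the presumed formulation is to pass each logit through the learned monotonically increasing function f before the softmax:

```latex
% Presumed formulation (the slide equation was an image): a learned pointwise
% monotonic f deforms the logits before normalization.
P_\theta(w \mid c) \;=\;
  \frac{\exp\!\big(f(h_c^\top e_w)\big)}
       {\sum_{w'=1}^{M} \exp\!\big(f(h_c^\top e_{w'})\big)},
\qquad f:\mathbb{R}\to\mathbb{R}\ \text{increasing, satisfying properties 1--5 above.}
```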

  22. Learnable parametric monotonic real functions ● A neural network with 1 hidden layer and positive (constrained) weights [3] (see the sketch below) ● Universal approximator for all monotonic functions (when K is large enough!) [3] Monotone and Partially Monotone Neural Networks, Daniels and Velikova, 2010, IEEE Transactions on Neural Networks
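A sketch of such a learnable monotonic function: a one-hidden-layer network whose weights are kept positive by a softplus reparameterization, in the spirit of Daniels and Velikova (2010). The added positive linear term (to keep the image unbounded, property 1) and all names are this sketch's assumptions, not necessarily the talk's exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicPointwise(nn.Module):
    """Sketch: f(x) = sum_k a_k * tanh(w_k * x + b_k) + c * x with a_k, w_k, c > 0."""
    def __init__(self, K=100):
        super().__init__()
        self.w = nn.Parameter(torch.randn(K))    # pre-softplus hidden slopes
        self.b = nn.Parameter(torch.randn(K))    # hidden biases (unconstrained)
        self.a = nn.Parameter(torch.randn(K))    # pre-softplus output weights
        self.c = nn.Parameter(torch.zeros(1))    # pre-softplus linear term

    def forward(self, x):                        # x: logits of any shape
        w = F.softplus(self.w)                   # > 0 -> each hidden unit is increasing
        a = F.softplus(self.a)                   # > 0 -> positive combination stays increasing
        h = torch.tanh(x.unsqueeze(-1) * w + self.b)      # (..., K)
        # positive linear term keeps the image unbounded (property 1 above)
        return (h * a).sum(-1) + F.softplus(self.c) * x
```

Note that evaluating this on an (N, M) logit matrix materializes an (N, M, K) tensor, which is the memory issue the PLIF slides address below.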

  23-26. Synthetic Experiment ● Goal: separate the softmax bottleneck from the context-embedding bottleneck ● Sample N different categorical distributions with M outcomes ● Goal: fit these target distributions (the objective was shown as an equation image) ● Independent context embeddings; shared word embeddings (see the sketch below)
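A hypothetical sketch of the synthetic setup. The slides do not show how the N target distributions are sampled; drawing them from a symmetric Dirichlet with concentration β (matching the β = 0.01 on the next slides) is this sketch's assumption, as are all sizes and names.

```python
import numpy as np

# Assumed setup: N target categorical distributions over M outcomes, to be
# fitted with free per-context embeddings and shared word embeddings, so any
# remaining gap is due to the softmax bottleneck, not the context encoder.
N, M, d, beta = 1000, 500, 10, 0.01
rng = np.random.default_rng(0)
targets = rng.dirichlet(np.full(M, beta), size=N)   # (N, M), sparse for small beta
context_emb = 0.1 * rng.standard_normal((N, d))     # one free embedding per context
word_emb = 0.1 * rng.standard_normal((M, d))        # shared word embeddings
logits = context_emb @ word_emb.T                   # (N, M) logits to be trained
```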

  27-28. Synthetic Experiments - Mode Matching (𝜷 = 0.01) ● Percentage of contexts c for which the predicted mode matches the target mode [plots: "Vanilla Softmax", "Monotonic fn (K=100)", and "Ratio 2nd / 1st", each against the number of words M] ● Similar results for cross-entropy and other values of 𝜷

  29-31. Piecewise Linear Increasing Functions (PLIF) ● NN w/ 1 hidden layer ⇒ memory hungry: tensor of size N x M x K on GPU, where K >= 1000 ● PLIF (definition shown as an image; see the sketch below) ● Forward & backward passes: just a lookup in two K-dim vectors ● Memory and running time very efficient (comparable with vanilla softmax)
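A sketch of a PLIF under stated assumptions: K equal-width bins on a fixed interval with one positive slope each, so evaluating it reduces to indexing two K-dimensional vectors (the per-bin slope and the cumulative offset). Bin range, extrapolation behavior, and names are this sketch's choices, not necessarily the talk's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PLIF(nn.Module):
    """Piecewise linear increasing function on K equal-width bins (sketch)."""
    def __init__(self, K=1000, x_min=-10.0, x_max=10.0):
        super().__init__()
        self.K, self.x_min = K, x_min
        self.width = (x_max - x_min) / K
        self.raw_slopes = nn.Parameter(torch.zeros(K))   # softplus -> positive slopes

    def forward(self, x):                                # x: logits of any shape
        slopes = F.softplus(self.raw_slopes)             # (K,), all > 0 => increasing
        # cumulative offset at the left edge of each bin (continuity across bins)
        offsets = F.pad(torch.cumsum(slopes, 0)[:-1], (1, 0)) * self.width
        t = (x - self.x_min) / self.width                # position in "bin units"
        idx = t.floor().clamp(0, self.K - 1).long()      # bin index per logit
        # two lookups per logit; linear extrapolation outside the grid
        return offsets[idx] + slopes[idx] * (t - idx.float()) * self.width
```

Evaluation touches only the two K-dimensional vectors (slopes and offsets) on top of the logits themselves, keeping memory roughly O(K); the exact parameterization in the talk may differ.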

  32. Language Modeling Results [results table shown as an image; the two compared output layers use N x M and N x M x K of GPU memory, respectively]

  33. Thank you! Poster #23
