Why You Should Care About Byte-Level Seq2Seq Models in NLP
South England Natural Language Processing Meetup, Alan Turing Institute, Monday, March 4, 2019
Tom Kenter, TTS Research, Google UK, London

Based on an internship at Google Research in Mountain View:
Byte-level Machine Reading across Morphologically Varied Languages
Tom Kenter, Llion Jones, Daniel Hewlett
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2018
https://ai.google/research/pubs/pub47437

Medium blog post: Why You Should Care About Byte-Level Sequence-to-Sequence Models in NLP
Is it advantageous, when processing morphologically rich languages, to use bytes rather than words as input and output to RNNs in a machine reading task?
Machine Reading
A computer reads a text and has to answer questions about it.

WikiReading datasets:
- English WikiReading dataset (Hewlett et al., ACL 2016)
- Two extra datasets, Russian and Turkish (Kenter et al., AAAI 2018)

https://github.com/google-research-datasets/wiki-reading
Byte-level Machine Reading
[Figure: the question "Where is Amsterdam" with the answer "In the Netherlands", segmented at the word level (one unit per word) versus the byte level (one unit per byte).]
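To make the contrast concrete, here is a minimal Python sketch (illustrative only, not the paper's code) of the two input views of the same question:

```python
# A minimal sketch (not from the paper) contrasting the two input views.
text = "Where is Amsterdam"

# Word-level: one id per token, drawn from a large vocabulary.
word_vocab = {"Where": 0, "is": 1, "Amsterdam": 2}  # toy vocabulary
word_ids = [word_vocab[w] for w in text.split()]
print(word_ids)       # [0, 1, 2]  -> 3 input steps

# Byte-level: one id per UTF-8 byte; the vocabulary is just 0..255.
byte_ids = list(text.encode("utf-8"))
print(len(byte_ids))  # 18 input steps, every id < 256
```

Eighteen input steps instead of three: byte-level models trade longer sequences for a tiny, fixed vocabulary.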
Morphologically rich languages
Turkish:
kolay → easy
kolaylaştırabiliriz → we can make it easier
kolaylaştıramıyoruz → we cannot make it easier

Russian:
В прошлом году Дмитрий переехал в Москву. (Last year, Dmitry moved to Moscow.)
Где теперь живет Дмитрий? (Where does Dmitry live now?)
В Москве. (In Moscow.)
Note that the answer form "Москве" never occurs in the text: the same word surfaces there with a different case ending, "Москву".
Why should you care about byte-level seq2seq models in NLP?
- Small input vocabulary → small model size (see the sketch below)
- Trade-off: longer unroll length for the RNN ⟷ reading less of the input
- Allows for apples-to-apples comparisons between models
- Universal encoding scheme across languages
- No out-of-vocabulary problem
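A back-of-the-envelope calculation makes the model-size point concrete; the vocabulary sizes below are assumptions for illustration, not numbers from the paper:

```python
# Compare embedding-table sizes for word-level vs byte-level input
# (assumed, illustrative numbers).
embedding_dim = 256

word_vocab_size = 100_000           # a typical word-level vocabulary
byte_vocab_size = 256               # every possible byte value

print(word_vocab_size * embedding_dim)  # 25,600,000 embedding parameters
print(byte_vocab_size * embedding_dim)  # 65,536 embedding parameters
# The same saving recurs in the output softmax layer of the decoder.
```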
Models
- Bidirectional RNN
- Multi-level RNN
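As a rough illustration of the first architecture, a byte-level bidirectional RNN encoder can be sketched in a few lines of Keras (the layer sizes are assumptions, not the paper's configuration):

```python
import tensorflow as tf

# A minimal sketch of a byte-level bidirectional RNN encoder
# (hyperparameters are assumptions, not the paper's exact setup).
byte_ids = tf.keras.Input(shape=(None,), dtype="int32")     # byte ids in 0..255
x = tf.keras.layers.Embedding(input_dim=256, output_dim=128)(byte_ids)
encoded = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(256, return_sequences=True))(x)     # fwd + bwd state per byte
encoder = tf.keras.Model(byte_ids, encoded)
```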
Models
Hybrid word-byte model
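The general idea can be sketched as follows; the sizes and the embed_word helper are my assumptions for illustration, not the paper's implementation:

```python
import tensorflow as tf

# A sketch of a hybrid word-byte embedding: in-vocabulary words use a
# word embedding table; out-of-vocabulary words get a vector composed
# from their bytes by a small RNN. (Names and sizes are assumptions.)
WORD_VOCAB = 50_000
word_emb = tf.keras.layers.Embedding(WORD_VOCAB, 256)
byte_emb = tf.keras.layers.Embedding(256, 64)
byte_rnn = tf.keras.layers.GRU(256)   # builds a word vector from its bytes

def embed_word(word_id, word):
    if word_id < WORD_VOCAB:                           # in-vocabulary: lookup
        return word_emb(tf.constant([word_id]))
    byte_ids = tf.constant([list(word.encode("utf-8"))])
    return byte_rnn(byte_emb(byte_ids))                # OOV: compose from bytes
```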
Models
Convolutional recurrent
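A rough Keras sketch of this idea (sizes assumed): strided convolutions over the byte embeddings shorten the sequence before the recurrent layer sees it:

```python
import tensorflow as tf

# A minimal sketch (assumed sizes) of a convolutional-recurrent encoder:
# a strided 1D convolution over byte embeddings shortens the sequence
# before the RNN, cutting the number of recurrent unroll steps.
byte_ids = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(256, 128)(byte_ids)
x = tf.keras.layers.Conv1D(256, kernel_size=5, strides=3, activation="relu")(x)
x = tf.keras.layers.GRU(256, return_sequences=True)(x)  # ~3x fewer steps to unroll
model = tf.keras.Model(byte_ids, x)
```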
Models
- Memory network
- Encoder-transformer-decoder
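For the encoder-transformer-decoder variant, the core ingredient is a self-attention block over the byte sequence; a minimal Keras sketch (sizes assumed, decoder omitted):

```python
import tensorflow as tf

# A minimal sketch (assumed sizes) of the encoder-transformer-decoder idea:
# a self-attention block between byte-level encoder and decoder lets every
# position attend to every other position of the long byte sequence.
byte_ids = tf.keras.Input(shape=(None,), dtype="int32")
x = tf.keras.layers.Embedding(256, 128)(byte_ids)
attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)
x = tf.keras.layers.LayerNormalization()(x + attn)      # residual + layer norm
model = tf.keras.Model(byte_ids, x)
```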
Results
Conclusions
Reading and outputting bytes, instead of words, works.
Byte-level models provide an elegant way of dealing with the out-of-vocabulary problem.
Byte-level models perform on par with the state-of-the-art word-level model on English, and better on morphologically richer languages. This is good news, as byte-level models have far fewer parameters.
Are you interested in machine reading, question answering, or NLU, and looking for a new challenge? Try your approach on three languages at once!

WikiReading: English, Russian & Turkish
https://github.com/google-research-datasets/wiki-reading
Thank you
Byte-level Machine Reading across Morphologically Varied Languages
Tom Kenter, Llion Jones, Daniel Hewlett
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2018
https://ai.google/research/pubs/pub47437

Medium blog post: Why You Should Care About Byte-Level Sequence-to-Sequence Models in NLP