  1. Learning quantities from vision and language. Raffaella Bernardi, University of Trento, March 23, 2017.

  2. Cardinals and Quantifiers: "Three of the animals are dogs" vs. "Most of the animals are dogs".

  3. Quantifiers: are they on a scale? Expected abstract scale: < no, few, some, most, all >. Q: How do we learn that they are in this order? Q: Do we take this order into account when using them?

  4. Literal vs. pragmatic meaning. What do we learn from language, what from vision, and what from both? Conjecture 1: we can learn their literal meaning (respecting the abstract scale) from images. Conjecture 2: they can be represented by a cross-modal function. Conjecture 3: text corpora could help in learning their use.

  5. A new challenge for CV: from content words to function words. Most tasks considered so far involve processing of objects and lexicalised relations amongst objects (content words). Humans (even pre-school children) can abstract over raw data to perform certain types of higher-level reasoning, expressed in natural language by function words.

  6. Operations involved in quantifying: a "logical" strategy. Quantifiers require: (1) an approximate number estimation mechanism, acting over the relevant sets in the image; (2) a quantification comparison step. The "logical" strategy: (1) from raw data to an abstract set representation; (2) from the latter to quantifiers.

  7. The comparison step. "Look, some green circles!": Learning to quantify from images (Sorodoc et al., 2016). Very high results: NNs should be able to learn the second subtask quite easily. Is the "logical" strategy a good one?
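
As a rough illustration of the second subtask taken in isolation (this is not the model of Sorodoc et al., 2016; all names and sizes below are assumptions), a small classifier can map abstract set representations, here simply the two set sizes, onto one of the five quantifiers:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: learn the quantification/comparison step from abstract
# set representations (|restrictor| and |scope ∩ restrictor|), skipping vision.
class ComparisonStep(nn.Module):
    def __init__(self, max_count=20, hidden=64, n_quantifiers=5):
        super().__init__()
        # each count is encoded as a one-hot vector over 0..max_count
        self.max_count = max_count
        self.mlp = nn.Sequential(
            nn.Linear(2 * (max_count + 1), hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_quantifiers),  # no / few / some / most / all
        )

    def forward(self, n_restrictor, n_intersection):
        one_hot = lambda n: nn.functional.one_hot(n, self.max_count + 1).float()
        x = torch.cat([one_hot(n_restrictor), one_hot(n_intersection)], dim=-1)
        return self.mlp(x)  # logits over the five quantifiers

model = ComparisonStep()
logits = model(torch.tensor([10]), torch.tensor([7]))  # a batch of one example
print(logits.shape)  # torch.Size([1, 5])
```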

  8. Layout: (1) Learning quantification from images; (2) Quantifiers vs. Cardinals; (3) Behavioral study.

  9. Learning quantification from images: pay attention to those sets! (Sorodoc et al., just submitted.) [Five example images, (a)-(e).] Query: "fish are red". Answers: (a) All, (b) Most, (c) Some, (d) Few, (e) No.

  10. Not raw data: all sorts of variance in place. The system cannot memorize correlations between: type of objects and quantifiers; properties of objects and quantifiers; number of objects and quantifiers. Quite challenging!

  11. Quantifiers as proportions. In "Q of the fish are red", "fish" is the restrictor and "red" the scope. We take quantifiers to be a fixed relation: |scope ∩ restrictor| / |restrictor| (e.g. |red ∩ fish| / |fish|). Prevalence estimates (Khemlani et al., 2009): No: 0%; Few: 1%-17% (incl.); Some: 17%-70%; Most: 70% (incl.)-99% (incl.); All: 100%.
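
A minimal sketch of this fixed proportion-to-quantifier mapping; the bands follow the prevalence estimates above, while the exact handling of the 17% and 70% boundaries is an assumption:

```python
def quantifier_for(n_scope_and_restrictor: int, n_restrictor: int) -> str:
    """Map the proportion |scope ∩ restrictor| / |restrictor| to a quantifier,
    following the prevalence bands of Khemlani et al. (2009) as reported on
    the slide. Assumes n_restrictor > 0."""
    p = n_scope_and_restrictor / n_restrictor
    if p == 0.0:
        return "no"
    if p == 1.0:
        return "all"
    if p <= 0.17:        # 1%-17%, upper bound inclusive
        return "few"
    if p < 0.70:         # 17%-70%
        return "some"
    return "most"        # 70%-99%, both bounds inclusive

print(quantifier_for(7, 10))   # |red ∩ fish| = 7, |fish| = 10 -> "most"
print(quantifier_for(1, 10))   # -> "few"
```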

  12. Computer vision models. Start simple: concatenation (CNN+BOW). Zhou et al., Simple Baseline for Visual Question Answering, 2015 (iBOWIMG). It memorizes correlations, with no higher-level abstraction.
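
A hedged sketch of the concatenation idea behind iBOWIMG-style baselines (dimensions and names are illustrative assumptions, not those of Zhou et al., 2015): bag-of-words question features and precomputed CNN image features are concatenated and fed to a single classifier.

```python
import torch
import torch.nn as nn

# Illustrative concatenation baseline (iBOWIMG-style): no attention, no memory.
class ConcatBaseline(nn.Module):
    def __init__(self, vocab_size=1000, img_dim=4096, n_answers=5):
        super().__init__()
        # bag-of-words question encoding via summed word embeddings
        self.word_emb = nn.EmbeddingBag(vocab_size, 256, mode="sum")
        self.classifier = nn.Linear(256 + img_dim, n_answers)

    def forward(self, question_ids, img_features):
        q = self.word_emb(question_ids)            # (batch, 256)
        x = torch.cat([q, img_features], dim=1)    # concatenate the two modalities
        return self.classifier(x)                  # logits over answers/quantifiers

model = ConcatBaseline()
q = torch.randint(0, 1000, (2, 6))     # two questions of 6 word ids each
v = torch.randn(2, 4096)               # precomputed CNN image features
print(model(q, v).shape)               # torch.Size([2, 5])
```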

  13. Computer vision models, lessons learned from the state of the art: memory and attention. Memory: process new information based on previous information (LSTM, GRU). Attention mechanism: use language to help make the representation of the image more focused. Stacked attention: use language to focus the visual representation, and use the latter to focus the linguistic representation.

  14. Sequential processing: the CNN+LSTM model.
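
A minimal sketch of a CNN+LSTM architecture of this kind (layer sizes and the fusion by concatenation are assumptions, not the exact model from the talk): an LSTM encodes the query, precomputed CNN features encode the image, and the two are combined and classified into a quantifier.

```python
import torch
import torch.nn as nn

# Illustrative CNN+LSTM model: LSTM over the query, CNN features for the image.
class CnnLstmQuantifier(nn.Module):
    def __init__(self, vocab_size=1000, img_dim=4096, hidden=512, n_quantifiers=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(300, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden + img_dim, n_quantifiers)

    def forward(self, question_ids, img_features):
        _, (h_n, _) = self.lstm(self.embed(question_ids))   # final hidden state
        q = h_n[-1]                                          # (batch, hidden)
        fused = torch.cat([q, img_features], dim=1)          # simple fusion
        return self.classifier(fused)                        # quantifier logits

model = CnnLstmQuantifier()
logits = model(torch.randint(0, 1000, (2, 6)), torch.randn(2, 4096))
print(logits.shape)  # torch.Size([2, 5])
```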

  15. Attention mechanism: SAN's attention layer. Yang, Z., et al. (CVPR 2016), Stacked attention networks (SAN) for image question answering. [Diagram: the linguistic input and the visual input are combined through linear and nonlinear (tanh) transformations, a softmax yields attention weights, and the weighted visual input gives the gist.]
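
A sketch of a single SAN-style attention layer, following the formulation of Yang et al. (2016) as described above (dimension names and sizes are assumptions): the question vector and the region features are projected and combined with a tanh, a softmax over regions gives attention weights, and the attended visual summary (the gist) is added to the question vector to form the refined query.

```python
import torch
import torch.nn as nn

# Illustrative single attention layer in the style of Stacked Attention Networks.
class SanAttentionLayer(nn.Module):
    def __init__(self, img_dim=512, q_dim=512, att_dim=512):
        super().__init__()
        self.w_img = nn.Linear(img_dim, att_dim)
        self.w_q = nn.Linear(q_dim, att_dim)
        self.w_att = nn.Linear(att_dim, 1)

    def forward(self, regions, question):
        # regions: (batch, n_regions, img_dim); question: (batch, q_dim)
        h = torch.tanh(self.w_img(regions) + self.w_q(question).unsqueeze(1))
        alpha = torch.softmax(self.w_att(h), dim=1)      # attention over regions
        gist = (alpha * regions).sum(dim=1)              # weighted visual summary
        return gist + question                           # refined query vector

layer = SanAttentionLayer()
u = layer(torch.randn(2, 196, 512), torch.randn(2, 512))
print(u.shape)  # torch.Size([2, 512])
```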
