  1. DeViSE: A Deep Visual-Semantic Embedding Model. Frome et al., Google Research. Presented by: Tushar Nagarajan

  2. The year is 2012... Krizhevsky et al. 2012

  3. The year is 2012... Koala? Yes. Cat? Of course! Giraffe? Don’t be silly. What’s that?

  4. The year is 2012... Horse? :| Collect more giraffe data

  5. ImageNet 1k: only 1000 classes, while 3-year-olds have a 1k-word vocabulary. Re-training networks is annoying. Getting data is hard. It doesn’t scale easily. Label: “This thing”

  6. Structure in Labels

  7. Label Structure - Similarity. Similar scene pairs: Hospital Room / Dorm Room, Crevasse / Snowfield, Formal Garden / Vegetable Garden. SUN dataset, Xiao et al.

  8. Label Structure - Similarity. Hospital Room / Dorm Room, Crevasse / “Crevasse-like?”, Formal Garden / Vegetable Garden. SUN dataset, Xiao et al.

  9. Label Structure - Similarity. similar(Crevasse, Snowfield) is visual; similar(Guitar, Harp) is semantic.

  10. Label Structure - Hierarchy

  11. Label Structure - Hierarchy siblings? parents? Hwang et al., 2011

  12. Does Softmax Care? Dog, Clock, Chair.

  13. Does Softmax Care? Guitar, Clock, Harp: completely independent?

  14. Does Softmax Care? Are labels independent? Not really: guitar and harp are more closely related than guitar and clock. Abandon softmax; move to label space.

  15. Regress to Label Space. Step 1: Train a CNN for classification - a regular CNN for object classification with a 1000-way softmax output. Hu et al., Remote Sens. 2015

  16. Regress to Label Space. Step 1: Train a CNN for classification. Step 2: Abandon softmax. Hu et al., Remote Sens. 2015

  17. Regress to Label Space. Step 1: Train a CNN for classification. Step 2: Abandon softmax. But what are the regression labels? Hu et al., Remote Sens. 2015

  18. Label Space. We didn’t think this through… where do we get this space from (for Guitar, Harp, Clock)? Hint: ImageNet classes are words!

  19. Word Embeddings - Skip-gram. “The quick brown fox jumps over the lazy dog.” Context window: quick brown fox jumps over. Mikolov et al., 2013

  20. Word Embeddings - Skip-gram. Gender is encoded in a subspace; so is comparative vs. superlative information. Mikolov et al., 2013

  21. Word Embeddings - Skip-gram. Figure: Sebastian Ruder

  22. Word Embeddings - Skip-gram. Step 1: Train a CNN for classification. Step 2: Abandon softmax. Step 3: Train an LM on 5.7M documents from Wikipedia - 20-word window, hierarchical softmax, 500-D vectors. Q: What about multi-word classes like “snow leopard”? Frome et al., 2013
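
A rough sketch of how a skip-gram model with these hyperparameters could be trained using gensim. The corpus iterator and the handling of multi-word classes (joining “snow leopard” into the single token "snow_leopard") are illustrative assumptions, not the paper's exact pipeline:

    from gensim.models import Word2Vec

    # corpus: an iterable of tokenized Wikipedia documents (assumed available).
    # One common workaround for multi-word class names is to join them into
    # single tokens ("snow_leopard") so each class maps to one vector; the
    # paper's exact handling may differ.
    model = Word2Vec(
        corpus,
        vector_size=500,   # 500-D vectors, as on the slide
        window=20,         # 20-word context window
        sg=1,              # skip-gram (not CBOW)
        hs=1, negative=0,  # hierarchical softmax instead of negative sampling
    )
    v_guitar = model.wv["guitar"]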

  23. Word Embeddings - Skip-gram. Step 1: Train a CNN for classification. Step 2: Abandon softmax. Step 3: Train a skip-gram LM. Nearest neighbors of tiger shark: bull shark, blacktip shark, shark, blue shark, ... Nearest neighbors of car: cars, muscle car, sports car, automobile, ... Frome et al., 2013

  24. Step 1: Train a CNN for classification. Step 2: Abandon softmax. Step 3: Train a skip-gram LM. Step 4: Surgery - the image goes through the CNN to give v_image, the label “Guitar” goes through the skip-gram model to give v_label, and a contrastive loss ties them together.

  25. Step 1: Train a CNN for classification. Step 2: Abandon softmax. Step 3: Train a skip-gram LM. Step 4: Surgery - a contrastive loss between v_image and v_label, with a margin against randomly sampled incorrect classes.
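
Concretely, this is the paper's hinge rank loss: after the surgery, a linear layer projects CNN features into the 500-D word-vector space, and the projected image vector must score higher, by a margin, against the correct label's embedding than against randomly sampled incorrect labels. A minimal PyTorch sketch; the feature size, layer names, and margin value are illustrative:

    import torch
    import torch.nn as nn

    # Surgery: swap the 1000-way softmax head for a linear map into the
    # 500-D word-embedding space (the 4096-D feature size is an assumption).
    projection = nn.Linear(4096, 500)
    # v_image = projection(cnn_features)  # the 500-D image vector

    def hinge_rank_loss(v_image, t_label, t_negatives, margin=0.1):
        # v_image:     (500,) projected CNN features
        # t_label:     (500,) embedding of the correct label
        # t_negatives: (K, 500) embeddings of K random incorrect classes
        pos = t_label @ v_image      # similarity to the correct label
        neg = t_negatives @ v_image  # similarities to the wrong labels
        # Every wrong label scoring within `margin` of the right one is penalized.
        return torch.clamp(margin - pos + neg, min=0.0).sum()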

  26. Inference - ZSL. When a new image comes in: 1. Push it through the CNN to get v_image.

  27. Inference - ZSL. When a new image comes in: 1. Push it through the CNN to get v_image. (Label embeddings: v_harp, v_banjo, v_violin, v_guitar.)

  28. Inference - ZSL. When a new image comes in: 1. Push it through the CNN to get v_image. 2. Find the nearest v_label to v_image - among v_harp, v_banjo, v_violin, v_guitar, and potentially unseen labels!
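
A minimal NumPy sketch of this nearest-neighbor step; ranking by cosine similarity is an assumption here. The key point is only that the candidate label set can include classes the CNN never saw:

    import numpy as np

    def zsl_predict(v_image, label_vecs, label_names, k=5):
        # label_vecs: (N, 500) embeddings for N candidate labels, which may
        # include classes never seen during CNN training (zero-shot).
        sims = label_vecs @ v_image / (
            np.linalg.norm(label_vecs, axis=1) * np.linalg.norm(v_image) + 1e-8)
        top = np.argsort(-sims)[:k]
        return [(label_names[i], float(sims[i])) for i in top]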

  29. Results

  30. Evaluation Metrics. Flat hit@k: regular precision - is the true label among the top k predictions? Hierarchical precision@k: also credits predictions that are close to the true label in the ImageNet hierarchy. Reported at k = 1, 3, 8.
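
Both metrics are easy to state in code. A sketch, where `correct_set` for hierarchical precision is the true label expanded with nearby classes from the ImageNet hierarchy (how that set is built is the paper's detail, assumed given here):

    def flat_hit_at_k(ranked_labels, true_label, k):
        # 1 if the true label appears among the top-k predictions, else 0.
        return int(true_label in ranked_labels[:k])

    def hierarchical_precision_at_k(ranked_labels, correct_set, k):
        # Fraction of the top-k predictions that land in the expanded
        # ground-truth set derived from the label hierarchy.
        return len(set(ranked_labels[:k]) & correct_set) / k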

  31. Results on ImageNet. Softmax is hard to beat on raw classification over 1k classes, but DeViSE gets pretty close with a regression model! Frome et al., 2013

  32. Results - ImageNet Classification. Hierarchical precision tells a different story: DeViSE finds labels that are semantically relevant. Frome et al., 2013

  33. Results - ImageNet ZSL. Correct label @ 1, or garbage? Frome et al., 2013

  34. Results - Imagenet ZSL Frome et al., 2013

  35. Results - ImageNet ZSL. 3-hop: unseen classes up to 3 hops away from ImageNet labels in the hierarchy. ImageNet 21k: ALL unseen classes. Chance: 0.00047; DeViSE does 168x better! Frome et al., 2013

  36. Summary. Step 1: Train a CNN for classification. Step 2: Abandon softmax. Step 3: Train a skip-gram LM. Step 4: Surgery (image → v_image, “Guitar” → v_label). Step 5: Profit? The Register, 2013

  37. Discussion. Embeddings are not fine-tuned during training. Semantic similarity is a happy coincidence: sim(cat, kitten) = 0.746, but sim(cat, dog) = 0.761 (!!). Semantic similarity is a depressing coincidence: sim(happy, depressing) = ?
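
Numbers like these can be checked against any pretrained word2vec model; the exact values depend on the vectors used. A quick probe with gensim's downloader (the Google News vectors here are an assumption, not necessarily the model behind the slide's figures):

    import gensim.downloader as api

    wv = api.load("word2vec-google-news-300")    # pretrained vectors (assumed)
    print(wv.similarity("cat", "kitten"))        # high
    print(wv.similarity("cat", "dog"))           # also high: cosine similarity
                                                 # conflates "same" and "related"
    print(wv.similarity("happy", "depressing"))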

  38. Discussion. Nearest neighbors of pineapple: pineapples, papaya, mango, avocado, banana ... Frome et al., 2013

  39. Discussion. Categories are fine-grained; we TRUST softmax to distinguish them. Stanford Dogs Dataset - Khosla et al., 2011

  40. Conclusion. Label spaces embed semantic information. Shared embedding spaces provide background knowledge for ZSL (even for a zedonk).

  41. Thank you Questions?

  42. Bonus: ConSE. Keep the softmax, and combine label embeddings weighted by its probabilities (e.g., 0.5 guitar, 0.2 harp, 0.01 chair). Norouzi et al., 2013
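
ConSE needs no retraining at all: keep the softmax classifier, and embed a test image as the probability-weighted average of the word vectors of its top-scoring training classes. A minimal NumPy sketch of that convex combination:

    import numpy as np

    def conse_embedding(probs, class_vecs, top_k=10):
        # probs:      (C,) softmax probabilities from the standard classifier
        # class_vecs: (C, D) word embeddings of the C training classes
        top = np.argsort(-probs)[:top_k]
        weights = probs[top] / probs[top].sum()  # renormalize over the top-k
        return weights @ class_vecs[top]         # (D,) predicted embedding

    # The resulting vector is matched to candidate label embeddings by
    # nearest neighbor, exactly as in DeViSE's inference step.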
