

SLIDE 1

Connecting Images with Natural Language

Andrej Karpathy

CVPR 2016. Deep Vision workshop. July 1, 2016

SLIDE 2

Visual domain

SLIDE 3

Domain of Natural Language

[Cho et al. 2014]

SLIDE 4

Connecting Images and Natural Language

SLIDE 5

Example region descriptions:

  • A man sitting on a red bouncing ball
  • A person sitting on a chair and eating a sandwich
  • a black bag on the floor
  • door handle

SLIDE 6

SLIDE 7

Reasons

SLIDE 8

Computing Machinery and Intelligence. Alan Turing, 1950

“Can machines think?”

SLIDE 9

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

Computing Machinery and Intelligence. Alan Turing, 1950


SLIDE 13

SHRDLU

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I don't understand which pyramid you mean.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: By "it", I assume you mean the block which is taller than the one I am holding.
Computer: OK.
Person: What does the box contain?
Computer: The blue pyramid and the blue block.
Person: What is the pyramid supported by?
Computer: The box.
Person: How many blocks are not in the box?
Computer: Four of them.
Person: Is at least one of them narrower than the one which I told you to pick up?
Computer: Yes, the red cube.

Winograd, 1971

SLIDE 14

Images and Language

  • 1. Stepping stone towards AI

It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.

SLIDE 15

Data

Caltech 101, Fei-Fei et al., 2004
Pascal VOC, Everingham et al., 2006-2012
ImageNet, Deng et al., 2009
LabelMe, Russell, Torralba et al., 2007

SLIDE 16

Compute

Moore’s Law; GPUs

SLIDE 17

Convolutional Neural Networks

LeCun et al., 1998

Hubel & Wiesel, 1959; Fukushima, 1980; Riesenhuber & Poggio, 1999

SLIDE 18

Infrastructure

SLIDE 19

ImageNet Image Classification Challenge Top-5 Error

[Chart: winning top-5 error drops steadily from ~28% in 2010 to a few percent in 2015.]

ILSVRC, Russakovsky et al., 2015

SLIDE 20

ImageNet Image Classification Challenge Top-5 Error

[Same chart as above, annotated with estimated human performance.]

ILSVRC, Russakovsky et al., 2015

Estimated human accuracy: 2 - 5%

SLIDE 21

True or False: is this a face?

This is an image of (Mark one):

  • mountains
  • city
  • playground
  • sea shore

Short answer: describe what you see.

SLIDE 22

Images and Language

  • 1. Stepping stone towards AI
  • 2. Unsolved; a critical next frontier in Computer Vision


SLIDE 23

Natural Language as a Label Space

Rich.

nouns (objects, people, scenes)
adjectives (attributes)
verbs (actions)
prepositions (relationships)

SLIDE 24

Natural Language as a Label Space

Rich.

[Figure: dependency parse of “A man wearing a helmet jumps on his bike near a beach,” with arcs labeled det, partmod, nsubj, dobj, poss, prep, pobj.]

compositional

SLIDE 25

Natural Language as a Label Space

Natural.

People (already fluent in natural language) are the end users of our Computer Vision systems.

SLIDE 26

Natural Language as a Label Space

Natural.

“Pictures of me scuba diving next to a giant turtle.”

object: person
action: scuba diving
relation: next to
object: turtle
attribute: giant turtle

something hacky

Classifiers


SLIDE 29

Natural Language as a Label Space

Pervasive.

It’s all in Natural Language!

SLIDE 30

“A camel is an even-toed ungulate within the genus Camelus, bearing distinctive fatty deposits known as "humps" on its back.”

SLIDE 31

Images and Language

  • 1. Stepping stone towards AI
  • 2. Unsolved; a critical next frontier in Computer Vision
  • 3. Rich, Natural, Pervasive label space


SLIDE 32

Outline

  • 1. Matching images with sentences [1]: image + “Dog jumping over a hurdle.” → matching score
  • 2. Generating captions for images [2]: image → “Dog jumping over a hurdle.”
  • 3. Generating multiple localized captions [3]: “Dog jumping over a hurdle.”, “A black and white dog in mid-air”, “A hand of a person”, “A blue and white hurdle”

[1] Grounded Compositional Semantics for Finding and Describing Images with Sentences. Socher, Karpathy, Le, Manning, Ng, TACL 2013.
[2] Deep Visual-Semantic Alignments for Generating Image Descriptions. Karpathy, Fei-Fei, CVPR 2015.
[3] DenseCap: Fully Convolutional Localization Networks for Dense Captioning. Johnson*, Karpathy*, Fei-Fei, CVPR 2016.


SLIDE 34

Matching Images with Sentences

Problem Statement

Dog jumping over a hurdle.
Man in blue wetsuit surfing.
Baseball player throwing the ball.

SLIDE 35

[Same sentences; the three images are numbered 1, 2, 3.]

SLIDE 36

Matching Images with Sentences

Problem Statement

  • 1. Dog jumping over a hurdle.
  • 3. Man in blue wetsuit surfing.
  • 2. Baseball player throwing the ball.
SLIDE 37

Model outline

“A dog jumping over a hurdle”

  • 1. How do we process images?
  • 2. How do we process sentences?
  • 3. How do we compare an image and a sentence?

SLIDE 38

End-to-end learning

“A dog jumping over a hurdle”

  • 1. How do we process images?
  • 2. How do we process sentences?
  • 3. How do we compare an image and a sentence?

single differentiable function

SLIDE 39

Model outline

“A dog jumping over a hurdle”

  • 1. How do we process images? → Convolutional Network
  • 2. How do we process sentences?
  • 3. How do we compare an image and a sentence?


SLIDE 41

“A man wearing a helmet jumps on his bike near a beach.”

Processing Sentences

Q: How can we encode the sentence into a fixed-size vector?

SLIDE 42

“A man wearing a helmet jumps on his bike near a beach.”

Idea 1: Bag of Words

SLIDE 43

“A man wearing a helmet jumps on his bike near a beach.”

Idea 2: Bag of n-grams

e.g. n = 2:
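For the example sentence, the bigrams would be (a, man), (man, wearing), (wearing, a), and so on. As a concrete sketch of both encodings (plain numpy; a fixed, pre-built vocabulary is assumed, not the talk's actual preprocessing):

```python
import numpy as np

def bag_of_words(sentence, vocab):
    """Count each vocabulary word; the output size is fixed at len(vocab)."""
    v = np.zeros(len(vocab))
    for w in sentence.lower().strip(".").split():
        if w in vocab:
            v[vocab[w]] += 1
    return v

def bag_of_bigrams(sentence, bigram_vocab):
    """Same idea with adjacent word pairs (n = 2), keeping some word order."""
    v = np.zeros(len(bigram_vocab))
    words = sentence.lower().strip(".").split()
    for pair in zip(words, words[1:]):
        if pair in bigram_vocab:
            v[bigram_vocab[pair]] += 1
    return v
```

Both give a fixed-size vector regardless of sentence length, which is exactly the property the previous slide asks for; the cost is that bag-of-words discards word order entirely, and bags of n-grams recover only a local window of it.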

SLIDE 44

Our approach

[Figure: dependency parse of “A man wearing a helmet jumps on his bike near a beach.”]

  • 1. Extract a dependency tree [Marneffe and Manning 2008]
  • 2. Compute the representation recursively over the tree with a Recursive Tensor Neural Network (apply a recursive formula at each node)
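A toy sketch of step 2 (hypothetical tree format and weights; the actual model composes children with a learned tensor per dependency relation, simplified here to a single affine layer):

```python
import numpy as np

D = 100                                  # embedding dimension
W = np.random.randn(D, 2 * D) * 0.01     # composition weights (learned in practice)
b = np.zeros(D)

def compose(node, word_vecs):
    """Build a vector for a dependency-tree node from its word and children."""
    h = word_vecs[node["word"]]
    for child in node.get("children", []):
        c = compose(child, word_vecs)                 # encode the subtree
        h = np.tanh(W @ np.concatenate([h, c]) + b)   # fold it into the parent
    return h  # at the root this is the fixed-size sentence vector
```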

SLIDE 45

Model outline

“A dog jumping over a hurdle”

  • 1. How do we process images? → Convolutional Network
  • 2. How do we process sentences? → Recursive Tensor Neural Network
  • 3. How do we compare an image and a sentence?

SLIDE 46

Matching Image and Sentence

“A dog jumping over a hurdle”

Convolutional Network → image vector
Recursive Tensor Neural Network → sentence vector

image vector × sentence vector → score
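In code, the comparison itself can be as simple as an inner product in a shared embedding space (a sketch; both networks are trained so that true image–sentence pairs score high):

```python
def match_score(image_vec, sentence_vec):
    # Both numpy vectors live in the same multimodal embedding space;
    # a large inner product means "this sentence describes this image".
    return float(image_vec @ sentence_vec)
```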

SLIDE 47

“A dog jumping over a hurdle”

Convolutional Network → are there fur textures?
Recursive Tensor Neural Network → is a “dog” mentioned?

SLIDE 48

Sentences: “dog jumping over a hurdle”, “man in blue wetsuit surfing”, “baseball player throwing the ball”

Pairwise image–sentence score matrix (rows: images, columns: sentences):

   0.5   0.1  -1.5
  -1.5   2.0   0.9
   0.3   0.6   2.1

SLIDE 49

Given image and sentence vectors, the (structured, max-margin) loss becomes:

$$\mathcal{C}(\theta) = \sum_{k}\left[\sum_{l}\max(0,\, S_{kl}-S_{kk}+1) \;+\; \sum_{l}\max(0,\, S_{lk}-S_{kk}+1)\right]$$

where $S_{kl}$ scores image $k$ against sentence $l$: the first term ranks sentences (columns), the second ranks images (rows).
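A small numpy sketch of this bidirectional max-margin objective (assuming true pairs sit on the diagonal of the score matrix; illustrative, not the paper's exact training code):

```python
import numpy as np

def ranking_loss(S, margin=1.0):
    """S[k, l] = score of image k with sentence l; S[k, k] are the true pairs."""
    N = S.shape[0]
    diag = np.diag(S)
    # rank sentences (columns): wrong sentences should trail the true one
    cost_s = np.maximum(0, margin + S - diag[:, None])
    # rank images (rows): wrong images should trail the true one
    cost_i = np.maximum(0, margin + S - diag[None, :])
    idx = np.arange(N)
    cost_s[idx, idx] = 0  # no penalty for the true pairs themselves
    cost_i[idx, idx] = 0
    return cost_s.sum() + cost_i.sum()
```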

SLIDE 50

Image Sentence Datasets (5 sentences per image)

[1] Pascal 1K: 1,000 images (Rashtchian et al., 2010)
[2] Flickr8K: 8,000 images (Hodosh et al., 2013)
[3] Flickr30K: 30,000 images (Young et al., 2014)
[4] MSCOCO: 115,000 images (Lin et al., 2015)

SLIDE 51

Example results: sentence retrieval

showing top 4 matching sentences (out of 5,000)

SLIDE 52

Image Sentence Fragment ranking

Deep Visual-Semantic Alignments for Generating Image Descriptions. Karpathy, Fei-Fei, CVPR 2015.

SLIDE 53

SLIDE 54

SLIDE 55

Limitations of Ranking

  • 1. Limiting. We cannot describe images with sentences that are not in the training data.
  • 2. Inefficient. We have to loop over and test all sentences one by one when annotating an image.
  • 3. Unsatisfying. Especially when compared to humans, who can easily generate descriptions.

SLIDE 56

Outline

  • 2. Generating captions for images [2]

SLIDE 57

Generating Captions for Images

Problem Statement

“A dog jumping over a hurdle”
SLIDE 58

Generating Descriptions: Prior work

Baby Talk (Kulkarni et al. 2011)

example template: _____ in _____ is ______ in _______.
                  (noun)    (noun)     (verb)    (noun)

[Barnard ’03] [Duygulu ’02] [Frome ’13] [Yao ’10] [Yang ’11] [Barbu ’12] [Mitchell ’12] [Gupta & Mannem ’12] [Elliott & Keller ’13] [Yatskar ’14] [Kiros ’14]

SLIDE 59

Core Challenge

How can we predict sentences?

image → ??? (a differentiable function) → “A dog jumping over a hurdle”

SLIDE 60

Core Challenge

How can we predict sentences?

image → Convolutional Network → ??? (differentiable function) → “A dog jumping over a hurdle”

SLIDE 61

Core Challenge

How can we predict sentences?

image → Convolutional Network → “dog” (image classification: already a single differentiable function)

SLIDE 62

Core Challenge

image → Convolutional Network → ??? → “a dog jumping over a hurdle <end>”

Sentences have a variable number of words => the output is not fixed size!

SLIDE 63

Language Model

P(word | previous words)
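Written out with the chain rule, the model assigns a probability to a whole word sequence as:

$$P(w_1, \ldots, w_T) \;=\; \prod_{t=1}^{T} P(w_t \mid w_1, \ldots, w_{t-1})$$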

SLIDE 64

RNN: Recurrent Neural Network Language Model

Feed in words one at a time (e.g. with a “one-hot” encoding); at each step, predict the distribution over the next word.
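A minimal numpy sketch of one step of such a model (random weights for illustration only; a real model trains them and usually uses an LSTM):

```python
import numpy as np

V, H = 10000, 512                     # vocabulary size, hidden size
Wxh = np.random.randn(H, V) * 0.01    # input-to-hidden
Whh = np.random.randn(H, H) * 0.01    # hidden-to-hidden (the recurrence)
Why = np.random.randn(V, H) * 0.01    # hidden-to-output

def rnn_step(h, word_id):
    """Consume one word; return the new state and the next-word distribution."""
    x = np.zeros(V)
    x[word_id] = 1.0                   # "one-hot" encoding of the input word
    h = np.tanh(Wxh @ x + Whh @ h)     # update the hidden state
    logits = Why @ h
    p = np.exp(logits - logits.max())  # softmax over the next word
    return h, p / p.sum()
```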

SLIDE 65

Image Classification

image → Convolutional Network → “dog”

SLIDE 66

Image Captioning

image → Convolutional Network → RNN (hidden states h1 … h7) → “a dog jumping over a hurdle <end>”

Q: how do we condition the generative process on the image information?
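One common answer, sketched below with hypothetical names: the CVPR 2015 model injects the CNN feature at the first time step, while other models instead use it to initialize the hidden state. This reuses the rnn_step sketch from the previous slide:

```python
import numpy as np

def caption_image(cnn_feature, Wih, START, END, max_len=20):
    """Greedily decode a caption conditioned on an image (illustrative only)."""
    h = np.tanh(Wih @ cnn_feature)   # condition the RNN on the image
    word, caption = START, []
    for _ in range(max_len):
        h, p = rnn_step(h, word)     # rnn_step as defined in the LM sketch
        word = int(p.argmax())       # greedy choice; beam search works better
        if word == END:
            break
        caption.append(word)
    return caption                   # word ids; map back through the vocabulary
```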


SLIDE 68

Image Sentence Datasets: same as before (Pascal 1K, Flickr8K, Flickr30K, MSCOCO; 5 sentences per image).

SLIDE 69

SLIDE 70

“a woman in a bikini is jumping over a hurdle.”

SLIDE 71

Limitations

“A group of people in an office.”

SLIDE 72

Outline

  • 3. Generating multiple localized captions [3]

SLIDE 73

Dense Captioning

The task sits at the corner of a 2×2 space (label density × label complexity):

  • Whole image, single label → Classification: “Cat”
  • Whole image, sequence → Captioning: “A cat riding a skateboard”
  • Image regions, single label → Detection: “Cat”, “Skateboard”
  • Image regions, sequence → Dense Captioning: “Orange spotted cat”, “Skateboard with red wheels”, “Cat riding a skateboard”, “Brown hardwood flooring”

SLIDE 74

Our approach

End-to-end learning: Formulate a single differentiable function from inputs to outputs.


SLIDE 75

Region-level descriptions data

Visual Genome Dataset, Krishna et al. 2016

# Images: 108,077
# Region descriptions: 4,297,502

example annotations: “red frisbee”, “frisbee is mid air”, “frisbee is flying”, …

SLIDE 76

Dense Captioning Architecture

Convolutional Network + RNN

SLIDE 77

Dense Captioning Architecture

Convolutional Network

SLIDE 78

Convolutional Network

Localization layer

Dense Captioning Architecture

[Ren et al., 2015] [Girshick et al., 2015] [Szegedy et al., 2015]

SLIDE 79

Convolutional Network

Localization layer

Dense Captioning Architecture

  • 1. Propose regions of interest

Predict 300 scored boxes: [(x1,y1,x2,y2,score), …] (300*5 = 1500 numbers total)
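In tensor terms (shapes only, with made-up values; the real layer regresses offsets from anchor boxes in the style of Faster R-CNN):

```python
import numpy as np

B = 300                                # proposals per image
boxes = np.random.rand(B, 4)           # (x1, y1, x2, y2) for each proposal
scores = np.random.rand(B, 1)          # confidence that the box is interesting
proposals = np.hstack([boxes, scores])
assert proposals.shape == (300, 5)     # 300 * 5 = 1500 numbers total
```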

SLIDE 80

Convolutional Network

Localization layer

Dense Captioning Architecture

  • 2. Align predictions to true boxes

True boxes
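Alignment is usually decided by intersection-over-union between predicted and true boxes; a standard sketch (not necessarily DenseCap's exact matching rule):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)  # 1.0 means perfect overlap
```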

SLIDE 81

Convolutional Network

Localization layer

Dense Captioning Architecture

  • 3. Crop out the aligned regions

crop

reuse computation!

[Fast R-CNN, Girshick et al., 2015]
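Conceptually, the crop resamples a fixed grid from the shared convolutional feature map, so the expensive convolutions run once per image rather than once per region. A simplified nearest-neighbor sketch (DenseCap actually uses bilinear interpolation so the crop stays differentiable):

```python
import numpy as np

def crop_region(features, box, out_size=7):
    """Resample a box of a C x H x W feature map to a fixed C x S x S grid."""
    C, H, W = features.shape
    x1, y1, x2, y2 = box
    xs = np.linspace(x1, x2, out_size)                # sample grid inside box
    ys = np.linspace(y1, y2, out_size)
    xi = np.clip(np.round(xs).astype(int), 0, W - 1)
    yi = np.clip(np.round(ys).astype(int), 0, H - 1)
    return features[:, yi][:, :, xi]                  # C x S x S crop
```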

SLIDE 82

Example dense captions:

  • black computer monitor
  • man wearing a blue shirt sitting on a chair
  • people are in the background
  • computer monitor on a desk
  • silver handle on the wall
  • man with black hair
  • black bag on the floor
  • red and brown chair
  • wall is white

SLIDE 83

Quantitative Results

                                              Dense captioning mAP   Throughput (fps)
Image Captioning with Region proposals [3]            4.26                 0.31
DenseCap                                              5.39                 4.17

(higher is better for both) Better performance and a 13x speedup!

[3] Deep Visual-Semantic Alignments for Generating Image Descriptions. Karpathy, Fei-Fei, CVPR 2015.

SLIDE 84

SLIDE 85

Find DenseCap on Github!

  • Implemented in Torch
  • Code for training
  • Pretrained models
  • Webcam demo code
SLIDE 86

Finding regions given descriptions

“head of a giraffe”

search
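Retrieval can reuse the trained captioning model: score each candidate region by the log-probability its language model assigns to the query text, then return the best regions. A sketch with hypothetical names (this mirrors how DenseCap ranks regions for a text query, not its literal API):

```python
def find_regions(query_logprob, regions, top_k=5):
    """Rank regions by how likely each one is to emit the query caption.

    query_logprob(region) -> log P(query | region features), e.g. obtained
    by running the caption RNN over the query words for that region.
    """
    scored = sorted(regions, key=query_logprob, reverse=True)
    return scored[:top_k]
```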

SLIDE 87

Finding regions given descriptions

“head of a giraffe”

SLIDE 88

“white tennis shoes”

Finding regions given descriptions

SLIDE 89

“hands holding a phone”

Finding regions given descriptions

SLIDE 90

“front wheel of a bus”

Finding regions given descriptions

SLIDE 91

Outline

  • 1. Matching images with sentences [1]
  • 2. Generating captions for images [2]
  • 3. Generating multiple localized captions [3]

SLIDE 92

Summary

  • 1. Matching images with sentences

“A dog jumping over a hurdle”

Convolutional Network × Recursive Tensor Neural Network → score

“man in blue wetsuit surfing”, “baseball player throwing the ball”

SLIDE 93

Summary

  • 2. Generating captions for images

Convolutional Network → RNN → “a dog jumping over a hurdle”
SLIDE 94

Summary

  • 3. Generating multiple localized captions for images

Convolutional Network → Localization layer

SLIDE 95

Going Forward…

  • Evaluation innovations
  • Dataset/Task innovations
  • Modeling innovations
SLIDE 96

Going Forward…

  • Evaluation innovations
  • Dataset/Task innovations
  • Modeling innovations

(or lack thereof)

SLIDE 97

Evaluation

Test image with 5 reference sentences; candidate generated caption: “A red car with two people next to it.” → compare

SLIDE 98

Fixing evaluation


SLIDE 99

Learning the evaluation metric?

Supervision:

  • sentences for each image should be closer to each other than sentences from other images?
  • or: directly use human judgements as supervision?
SLIDE 100

Going Forward…

  • Evaluation innovations
  • Dataset/Task innovations
  • Modeling innovations
SLIDE 101

Visual Genome

Krishna et al. 2015

108,249 Images
4.2 Million Region Descriptions
1.7 Million Visual Question Answers
2.1 Million Object Instances
1.8 Million Attributes
1.8 Million Relationships
Everything Mapped to WordNet Synsets
SLIDE 102

Visual Madlibs

Visual Madlibs: Fill in the Blank Image Generation and Question Answering. Licheng Yu, Eunbyung Park, Alexander C. Berg, Tamara L. Berg

360,001 focused natural language descriptions for 10,738 images.

More structured Image-NLP tasks:

  • fill-in-the-blank templates
  • multiple-choice evaluation

SLIDE 103

Unambiguous descriptions

Generation and Comprehension of Unambiguous Object Descriptions, Mao et al., 2016

Natural Language Object Retrieval, Hu et al., 2016
Resolving References to Objects in Photographs using the Words-As-Classifiers Model, Schlangen et al., 2016

SLIDE 104

Video Captioning

Translating Videos to Natural Language Using Deep Recurrent Neural Networks, Venugopalan et al., 2015

SLIDE 105

Visual Question Answering

VQA: Visual Question Answering. Agrawal, Lu, Antol, et al., 2015 (www.visualqa.org)
∼0.25M images, ∼0.76M questions, ∼10M answers

Visual7W. Zhu et al., 2015 (web.stanford.edu/~yukez/visual7w)
∼50K images, ∼1M questions/answers

SLIDE 106

Visual Question Answering (in video)

MovieQA: Understanding Stories in Movies through Question-Answering, Tapaswi et al., 2015

MovieQA dataset contains 7702 questions about 294 movies

SLIDE 107

Generating Images based on Text

Generating Images from Captions with Attention, Mansimov et al., 2016 Generative Adversarial Text to Image Synthesis, Reed et al., 2016

SLIDE 108

Going Forward…

  • Evaluation innovations
  • Dataset/Task innovations
  • Modeling innovations
SLIDE 109

Sequence to Sequence paradigm

Sequence to Sequence Learning with Neural Networks, Sutskever et al., 2014
Show and Tell: A Neural Image Caption Generator, Vinyals et al., 2015

VQA: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, Malinowski et al., 2015

SLIDE 110

Sequence to Sequence with attention paradigm

Neural Machine Translation by Jointly Learning to Align and Translate, Bahdanau et al., 2015
Describing Multimedia Content using Attention-based Encoder-Decoder Networks, Cho et al., 2015

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, Xu et al., 2015
Generating Images from Captions with Attention, Mansimov et al., 2016
Aligning where to see and what to tell: image caption with region-based attention and scene factorization, Jin et al., 2016
Image Captioning with Semantic Attention, You et al., 2016
Encode, Review, and Decode: Reviewer Module for Caption Generation, Yang et al., 2016

SLIDE 111

Visual Question Answering (with attention)

Zhu et al., 2016 Yang et al., 2016 Xu and Saenko, 2016 Chen et al., 2016

SLIDE 112

Video Captioning

Translating Videos to Natural Language Using Deep Recurrent Neural Networks, Venugopalan et al., 2015
Sequence to Sequence -- Video to Text, Venugopalan et al., 2015
Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text, Venugopalan et al., 2016
Hierarchical Recurrent Neural Encoder for Video Representation with Application to Captioning, Pan et al., 2015
Jointly Modeling Embedding and Translation to Bridge Video and Language, Pan et al., 2015
The Long-Short Story of Movie Description, Rohrbach et al., 2015
Movie Description, Rohrbach et al., 2016
Long-term Recurrent Convolutional Networks for Visual Recognition and Description, Donahue et al., 2016
Bidirectional Long-Short Term Memory for Video Description, Bin et al., 2016
Describing Videos by Exploiting Temporal Structure, Yao et al., 2015

SLIDE 113

Sequence to Sequence with hard attention paradigm

Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, Xu et al., 2015
Recurrent Models of Visual Attention, Mnih et al., 2014

SLIDE 114

Policy Gradients

(see karpathy.github.io)

SLIDE 115

Dynamic Neural Module Networks paradigm

Learning to Compose Neural Networks for Question Answering. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein, 2016

SLIDE 116

Generative Model Paradigms

  • Autoregressive models (PixelRNNs): van den Oord et al., Kalchbrenner et al.
  • Variational Autoencoders (VAEs): Kingma et al., Rezende et al., Salimans et al.
  • Generative Adversarial Networks (GANs, LAPGANs, DCGANs): Goodfellow et al., Denton et al., Radford et al.

SLIDE 117

Summary

  • connecting images and natural language is exciting!
  • let's fix evaluation
  • major paradigms:
    • seq-to-seq
    • soft attention
    • hard attention
    • dynamic neural module nets
    • generative models: VAE, GAN, autoregressive
  • want to do well in computer vision? Pay attention to what happens in machine translation :)

SLIDE 118

Find related work: www.arxiv-sanity.com

SLIDE 119

Thank you! Questions?