Anticipating the Unseen and Unheard for Embodied Perception

SLIDE 1

Anticipating the Unseen and Unheard for Embodied Perception

Kristen Grauman University of Texas at Austin Facebook AI Research

SLIDE 2

Visual recognition: significant recent progress

Big labeled datasets · Deep learning · GPU technology

[Chart: ImageNet top-5 error (%) over time.]

SLIDE 3

The Web photo perceptual experience

Caltech 101 (2004), Caltech 256 (2006), PASCAL (2007-12), ImageNet (2009), LabelMe (2007), MS COCO (2014), SUN (2010), Places (2014), BSD (2001), Visual Genome (2016)

A “disembodied” well-curated moment in time

SLIDE 4

Egocentric perceptual experience

A tangle of relevant and irrelevant multi-sensory information

SLIDE 5

Big picture goal: Embodied perception

Status quo: Learning and inference with “disembodied” snapshots. On the horizon: Visual learning in the context of action, motion, and multi-sensory observations.

SLIDE 6

Anticipating the unseen and unheard

Towards embodied perception: look-around policies · affordance learning · audio-visual learning

SLIDE 7

Active perception

Bajcsy 1985, Aloimonos 1988, Ballard 1991, Wilkes 1992, Dickinson 1997, Schiele & Crowley 1998, Tsotsos 2001, Denzler 2002, Soatto 2009, Ramanathan 2011, Borotschnig 2011, …

From learning representations to learning policies

SLIDE 8

[Diagram: perception → evidence fusion → action selection loop; hypotheses “mug? bowl? pan?” resolve to “mug”.]

End-to-end active recognition

Jayaraman and Grauman, ECCV 2016, PAMI 2018

Main idea: Deep reinforcement learning approach that anticipates visual changes as a function of egomotion
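As a rough sketch of this loop (not the authors' code; all module names and sizes here are illustrative), a per-view encoder, a recurrent evidence-fusion module, and classifier/actor heads can be wired together like so:

```python
import torch
import torch.nn as nn

class ActiveRecognizer(nn.Module):
    """Illustrative perception / evidence-fusion / action-selection loop."""
    def __init__(self, n_classes=10, n_motions=8, feat_dim=256):
        super().__init__()
        self.feat_dim = feat_dim
        self.view_encoder = nn.Sequential(           # perception: per-view CNN
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim))
        self.fuser = nn.GRUCell(feat_dim, feat_dim)  # evidence fusion over time
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.actor = nn.Linear(feat_dim, n_motions)  # action selection: egomotion scores

    def forward(self, views):
        """views: (T, B, 3, H, W) glimpses gathered so far."""
        belief = views.new_zeros(views.size(1), self.feat_dim)
        for t in range(views.size(0)):               # fuse each glimpse into the belief
            belief = self.fuser(self.view_encoder(views[t]), belief)
        return self.classifier(belief), self.actor(belief)
```

Training would pair a cross-entropy loss on the classifier with a policy-gradient term on the actor, per the deep RL framing above.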

SLIDE 9

[Video: the predicted label evolves over views at T=1, T=2, T=3.]

[Jayaraman and Grauman, ECCV 2016, PAMI 2018]

End-to-end active recognition

SLIDE 10

Goal: Learn to “look around”

Example applications: reconnaissance, search and rescue, recognition.

Can we learn look-around policies for visual agents that are curiosity-driven, exploratory, and generic?

Task predefined vs. task unfolds dynamically.

SLIDE 11

Key idea: Active observation completion

Completion objective: Learn a policy for efficiently inferring (the pixels of) all yet-unseen portions of the environment. The agent must choose where to look before looking there.

Jayaraman and Grauman, CVPR 2018

SLIDE 12

Completing unseen views

Encoder-decoder model to infer unseen viewpoints: the output is a viewgrid of the scene; the “supervision” is the actual 360° scene.

Jayaraman and Grauman, CVPR 2018; Ramakrishnan & Grauman, ECCV 2018
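A minimal sketch of this objective, assuming generic `encoder`/`decoder` networks (the papers' exact viewgrid architecture differs): aggregate the observed glimpses into a belief, decode the full viewgrid, and penalize per-pixel error against the actual 360° scene.

```python
import torch.nn.functional as F

def completion_loss(encoder, decoder, glimpses, full_viewgrid):
    """glimpses: (B, k, C, H, W) observed views; full_viewgrid: (B, N, C, H, W)."""
    B, k = glimpses.shape[:2]
    codes = encoder(glimpses.flatten(0, 1)).view(B, k, -1)  # per-glimpse codes
    belief = codes.mean(dim=1)                              # aggregated scene belief
    pred = decoder(belief)                                  # predicted (B, N, C, H, W) viewgrid
    return F.mse_loss(pred, full_viewgrid)                  # per-pixel error vs. true scene
```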

SLIDE 13

[Architecture diagram: encoder → belief → actor and decoder (visualized model); reward for fast completion.]

Actively selecting observations

Non-myopic: Train to target a budget of observation time

Jayaraman and Grauman, CVPR 2018; Ramakrishnan & Grauman, ECCV 2018
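One hedged way to express “reward for fast completion” under a fixed budget is a REINFORCE-style objective that credits each chosen glimpse with the completion error it removes; this formulation is illustrative, not necessarily the papers' exact reward.

```python
import torch

def look_around_loss(log_probs, errors):
    """log_probs: (T,) log-probs of the chosen glimpses;
    errors: (T+1,) completion MSE after each step, incl. the initial error."""
    rewards = errors[:-1] - errors[1:]                   # error removed per glimpse
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    return -(log_probs * returns.detach()).sum()         # REINFORCE objective
```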

SLIDE 14

Two scenarios

SLIDE 15

Active “look around” results

[Plots: per-pixel MSE (×1000) vs. time on SUN360, ModelNet (seen classes), and ModelNet (unseen classes); methods compared: 1-view, random, large-action, large-action+, peek-saliency*, and ours.]

*Saliency: Harel et al., “Graph-Based Visual Saliency”, NIPS 2007

Learned active look-around policy: quickly grasps the environment, independent of a specific task

Jayaraman and Grauman, CVPR 2018

SLIDE 16

Active “look around” results

SLIDE 17

Active “look around”

Agent’s mental model of the 360° scene evolves with actively accumulated glimpses

Jayaraman and Grauman, CVPR 2018; Ramakrishnan & Grauman, ECCV 2018

SLIDE 18

Agent’s mental model of the 3D object evolves with actively accumulated glimpses

Active “look around”

Jayaraman and Grauman, CVPR 2018; Ramakrishnan & Grauman, ECCV 2018

SLIDE 19

Look-around policy transfer

[Diagram: unsupervised look-around encoder, policy, and decoder; the learned look-around policy is reused with a supervised task-specific encoder, policy, and predictor (e.g., predicting “beach”).]

Plug observation completion policy in for new task
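In code, the transfer recipe the slide implies can be as simple as freezing the pretrained look-around policy and optimizing only the new task head (`policy` and `task_head` are placeholder modules, not the authors' API):

```python
import torch
import torch.nn as nn

def transfer_setup(policy: nn.Module, task_head: nn.Module):
    """Reuse the unsupervised look-around policy; train only the task head."""
    for p in policy.parameters():
        p.requires_grad = False                 # exploration behavior stays fixed
    return torch.optim.Adam(task_head.parameters(), lr=1e-4)
```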

SLIDE 20

Jayaraman and Grauman, CVPR 2018

Look-around policy transfer

Plug the observation completion policy in for the active recognition task (SUN360 scenes, ModelNet objects)

Unsupervised exploratory policy approaches supervised task-specific policy accuracy!

SLIDE 21

Look-around policy transfer

Ramakrishnan et al. 2019

Multiple perception tasks

SLIDE 22

Agent navigates a 3D environment leveraging active exploration

Look-around policy transfer

SLIDE 23

Extreme relative pose from RGB-D scans

Input: Pair of RGB-D scans with little or no overlap. Output: Rigid transformation (R, t) that relates them. Approach: Alternate between completion and matching.

[Diagram: scan 1 and scan 2, each related by the estimated transform.]

Yang et al. CVPR 2019
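A loose sketch of the alternation; the learned completion and matching networks are abstracted here as callables, and `estimate_rigid` could be a standard SVD/Procrustes fit over the matched points:

```python
import numpy as np

def relative_pose(scan1, scan2, complete, match, estimate_rigid, iters=3):
    """Alternate between completing each scan and matching the completions."""
    R, t = np.eye(3), np.zeros(3)        # start from the identity pose
    for _ in range(iters):
        full1 = complete(scan1)          # hallucinate geometry unseen in scan 1
        full2 = complete(scan2)          # ... and in scan 2
        pairs = match(full1, full2)      # 3D correspondences across completions
        R, t = estimate_rigid(pairs)     # refit rigid (R, t) to the matches
    return R, t
```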

SLIDE 24

[Figure: ground truth (GT) vs. ours vs. 4PCS.]

Outperforms existing methods on SUNCG / Matterport / ScanNet, particularly in the small-overlap case (10% to 50%)

Yang et al. CVPR 2019

Extreme relative pose from RGB-D scans

SLIDE 25

360° video: a “look around” problem for people

Where to look when?

Control by mouse

SLIDE 26

AutoCam

Input: 360° video → Output: NFOV video

Automatically select FOV and viewing direction

[Su & Grauman, ACCV 2016, CVPR 2017]

SLIDE 27

Anticipating the unseen and unheard

Towards embodied perception: look-around policies · affordance learning · audio-visual learning

SLIDE 28

Object interaction

Turn on · Replace lightbulb · Move lamp · Increase height

[Diagram: embodied perception system → object manipulation.]

SLIDE 29

Toggleable · Adjustable · Replaceable · Movable

What actions does an object afford?

[Diagram: embodied perception system → object manipulation.]

SLIDE 30

Current approaches: affordance as semantic segmentation, e.g., labeling “holdable” regions. Captures annotators’ expectations of what is important.

Sawatzky et al. (CVPR 17), Nguyen et al. (IROS 17), Roy et al. (ECCV 16), Myers et al. (ICRA 15), …

SLIDE 31

…but real human behavior is complex

SLIDE 32

How to learn object affordances?

Manually curated affordances vs. real human interactions?

Sawatzky et al. (CVPR 17), Nguyen et al. (IROS 17), Roy et al. (ECCV 16), Myers et al. (ICRA 15), …

SLIDE 33

Our idea: Learn directly by watching people (video)

[Nagarajan et al. 2019]

SLIDE 34

[Diagram: video frames t=0…T feed an LSTM; the aggregated state for the action drives an action classifier (“open”); an anticipation network maps the object at rest to this state.]

Learning affordances from video

[Nagarajan et al. 2019]


SLIDE 35

[Diagram: from video (t=0…T), the action classifier and anticipation network hypothesize an action a = “pullable”; activations × gradients yield the “pullable” hotspot map.]

Extracting interaction hotspot maps

[Nagarajan et al. 2019]

Activation mapping to identify responsible spatial regions
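The “activations × gradients” recipe is essentially Grad-CAM applied to the anticipated action score. A hedged sketch follows (hook placement and layer choice are assumptions, not the paper's exact code):

```python
import torch.nn.functional as F

def hotspot_map(model, image, action_idx, conv_layer):
    """Backprop an action's score to a conv layer; weight activations by gradients."""
    feats, grads = {}, {}
    conv_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(image.unsqueeze(0))[0, action_idx]   # anticipated action score
    score.backward()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)      # channel-wise gradient pooling
    cam = F.relu((w * feats["a"]).sum(dim=1))          # gradient-weighted activation map
    return cam / (cam.max() + 1e-8)                    # normalized hotspot map
```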

SLIDE 36

[Figure: action recognition + Grad-CAM vs. ours.]

Wait, is this just action recognition?

No: the hotspot anticipation model maps an object at rest to its potential for interaction.

SLIDE 37

Evaluating interaction hotspots

OPRA (Fang et al., CVPR 18), EPIC Kitchens (Damen et al., ECCV 18), MS COCO (Lin et al., ECCV 14)

Train on video datasets, then generate heatmaps on novel images, even from unseen categories

SLIDE 38

[Charts: results on OPRA and EPIC data, compared against weakly and strongly supervised methods.]

Up to 24% increase vs. weakly supervised methods

Given a static image of an object at rest, infer affordance regions

Results: interaction hotspots

[Nagarajan et al. 2019]

SLIDE 39

Results: interaction hotspots

SLIDE 40

Better low-shot object recognition by anticipating object function

Results: hotspots for recognition

SLIDE 41

Anticipating the unseen and unheard

Towards embodied perception: look-around policies · affordance learning · audio-visual learning

SLIDE 42

[Illustration: object sounds such as “woof”, “meow”, “ring”, “clatter”.]

Goal: a repertoire of objects and their sounds. Challenge: a single audio channel mixes the sounds of multiple objects.

Listening to learn


SLIDE 43

Learning to separate object sounds

Our idea: Leverage visual objects to learn from unlabeled video with multiple audio sources

[Diagram: unlabeled video → disentangle → object sound models (violin, dog, cat).]

[Gao, Feris, & Grauman, ECCV 2018]

Apply to separate simultaneous sounds in novel videos
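One hedged reading of “object sound models” is a set of NMF spectrogram bases grouped by the visual object they were associated with; separation then reconstructs each object's sound from its bases. In this sketch the basis-to-object grouping is given as input, standing in for the visually guided learning the paper performs:

```python
import numpy as np
from sklearn.decomposition import NMF

def separate(mix_spec, basis_to_object, n_bases=16):
    """mix_spec: (freq, time) magnitude spectrogram; basis_to_object: n_bases labels."""
    nmf = NMF(n_components=n_bases, max_iter=400)
    W = nmf.fit_transform(mix_spec)              # (freq, n_bases) spectral bases
    H = nmf.components_                          # (n_bases, time) activations
    sounds = {}
    for obj in set(basis_to_object):             # reconstruct each object's sound
        idx = [i for i, o in enumerate(basis_to_object) if o == obj]
        sounds[obj] = W[:, idx] @ H[idx]         # partial reconstruction (freq, time)
    return sounds
```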

SLIDE 44

Train on 100,000 unlabeled multi-source video clips, then separate audio for a novel video

Results: audio-visual source separation

[Gao et al. ECCV 2018]

Dataset: AudioSet [Gemmeke et al. 2017]

SLIDE 45

Train on 100,000 unlabeled multi-source video clips, then separate audio for a novel video

Results: audio-visual source separation

[Gao et al. ECCV 2018]

SLIDE 46

Spatial effects in audio

Cues for spatial hearing:

  • Interaural time difference (ITD)
  • Interaural level difference (ILD)
  • Spectral detail (from pinna reflections)

Image Credit: Michael Mandel

Spatial effects absent in monaural audio

SLIDE 47

[Diagram: “lift” monaural audio (+ visual cues) to binaural.]

“Lift” mono audio to spatial audio via visual cues

[Gao & Grauman, CVPR 2019]

Our idea: 2.5D visual sound

SLIDE 48

Our idea: 2.5D visual sound

[Diagram: visual frame (spatial cues) + mono audio spectrogram → Mono2Binaural → predicted left and right channels.]

“Lift” mono audio to spatial audio via visual cues

[Gao & Grauman, CVPR 2019]

SLIDE 49

New: FAIR-Play dataset

Capture ~5 hours of video and binaural sound in a music room; binaural microphone rig linked to a camera and a monaural mic

[Gao & Grauman, CVPR 2019] https://github.com/facebookresearch/FAIR-Play

SLIDE 50

Datasets

FAIR-Play:
  • 10 musical instruments, e.g., cello, guitar, harp, ukulele, trumpet
  • ~5 hours of performances

Binaural Ambisonics Datasets [Morgado et al. NIPS 2018]:
  • Streets, random YouTube
  • ~1000 360° video clips
  • Converted to binaural audio using a decoder

SLIDE 51

[Video: monaural input with predicted left and right channels.]

Results: 2.5D visual sound

vision.cs.utexas.edu/projects/2.5D_visual_sound/

[Gao & Grauman, CVPR 2019]

Listen with headphones!

SLIDE 52

[Video: monaural input with predicted left and right channels.]

Results: 2.5D visual sound

vision.cs.utexas.edu/projects/2.5D_visual_sound/

[Gao & Grauman, CVPR 2019]

SLIDE 53

Binaural audio generation error, all four datasets

Binaural audio offers “embodied” 3D sensation.

User studies: perceived realism

Results: 2.5D visual sound

Ambisonics: Morgado et al. NIPS 2018

…and improves sound source separation!

[Gao & Grauman, CVPR 2019]

SLIDE 54

Anticipating the unseen and unheard

Towards embodied perception: look-around policies · affordance learning · audio-visual learning

SLIDE 55

Summary

Towards embodied perception

– self-supervised learning via anticipation
– learning to autonomously direct the camera
– multi-sensory observations (audio, motion, visual)
– object interaction from video

Collaborators: Ruohan Gao, Dinesh Jayaraman, Tushar Nagarajan, Santhosh Ramakrishnan, Christoph Feichtenhofer
