COMP 150: Probabilistic Robotics for Human-Robot Interaction - PowerPoint PPT Presentation



SLIDE 1

COMP 150: Probabilistic Robotics for Human-Robot Interaction

Instructor: Jivko Sinapov www.cs.tufts.edu/~jsinapov

SLIDE 2

Language Acquisition

How would you describe this object?
It is a small orange spray can.
My model of the word ‘orange’ has improved!

SLIDE 3

Something fun...

SLIDE 4

Announcements

SLIDE 5

Project Deadlines

  • Project Presentations: Apr 23 and 25
  • Final Report + Deliverables: May 10
  • Deliverables:

– Presentation slides + videos
– Final Report (PDF)
– Source code (link to github repositories)

SLIDE 6

Presentation Guidelines

  • Length:
– Individual projects: 5-minute talk + 2 min for questions
– Team projects: 8-minute talk + 3 min for questions
  • Practice! Time your presentation when you practice, and use a timer during the actual presentation as well.
  • My advice: find another group and practice with each other.
  • Format: Google Slides (so that we don’t have to switch computers)

SLIDE 7

Language Acquisition

How would you describe this object?
It is a small orange spray can.
My model of the word ‘orange’ has improved!

SLIDE 8

The Turing Test

SLIDE 9

The Turing Test

SLIDE 10

The Turing Test

SLIDE 11

The First ChatBot (~1966)

SLIDE 12

ELIZA

  • http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm

SLIDE 13

Discussion: what is missing from programs like ELIZA?

SLIDE 14

Natural Language Processing

  • The study of algorithms and data structures used to manipulate text and text-like data
  • Applications in information retrieval, web search, dialogue agents, text mining, etc.
  • Traditionally, not concerned with connecting semantic representations to the real world

SLIDE 15

Example: Computing Parse Trees

SLIDE 16

Example: Document Classification

https://abbyy.technology/_media/en:features:classification- scheme.png

SLIDE 17

Example: Word Embeddings

https://image.slidesharecdn.com/introductiontowordembeddings-160405062343/95/a-simple-introduction-to-word-embeddings-5-638.jpg?cb=1494520542

SLIDE 18

The Symbol Grounding Problem

“How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?”

– Stevan Harnad, 1990
SLIDE 19

Deb Roy, “Grounding Language in the World: Schema Theory Meets Semiotics” (2005)

SLIDE 20

Circular Definitions

SLIDE 21

Grounding

SLIDE 22

Sensor Projections

SLIDE 23

Sensor Projections

Input Image → Color Histogram

SLIDE 24

Transformer Projection

SLIDE 25

Transformer Projection

Color Histogram → Entropy of Histogram

SLIDE 26

Categorizer

Entropy of Histogram → “Multicolored”
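The categorizer step can be sketched as a simple entropy threshold on a color histogram: a histogram spread over many bins has high entropy, which we take as evidence for “multicolored”. This is only an illustrative sketch; the function names and the threshold value are our assumptions, not the implementation behind the slide.

```python
import numpy as np

def histogram_entropy(hist):
    """Shannon entropy (in bits) of a histogram, after normalization."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # empty bins contribute nothing to entropy
    return float(-(p * np.log2(p)).sum())

def categorize(hist, threshold=3.0):
    # Hypothetical decision rule: high entropy -> many colors present.
    return "multicolored" if histogram_entropy(hist) > threshold else "plain"
```

A uniform 64-bin histogram has the maximum entropy log2(64) = 6 bits and is labeled “multicolored”, while a histogram dominated by one color bin has near-zero entropy.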

SLIDE 27

Action Projector

SLIDE 28
SLIDE 29
SLIDE 30

Schemas for Actions

SLIDE 31

Schemas for Objects

SLIDE 32

Spatial Relations

SLIDE 33

Deb Roy’s Definition of Grounding

  • “I define grounding as a causal-predictive cycle by which an agent maintains beliefs about its world.” (p. 8)
  • “An agent’s basic grounding cycle cannot require mediation by another agent.” (p. 9)
  • “An autonomous robot simply cannot afford to have a human in the loop interpreting sensory data on its behalf.” (p. 9)

SLIDE 34
  • “Cyclic interactions between robots and their environment, when well designed, enable a robot to learn, verify, and use world knowledge to pursue goals. I believe we should extend this design philosophy to the domain of language and intentional communication.” (p. 5)

SLIDE 35
  • “causality alone is not a sufficient basis for grounding beliefs. Grounding also requires prediction of the future with respect to the agent’s own actions.” (p. 10)
  • “The problem with ignoring the predictive part of the grounding cycle has sometimes been called the ‘homunculus problem’.”

SLIDE 36
SLIDE 37

Take Home Message

Language should be grounded in terms of the robot’s own perceptual and sensorimotor capabilities

SLIDE 38

Thomason, J., Sinapov, J., Svetlik, M., Stone, P., and Mooney, R. (2016). Learning Multi-Modal Grounded Linguistic Semantics by Playing I, Spy. In Proceedings of the 2016 International Joint Conference on Artificial Intelligence (IJCAI).

SLIDE 39

Motivation: Grounded Language Learning

Robot, fetch me the green empty bottle

SLIDE 40

Exploratory Behaviors in our Robot

SLIDE 41

Video

SLIDE 42

Video

SLIDE 43

Video

SLIDE 44

Sensorimotor Feature Extraction

[Figure: joint efforts (haptics) plotted over time]

SLIDE 45

Sensorimotor Contexts

[Figure: behaviors (grasp, lift, hold, lower, drop, push, press, look) crossed with sensory modalities (proprioception, haptics, audio, shape, color, VGG)]

SLIDE 46

Sensorimotor Contexts

[Figure: behaviors (grasp, lift, hold, lower, drop, push, press, look) crossed with sensory modalities (proprioception, haptics, audio, shape, color, VGG)]

SLIDE 47

Feature Extraction: Color

Color Histogram (4 x 4 x 4 = 64 bins)

Object Segmentation
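The 4 x 4 x 4 = 64-bin color histogram on this slide can be sketched in a few lines. The sketch assumes the object has already been segmented, so `pixels` holds only the object's RGB values; the function name and the normalization step are our illustrative assumptions.

```python
import numpy as np

def color_histogram(pixels, bins_per_channel=4):
    """64-bin (4 x 4 x 4) RGB histogram over an object's segmented pixels.

    pixels: (N, 3) array of RGB values in [0, 255].
    Returns a normalized histogram flattened to length 64.
    """
    # 4 equal-width bins per channel: edges at 0, 64, 128, 192, 256
    edges = np.linspace(0, 256, bins_per_channel + 1)
    hist, _ = np.histogramdd(np.asarray(pixels, dtype=float),
                             bins=(edges, edges, edges))
    hist = hist.ravel()
    return hist / hist.sum()
```

Two pixels, pure black and pure white, land in the first and last of the 64 bins respectively, each with mass 0.5 after normalization.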

SLIDE 48

Feature Extraction: Shape

3D Object Point Cloud → Histogram of Shape Features

SLIDE 49

Joint-torque values for all joints → Joint-Torque Features

Feature Extraction: Haptics
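One common way to turn a variable-length joint-torque recording into a fixed-length haptic feature vector is to average each joint's torque over a fixed number of temporal bins. The sketch below assumes that scheme; the exact feature extraction in the cited papers may differ.

```python
import numpy as np

def haptic_features(torques, n_bins=10):
    """Fixed-length features from a joint-torque recording.

    torques: (T, J) array -- torque readings for J joints over T steps.
    The series is split into n_bins temporal windows; each window is
    averaged per joint, giving an (n_bins * J)-dimensional vector
    regardless of how long the behavior took.
    """
    torques = np.asarray(torques, dtype=float)
    windows = np.array_split(np.arange(torques.shape[0]), n_bins)
    means = np.stack([torques[w].mean(axis=0) for w in windows])
    return means.ravel()
```

Because the windows scale with the recording length, a slow `lift` and a fast `drop` both map to vectors of the same dimensionality, which lets one classifier handle all behaviors.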

SLIDE 50

Feature Extraction: Audio

Audio spectrogram → Spectro-temporal Features
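A minimal sketch of spectro-temporal features, assuming the standard recipe: compute a magnitude spectrogram, then average-pool it onto a coarse frequency-by-time grid so every recording yields a same-length vector. The FFT size, hop, and grid shape here are illustrative assumptions, not the parameters used in the papers.

```python
import numpy as np

def spectrotemporal_features(signal, n_fft=256, hop=128, grid=(8, 8)):
    """Coarse spectro-temporal features from a mono audio signal."""
    window = np.hanning(n_fft)
    # Hann-windowed frames -> magnitude spectrogram of shape (freq, time)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)).T
    # Average-pool onto a fixed (freq_bins x time_bins) grid
    f_groups = np.array_split(np.arange(spec.shape[0]), grid[0])
    t_groups = np.array_split(np.arange(spec.shape[1]), grid[1])
    pooled = np.array([[spec[np.ix_(f, t)].mean() for t in t_groups]
                       for f in f_groups])
    return pooled.ravel()  # fixed length: 8 x 8 = 64
```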

SLIDE 51

Feature Extraction: VGG

SLIDE 52

Feature Extraction: VGG

SLIDE 53

Data from a single exploratory trial

[Figure: behaviors (grasp, lift, hold, lower, drop, push, press, look) crossed with sensory modalities (proprioception, haptics, audio, shape, color, VGG), x 5 per object]

SLIDE 54

Category Recognition Overview

Category Recognition Models

Interaction with Object → Sensorimotor Feature Extraction → Category Estimates (Empty? Red? Container?)

Sinapov, J., Schenck, C., and Stoytchev, A. (2014). Learning Relational Object Categories Using Behavioral Exploration and Multimodal Perception. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).

SLIDE 55

Key Questions

How can the robot learn object-related words from everyday human users?
Do human users use non-visual object descriptors when referring to objects?

SLIDE 56

Object Exploration Dataset

  • 32 common household and office items
  • Each object was explored a total of 5 times with 7 different behaviors
  • The robot perceived objects using the visual, auditory, and haptic sensory modalities

Thomason, J., Sinapov, J., Svetlik, M., Stone, P., and Mooney, R. (2016). Learning Multi-Modal Grounded Linguistic Semantics by Playing I, Spy. In Proceedings of the 2016 International Joint Conference on Artificial Intelligence (IJCAI).

SLIDE 57

Our attempt: I-Spy game

SLIDE 58

Learning Words via Game-play

Human: “an empty metallic aluminum container”

SLIDE 59

Semantic Parsing

SLIDE 60

Example Words for an Object

SLIDE 61

Learning Words via Game-play

SLIDE 62

Learning Words via Game-play

Human: “a tall blue cylindrical container”

SLIDE 63

Learning Words via Game-play

Robot: “open half-full container”

SLIDE 64

Asking Verification Questions

SLIDE 65

Results

SLIDE 66

[Table: F-measure improvement as a result of adding non-visual modalities, for words including “can”, “tall”, “half-full”, and “pink”; improvements include 0.857, 0.516, and 0.463]
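For reference, the F-measure reported here is the harmonic mean of precision and recall; a minimal implementation (the helper name is ours):

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # no true positives at all
    return 2 * precision * recall / (precision + recall)
```

A classifier that is always right scores 1.0; one that never retrieves a correct object scores 0. The harmonic mean punishes an imbalance between the two, so a word model cannot score well by being precise but recalling almost nothing.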

SLIDE 67

Summary of Experiment

  • The robot learned over 80 words through interactive game play
  • The robot's word representations were grounded in multiple behaviors and sensory modalities
  • Future Work:
– Active action selection when classifying a new object
– Active action selection when learning new words
– Actively seek humans out for help with learning about objects
SLIDE 68

“Opportunistic” Active Learning

Thomason, J., Padmakumar, A., Sinapov, J., Hart, J., Stone, P., and Mooney, R. (2017). Opportunistic Active Learning for Grounding Natural Language Descriptions. In Proceedings of the 1st Annual Conference on Robot Learning (CoRL 2017).

SLIDE 69

“Opportunistic” Active Learning

Thomason, J., Padmakumar, A., Sinapov, J., Hart, J., Stone, P., and Mooney, R. (2017). Opportunistic Active Learning for Grounding Natural Language Descriptions. In Proceedings of the 1st Annual Conference on Robot Learning (CoRL 2017).

SLIDE 70

What actions should the robot perform when learning a new word?

  • Baseline: perform all actions on a set of labeled objects and estimate which ones work well
  • But can we do better?
SLIDE 71

Sensorimotor Word Embeddings

Sinapov, J., Schenck, C., and Stoytchev, A. (2014). Learning Relational Object Categories Using Behavioral Exploration and Multimodal Perception. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).

SLIDE 72

SLIDE 73

Sensorimotor Word Embeddings

Sinapov, J., Schenck, C., and Stoytchev, A. (2014). Learning Relational Object Categories Using Behavioral Exploration and Multimodal Perception. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).

SLIDE 74

Behavior Scores for Words

SLIDE 75

Word Embeddings

Thomason, J., Sinapov, J., Stone, P., and Mooney, R. (2018). Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. In Proceedings of the 32nd Conference of the Association for the Advancement of Artificial Intelligence (AAAI).

SLIDE 76

Word Embeddings

Thomason, J., Sinapov, J., Stone, P., and Mooney, R. (2018). Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. In Proceedings of the 32nd Conference of the Association for the Advancement of Artificial Intelligence (AAAI).

SLIDE 77

Word Embeddings

Thomason, J., Sinapov, J., Stone, P., and Mooney, R. (2018). Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. In Proceedings of the 32nd Conference of the Association for the Advancement of Artificial Intelligence (AAAI).

SLIDE 78

Results

SLIDE 79

Results

SLIDE 80

Putting it all together...

Thomason, J., Padmakumar, A., Sinapov, J., Walker, N., Jiang, Y., Yedidsion, H., Hart, J., Stone, P., and Mooney, R. (2019). Improving Grounded Natural Language Understanding through Human-Robot Dialog. Accepted to the 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019.

SLIDE 81

Putting it all together...

Thomason, J., Padmakumar, A., Sinapov, J., Walker, N., Jiang, Y., Yedidsion, H., Hart, J., Stone, P., and Mooney, R. (2019). Improving Grounded Natural Language Understanding through Human-Robot Dialog. Accepted to the 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019.

SLIDE 82

Discussion

  • What are some of the limitations of these approaches?
  • When will they fail?
SLIDE 83

Project Breakout

SLIDE 84
SLIDE 85