
Human-Robot Dialogue and Collaboration in Search and Navigation



  1. Human-Robot Dialogue and Collaboration in Search and Navigation. Claire Bonial, Stephanie M. Lukin, Ashley Foots, Cassidy Henry, Matthew Marge, Kimberly A. Pollard, Ron Artstein, David Traum & Clare R. Voss. LREC 2018, 7 May 2018

  2. Road Map
How do we get from actions expressed in unconstrained natural language to robot execution in the physical world?
1. Intro: Motivation and Research Overview
2. Corpus Collection: Our series of experiments
3. Today's Focus – Corpus Features:
   – Annotations
   – Actions in the data
4. Conclusions and Future Work

  3. 1. Introduction
Motivation: robots must be able to communicate effectively with humans in shared tasks (e.g., search-and-rescue)
• Ideally, two-way spoken dialogue
[Illustration with speech bubbles: "What do you see ahead?" / "I see a hole in a brick wall…"]

  4. 1. Introduction
How might people talk to a robot in a collaborative search/navigation task? (Knepper et al. 2015)
– Wizard-of-Oz (WoZ) methodology: a human "wizard" stands in for automated components
– Phased WoZ has been used in virtual human dialogue systems (DeVault et al. 2014; Artstein et al. 2015)
– WoZ objectives: refine and evaluate the domain and provide training data for automated natural language understanding

  5. 1. Introduction
[Research timeline: Experiment 1, exploratory data collection → Experiment 2, automate some "wizard" labor → Beyond, full automation of the "wizard"]


  7. 2. Corpus Collection
[Diagram: the human participant (Commander) issues verbal commands to, and receives views from, a robot that is remote from the Commander] (Marge et al., IEEE RO-MAN 2016)

  8. 2. Corpus Collection
• Dialogue Manager (DM-Wizard) is the "brains" of the robot in natural language interactions
• Robot Navigator (RN, an experimenter working "behind the scenes") moves the robot based on instructions from the DM-Wizard (the pipeline is sketched below)
[Diagram: Participant (Commander) → verbal commands → DM-Wizard → RN moves → Robot; the Commander views the robot's feed]
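To make the division of labor concrete, here is a minimal Python sketch of the message flow. In the experiments both roles were played by human wizards, so the function names and stub logic below are hypothetical placeholders illustrating the pipeline shape, not the actual system.

    # Participant speech -> DM-Wizard -> simplified text -> Robot Navigator -> robot.
    # Both "wizards" were humans; these stubs only illustrate the flow of messages.

    def dm_wizard(participant_utterance: str) -> str:
        """Interpret unconstrained speech; if clear, emit a simplified
        instruction for the RN, otherwise ask the participant to clarify."""
        if "turn" in participant_utterance and "left" in participant_utterance:
            return "turn left 90 degrees"               # translation sent to the RN
        return "CLARIFY: which direction do you mean?"  # sent back to the participant

    def robot_navigator(dm_instruction: str) -> str:
        """Tele-operate the robot to execute the DM-Wizard's instruction,
        then report completion, as the RN did over the audio stream."""
        return "done"

    instruction = dm_wizard("turn ninety degrees to the left")
    if not instruction.startswith("CLARIFY"):
        print(robot_navigator(instruction))  # -> done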

  9. 2. Corpus Collection
[Graphical slide; no text content extracted]

  10. 3. Corpus Details
Preparing a freely available release of Experiment 1 and 2 data:
• 20 participants
• 20 hours of experimental interactions (audio transcribed and aligned with text chat messages)
• 3,573 participant utterances totaling 18,336 words (tokens)
• 13,550 Dialogue Manager Wizard words in text messages

  11. 3. Corpus: Annotations
[Graphical slide; no text content extracted]

  12. 3. Corpus: Actions in the Data
Most dialogue moves are commands:
• Send-image
• Rotate
• Drive
Each clear, unambiguous instruction is realized in 3 ways:
1. Spoken natural language instruction (from the participant)
2. Simplified text message instruction (the DM-Wizard's translation for the Robot Navigator)
3. Actual executed action (i.e., data from the robot: turns, changes in location, images sent)
➢ Current work: developing an annotation schema relating these three realizations (one possible encoding is sketched below)
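As an illustration of what relating the three realizations could look like in data, here is a minimal Python sketch; the class and field names (InstructionRecord, RobotAction, spoken, translation, executions) are hypothetical and are not the annotation schema under development.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RobotAction:
        """One executed action logged by the robot (hypothetical fields)."""
        kind: str         # e.g. "rotate", "drive", "send-image"
        parameters: dict  # e.g. {"degrees": 90, "direction": "left"}

    @dataclass
    class InstructionRecord:
        """Links the three realizations of a single instruction."""
        spoken: str                            # participant's spoken NL instruction
        translation: str                       # DM-Wizard's simplified text for the RN
        executions: List[RobotAction] = field(default_factory=list)

    # Example built from the exchange on the next slide:
    record = InstructionRecord(
        spoken="turn ninety degrees to the left",
        translation="turn left 90 degrees",
        executions=[RobotAction("rotate", {"degrees": 90, "direction": "left"})],
    )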

  13. 3. Corpus: Actions in the Data
Four parallel streams: Participant (Audio Stream 1), DM->Participant (Chat Room 1), DM->RN (Chat Room 2), RN (Audio Stream 2). Example exchange:
Participant: turn ninety degrees to the left
DM->Participant: ok
DM->RN: turn left 90 degrees
RN: turning … done
DM->Participant: done

  14. 3. Corpus: Actions in the Data
Participant: take a picture of what's behind you
DM->RN: turn 180, photo
RN: executing...
DM->Participant: image sent
Other participant phrasings of similar requests: "Move back to the wall behind you"; "Can you go around and take a photo behind the TV?"

  15. 3. Corpus: Actions in the Data
Participant: go into the center of the room in front of you and then take a picture at the <pause> east south west and north position
DM->RN: move into the center of the room in front of you, take photos at east, south, west, north positions
RN: executing... done
DM->Participant: done
Participant: go into the room behind you and do the same

  16. 4. Conclusions & Open Questions
• The phased Wizard-of-Oz approach allows us to develop technology in a data-driven fashion, observing the range of language people use when addressing a robot
• This language can be translated into a tractable set for an automated system while maintaining good coverage of the language in the domain
– Preliminary classifier: automates the DM-Wizard's first response to a participant instruction (either translating it for the RN or asking the participant for clarification)
– Uses string divergence measures (a minimal sketch follows)
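The slides do not detail the classifier, so the following is only a minimal sketch of a string-divergence approach, assuming a nearest-neighbor decision over normalized edit similarity via the standard-library difflib; the labeled examples and function names are invented for illustration and are not from Gervits et al. 2018.

    import difflib

    # Invented miniature "training" set: instruction -> the DM-Wizard's
    # first response type (translate to the RN vs. ask for clarification).
    LABELED = [
        ("turn left ninety degrees", "translate"),
        ("move forward five feet", "translate"),
        ("take a picture", "translate"),
        ("go over there", "clarify"),
        ("do that again", "clarify"),
    ]

    def similarity(a: str, b: str) -> float:
        """Normalized string similarity in [0, 1] (1 minus divergence)."""
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def first_response_action(utterance: str) -> str:
        """Label a new instruction by its most similar labeled neighbor."""
        _, label = max(LABELED, key=lambda pair: similarity(utterance, pair[0]))
        return label

    print(first_response_action("turn right ninety degrees"))  # -> translate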

  17. 4. Conclusions & Open Questions
Gervits et al., ACL-Demos 2018 [remainder of slide was graphical]

  18. 4. Conclusions & Open Questions
The annotation schema under development will make explicit the relation between NL instructions and physical execution; questions remain:
• Can the natural-language-to-execution mapping be acquired through machine learning?
• Is a more complex spatial/dialogue model needed, and/or a symbolic representation?
We invite this community to use this data in considering aspects of action modeling in robots.

  19. Thank you

  20. Collaborators
Project Members at ARL:
Claire Bonial – Linguistics (Adelphi)
Ashley Foots – Audiology (APG)
Cory Hayes – Human-Robot Interaction (Adelphi)
Susan Hill – Human-Robot Interaction (APG)
Stephanie Lukin – Computational Linguistics (ARL West)
Matthew Marge – Computational Linguistics (Adelphi)
Kimberly Pollard – Neurobiology (ARL West)
Clare Voss – Computer Sci., Linguistics (Adelphi)
Cassidy Henry – Linguistics (SMART Scholar)
Project Members at USC/Institute for Creative Technologies:
Ron Artstein – Linguistics
Anton Leuski – Computer Science
David Traum – Computational Linguistics
And a host of interns!
More from David Traum: LREC May 9, Session O4: Dialogue
