Why and how to model multi-modal interaction for a mobile robot companion




  1. Why and how to model multi-modal interaction for a mobile robot companion. Shuyin Li, Julia Peltason and Britta Wrede, Bielefeld University, Germany

  2. Outline: Introduction to Human-Robot Interaction (HRI); Observations in a user study; A multi-modal interaction framework; Summary

  3. Introduction: HRI with a personal robot. Robot characteristics and the requirements they place on the system: a situated robot should be aware of environmental changes (situation awareness).

  4. Introduction: HRI with a personal robot (continued): a situated robot should be aware of environmental changes (situation awareness); users expect human-like behaviors from an anthropomorphic robot (social behaviors).

  5. Introduction: HRI with a personal robot (continued): a situated robot should be aware of environmental changes (situation awareness); users expect human-like behaviors from an anthropomorphic robot (social behaviors); both users and robots have visual access to their embodied interaction partner's body (multi-modal interaction).

  6. (repeats slide 5)

  7. (repeats slide 5)

  8. Outline: Introduction to Human-Robot Interaction (HRI); Observations in a user study (BIRON@Home, quiet speakers, meta-commentators); A multi-modal interaction framework; Summary

  9. The user study: BIRON@Home

  10. The user study: BIRON@Home. Experimental setup with BIRON: 14 subjects, each interaction about 7 min. Non-task behaviors of BIRON: 1. situation awareness, 2. social behavior. Only output modality of BIRON: speech.

  11. The user study: situation awareness. Cues used by BIRON: face recognition, sound source detection, human leg detection.
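The slide lists the cues but not how they are combined. Below is a minimal sketch, not taken from the talk, of how such detector outputs could be fused into a coarse attention estimate; the class name, field names, and scoring weights are hypothetical.

```python
# Minimal sketch (not from the talk): fuse the three situation-awareness cues
# named on slide 11 -- face recognition, sound source detection, leg detection --
# into a single coarse attention estimate. All names and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Percepts:
    face_detected: bool
    sound_source_detected: bool
    legs_detected: bool

def attention_level(p: Percepts) -> str:
    """Map raw detector flags to a coarse attention state."""
    score = 2 * p.face_detected + p.sound_source_detected + p.legs_detected
    if score >= 3:
        return "person-addressing-robot"   # face plus at least one other cue
    if score >= 1:
        return "person-nearby"             # some evidence of a person
    return "nobody-present"

print(attention_level(Percepts(face_detected=True, sound_source_detected=False, legs_detected=True)))
# -> person-addressing-robot
```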

  12. Observation I: quiet speakers

  13. Observation I: quiet speakers. The problem: no means to communicate pre-interaction attention. Possible reason: inappropriateness of the speech modality ("legs detected, face detected, face lost again, face detected, ..."). Solution: use non-verbal modalities, because they are suitable for representing static information that is only occasionally updated.
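A minimal sketch of the solution proposed on this slide: report pre-interaction attention through a non-verbal channel and only when the state changes, instead of verbalizing every detector event. The output channel names ("eyes", "head") are hypothetical and not BIRON's actual interface.

```python
# Sketch of the quiet-speaker fix: emit non-verbal feedback only on attention
# state changes, rather than speaking every detector event aloud.
# Channel names are hypothetical, not BIRON's actual API.

def nonverbal_attention_feedback(previous_state: str, current_state: str) -> list[str]:
    """Return non-verbal behaviors to show, only when the attention state changes."""
    if current_state == previous_state:
        return []                          # static information: nothing new to signal
    if current_state == "person-nearby":
        return ["eyes: open"]              # subtle signal that the robot has noticed someone
    if current_state == "person-addressing-robot":
        return ["head: raise", "eyes: look-at-person"]
    return ["eyes: idle"]

print(nonverbal_attention_feedback("nobody-present", "person-nearby"))
# -> ['eyes: open']
```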

  14. The user study: social behavior. Example dialogue:
      User: Follow me.
      BIRON: OK, I'm following you.
      User: This is a cup.
      BIRON: It's nice. You are really doing very well! (performance remark)
      User: (laughs)
      BIRON: Come here.

  15. Observation II: meta-commentators

  16. Observation II: meta-commentators. The problem: users reply to social comments using out-of-vocabulary words. Possible reason: reciprocity and obtrusiveness of the speech modality. Solution: make remarks using non-verbal modalities, because they are unobtrusive and do not impose a strong obligation to reply.
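A minimal sketch of this solution: deliver social remarks through an unobtrusive non-verbal channel so the user feels no obligation to reply verbally. The channel names and the remark text are hypothetical.

```python
# Sketch of the meta-commentator fix: route social remarks to an unobtrusive
# non-verbal channel instead of speech. Channel names are hypothetical.

def deliver_social_remark(remark: str, obtrusive: bool = False) -> dict:
    """Plan the output for a social remark.

    obtrusive=False expresses the remark non-verbally only (the behavior this
    slide suggests); obtrusive=True reproduces the speech-only behavior that
    triggered the meta-commentator problem in the user study.
    """
    if obtrusive:
        return {"speech": remark, "face": None}
    return {"speech": None, "face": "smile-and-nod"}   # unobtrusive acknowledgment

print(deliver_social_remark("You are really doing very well!"))
# -> {'speech': None, 'face': 'smile-and-nod'}
```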

  17. Outline: Introduction to Human-Robot Interaction (HRI); Observations in a user study; A multi-modal interaction framework (currently popular approaches, our approach); Summary

  18. Interaction framework: existing works. Currently popular approaches address differences between multi-modal information by grouping it into categories and handling the categories separately.

  19. Interaction framework: existing works. Cassell: generic architecture for embodied conversational agents

  20. Interaction framework: existing works. Traum: dialog model for multi-modal, multi-party dialog in a virtual world (based on information-state theory)

  21. Interaction framework: multi-modal grounding. Our approach addresses a common feature of multi-modal information: evocative functions.

  22. Interaction framework: multi-modal grounding. Evocative functions of conversational behaviors (CBs). Definition: CBs evoke a reaction from the interaction partner. Validity: holds for both propositional and interactional information.

  23. Interaction framework: multi-modal grounding. Grounding. Definition: the process of establishing shared understanding during a conversation. Basic idea: for each contribution (Presentation) issued in a conversation, there needs to be feedback (Acceptance) from the interaction partner. Application area: traditionally adopted to model the evocative functions of propositional information → to be extended!
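To make the Presentation/Acceptance cycle concrete, here is a minimal, modality-independent sketch of the bookkeeping it implies. This is illustrative only and is not the grounding machinery actually used in the BIRON system.

```python
# Minimal sketch of the Presentation/Acceptance idea from slide 23.
# Illustrative only; not the system's actual grounding implementation.

class GroundingState:
    def __init__(self) -> None:
        self.pending: list = []     # presentations still awaiting acceptance
        self.grounded: list = []    # contributions both partners have acknowledged

    def present(self, contribution: str) -> None:
        """One partner issues a contribution (Presentation)."""
        self.pending.append(contribution)

    def accept(self) -> None:
        """The other partner gives feedback (Acceptance) to the latest presentation."""
        if self.pending:
            self.grounded.append(self.pending.pop())

state = GroundingState()
state.present("This is a cup.")     # user presents propositional information
state.accept()                      # robot signals understanding
print(state.grounded)               # -> ['This is a cup.']
```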

  24. Interaction framework: multi-modal grounding. Our approach: extending grounding by modeling both propositional and interactional contributions with Interaction Units (IUs) and organizing IUs based on the principle of grounding.

  25. Interaction framework: multi-modal grounding. An Interaction Unit (IU) consists of a behavior layer and a motivation layer.

  26. Interaction framework: multi-modal grounding. On the behavior layer, an IU has a verbal generator and a non-verbal generator, driven by the motivation (conception) layer.
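A data-structure sketch of an Interaction Unit as described on slides 25-26: a motivation layer that drives a verbal and a non-verbal output on the behavior layer. The field names are my own shorthand, not the original implementation.

```python
# Sketch of an Interaction Unit (IU): motivation layer plus the outputs of the
# verbal and non-verbal generators. Field names are shorthand, not the original code.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionUnit:
    motivation: str                      # why the behavior is produced (motivation layer)
    verbal: Optional[str] = None         # verbal generator output, or None if uninstantiated
    nonverbal: Optional[str] = None      # non-verbal generator output, or None if uninstantiated

# Example: the robot accepts the user's presence without saying anything.
u2 = InteractionUnit(motivation="provide acceptance to the user's IU",
                     verbal=None,
                     nonverbal="opens eyes")
print(u2)
```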

  27. Interaction framework: multi-modal grounding. Diagram: grounding organizes a stack of IUs, each pairing a motivation with one or more modalities (speech + gesture, speech + gaze, speech only, facial expression, ...).

  28. Interaction framework: multi-modal grounding. Discourse grounding models: [Clark92], [Traum94], [Cahn&Brennan99]. Our approach: [Li2006]. Diagram: a discourse stack of exchanges (Ex1 to Ex4) built from IUs (IU 1 to IU 4) and linked by relations such as R(Ex3, Ex2) = default and R(Ex4, Ex2) = support. [Li2006]: S. Li, B. Wrede, and G. Sagerer. A computational model of multi-modal grounding. In Proc. ACL SIGdial Workshop on Discourse and Dialog, in conjunction with COLING/ACL 2006, pages 153-160. ACL Press, 2006.
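The following sketch restates the diagram on this slide as data: exchanges built from IUs and linked by discourse relations such as "default" and "support". It follows the spirit of [Li2006] but is a simplification, not the published model; the unit descriptions are placeholders.

```python
# Sketch of the exchange stack from slide 28: exchanges made of IUs, linked by
# relations such as "default" and "support". Simplified; not the [Li2006] model itself.

from dataclasses import dataclass, field

@dataclass
class Exchange:
    name: str
    units: list = field(default_factory=list)   # the IUs that make up this exchange

@dataclass
class Relation:
    source: Exchange
    target: Exchange
    kind: str                                   # e.g. "default" or "support"

ex2 = Exchange("Ex2", units=["IU presenting the cup"])
ex3 = Exchange("Ex3", units=["repair IU"])
ex4 = Exchange("Ex4", units=["social-comment IU"])

relations = [
    Relation(ex3, ex2, "default"),   # R(Ex3, Ex2) = default
    Relation(ex4, ex2, "support"),   # R(Ex4, Ex2) = support
]
for r in relations:
    print(f"R({r.source.name}, {r.target.name}) = {r.kind}")
```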

  29. Interaction framework: multi-modal grounding. Pre-interaction attention: solving the quiet-speaker problem. User: (shows legs) → IU U1: verbal part uninstantiated, non-verbal part "shows legs", unintentional motivation.

  30. Interaction framework: multi-modal grounding. Pre-interaction attention: solving the quiet-speaker problem (continued). User: (shows legs) → IU U1: verbal part uninstantiated, non-verbal part "shows legs", unintentional motivation. BIRON: (opens eyes) → IU U2: verbal part uninstantiated, non-verbal part "opens eyes", motivation: provides acceptance to the user's IU.

  31. Interaction framework: multi-modal grounding. Pre-interaction attention: solving the quiet-speaker problem (continued). User: (shows face) → IU U3: verbal part uninstantiated, non-verbal part "shows face and legs", unintentional motivation. BIRON: (raises head) → IU U4: verbal part uninstantiated, non-verbal part "raises head", motivation: provides acceptance to the user's IU.

  32. Interaction framework: multi-modal grounding. Making social comments: solving the meta-commentator problem. User: This is a cup. → IU U5: verbal part "This is a cup", non-verbal part deictic gesture, motivation: shows BIRON a cup.

  33. Interaction framework: multi-modal grounding. Making social comments: solving the meta-commentator problem (continued). User: This is a cup. → IU U5 (as above). BIRON: I beg your pardon? → IU U6: verbal part "I beg your pardon", non-verbal part uninstantiated, motivation: initiates conversational repair.

  34. Interaction framework: multi-modal grounding. Making social comments: solving the meta-commentator problem (continued). User: This is a cup. → IU U5 (as above). BIRON: I beg your pardon? → IU U6 (as above). BIRON: (looking embarrassed) → IU U7: verbal part uninstantiated, non-verbal part "looking embarrassed", motivation: shows social awareness.
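For compactness, the two walkthroughs from slides 29-34 can be written down as (speaker, verbal, non-verbal, motivation) tuples. The sketch below only restates the traces shown on the slides; the tuple format itself is my own shorthand.

```python
# The two IU traces from slides 29-34 as plain tuples: (speaker, verbal part,
# non-verbal part, motivation). The tuple encoding is shorthand, not the system's format.

quiet_speaker_trace = [
    ("User",  None,                 "shows legs",          "unintentional"),
    ("BIRON", None,                 "opens eyes",          "provides acceptance to user IU"),
    ("User",  None,                 "shows face and legs", "unintentional"),
    ("BIRON", None,                 "raises head",         "provides acceptance to user IU"),
]

meta_commentator_trace = [
    ("User",  "This is a cup",      "deictic gesture",     "shows BIRON a cup"),
    ("BIRON", "I beg your pardon?", None,                  "initiates conversational repair"),
    ("BIRON", None,                 "looking embarrassed", "shows social awareness"),
]

for speaker, verbal, nonverbal, motivation in quiet_speaker_trace + meta_commentator_trace:
    print(f"{speaker:5s} | verbal={verbal!r:22s} | non-verbal={nonverbal!r:22s} | {motivation}")
```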

  35. Interaction framework: multi-modal grounding. Implemented on the robot systems BIRON and BARTHOC.

  36. Summary. Two case studies revealing the importance of multi-modality for the situatedness and social behaviors of a robot. A multi-modal interaction framework that addresses the evocative function of conversational behaviors.
