COMP 150: Probabilistic Robotics for Human-Robot Interaction
Lecture: Perception beyond Vision



  1. COMP 150: Probabilistic Robotics for Human-Robot Interaction Instructor: Jivko Sinapov www.cs.tufts.edu/~jsinapov

  2. Today: Perception beyond Vision

  3. Announcements

  4. Project Deadlines
     ● Project Presentations: Tuesday, May 5th, 3:30-6:30 pm
     ● Final Report + Deliverables: May 10
     ● Deliverables:
       – Presentation slides + videos
       – Final Report (PDF)
       – Source code (link to GitHub repositories + README)

  5. Robot sensors: ZCam (RGB+D), microphones in the head, Logitech webcam, torque sensors in the joints, 3-axis accelerometer.
     Sinapov, J., Schenck, C., Staley, K., Sukhoy, V., and Stoytchev, A. (2014). Grounding Semantic Categories in Behavioral Interactions: Experiments with 100 Objects. Robotics and Autonomous Systems, 62(5), 632-645.

  6. 100 objects from 20 categories

  7. Exploratory Behaviors: grasp, lift, hold, shake, drop, tap, poke, push, press.

  8. Coupling Action and Perception. [Figure: frames of the poke action aligned over time with the corresponding optical-flow perception stream.]
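     The optical-flow stream illustrated here can be produced with standard tools. Below is a minimal sketch, not the paper's exact pipeline, that summarizes the visual motion observed during a behavior such as poke using OpenCV's Farneback dense optical flow; the 8-bin direction histogram is an illustrative feature choice.

        import cv2
        import numpy as np

        def optical_flow_features(frames):
            """One coarse motion descriptor per consecutive pair of BGR frames."""
            feats = []
            prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
            for frame in frames[1:]:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                # Dense optical flow between consecutive grayscale frames.
                flow = cv2.calcOpticalFlowFarneback(
                    prev, gray, None, pyr_scale=0.5, levels=3, winsize=15,
                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
                mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
                # Histogram of flow directions, weighted by flow magnitude.
                hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi),
                                       weights=mag)
                feats.append(hist / (hist.sum() + 1e-9))
                prev = gray
            return np.array(feats)  # shape: (len(frames) - 1, 8)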

  9. Sensorimotor Contexts: the exploratory behaviors (look, grasp, lift, hold, shake, drop, tap, poke, push, press) are crossed with the sensory modalities audio (DFT), haptics (joint torques), proprioception (finger pos.), optical flow, color, and SURF.
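     As a quick illustration, the contexts can be enumerated as (behavior, modality) pairs. The valid pairings below are read off the recognition-rate table on slide 12 (color and SURF for look; interaction-driven modalities plus SURF for the manipulation behaviors); treat this as a sketch rather than the paper's definitive list.

        behaviors = ["look", "grasp", "lift", "hold", "shake",
                     "drop", "tap", "poke", "push", "press"]

        # Static visual contexts come from looking; the rest require interaction.
        contexts = [("look", m) for m in ("color", "SURF")]
        contexts += [(b, m) for b in behaviors[1:]
                     for m in ("audio", "proprioception", "optical flow", "SURF")]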

  10. Overview: Interaction with Object → Sensorimotor Feature Extraction → Category Recognition Model → Category Estimates.

  11. Context-specific Category Recognition: the model M_poke-audio takes an observation from the poke-audio context and produces a distribution over category labels.
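     A minimal sketch of one such context-specific model, assuming an SVM with probability outputs in the spirit of the next slide. The data here are random stand-ins, and the paper's actual features and SVM settings may differ.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 32))     # stand-in poke-audio feature vectors
        y = rng.integers(0, 20, size=100)  # category label for each trial

        # M_poke-audio: maps a poke-audio observation to P(category | observation).
        M_poke_audio = SVC(probability=True).fit(X, y)  # Platt-scaled probabilities
        p_categories = M_poke_audio.predict_proba(X[:1])[0]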

  12. Recognition Rates with SVM (% accuracy; "-" = context not used)

      Behavior   Audio   Proprioception   Color   Optical Flow   SURF    All
      look         -           -          58.8         -         58.9   67.7
      grasp      45.7        38.7           -         12.2       57.1   65.2
      lift       48.1        63.7           -          5.0       65.9   79.0
      hold       30.2        43.9           -          5.0       58.1   67.0
      shake      49.3        57.7           -         32.8       75.6   76.8
      drop       47.9        34.9           -         17.2       57.9   71.0
      tap        63.3        50.7           -         26.0       77.3   82.4
      push       72.8        69.6           -         26.4       76.8   88.8
      poke       65.9        63.9           -         17.8       74.7   85.4
      press      62.7        69.7           -         32.4       69.7   77.4

  13. Combining Model Outputs: the distributions produced by the individual context-specific models (M_look-color, M_tap-audio, M_lift-SURF, M_press-prop., ...) are fused through a weighted combination.
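     A sketch of the fusion step, assuming each context's output distribution is weighted by a scalar reliability score (for example, its cross-validated accuracy); the exact weighting scheme from the paper is not reproduced here.

        import numpy as np

        def combine(context_probs, context_weights):
            """Weighted sum of per-context category distributions, renormalized."""
            fused = sum(context_weights[c] * np.asarray(p)
                        for c, p in context_probs.items())
            return fused / fused.sum()

        # Toy example with two contexts and three categories:
        probs = {"tap-audio":  [0.7, 0.2, 0.1],
                 "look-color": [0.4, 0.5, 0.1]}
        weights = {"tap-audio": 0.63, "look-color": 0.59}
        print(combine(probs, weights))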

  14. Combining Multiple Behaviors and Modalities

  15. Deep Models for Non-Visual Perception
      Tatiya, G., and Sinapov, J. (2019). Deep Multi-Sensory Object Category Recognition Using Interactive Behavioral Exploration. 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019.
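     For flavor, a rough PyTorch sketch of a multi-sensory category recognizer with separate encoders per modality fused before the output layer. The layer sizes and fusion strategy here are assumptions for illustration, not the architecture from the cited paper.

        import torch
        import torch.nn as nn

        class MultiSensoryNet(nn.Module):
            def __init__(self, audio_dim=128, haptic_dim=64, num_categories=20):
                super().__init__()
                # One encoder per sensory modality.
                self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
                self.haptic_enc = nn.Sequential(nn.Linear(haptic_dim, 64), nn.ReLU())
                self.head = nn.Linear(128, num_categories)

            def forward(self, audio, haptic):
                # Concatenate the modality embeddings, then classify.
                z = torch.cat([self.audio_enc(audio), self.haptic_enc(haptic)], -1)
                return self.head(z)  # logits; softmax gives category probabilities

        net = MultiSensoryNet()
        logits = net(torch.randn(1, 128), torch.randn(1, 64))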

  16. Haptics

  17. Sensorimotor Word Embeddings
      Sinapov, J., Schenck, C., and Stoytchev, A. (2014). Learning Relational Object Categories Using Behavioral Exploration and Multimodal Perception. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).

  18. Sensorimotor Word Embeddings
      Thomason, J., Sinapov, J., Stone, P., and Mooney, R. (2018). Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
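     A common pattern in this line of work is to ground each word as its own classifier over multimodal features collected through exploration. The sketch below uses stand-in data and a per-word probabilistic SVM; the cited papers' models differ in detail.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 48))  # stand-in multimodal feature vectors
        word_labels = {"heavy": rng.integers(0, 2, 60),  # 1 = word fits object
                       "empty": rng.integers(0, 2, 60)}

        word_models = {w: SVC(probability=True).fit(X, y)
                       for w, y in word_labels.items()}

        # P("heavy" applies to a new object, given its multimodal features):
        p_heavy = word_models["heavy"].predict_proba(X[:1])[0, 1]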

  19. Integration into a Service Robotics Platform. Example command: "Robot, fetch me the green empty bottle."

  20. Thomason, J., Padmakumar, A., Sinapov, J., Walker, N., Jiang, Y., Yedidsion, H., Hart, J., Stone, P., and Mooney, R.J. (2020). Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog. Journal of Artificial Intelligence Research, 67.

  21. How do we extend this approach beyond individual words applied to individual objects?

  22. Language Related to Space and Locations

  23. Example: Offices

  24. Labs

  25. Kitchen

  26. Lobby

  27. Adjectives: clean vs messy

  28. Crowded vs Empty

  29. Computer Vision Solution: Scene Recognition [https://people.csail.mit.edu/bzhou/image/cover_places2.png]

  30. Breakout Activity
      ● Brainstorm: how can multimodal perception be used to ground language related to locations and places?
      ● What are some modalities / sources of information beyond visual input that might be useful?
      ● What are some nouns / adjectives related to locations and places that may be difficult to ground using vision alone?
      ● Take notes! After 15 minutes we'll reconvene and share what we found.

  31. Student-led Paper Presentation
