
COMP 150: Probabilistic Robotics for Human-Robot Interaction



  1. COMP 150: Probabilistic Robotics for Human-Robot Interaction Instructor: Jivko Sinapov www.cs.tufts.edu/~jsinapov

  2. Today: Perception beyond Vision

  3. Announcements

  4. Project Deadlines
  ● Project Presentations: Tuesday, May 5th, 3:30-6:30 pm
  ● Final Report + Deliverables: May 10
  ● Deliverables:
    – Presentation slides + videos
    – Final Report (PDF)
    – Source code (link to GitHub repositories + README)

  5. Today: Perception beyond Vision

  6. Language Acquisition. How would you describe this object? "It is a small orange spray can." "My model of the word 'orange' has improved!"

  7. Current Solution: connect the symbol with visual input. Sridharan et al. 2008; Collet et al. 2009; Rusu et al. 2009; Lai et al. 2011

  8. Current Solution: connect the symbol with visual input. Redmon et al. 2016

  9. Modality Exclusivity Norms for common English nouns and adjectives

  10. “Robot, I am thirsty, fetch me the yellow juice carton ”

  11. Solution: Lift the Object

  12. “Fetch me the pill bottle ”

  13. Solution: Shake the Object

  14. Solution: Shake the Object Exploratory behaviors give us information about objects that vision cannot!

  15. [Power, 2000] [Lederman and Klatzky, 1987]

  16. Object Exploration in Infancy

  17. The “5” Senses

  18. The “5” Senses [http://edublog.cmich.edu/meado1bl/files/2013/03/Five-Senses2.jpg]

  19. Why sound for robotics?

  20. What just happened?

  21. What just happened? What actually happened: the robot dropped a soda can

  22. Why Natural Sound is Important. "…natural sound is as essential as visual information because sound tells us about things that we can't see, and it does so while our eyes are occupied elsewhere." "Sounds are generated when materials interact, and the sounds tell us whether they are hitting, sliding, breaking, tearing, crumbling, or bouncing." "Moreover, sounds differ according to the characteristics of the objects, according to their size, solidity, mass, tension, and material." (Don Norman, "The Design of Everyday Things", p. 103)

  23. Why Natural Sound is Important Sound Producing Event [Gaver, 1993]

  24. What is a Sound Wave?

  25. What is a Sound Wave? From a computer's point of view, raw audio is just a stream of floating-point numbers: at the standard 44.1 kHz sampling rate, 44,100 samples arrive every second
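A minimal sketch of this view of audio, assuming numpy: one second of a pure 440 Hz tone is simply an array of 44,100 floating-point samples (the tone frequency and 0.5 amplitude are illustrative choices, not from the slides).

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second (44.1 kHz)

# One second of a 440 Hz sine tone, as an audio driver might deliver it:
# a flat array of floating-point samples.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)

print(audio.shape)  # (44100,) -- 44.1K numbers for one second of sound
```

Changing the multiplier in front of the sine stretches the wave vertically (amplitude, as on slide 27), while changing the 440.0 stretches it horizontally (frequency, slide 28).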

  26. Sine Curve [http://clem.mscd.edu/~talmanl/HTML/SineCurve.html]

  27. Amplitude (vertical stretch) 3 sin(x) [http://www.sparknotes.com/math/trigonometry/graphs/section4.rhtml]

  28. Frequency (horizontal stretch) [http://www.sparknotes.com/math/trigonometry/graphs/section4.rhtml]

  29. Sinusoidal waves of various frequencies Low Frequency High Frequency [http://en.wikipedia.org/wiki/Frequency]

  30. Fourier Series A Fourier series decomposes periodic functions or periodic signals into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines
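As a concrete illustration of this decomposition, the partial sums of the standard square-wave Fourier series (odd harmonics with coefficients 4/(πk); a textbook example, not taken from the slides) approach the square wave as more sine terms are added:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier series of a square wave:
    sum over odd k of (4 / (pi * k)) * sin(k * x)."""
    total = np.zeros_like(x, dtype=float)
    for i in range(n_terms):
        k = 2 * i + 1  # only odd harmonics contribute
        total += (4.0 / (np.pi * k)) * np.sin(k * x)
    return total

x = np.linspace(0, 2 * np.pi, 1000)
approx = square_wave_partial_sum(x, 25)  # already close to +/-1 plateaus
```

With more terms the plateaus flatten toward +1 and -1, which is exactly the approximation effect shown on the next slide.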

  31. Approximation [http://en.wikipedia.org/wiki/Fourier_series]

  32. Discrete Fourier Transform

  33. Discrete Fourier Transform (spectrogram: frequency bin vs. time)
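A sketch of how such a spectrogram is computed, assuming numpy: slide the DFT over short windows of the signal, producing one column of frequency-bin magnitudes per time step (window and hop sizes here are common illustrative values, not from the slides).

```python
import numpy as np

def spectrogram(audio, window=1024, hop=512):
    """Magnitude spectrogram: one DFT column per window of samples.
    Rows are frequency bins, columns are time steps, as on the slide."""
    cols = []
    for start in range(0, len(audio) - window + 1, hop):
        frame = audio[start:start + window] * np.hanning(window)
        cols.append(np.abs(np.fft.rfft(frame)))
    return np.stack(cols, axis=1)  # shape: (freq_bins, time_steps)

# A 440 Hz tone concentrates its energy in one frequency bin.
audio = np.sin(2 * np.pi * 440.0 * np.arange(44_100) / 44_100)
S = spectrogram(audio)
print(S.shape)  # (513, time_steps): 1024 // 2 + 1 frequency bins
```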

  34. Object Exploration in Infancy

  35. Object Exploration by a Robot Sinapov, J., Wiemer, M., and Stoytchev, A. (2009). Interactive Learning of the Acoustic Properties of Household Objects In proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA)

  36. Objects

  37. Behaviors: Grasp, Push, Shake, Tap, Drop

  38. Audio Feature Extraction: behavior execution -> WAV file recorded -> Discrete Fourier Transform

  39. Audio Feature Extraction 1. Training a self-organizing map (SOM) using DFT column vectors:

  40. Audio Feature Extraction 2. Use SOM to convert DFT spectrogram to a sequence:

  41. Audio Feature Extraction 2. Use SOM to convert DFT spectrogram to a sequence: S i : (3,2) ->

  42. Audio Feature Extraction 2. Use SOM to convert DFT spectrogram to a sequence: S i : (3,2) -> (2,2) ->

  43. Audio Feature Extraction 2. Use SOM to convert DFT spectrogram to a sequence: S i : (3,2) -> (2,2) -> (4,4) -> ….

  44. Audio Feature Extraction. 1. Training a self-organizing map (SOM) using DFT column vectors. 2. Discretization of the DFT of a sound using the trained SOM: the result is the sequence of activated SOM nodes over the duration of the sound
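The two steps above can be sketched as follows, assuming numpy. This toy version uses a winner-take-all update with no neighborhood function, so it is only a simplified stand-in for a real SOM, and the grid and input sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

GRID, DIM = 4, 33  # 4x4 node grid; DIM = frequency bins per DFT column
weights = rng.normal(size=(GRID * GRID, DIM))  # one prototype per node

def best_matching_node(column):
    """Index of the node whose prototype is closest to the DFT column."""
    return int(np.argmin(np.linalg.norm(weights - column, axis=1)))

def train_som(columns, epochs=10, lr=0.1):
    """Step 1 (simplified): pull each winning prototype toward its input."""
    for _ in range(epochs):
        for col in columns:
            w = best_matching_node(col)
            weights[w] += lr * (col - weights[w])

def discretize(spectrogram_columns):
    """Step 2: a sound becomes the sequence of activated nodes over time."""
    return [best_matching_node(c) for c in spectrogram_columns]

columns = rng.normal(size=(200, DIM))  # stand-in for real DFT columns
train_som(columns)
seq = discretize(columns[:5])  # e.g., a short node-index sequence
```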

  45. Detecting Acoustic Similarity: auditory SOM sequences Xi and Yj -> Global Sequence Alignment -> similarity. Very similar: sim(Xi, Yj) = 0.89

  46. Detecting Acoustic Similarity: auditory SOM sequences Xi and Yj -> Global Sequence Alignment -> similarity. Not similar: sim(Xi, Yj) = 0.23
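A minimal sketch of global sequence alignment (Needleman-Wunsch) over SOM node sequences; the match/mismatch/gap scores here are illustrative assumptions, not the paper's parameters:

```python
def global_alignment_similarity(x, y, match=1.0, mismatch=0.0, gap=0.0):
    """Needleman-Wunsch global alignment score between two sequences,
    normalized by the length of the longer one so identical sequences
    score 1.0 and unrelated ones score near 0.0."""
    n, m = len(x), len(y)
    # dp[i][j] = best alignment score of x[:i] against y[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i * gap
    for j in range(m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if x[i - 1] == y[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align x[i-1], y[j-1]
                           dp[i - 1][j] + gap,       # gap in y
                           dp[i][j - 1] + gap)       # gap in x
    return dp[n][m] / max(n, m)

# SOM node sequences like those on the slides, e.g. (3,2) -> (2,2) -> (4,4)
a = [(3, 2), (2, 2), (4, 4)]
print(global_alignment_similarity(a, a))  # 1.0 for identical sequences
```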

  47. Problem Formulation. Given the SOM sequence Si of a sound produced by a behavior (e.g., drop), an object recognition model and a behavior recognition model each output predictions

  48. Acoustic Object Recognition: auditory data -> dimensionality reduction using SOM -> discrete auditory sequence -> auditory recognition model -> object probability estimates

  49. Recognition Model • k-NN: memory-based learning algorithm. With k = 3, the test point's three nearest neighbors are 2 red and 1 blue; therefore Pr(red) = 0.66 and Pr(blue) = 0.33
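The slide's k-NN probability estimate can be sketched in a few lines; the label names mirror the slide's red/blue example, and the input is assumed to be neighbor labels already sorted by distance:

```python
from collections import Counter

def knn_probabilities(labels_sorted_by_distance, k=3):
    """Class probabilities from the k nearest neighbors' labels:
    each class's probability is its share of the k votes."""
    top_k = labels_sorted_by_distance[:k]
    counts = Counter(top_k)
    return {label: count / k for label, count in counts.items()}

# As on the slide: with k = 3, two red neighbors and one blue neighbor.
probs = knn_probabilities(["red", "red", "blue", "blue"], k=3)
print(probs)  # Pr(red) = 2/3, Pr(blue) = 1/3
```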

  50. Recognition Model • SVM: discriminative learning algorithm

  51. Evaluation Results Chance accuracy = 2.7 %

  52. Evaluation Results

  53. Recognition Video

  54. Estimating Acoustic Object Similarity using the Confusion Matrix (rows = actual, columns = predicted; the first two objects sound similar to each other, as do the last two, while the two pairs sound different):
    40   4   0   0
     6  42   0   0
     0   0  21   6
     0   0   8  35
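One way to turn such a confusion matrix into pairwise object similarities, assuming numpy (the symmetrize-and-normalize recipe here is an illustrative sketch, not necessarily the paper's exact method): objects that the recognizer frequently confuses are treated as acoustically similar.

```python
import numpy as np

def confusion_similarity(C):
    """Pairwise object similarity from a confusion matrix C, where
    C[i, j] counts trials with actual object i predicted as object j."""
    C = np.asarray(C, dtype=float)
    sym = (C + C.T) / 2.0               # confusion is treated symmetrically
    return sym / sym.sum(axis=1, keepdims=True)  # rows become distributions

# The matrix from the slide:
C = [[40,  4,  0,  0],
     [ 6, 42,  0,  0],
     [ 0,  0, 21,  6],
     [ 0,  0,  8, 35]]
sim = confusion_similarity(C)
# Objects 0 and 1 come out more similar to each other than to objects 2, 3.
```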

  55. Object groupings recovered from acoustic similarity: (mostly) metal objects, balls, plastic objects, objects with contents inside, (mostly) wooden objects, paper objects

  56. Recognizing the sounds of objects manipulated by other agents

  57. Sinapov, J., Bergquist, T., Schenck, C., Ohiri, U., Griffith, S., and Stoytchev, A. (2011) Interactive Object Recognition Using Proprioceptive and Auditory Feedback International Journal of Robotics Research, Vol. 30, No. 10, pp. 1250-1262, September 2011

  58. Objects

  59. The Proprioceptive / Haptic Modality (joint torques J1 … J7 over time)

  60. Feature Extraction: training an SOM using sampled joint torques; training an SOM using sampled frequency distributions

  61. Feature Extraction: discretization of joint-torque records using a trained SOM; discretization of the DFT of a sound using a trained SOM

  62. Accuracy vs. Number of Behaviors

  63. 1 Behavior vs. Multiple Behaviors

  64. Sinapov, J., and Stoytchev, A. (2010). The Boosting Effect of Exploratory Behaviors. In Proceedings of the 24th National Conference on Artificial Intelligence (AAAI), 2010.

  65. Robot sensors: ZCam (RGB+D), microphones in the head, Logitech webcam, torque sensors in the joints, 3-axis accelerometer. Sinapov, J., Schenck, C., Staley, K., Sukhoy, V., and Stoytchev, A. (2014). Grounding Semantic Categories in Behavioral Interactions: Experiments with 100 Objects. Robotics and Autonomous Systems, Vol. 62, No. 5, pp. 632-645, May 2014.

  66. Exploratory Behaviors grasp lift hold shake drop tap poke push press

  67. Coupling Action and Perception: an action (poke) and the resulting perception (optical flow), aligned over time

  68. Sensorimotor Contexts: sensory modalities (audio DFT, haptics / joint torques, proprioception / finger positions, optical flow, color, SURF) crossed with behaviors (look, grasp, lift, hold, shake, drop, tap, poke, push, press)

  69. Overview: interaction with object -> sensorimotor feature extraction -> category recognition model -> category estimates

  70. Context-specific Category Recognition: the model M_poke-audio maps an observation from the poke-audio context, via a recognition model trained for that context, to a distribution over category labels

  71. Combining Model Outputs: the context-specific models (M_look-color, M_tap-audio, M_lift-SURF, M_press-prop., …) are merged via a weighted combination
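A minimal sketch of such a weighted combination, assuming numpy; the context names, weights, and probability vectors below are hypothetical placeholders, not values from the papers:

```python
import numpy as np

def combine_predictions(context_probs, weights):
    """Weighted combination of per-context category distributions.
    context_probs: dict mapping context name -> probability vector.
    weights: dict mapping context name -> reliability weight."""
    total = sum(weights[c] * np.asarray(p, dtype=float)
                for c, p in context_probs.items())
    return total / total.sum()  # renormalize to a valid distribution

# Hypothetical outputs from three behavior-modality contexts over
# three candidate categories:
probs = combine_predictions(
    {"look-color": [0.6, 0.3, 0.1],
     "tap-audio":  [0.5, 0.4, 0.1],
     "press-prop": [0.2, 0.7, 0.1]},
    {"look-color": 0.5, "tap-audio": 0.3, "press-prop": 0.2},
)
```

In practice the weights would reflect how reliable each sensorimotor context has proven for the recognition task at hand.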

  72. Deep Models for Non-Visual Perception Tatiya, G., and Sinapov, J. (2019) Deep Multi-Sensory Object Category Recognition Using Interactive Behavioral Exploration 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 20-24, 2019.

  73. Take-home Message • Behaviors allow robots not only to affect the world, but also to perceive it • Non-visual sensory feedback improves object classification and perception tasks that are typically solved using vision alone • A diverse sensorimotor repertoire is necessary for scaling up object recognition, categorization, and individuation to a large number of objects
