  1. Third-Person Visual Imitation Learning via Decoupled Hierarchical Control Pratyusha Sharma, Deepak Pathak, Abhinav Gupta

  2. Problem / Goal: Can our robot manipulate a new object given a single human video alone?

  3. Why is it hard?
  ● Inferring useful information from the video
  ● Handling domain shift
  ● Every major part of the sequence needs to be executed correctly. Ex: for pouring, the arm needs to reach the cup before twisting its hand
  ● The manipulation itself is challenging (6-DoF control, novel objects and positioning, no force feedback)

  4. Issue Scenario 1: Sequentially predict the states of the robot arm
  Input: human demonstration + first image of the object → Output: robot arm states
  Issue: not closed loop; the model has no understanding of how the positions of the objects placed in front of the robot change with time!

  5. Issue Scenario 2: Sequentially predict the states of the robot arm
  Input: human demo + robot's current visual state → Output: robot arm states
  Question: how do we force the model to take its task information from the human demonstration alone, while conditioning its actions on the current observable state?
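To make the contrast between the two scenarios concrete, here is a minimal runnable sketch of the two conditioning schemes; the encoder, models, and hardware hooks are toy stand-ins for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
encode = lambda img: img.mean(axis=(0, 1))  # toy visual encoder -> 3-vector

def rollout_open_loop(step_model, demo_feat, first_image, s0, T):
    """Scenario 1: arm states are rolled out from the demo and the FIRST
    image only, so the plan never reacts to how objects actually move."""
    states, ctx = [s0], encode(first_image)
    for _ in range(T):
        states.append(step_model(demo_feat, ctx, states[-1]))
    return states

def rollout_closed_loop(policy, demo_feat, camera, robot, T):
    """Scenario 2: each action sees the CURRENT observation, but nothing
    yet forces the policy to take the task itself from the demo."""
    for _ in range(T):
        robot(policy(demo_feat, encode(camera())))

# Toy stand-ins so the sketch runs end to end.
demo_feat = rng.normal(size=8)
rollout_open_loop(lambda d, c, s: s + 0.1 * np.tanh(d.sum() + c.sum()),
                  demo_feat, rng.random((64, 64, 3)), np.zeros(3), T=5)
rollout_closed_loop(lambda d, o: np.tanh(d[:3] + o),
                    demo_feat, lambda: rng.random((64, 64, 3)),
                    lambda a: None, T=5)
```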

  6. We want to build a model that can infer the intent of a task from the human demonstration and then act in the robot's current environment to accomplish it.

  7. Approach: We decouple the task of Goal Inference from Local Control. [Training diagram: a high-level Goal Generator feeds a low-level Controller, which outputs the action a_t.]
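A minimal interface sketch of the decoupling; the class and method names are assumptions for illustration, not the paper's code:

```python
class GoalGenerator:
    """High level: infers WHAT should happen next from the human demo."""
    def next_goal(self, human_demo, current_image):
        # Hallucinate the next visual goal g_t (an image-like target).
        ...

class Controller:
    """Low level: decides HOW to reach the hallucinated goal."""
    def action(self, current_image, goal_image):
        # Predict the action a_t that moves the scene toward goal_image.
        ...
```

Because the two modules communicate only through the predicted goal image, each can be trained on its own, as the following slides describe.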

  8. Training and Test Scenarios - Data Availability
  Training: human demo video, robot demo video, robot joint angles
  Test (deployment): human demo video, current visible image of the table
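The same split, written out as hypothetical data containers; field names and array shapes are illustrative assumptions, not the paper's format:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainSample:             # available only during training
    human_demo: np.ndarray     # (T_h, H, W, 3) human demonstration video
    robot_demo: np.ndarray     # (T_r, H, W, 3) paired robot demonstration
    joint_angles: np.ndarray   # (T_r, J) robot joint angles per frame

@dataclass
class TestInput:               # all the robot gets at deployment
    human_demo: np.ndarray     # (T_h, H, W, 3) single human demonstration
    current_image: np.ndarray  # (H, W, 3) current visible image of the table
```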

  9. Approach - Training
  Goal Generator (high-level): given the human demo and the present visual state of the robot, we hallucinate the next step.

  10. Approach - Training
  Goal Generator (high-level): given the human demo and the present visual state of the robot, we hallucinate the next step.
  Inverse Model / Controller (low-level): use the hallucinated prediction together with the current visual state to predict the action a_t!
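A sketch of the two training procedures, assuming the goal generator is supervised with next-frame prediction on robot demos and the inverse model regresses joint-angle changes; G and C are placeholder models with a hypothetical update hook, and the losses are stand-ins:

```python
import numpy as np

def train_goal_generator(G, dataset):
    """Next-frame supervision (assumed): given the human demo and the robot's
    frame at time t, predict the robot demo's frame at t+1; L2 loss stands in
    for whatever image loss is actually used."""
    for s in dataset:  # s: a TrainSample as on slide 8
        for t in range(len(s.robot_demo) - 1):
            g_pred = G.next_goal(s.human_demo, s.robot_demo[t])
            G.update(np.mean((g_pred - s.robot_demo[t + 1]) ** 2))

def train_inverse_model(C, dataset):
    """Action supervision (assumed parameterization: change in joint angles):
    given two consecutive robot frames, regress the recorded action."""
    for s in dataset:
        for t in range(len(s.robot_demo) - 1):
            a_pred = C.action(s.robot_demo[t], s.robot_demo[t + 1])
            a_true = s.joint_angles[t + 1] - s.joint_angles[t]
            C.update(np.mean((a_pred - a_true) ** 2))
```

Note the inverse model is trained on real consecutive frames and only sees hallucinated goals at test time; the two losses never interact, which is what "trained separately" on the next slide means.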

  11. Train time: the Goal Generator and Inverse Model are trained separately. Test time: the Goal Generator and Inverse Model are executed in alternation.
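At deployment the two modules alternate inside one closed loop; a sketch using the interface from slide 7, where camera() and robot() are hypothetical hardware hooks:

```python
def deploy(G, C, human_demo, camera, robot, T=40):
    """Test-time alternation (sketch): G and C are the trained modules."""
    o = camera()                        # initial observation
    for _ in range(T):
        g = G.next_goal(human_demo, o)  # high level: hallucinate next goal
        a = C.action(o, g)              # low level: action toward that goal
        robot(a)                        # execute one step on the arm
        o = camera()                    # re-observe, so the loop stays closed
```

Re-observing after every action is what addresses the open-loop issue from slide 4.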

  12. Approach - Test

  13. Approach - Train vs. Test

  14. Experiments and Results
  We evaluate the trained models as follows (see the harness sketched below):
  ● Goal generation model with a perfect inverse model
  ● Inverse model with a perfect goal generation model
  ● Goal generation model and inverse model in tandem
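One way to read these ablations is as a single rollout harness where either module can be swapped for an oracle; this is a hypothetical sketch, not the paper's evaluation code:

```python
def run_episode(goal_fn, act_fn, human_demo, camera, robot, T=40):
    """Shared rollout: only the source of goal_fn/act_fn changes per ablation."""
    o = camera()
    for _ in range(T):
        g = goal_fn(human_demo, o)
        a = act_fn(o, g)
        robot(a)
        o = camera()

# 1) learned goal generator + oracle controller that is assumed to reach
#    any commanded goal -> isolates goal-generation quality
# 2) oracle goal generator replaying ground-truth next frames from a robot
#    demo + learned inverse model -> isolates low-level control quality
# 3) both learned modules in tandem -> the full deployment condition
```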

  15. Results: Goal generation model with perfect inverse model

  16. Results: Inverse model with perfect goal generator [Plot legend: GT trajectory vs. trajectory predicted from GT images]

  17. Results: Final experiment runs

  18. Results: Final experimental runs: placing in a box

  19. Shortcomings:
  1. Robot trajectory is shaky: the trajectory looks shaky because the model has no temporal knowledge. Trajectories predicted by inverse models with memory units (LSTMs) look far less shaky, but those models then overfit to the task.

  20. Thank you!
