Inverse Optimal Control (Inverse Reinforcement Learning)

CSE 571, 11/22/16. Many slides by Drew Bagnell, Carnegie Mellon University.

  1. Title slide and learning setup: the learned component plays the role of the planner, mapping an input X (sensor data) to an output Y (a path to the goal).
     Slides: Optimal Control Solution; Mode 1: Training example (a demonstrated path over a 2-D cost map); Mode 1: Learned behavior.

  2. Slides: Mode 1: Learned behavior; Mode 1: Learned cost map; Mode 2: Training example; Mode 2: Learned behavior.

  3. Slides: Mode 2: Learned cost map; the linear cost model. The cost of a map cell is linear in its features: Cost = w'F, where F is the feature vector and w is the learned weighting vector. The slides build the model up incrementally: starting from w = [ ], F = [ ], learn a feature F_1 and weight w_1 from example cells labeled high cost or low cost, then learn F_2, and so on.
     Citations on this page: Ratliff, Bagnell, Zinkevich, ICML 2006; Ratliff, Bradley, Bagnell, Chestnutt, NIPS 2006; Silver, Bagnell, Stentz, RSS 2008; Zucker, Ratliff, Stolle, Chestnutt, Bagnell, Atkeson, Kuffner, 2009.
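
To make the Cost = w'F model concrete, here is a minimal Python sketch. The terrain feature names and the weight values are illustrative assumptions, not taken from the lecture:

    import numpy as np

    # Hypothetical terrain features for one map cell; real systems derive
    # these from perception. The names are only for illustration.
    def features(cell):
        return np.array([cell.get("grass", 0.0),
                         cell.get("bush", 0.0),
                         cell.get("road", 0.0)])

    def cost(cell, w):
        """Cost = w'F: the cell cost is linear in its feature vector."""
        return float(w @ features(cell))

    w = np.array([1.0, 5.0, 0.1])   # larger weight -> costlier terrain
    print(cost({"bush": 1.0}, w))   # 5.0: bushes are expensive to cross
    print(cost({"road": 1.0}, w))   # 0.1: roads are nearly free

Learning then reduces to choosing w (and, in the incremental variant above, the features themselves) so that demonstrated paths come out cheapest under this cost.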

  4. Slides: Learned Cost Function Examples; Learning Manipulation Preferences.
     • Input: human demonstrations of preferred behavior (e.g., moving a cup of water upright without spilling).
     • Output: a learned cost function that results in trajectories satisfying the user's preferences.
     The pipeline figure begins: Demonstration(s) -> Graph.

  5. The pipeline figure grows step by step: Demonstration(s) -> Graph -> Projection -> Learned cost -> discrete sampled paths -> output trajectories.

  6. The completed pipeline: Demonstration(s) -> Graph -> Projection -> discrete paths -> MaxEnt IOC -> Learned cost -> discrete sampled paths -> Local Trajectory Optimization -> output trajectories.
     Graph generation
     • Goal: construct a graph in the robot's configuration space that provides good coverage. Illustrated on a 2D obstacle avoidance task with state (x, y).
     Projection
     • Goal: project the continuous demonstration onto the graph, resulting in a discrete graph path.
     • Use a modified Dijkstra's algorithm minimizing the sum of:
       – the length of the discrete path (Euclidean), and
       – the distance to the continuous demonstration.
     (A sketch of graph generation and projection follows this item.)
     Learning the cost function
     • Goal: given the projected demonstrations, learn the cost function.
     • Learn the feature weights (θ*) using softened value iteration on the discrete graph (MaxEnt IOC, Ziebart et al., 2008), with state-dependent features (e.g., distance to obstacles). A sketch of the softened value iteration appears after the projection code.
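
A compact sketch of the graph-generation and projection steps, assuming a toy 2D grid graph and a known start and goal. All names (grid_graph, project_demo, alpha) are illustrative; the distance-to-demonstration term is folded into the Dijkstra edge relaxation so the cheapest path hugs the demonstrated trajectory:

    import heapq
    import numpy as np

    def grid_graph(n=20):
        """Toy stand-in for graph generation: an n x n grid over [0,1]^2
        with 4-connected edges gives dense coverage of the (x, y) space."""
        coords = {(i, j): np.array([i, j]) / (n - 1)
                  for i in range(n) for j in range(n)}
        nbrs = {u: [] for u in coords}
        for (i, j) in coords:
            for v in ((i + 1, j), (i, j + 1), (i - 1, j), (i, j - 1)):
                if v in coords:
                    nbrs[(i, j)].append(v)
        return coords, nbrs

    def project_demo(coords, nbrs, demo, start, goal, alpha=1.0):
        """Modified Dijkstra: edge cost = Euclidean edge length plus
        alpha * distance of the new node to the nearest demo waypoint."""
        dist, parent = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v in nbrs[u]:
                step = np.linalg.norm(coords[u] - coords[v])  # path length term
                pull = alpha * min(np.linalg.norm(coords[v] - p)
                                   for p in demo)             # demo proximity term
                nd = d + step + pull
                if nd < dist.get(v, float("inf")):
                    dist[v], parent[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path = [goal]                 # walk back to recover the graph path
        while path[-1] != start:
            path.append(parent[path[-1]])
        return path[::-1]

    # Usage: project a diagonal continuous demonstration onto the grid.
    coords, nbrs = grid_graph()
    demo = [np.array([t, t]) for t in np.linspace(0, 1, 50)]
    print(project_demo(coords, nbrs, demo, (0, 0), (19, 19))[:5])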

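And a sketch of the softened value iteration at the core of MaxEnt IOC (Ziebart et al., 2008): the hard minimum in the Bellman backup is replaced with a soft minimum, which induces a stochastic policy whose expected feature counts the outer loop matches to the projected demonstrations. The graph layout, cost table, and iteration budget below are assumptions; it also assumes strictly positive non-goal state costs so the recursion converges:

    import numpy as np

    def softmin(x):
        """-log(sum(exp(-x))): a smooth stand-in for min(x)."""
        m = x.min()
        return m - np.log(np.exp(-(x - m)).sum())

    def softened_value_iteration(nbrs, state_cost, goal, iters=500):
        """Backward pass on a discrete graph: V(s) is the soft minimum
        over successors s' of state_cost(s) + V(s'), where state_cost(s)
        would be theta'F(s) under the learned weights. V is a soft lower
        bound on the hard cost-to-go."""
        V = {s: 0.0 if s == goal else 50.0 for s in nbrs}
        for _ in range(iters):
            for s in nbrs:
                if s == goal or not nbrs[s]:
                    continue
                V[s] = softmin(np.array([state_cost[s] + V[sp]
                                         for sp in nbrs[s]]))
        return V

    # Usage on a tiny 3-state chain 0 -> 1 -> 2 (goal) with unit costs.
    nbrs = {0: [1], 1: [0, 2], 2: []}
    print(softened_value_iteration(nbrs, {0: 1.0, 1: 1.0, 2: 0.0}, goal=2))

The induced policy moves from s to s' with probability proportional to exp(V[s] - state_cost[s] - V[s']); the outer MaxEnt loop adjusts the weights θ until the expected feature counts under that policy match the demonstrations.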
  7. Setup: binary state-dependent features (~95 total):
     • histograms of distances to objects
     • histograms of end-effector orientation
     • object-specific features (electronic vs. non-electronic objects)
     • approach direction w.r.t. the goal
     (A sketch of such histogram features follows this item.)
     Experimental results compare against:
     • human demonstrations
     • an obstacle avoidance planner (CHOMP)
     • a locally optimal IOC approach (similar to Max-Margin Planning, Ratliff et al., 2007)
     Result slides: Laptop task: Demonstration; Laptop task: LTO + discrete graph path (not part of the training set); Statistics for the laptop task; Laptop task: LTO + smooth random path.
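
A small sketch of the binary histogram features listed above: a continuous quantity, such as the distance from the end effector to an object, is bucketed into bins, and each bin contributes one 0/1 feature. The bin edges here are assumptions for illustration:

    import numpy as np

    def distance_histogram_features(dist, edges=(0.05, 0.1, 0.2, 0.4)):
        """One-hot bin membership for a distance value (meters), e.g. the
        end-effector's distance to the laptop. len(edges)+1 binary features."""
        f = np.zeros(len(edges) + 1)
        f[np.searchsorted(edges, dist)] = 1.0
        return f

    print(distance_histogram_features(0.15))  # [0. 0. 1. 0. 0.] -> bin (0.1, 0.2]

Concatenating such blocks across objects, orientations, and approach directions yields the ~95-dimensional binary feature vector the cost function is learned over.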
