
Autonomous Grasp and Manipulation Planning using a ToF Camera



  1. Autonomous Grasp and Manipulation Planning using a ToF Camera
  Zhixing Xue, Steffen Ruehl, Andreas Hermann, Thilo Kerscher and Ruediger Dillmann
  Presenter: Sven R. Schmidt-Rohr
  Research Center for Information Technology (FZI) at the University of Karlsruhe, Karlsruhe, Germany

  2. Content
  • Motivation
  • Time-of-Flight Camera
    • Calibration
    • Segmentation
  • Applications
    • Sensor-based Motion Planning
    • Grasping
    • Manipulation
  • Conclusion
  (Figures: Mesa SwissRanger SR4000; manipulation of cream-like mass; grasping of unknown objects)

  3. Motivation
  • Sensing and understanding its 3D environment is an essential ability for a service robot that must grasp and manipulate objects in dynamic, cluttered surroundings.
  • Time-of-Flight (ToF) cameras capture range information at video frame rates.
  • The sensed depth information is used for grasping and manipulation tasks:
    • Motion planning: avoid collisions with detected obstacles
    • Grasping: grasp objects using the captured models
    • Manipulation: plan manipulation actions adapted to the object surface
  • Impedance control is used to compensate for uncertainties caused by sensor error.

  4. Time-of-Flight Principle
  • The sensor emits amplitude-modulated near-infrared light, which is reflected by objects in the scene and projected onto the chip (Mesa SwissRanger SR4000).
  • In each pixel, the phase shift between the reference signal and the received signal is determined by correlation, and the distance is computed from it.
  • The result is a 2.5D depth map plus an intensity/amplitude image, i.e. a near-infrared image of ambient illuminance and reflectance.
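The slide states the phase-shift principle but not the formula. For reference, the standard distance relation for a continuous-wave ToF sensor is sketched below; the 30 MHz modulation figure for the SR4000 is our addition, not from the slides.

```latex
% Standard CW time-of-flight relation (not spelled out on the slide):
% c is the speed of light, f_mod the modulation frequency of the emitted
% near-infrared signal, and \varphi \in [0, 2\pi) the measured phase shift.
d = \frac{c}{2 f_{\mathrm{mod}}} \cdot \frac{\varphi}{2\pi},
\qquad
d_{\max} = \frac{c}{2 f_{\mathrm{mod}}}
% For the SR4000's default 30 MHz modulation, d_max is about 5 m,
% beyond which measurements wrap around (range ambiguity).
```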

  5. Measurement Characteristics
  Advantages:
  + 3D information without scanning
  + Video frame rate (20–50 fps)
  + Viewing frustum of ~45°
  + Solid-state sensor
  + Varying ambient light conditions yield the same data, thanks to the active illumination unit
  + Eye safety
  Disadvantages:
  - Limited resolution (176x144 pixels)
  - Various factors affect measurement accuracy (errors of ~10 cm):
    • internal: noise (thermal, electronic, photon shot), propagation delay in the chip's circuits, the exact form of the diode's signal, lens distortion, ...
    • external: temperature, ambient light, reflective properties of the viewed scene, ...
  Consequences:
  • Calibration of the sensed depth data is necessary
  • Segmentation of a priori known objects from the sensed depth data

  6. Experiment Setup
  • A SwissRanger SR4000
    • for 3D modeling of the workspace
    • mounted directly above the manipulation region to reduce occlusion
  • Two Pike cameras
    • for object recognition and localization
  • Two KUKA lightweight robot arms
    • 7 DoF each, with impedance control
  • Two DLR/HIT five-finger hands
    • 15 DoF each, with impedance control
  • A touch screen for human-machine interaction

  7. Calibration of the SwissRanger SR4000
  • Stage 1: Estimation of intrinsic/extrinsic camera parameters using state-of-the-art tools
    • corrects lens distortion and misalignment of the chip
  • Stage 2: Multi-plane calibration for per-pixel depth correction (generated offline)
    • accuracy: 5 cm on average
  • Stage 3: Use of "landmarks" in the environment (e.g. wall, table) for per-pixel depth correction (online, by means of best fitting)
    • accuracy: 1 cm on average
  (Figure: errors in the depth map of a planar checkerboard)
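A minimal sketch of how stages 2 and 3 could be combined; the offset table, the landmark mask, and the function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def correct_depth(depth, offset_table, landmark_mask, landmark_plane_d):
    """Per-pixel depth correction in the spirit of calibration stages 2 and 3.

    depth            : (H, W) raw ToF depth map in meters
    offset_table     : (H, W) per-pixel offsets from the offline multi-plane
                       calibration (stage 2); an illustrative assumption
    landmark_mask    : (H, W) bool mask of pixels known to lie on a planar
                       landmark such as the table (stage 3)
    landmark_plane_d : known distance of that landmark plane in meters
    """
    # Stage 2: subtract the offline-calibrated per-pixel offset.
    corrected = depth - offset_table

    # Stage 3: best-fit the residual bias against the known landmark and
    # remove it online (a simple constant shift here, for illustration).
    residual = corrected[landmark_mask] - landmark_plane_d
    corrected -= residual.mean()
    return corrected
```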

  8. Segmentation of Known Objects
  (Pipeline diagram, reconstructed from its labels:)
  • Camera pictures are used for object localization.
  • The localized, a priori known objects are rendered into a z-buffer.
  • The rendered depth is compared against the sensed depth information.
  • Pixels explained by known objects are removed, leaving segmented depth point clouds of the remaining scene.
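A sketch of the depth-comparison step under these assumptions; the 2 cm tolerance and the function name are ours, not from the paper:

```python
import numpy as np

def segment_known_objects(sensed_depth, rendered_depth, tol=0.02):
    """Mask out pixels explained by known objects via depth comparison.

    sensed_depth   : (H, W) calibrated ToF depth map in meters
    rendered_depth : (H, W) z-buffer of the localized known-object meshes,
                     np.inf where no known object is rendered
    tol            : agreement tolerance in meters (2 cm is an illustrative
                     value, consistent with the 1 cm average accuracy
                     reported after calibration)
    """
    # A pixel belongs to a known object if sensed and rendered depth agree.
    known = np.abs(sensed_depth - rendered_depth) < tol
    # Everything else is kept as (unknown) obstacle geometry.
    return np.where(known, np.nan, sensed_depth)
```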

  9. Sensor-based Motion Planning
  • The environment is represented by three kinds of data in the environment model:
    • static geometry: doors, walls, tables, ...
    • triangle meshes corresponding to the recognized and localized objects
    • segmented triangle meshes from the Time-of-Flight camera, approximating the remaining obstacles
  • During the transport phase, the grasped object is treated as part of the kinematic chain.
  • A probabilistic collision-free path planner is used to find a trajectory to the desired arm position (see the sketch below).
  • The arm is operated in impedance mode to comply with deviations from the modeled environment.
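The slides only say that a probabilistic planner is used; the RRT-style loop below is one common instance of that idea, not the authors' implementation. `collision_free(q)` is assumed to test a 7-DoF arm configuration against all three environment representations:

```python
import numpy as np

def plan_path(q_start, q_goal, collision_free, max_iters=5000, step=0.1):
    """Minimal RRT-style probabilistic planner sketch (illustrative only)."""
    nodes = [q_start]
    parents = {0: None}
    for _ in range(max_iters):
        # Sample a random configuration, biased 10% of the time toward the goal.
        q_rand = q_goal if np.random.rand() < 0.1 else \
            np.random.uniform(-np.pi, np.pi, q_start.shape)
        # Extend the nearest tree node one step toward the sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: np.linalg.norm(nodes[i] - q_rand))
        direction = q_rand - nodes[i_near]
        q_new = nodes[i_near] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if collision_free(q_new):
            parents[len(nodes)] = i_near
            nodes.append(q_new)
            if np.linalg.norm(q_new - q_goal) < step:
                # Goal reached: backtrack through the tree to extract the path.
                path, i = [], len(nodes) - 1
                while i is not None:
                    path.append(nodes[i])
                    i = parents[i]
                return path[::-1]
    return None  # no collision-free path found within the budget
```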

  10. Sensor-based Motion Planning

  11. Grasp Planning for Unknown Objects
  • The object is modeled using the ToF camera and segmented from the scene.
  • Approach directions are generated from approximations of the object's geometry.
  • In simulation, the hand moves along an approach direction in a predefined preshape and closes its fingers.
  • Force-closure checking is used to find feasible grasps.
  • Joint-based finger impedance control is used to apply the grasping forces and to comply with model deviations.
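A sketch of the candidate-testing loop this slide describes. `simulate_grasp` and `is_force_closure` are placeholders for the simulator the authors use (GraspIt!), not real API calls:

```python
def plan_grasps(approach_dirs, simulate_grasp, is_force_closure,
                max_candidates=50):
    """Sketch of the simulated grasp-candidate loop (illustrative only).

    simulate_grasp(d)        : assumed to place the preshaped hand along
                               approach direction d, close the fingers in
                               simulation, and return the contact set
    is_force_closure(contacts): assumed to test the grasp wrench space
    """
    feasible = []
    for d in approach_dirs[:max_candidates]:
        contacts = simulate_grasp(d)        # preshape, approach, close fingers
        if contacts and is_force_closure(contacts):
            feasible.append((d, contacts))  # keep only force-closure grasps
    return feasible
```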

  12. Grasp Planning with CATCH
  • CATCH [Zhang2007] (Continuous Collision Detection for Articulated Models using Taylor Models and Temporal Culling) is used for grasp planning.
  • Continuous collision detection takes the motion of the objects into account and computes the first time of contact.
  • CATCH is 4–10 times faster than the extended PQP version in GraspIt!.
  • At least 10 grasp candidates can be tested within one second.
  (Figures: finger closing using Newton-Raphson in GraspIt! vs. using CATCH)
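To make "first time of contact" concrete, here is a deliberately simple bisection illustration; CATCH itself uses Taylor models and temporal culling and is far more efficient, so this is not the paper's algorithm. `distance_at(t)` is an assumed helper returning the separation distance along the finger motion:

```python
def first_time_of_contact(distance_at, t0=0.0, t1=1.0, eps=1e-4):
    """Bisection for the first time of contact along a known motion.

    distance_at(t): assumed to return the separation distance between
                    finger and object at interpolation time t in [0, 1].
    Assumes the bodies remain in contact once they touch, so the sign of
    the distance changes only once along the motion.
    """
    if distance_at(t1) > 0:
        return None                  # no contact along the whole motion
    while t1 - t0 > eps:
        mid = 0.5 * (t0 + t1)
        if distance_at(mid) > 0:
            t0 = mid                 # still separated: contact is later
        else:
            t1 = mid                 # already in contact: contact is earlier
    return t1
```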

  13. Grasping of Unknown Objects

  14. Manipulation of Cream-Like Mass
  • Manipulating a cream-like mass is a manipulation action that goes beyond pick-and-place operations.
  • We have implemented an ice cream serving scenario in which the robot serves equally sized ice cream scoops.
  • The ToF camera is used to detect the surface of the mass and to plan the manipulation trajectories for the tool.
  • The detected ice cream surfaces are segmented and calibrated.
  (Figures: real ice cream surface; segmented and calibrated ice cream surface)

  15. Manipulation of Cream-Like Mass
  • The scoop trajectories are generated from the ice cream surface.
  • The highest trajectory is selected for execution.
  • The intrusion depth of the scoop into the ice cream surface is computed from the volume of the scoop, so that every serving has the same size.
  • Cartesian impedance control of the arm is used to scoop the ice cream.
  • The reference trajectory is computed using the stiffness factor of the impedance controller.
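The slide mentions using the stiffness factor but gives no formula. Under the standard Cartesian impedance law, the reference offset would be computed as below; this reconstruction, including the symbols x_plan and F_d, is our assumption:

```latex
% Standard Cartesian impedance law: commanded force from tracking error,
% with K the stiffness matrix of the impedance controller.
F = K\,(x_{\mathrm{ref}} - x)
% To press the scoop with a desired contact force F_d at the planned
% surface contact x_plan, the reference trajectory is offset beyond it:
x_{\mathrm{ref}} = x_{\mathrm{plan}} + K^{-1} F_d
% (our reconstruction of "compute the reference trajectory using the
%  stiffness factor"; the paper's exact formulation may differ)
```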

  16. Manipulation of Cream-Like Mass

  17. Conclusion
  • The Time-of-Flight camera can provide useful depth information for service robots:
    • sensor-based motion planning
    • grasping of unknown objects
    • manipulation of cream-like mass
  • Measurement accuracy can be improved by calibration, and remaining errors can be compensated using arm impedance control.
  • Future work:
    • combination of multiple ToF cameras for a complete 3D environment model
    • combination of color cameras and ToF cameras for better object recognition and localization
    • observation of both the robot itself and its environment

  18. Thank you for your attention!
