Autonomous Grasp and Manipulation Planning using a ToF Camera (PowerPoint PPT Presentation)



SLIDE 1

Autonomous Grasp and Manipulation Planning using a ToF Camera

Zhixing Xue, Steffen Ruehl, Andreas Hermann, Thilo Kerscher and Ruediger Dillmann Presenter: Sven R. Schmidt-Rohr Research Center for Information Technology (FZI) at the University of Karlsruhe Karlsruhe, Germany

SLIDE 2
  • Motivation
  • Time-of-Flight Camera
  • Calibration
  • Segmentation
  • Applications
  • Motion Planning
  • Grasping
  • Manipulation
  • Conclusion


Content

Pictured: Mesa SwissRanger SR4000; sensor-based motion planning; grasping of unknown objects; manipulation of cream-like mass

SLIDE 3
  • Sensing and understanding its 3D environment is an important ability for a service robot that must grasp and manipulate objects in dynamic, cluttered surroundings.
  • Time-of-Flight (ToF) cameras can capture range information at video frame rates.
  • The sensed depth information is used for grasping and manipulation tasks:
  • Motion Planning: avoid collisions with detected obstacles
  • Grasping: grasp objects using the captured models
  • Manipulation: plan manipulation actions adapted to the object surface
  • Impedance control compensates for uncertainties due to sensor error.


Motivation

SLIDE 4
  • The sensor emits amplitude-modulated near-infrared light, which is reflected by objects in the scene and projected onto the chip
  • In each pixel, the phase shift between the reference and received signal is determined (by correlation) and the distance is computed
  • Output: a 2.5D depth map plus an intensity/amplitude image, i.e. a near-infrared image of ambient illuminance and reflectance


Time-of-Flight Principle

Mesa SwissRanger SR4000
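The distance computation described above can be sketched in a few lines of Python. This is only an illustration of the principle; the modulation frequency is an assumed example value, not a quoted SR4000 specification:

```python
import math

C = 299_792_458.0   # speed of light [m/s]
F_MOD = 30e6        # assumed modulation frequency [Hz] (example value)

def phase_to_distance(phase_shift_rad: float) -> float:
    """Distance from the phase shift between emitted and received signal.

    The light travels to the object and back, hence the factor 2:
    d = c * dphi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4.0 * math.pi * F_MOD)

def unambiguous_range() -> float:
    # A full 2*pi phase wrap corresponds to a distance of c / (2 * f_mod);
    # objects beyond it alias back into the near range.
    return C / (2.0 * F_MOD)
```

At a 30 MHz modulation frequency this gives an unambiguous range of roughly 5 m; the phase wrap is one reason the raw depth data needs careful calibration.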

SLIDE 5

Measurement Characteristics

Advantages:
  + 3D information without scanning
  + Video frame rate (20 – 50 fps)
  + Viewing frustum ~ 45°
  + Solid-state sensor
  + Varying ambient light conditions yield the same data, thanks to the illumination unit
  + Eye safety

Disadvantages:
  • Limited resolution (176x144 pixels)
  • Various factors affect measurement accuracy (~ 10 cm):
  • internal: noise (thermal, electronic, photon shot), propagation delay in the chip’s circuits, the exact form of the diode’s signal, lens distortion, …
  • external: temperature, ambient light, reflective properties of the viewed scene, …


  • Calibration of the sensed depth data is necessary
  • Segmentation of a priori known objects from the sensed depth data

SLIDE 6
  • A SwissRanger SR4000
  • For 3D modeling of the workspace
  • Mounted directly above the manipulation region to reduce occlusion
  • Two Pike cameras
  • For object recognition and localization
  • Two KUKA lightweight robot arms
  • 7 DoFs, with impedance control
  • Two DLR/HIT five-finger hands
  • 15 DoFs, with impedance control
  • A touch screen for human-machine interaction


Experiment Setup

SLIDE 7

Calibration of the SwissRanger SR4000

  • Stage 1: Estimation of intrinsic/extrinsic camera parameters using state-of-the-art tools
  • lens distortion, misalignment of the chip
  • Stage 2: Multi-plane calibration for per-pixel depth correction (generated offline)
  • accuracy 5 cm (on average)
  • Stage 3: Use of “landmarks” in the environment (e.g. wall, table) for per-pixel depth correction (online, by means of best fitting)
  • accuracy 1 cm (on average)

Errors in the depth map of a planar checkerboard
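A minimal sketch of the per-pixel correction idea behind stages 2 and 3, assuming the correction can be stored as one offset per pixel estimated against a reference surface at known depth; the actual multi-plane and landmark-based procedures are more involved:

```python
import numpy as np

def build_offset_table(measured: np.ndarray, true_depth: np.ndarray) -> np.ndarray:
    """Per-pixel systematic depth error, estimated from frames of a
    reference surface (e.g. a wall or table) at known distance."""
    return measured - true_depth

def correct_frame(raw_depth: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Subtract the stored per-pixel correction from a new depth frame."""
    return raw_depth - offsets
```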

SLIDE 8

Segmentation of Known Objects


Z-Buffer Rendering of Known Objects

Diagram blocks: depth information, camera pictures, object localization, depth comparison, segmented objects, point clouds
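The depth-comparison step can be sketched as follows, with the z-buffer rendering of the localized known objects stubbed out as a precomputed array (np.inf where no known object projects); the 2 cm tolerance is an assumed value:

```python
import numpy as np

def segment_known(measured: np.ndarray, rendered: np.ndarray,
                  tol: float = 0.02) -> np.ndarray:
    """Boolean mask: True where the ToF measurement is explained by a
    rendered known object (within tol meters); the remaining pixels
    are treated as unknown obstacles."""
    return np.abs(measured - rendered) < tol
```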

SLIDE 9
  • The environment model represents the environment with three kinds of data:
  • static geometry: doors, walls, tables, …
  • triangle meshes corresponding to the recognized and localized objects
  • segmented triangle meshes from the Time-of-Flight camera, approximating obstacles
  • During the transport phase, the grasped object is treated as part of the kinematic chain
  • A probabilistic collision-free path planner is used to find a trajectory to the desired arm position
  • The arm is operated in impedance mode to comply with deviations in the environment


Sensor-based Motion Planning
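The slides do not name the specific planner used; as one common instance of probabilistic collision-free path planning, here is a toy 2-D RRT over circular obstacles in an assumed 5 x 5 workspace (the real system plans arm trajectories against triangle meshes):

```python
import math
import random

def collides(p, obstacles):
    """Point-vs-circle collision check: obstacles are (center, radius) pairs."""
    return any(math.dist(p, c) < r for c, r in obstacles)

def rrt(start, goal, obstacles, step=0.2, iters=4000, seed=0):
    """Grow a random tree from start; return a list of waypoints or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # Goal-biased sampling in the assumed 5 x 5 workspace.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 5), rng.uniform(0, 5))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new, obstacles):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```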

SLIDE 10


Sensor-based Motion Planning

SLIDE 11
  • The object is modeled using the ToF camera and segmented from the scene
  • Approach directions are generated from approximations of the object’s geometry
  • In simulation, the hand moves along an approach direction in a predefined preshape and closes its fingers
  • Force-closure checking is used to find feasible grasps
  • Joint-based finger impedance control applies the grasping forces and complies with model deviations


Grasp Planning for Unknown Objects
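One simple way to generate approach directions from a geometry approximation, sketched under the assumption that the approximation is an axis-aligned bounding box of the object's point cloud (the actual approximation used in the paper may differ): each of the six box faces yields a start pose outside the box and an inward approach direction.

```python
import numpy as np

def approach_directions(points: np.ndarray, clearance: float = 0.1):
    """Candidate (start, direction) pairs from the AABB of an (N, 3) cloud."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0
    candidates = []
    for axis in range(3):
        for sign in (-1.0, 1.0):
            normal = np.zeros(3)
            normal[axis] = sign
            face = center.copy()
            face[axis] = (hi if sign > 0 else lo)[axis]
            start = face + clearance * normal   # hover outside the face
            candidates.append((start, -normal)) # approach opposes the outward normal
    return candidates
```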

SLIDE 12
  • CATCH [Zhang2007] (Continuous Collision Detection for Articulated Models using Taylor Models and Temporal Culling) has been used for grasp planning
  • Continuous collision detection takes the motion of the objects into account and computes the first time of contact
  • CATCH is 4 to 10 times faster than the extended PQP version in GraspIt!
  • At least 10 grasp candidates can be tested within one second


Grasp Planning with CATCH

Plots: using CATCH vs. using Newton-Raphson in GraspIt!
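To illustrate what "first time of contact" means, here is the simplest possible continuous collision query: a point moving with constant velocity against a sphere, solved in closed form. CATCH computes the same quantity for articulated triangle meshes using Taylor models; this is only the concept.

```python
import math

def first_contact(p0, v, center, radius):
    """Smallest t >= 0 with |p0 + t*v - center| = radius, or None.

    Substituting the motion into the sphere equation gives a quadratic
    a*t^2 + b*t + c = 0; the earlier root is the first time of contact."""
    w = [p0[i] - center[i] for i in range(3)]
    a = sum(vi * vi for vi in v)
    b = 2.0 * sum(w[i] * v[i] for i in range(3))
    c = sum(wi * wi for wi in w) - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None   # stationary, or the trajectory misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None
```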

SLIDE 13


Grasping of Unknown Objects

SLIDE 14
  • Manipulation of a cream-like mass is a further manipulation action, beyond pick-and-place operations
  • We have implemented an ice-cream-serving scenario in which the robot serves equally sized ice cream scoops
  • The ToF camera is used to detect the surface of the mass and to plan the manipulation trajectories for the tool
  • Segmentation and calibration of the detected ice cream surfaces


Manipulation of Cream-Like Mass

Figures: real ice cream surface; segmented and calibrated ice cream surface

SLIDE 15
  • The scoop trajectories are generated from the ice cream surface
  • The highest trajectory is selected for execution
  • The intrusion depth of the scoop into the ice cream surface is computed from the volume of the scoop
  • Cartesian impedance control of the arm is used to scoop the ice cream
  • The reference trajectory is computed using the stiffness factor of the impedance controller


Manipulation of Cream-Like Mass
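The two computations on this slide can be sketched under an assumed spherical-scoop model (the paper does not spell out the geometry): the intrusion depth d at which the immersed spherical cap, V(d) = pi * d^2 * (3r - d) / 3, holds the target volume, and the impedance reference offset delta_x = F / k that makes an arm with stiffness k exert force F at the surface.

```python
import math

def intrusion_depth(scoop_radius: float, target_volume: float) -> float:
    """Depth d with cap volume pi*d^2*(3r - d)/3 == target_volume.

    The cap volume grows monotonically on [0, 2r], so bisection suffices."""
    def cap(d):
        return math.pi * d * d * (3.0 * scoop_radius - d) / 3.0
    lo, hi = 0.0, 2.0 * scoop_radius
    for _ in range(60):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if cap(mid) < target_volume else (lo, mid)
    return (lo + hi) / 2.0

def reference_offset(desired_force: float, stiffness: float) -> float:
    """Offset to add to the surface trajectory so a stiffness-controlled
    arm exerts the desired force: delta_x = F / k."""
    return desired_force / stiffness
```

For a half-filled scoop (cap volume equal to half the sphere), the intrusion depth comes out as exactly the scoop radius, which is a quick sanity check on the formula.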

SLIDE 16


Manipulation of Cream-Like Mass

SLIDE 17
  • The Time-of-Flight camera provides useful depth information for service robots:
  • Sensor-based motion planning
  • Grasping of unknown objects
  • Manipulation of cream-like mass
  • Measurement accuracy can be improved by calibration and compensated for using arm impedance control
  • Future work:
  • Combination of multiple ToF cameras for a complete 3D environment model
  • Combination of color cameras and ToF cameras for better object recognition and localization
  • Observation of both the robot itself and its environment


Conclusion

SLIDE 18


Thank you for your attention!