Computer Vision for Mobile Robots in GPS Denied Areas, Michael Berli (PowerPoint presentation)



SLIDE 1

Computer Vision for Mobile Robots in GPS Denied Areas

Michael Berli, 28th of April 2015. Supervisor: Tobias Nägeli

SLIDE 2

Robots can work in places we as humans can't reach, and they can do jobs we are unable or unwilling to do. [1,2]

SLIDE 3

Autonomous mobile robots

§ How do we make robots navigate autonomously?

Robots should be able to explore an unknown environment and navigate inside it without active human control.

SLIDE 4

Autonomous mobile robots

§ Using computer vision for autonomous navigation

Mapless · Map-Based · Map-Building

SLIDE 5

Robots [3,4,5]

SLIDE 6

Focus in this talk

§ Type of robot: autonomous ground vehicles
§ Environment: indoor environments (rooms, tunnels, warehouses)
§ Sensors: cameras, wheel sensors

SLIDE 7

Robot scenarios: Industrial Automation [6]

SLIDE 8

Robot scenarios: Inspection & Discovery [7]

SLIDE 9

Robot scenarios: Space operations [8]

SLIDE 10

The three navigation classes: Mapless · Map-Based · Map-Building

SLIDE 11

Mapless Navigation

Walk through Paris without colliding. [10]

SLIDE 12

Collision Avoidance

SLIDE 13

Optical Flow

§ Describes the motion of patterns in successive images: a point at (x, y) in the frame at time t appears at (x+dx, y+dy) in the frame at t+1; the flow vector (u, v) is this per-pixel displacement.
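The displacement described above can be estimated in a small image window with the classic Lucas-Kanade least-squares method. The slide does not name a specific algorithm, so the following NumPy sketch is one illustrative choice (function and parameter names are my own):

```python
import numpy as np

def lucas_kanade_point(frame0, frame1, x, y, win=2):
    """Estimate the flow vector (u, v) of pixel (x, y) between two
    grayscale frames via Lucas-Kanade least squares."""
    # Spatial gradients (central differences) and temporal gradient.
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    # Collect gradients in a small window around (x, y).
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Solve the over-determined brightness-constancy system A @ [u, v] = b.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

For a textured scene the window system is well conditioned; on texture-free regions A becomes rank-deficient, which is exactly the drawback a later slide mentions.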

SLIDE 14

Optical Flow [11]

SLIDE 15

Frames at t0 and t1 [11]

SLIDE 16

Optical Flow

§ Get an understanding of depth in images
§ Time-To-Contact between a camera and an object

[11]

SLIDE 17

Optical Flow: Time-To-Contact

SLIDE 18

Optical Flow: Time-To-Contact

SLIDE 19

Optical Flow: Time-To-Contact

FOE (Focus of Expansion): the point in the image towards which the camera is heading; the flow vectors radiate outwards from it.

SLIDE 20

Optical Flow: Time-To-Contact

The flow field around the FOE is split into left, central, and right regions, yielding separate estimates TTC_l, TTC_c, and TTC_r.
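Under the pure-forward-motion assumption behind these slides, a point's time-to-contact is its distance from the FOE divided by its outward radial speed. A minimal sketch (the names are illustrative, not from the talk):

```python
import numpy as np

def time_to_contact(points, flows, foe):
    """Estimate time-to-contact (in frames) for tracked points.

    For pure forward motion, a point at radial distance r from the
    focus of expansion (FOE) drifts outwards with radial speed dr/dt,
    and TTC = r / (dr/dt)."""
    points = np.asarray(points, float)
    flows = np.asarray(flows, float)
    r_vec = points - np.asarray(foe, float)   # vector from FOE to point
    r = np.linalg.norm(r_vec, axis=1)         # radial distance
    dr = np.sum(flows * r_vec, axis=1) / r    # radial component of the flow
    return r / dr
```

For example, a point 10 px from the FOE moving outwards at 2 px/frame will be reached in 5 frames. Averaging TTC over the left, central, and right regions gives the TTC_l, TTC_c, TTC_r values shown on the slide.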

SLIDE 21

Obstacle Avoidance FSM [23]

SLIDE 22

Inspired by biology

SLIDE 23

Inspired by biology

SLIDE 24

Inspired by biology

Maximum of optical flow
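A bee-inspired controller of this kind reacts to the maximum of optical flow, i.e. the minimum time-to-contact, and steers away from it. The slides reference an FSM design [23] but do not spell it out, so the states and the danger threshold below are hypothetical:

```python
from enum import Enum, auto

class State(Enum):
    CRUISE = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()

def avoidance_step(ttc_left, ttc_center, ttc_right, danger=3.0):
    """One step of a minimal obstacle-avoidance state machine driven by
    time-to-contact values for the left, central, and right flow regions:
    turn away from the side that would be hit soonest."""
    if min(ttc_left, ttc_center, ttc_right) > danger:
        return State.CRUISE          # no imminent collision
    # Turn towards the side with the larger time-to-contact.
    return State.TURN_LEFT if ttc_left > ttc_right else State.TURN_RIGHT
```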
slide-25
SLIDE 25

§ Applications for visually impaired § Image Stabilization § Video Compression (MPEG) Drawbacks § Hard if no textures § Dynamic scenes?

Optical Flow: Further applications

25

SLIDE 26

The three navigation classes: Mapless · Map-Based · Map-Building

SLIDE 27

Map-Based Navigation

Use a map of Paris to navigate to the Champs-Élysées. [12]

SLIDE 28

Map-Based Navigation: Robot Scenario [13]

SLIDE 29

Map-Based Navigation: Map Representation

Topological Map: graph-based representation of features and their relations, often associated with actions.
+ simple and compact
− no absolute distances
− obstacle avoidance still needed

Metric Map: two-dimensional space in which objects and paths are placed.
+ very precise
− hard to obtain and to maintain
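In code, a topological map is naturally a graph whose nodes are recognizable places and whose edges are traversable connections; navigation then reduces to graph search. A sketch with a hypothetical four-room floor plan:

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first search over a topological map; returns the node
    sequence of a shortest route, or None if goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical floor plan: rooms A-D connected through corridor room B.
floor = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
```

Note that the route is a sequence of places and actions, not metric coordinates, which is why local obstacle avoidance is still needed while following it.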
slide-30
SLIDE 30

Map-Based Navigation Example

30

Use the topological map to navigate Build a topological map of the floor

SLIDE 31

Feature Extraction

Feature: an element which can easily be re-observed and distinguished from the environment.

§ Features should be
§ easily re-observable and distinguishable
§ plentiful in the environment
§ stationary

SLIDE 32

Room Identification

Signature of Room F [14]

SLIDE 33

Topological Map [14]

SLIDE 34

Room Searching: signature matching

[P]
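Signature matching can be sketched as a nearest-neighbour lookup. The slides do not define the signature format, so the sketch below makes the simplifying assumption that a room signature is a normalized feature histogram compared by L1 distance:

```python
import numpy as np

def match_room(signature, room_signatures):
    """Match an observed room signature against the stored signatures
    of a topological map; the best match minimizes the L1 distance.
    Signatures are modelled as normalized histograms (an assumption)."""
    best_room, best_dist = None, float("inf")
    for room, stored in room_signatures.items():
        dist = np.abs(np.asarray(signature) - np.asarray(stored)).sum()
        if dist < best_dist:
            best_room, best_dist = room, dist
    return best_room
```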

SLIDE 35

Drawbacks and Extensions

§ Learning and maintenance is expensive (e.g. removing a cupboard changes a room's signature)
§ Use scanner tags or artificial beacons?

SLIDE 36

The three navigation classes: Mapless · Map-Based · Map-Building [15]

SLIDE 37

Map-Building Navigation

Leave your hotel in Paris, explore the environment, and return to the hotel afterwards. [16]

SLIDE 38

Map-Building Navigation

§ Goal: in an unknown environment, the robot can build a map and localize itself in that map
§ Two application categories
§ Structure from Motion (offline)
§ Simultaneous Localization and Mapping (SLAM) ← real-time!

SLIDE 39

Structure from Motion (Offline)

The robot moves around and captures video frames → frame-to-frame feature detection → 3D map and trajectory reconstruction.

Pros
§ Well studied
§ Very accurate and robust solution

Cons
§ Offline approach
§ A changing environment requires a new learning phase

SLIDE 40

Simultaneous Localisation and Mapping (SLAM)

§ Build a map using dead reckoning and camera readings
§ We focus on EKF-SLAM (Extended Kalman Filter)

SLIDE 41

[15]

SLIDE 42

A map built with SLAM [15]

SLIDE 43

Dead Reckoning

§ Motion estimation with data from odometry and heading sensors: starting from a known position, each prediction step grows the position uncertainty.
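Dead reckoning just integrates the odometry readings into the pose estimate. A minimal sketch for a planar robot, assuming each reading is a travelled distance plus a heading change (the talk does not fix a particular odometry format):

```python
import math

def dead_reckon(x, y, theta, distance, dtheta):
    """One dead-reckoning step: fold an odometry reading (travelled
    distance and heading change) into the pose estimate (x, y, theta)."""
    theta_new = theta + dtheta
    x_new = x + distance * math.cos(theta_new)
    y_new = y + distance * math.sin(theta_new)
    return x_new, y_new, theta_new
```

Because every step adds sensor error on top of the previous estimate, the uncertainty ellipse around the pose grows without bound, which is exactly why camera observations are fused in later.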

SLIDE 44

Six steps of map-building (1/2) [17]

SLIDE 45

Six steps of map-building (2/2) [17]

SLIDE 46

EKF-SLAM: The system

The system is represented by
§ the system state vector
§ the system covariance matrix
SLIDE 47

EKF-SLAM: The state vector

$\hat{x}_v = (x_r, y_r, \theta_r)^T$ (robot pose), $\hat{y}_1 = (x_1, y_1)^T$, $\hat{y}_2 = (x_2, y_2)^T$, $\hat{y}_3 = (x_3, y_3)^T$ (feature positions)
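Stacking the robot pose and the feature positions into one vector can be sketched as follows (the helper name and the initial uncertainty value are assumptions for illustration):

```python
import numpy as np

def make_slam_state(robot_pose, features, sigma0=0.01):
    """Stack the robot pose x_v = (x_r, y_r, theta_r) and the feature
    positions y_i = (x_i, y_i) into one EKF-SLAM state vector; the
    covariance matrix holds robot, feature, and cross uncertainties."""
    x = np.concatenate([np.asarray(robot_pose, float)] +
                       [np.asarray(f, float) for f in features])
    P = np.eye(len(x)) * sigma0   # assumed small initial uncertainty
    return x, P
```

With three features the state has 3 + 3·2 = 9 entries, and the covariance is 9×9; the off-diagonal blocks are what let an observation of one feature improve the estimates of the robot and of the other features.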

SLIDE 48

EKF-SLAM: The covariance matrix

SLIDE 49

SLAM Process

1. Robot moves
2. PREDICTION of robot position
3. PREDICTION of observed features
4. Camera: feature extraction
5. Match predicted and observed features
6. EKF fusion: ESTIMATION of updated robot position

SLIDE 50

Motion model

§ Estimate the robot's new position after a movement

$x_v = f_v(\hat{x}_v, u)$: the motion model $f_v$ maps the old position $\hat{x}_v$ and the odometry input $u$ to the estimated robot position.
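The corresponding EKF prediction step propagates the robot part of the state through the motion model and grows its covariance. A sketch for a planar robot, assuming an odometry input u = (d, Δθ) and additive motion noise Q (the slide leaves the vehicle model open):

```python
import numpy as np

def predict(x, P, u, Q):
    """EKF prediction: x = (x_r, y_r, theta_r, ...), u = (d, dtheta).
    Moves the robot state and inflates its covariance with Q."""
    d, dtheta = u
    theta = x[2] + dtheta
    x_pred = x.copy()
    x_pred[0] += d * np.cos(theta)
    x_pred[1] += d * np.sin(theta)
    x_pred[2] = theta
    # Jacobian of the motion model with respect to the state.
    F = np.eye(x.size)
    F[0, 2] = -d * np.sin(theta)
    F[1, 2] = d * np.cos(theta)
    P_pred = F @ P @ F.T
    P_pred[:3, :3] += Q              # additive motion noise on the robot
    return x_pred, P_pred
```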

SLIDE 51

SLAM Process (process diagram repeated)

SLIDE 52

Measurement model

§ Based on the predicted robot position and the map, use a measurement model to predict which features should be in view now
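For a range-bearing camera model, one common choice (the slide leaves the sensor model open), predicting a feature observation looks like this:

```python
import numpy as np

def predict_measurement(robot_pose, feature):
    """Measurement model: predict the range and bearing at which a
    map feature should be observed from the predicted robot pose."""
    xr, yr, theta = robot_pose
    dx, dy = feature[0] - xr, feature[1] - yr
    rng = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - theta
    return np.array([rng, bearing])
```

Gating the predicted bearing against the camera's field of view then decides which map features "should be in view now" and therefore which ones to look for in the image.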

SLIDE 53

SLAM Process (process diagram repeated)
slide-54
SLIDE 54

Data matching

54

Prediction Camera

§ Match predicted and observed features
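A minimal form of this data association is gated nearest-neighbour matching; the gate threshold below is an assumed value for illustration:

```python
import numpy as np

def match_features(predicted, observed, gate=1.0):
    """Nearest-neighbour data association: pair each predicted feature
    measurement with the closest camera observation, rejecting pairs
    further apart than a gating threshold."""
    pairs = []
    for i, p in enumerate(predicted):
        dists = np.linalg.norm(np.asarray(observed) - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= gate:
            pairs.append((i, j))
    return pairs
```

Real EKF-SLAM systems typically gate with the Mahalanobis distance, which weights the residual by the innovation covariance instead of using a fixed Euclidean threshold.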

SLIDE 55

SLAM Process (process diagram repeated)
slide-56
SLIDE 56

EKF Fusion

56

Prediction Camera Residual

SLIDE 57

EKF Fusion

SLIDE 58

EKF Update
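The fusion and update steps above can be sketched as the standard EKF update equations, which fold the residual between observed and predicted measurements back into the state (the function signature is an illustrative choice):

```python
import numpy as np

def ekf_update(x, P, z, z_pred, H, R):
    """EKF fusion: combine the measurement residual with the prediction.
    H is the measurement Jacobian, R the measurement noise covariance."""
    y = z - z_pred                      # residual (innovation)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y                   # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # reduced uncertainty
    return x_new, P_new
```

The gain K weighs the residual by how uncertain the prediction is relative to the measurement, so a confident camera observation pulls the state strongly while a noisy one barely moves it.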

SLIDE 59

SLAM – Research topics

§ Robustness in changing environments
§ Multiple-robot mapping

SLIDE 60

Motion estimation of agile cameras

§ Real-Time SLAM with a Single Camera (Andrew J. Davison, University of Oxford, 2003)
§ Parallel Tracking and Mapping for Small AR Workspaces (Georg Klein, David Murray, University of Oxford, 2007)

[18]

SLIDE 61

Motion estimation of agile cameras

§ No odometry data; fast and unpredictable movements
§ Use a constant velocity model instead of odometry

$x_v = (x, y, z, \alpha, \beta, \delta, v_x, v_y, v_z, v_\alpha, v_\beta, v_\delta)$: position, orientation, and their velocities.
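With position, orientation, and their velocities all in the state, the prediction step simply integrates the velocities over the frame interval; any acceleration shows up as process noise. A minimal sketch of this constant-velocity prediction:

```python
import numpy as np

def constant_velocity_predict(x, dt):
    """Constant-velocity motion model for an agile hand-held camera:
    the state stacks pose (first half) and pose velocities (second
    half); prediction integrates the velocities over dt."""
    n = x.size // 2
    x_pred = x.copy()
    x_pred[:n] += dt * x[n:]    # pose += velocity * dt
    return x_pred
```

This replaces the odometry-driven motion model of wheeled EKF-SLAM: the filter assumes smooth motion between frames and lets the camera measurements correct the inevitable deviations. (A full implementation would represent orientation with a quaternion rather than three angles.)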

SLIDE 62

[19]

SLIDE 63

Motion estimation of agile cameras

§ Real-Time SLAM with a Single Camera (Andrew J. Davison, University of Oxford, 2003)
§ Parallel Tracking and Mapping for Small AR Workspaces (Georg Klein, David Murray, University of Oxford, 2007)

[19]

SLIDE 64

Tracking and Mapping for AR Workspaces [20]

SLIDE 65

[21]

SLIDE 66

What we have seen

§ What autonomous mobile robots are used for
§ How today's mobile robots navigate autonomously: mapless, map-based, map-building
§ The potential and the challenges of SLAM

SLIDE 67

References

Papers
1. Bonin-Font, Francisco, Alberto Ortiz, and Gabriel Oliver. "Visual navigation for mobile robots: A survey." Journal of Intelligent and Robotic Systems 53.3 (2008): 263-296.
2. Davison, Andrew J. "Real-time simultaneous localisation and mapping with a single camera." Proceedings of the 9th IEEE International Conference on Computer Vision, 2003.
3. Klein, Georg, and David Murray. "Parallel tracking and mapping for small AR workspaces." Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), 2007.
4. Davison, Andrew J. "Sequential localisation and map-building for real-time computer vision and robotics." Robotics and Autonomous Systems 36 (2001): 171-183.
5. Guzel, Mehmed Serdar, and Robert Bicker. "Optical Flow Based System Design for Mobile Robots." Robotics Automation and Mechatronics, 2010.
6. Mata, M., J.-M. Armingol, A. de la Escalera, and M.A. Salichs. "Using learned visual landmarks for intelligent topological navigation of mobile robots." 2003.

SLIDE 68

References

Images & Videos
1. https://www.youtube.com/watch?v=ISznqY3kESI
2. http://si.wsj.net/public/resources/images/BN-EJ674_DYSON3_G_20140904010817.jpg
3. http://cdn.phys.org/newman/gfx/news/hires/2013/therhextakes.jpg
4. http://www.designboom.com/cms/images/andrea08/aqua201.jpg
5. http://www.flyability.com/wp-content/uploads/2013/08/Flyabiliy-Gimball-2.png
6. http://cnet4.cbsistatic.com/hub/i/r/2014/12/01/b1baf339-67d6-4004-bc66-7dd34c11a870/resize/770x578/3d17e8de0dbd6d26cbf13e53a6c0b655/amazon-kiva-robots-donna-7611.jpg
7. http://cryptome.org/eyeball/daiichi-npp10/pict29.jpg
8. http://i.space.com/images/i/000/007/679/original/curiosity-mars-rover.jpg?1295367909
9. http://si.wsj.net/public/resources/images/BN-EJ674_DYSON3_G_20140904010817.jpg
10. http://www.paris-tours-guides.com/image/avenue-champs_elysees/walking-champs-elysees-paris.jpg
11. http://videohive.net/item/moving-train-and-passing-landscape/8960245?ref=Grey_Coast_Media&ref=Grey_Coast_Media&clickthrough_id=415192702&redirect_back=true
12. http://www.effectiveui.com/blog/wp-content/uploads/2012/06/Paris-Interactive-Map.jpg
13. https://timedotcom.files.wordpress.com/2015/03/463383156.jpg?quality=65&strip=color&w=1100
14. http://portal.uc3m.es/portal/page/portal/dpto_ing_sistemas_automatica/investigacion/lab_sist_inteligentes/publications/icra03a.pdf
15. http://www.soue.org.uk/souenews/issue4/mobilerobots.html
16. http://www.foreignpixel.com/wp-content/uploads/galleries/post-1227/full/street.jpg
17. https://www.doc.ic.ac.uk/~ajd/Publications/davison_kita_ras2001.pdf
18. http://ecx.images-amazon.com/images/I/41cveXjTHdL._SY300_.jpg
19. http://www.robots.ox.ac.uk/~ajd/Movies/realtime 30fps slam.mpg
20. http://www.robots.ox.ac.uk/~gk/publications/KleinMurray2007ISMAR.pdf
21. http://www.robots.ox.ac.uk/~gk/videos/klein 07 ptam ismar.avi
22. https://www.bcgperspectives.com/content/articles/business_unit_strategy_innovation_rise_of_robotics/
23. Guzel, Mehmed Serdar, and Robert Bicker. "Optical Flow Based System Design for Mobile Robots", 2010.