Global Robot Ego-Localization Combining Image Retrieval and HMM-based Filtering - PowerPoint PPT Presentation


SLIDE 1

Global Robot Ego-Localization Combining Image Retrieval and HMM-based Filtering

Cédric LE BARZ

ONERA, The French Aerospace Laboratory

PhD advisors:

  • M. CORD (Pierre&Marie Curie University - Paris)
  • S. HERBIN & M. SANFOURCHE (Onera - Palaiseau)


  • J-Y. DUFOUR (Thales company - Palaiseau)

SLIDE 2

Context illustration

Amazon Prime Air

SLIDE 3

Autonomous navigation

  • Needs accurate & absolute localization

 Accuracy is required to follow the trajectory and compute appropriate commands
 Absolute localization is required because the mission is specified with absolute coordinates

  • GPS ?

 May be absent (shadowing effect)
 May be jammed (military missions)
 Precision about 10 m

 Another information source is needed to deal with these GPS drawbacks. Our proposal: use visual information… like humans.

SLIDE 4

Our objective: visual absolute localization

  • How to find, in a geo-referenced image database, the image depicting the same scene as the last image acquired by the robot (the request image)?

 This is an image retrieval problem

  • A desirable feature would be:

 Use of a freely available, wide-coverage geo-referenced image database
 … like Google Street View

SLIDE 5

Image retrieval: State of the art

  • Image Retrieval (IR) algorithms

 Vote-based methods (kNN)
 Dictionary-based methods (BOVW)
 Descriptor-modeling-based methods (VLAD)
 Kernel-based methods

  • These methods are based on local descriptors, from which a compact signature is constructed and then used for the IR task.
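As an illustration of the dictionary-based (BOVW) family, the sketch below quantizes local descriptors against a visual codebook to build a compact histogram signature, then ranks database images by cosine similarity. The function name, the toy 2-D "descriptors", and the 3-word codebook are all illustrative assumptions, not the presentation's implementation:

```python
import numpy as np

def bovw_signature(descriptors, codebook):
    """Quantize local descriptors against a visual codebook and
    return an L2-normalized histogram (the image signature)."""
    # Assign each descriptor to its nearest codeword (visual word).
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy example: 2-D "descriptors" and a 3-word codebook (real systems
# use 128-D SIFT-like descriptors and codebooks of thousands of words).
rng = np.random.default_rng(0)
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
query = bovw_signature(rng.normal([1.0, 1.0], 0.1, (50, 2)), codebook)
db = [bovw_signature(rng.normal(c, 0.1, (50, 2)), codebook) for c in codebook]
# Retrieval: rank database images by cosine similarity of signatures.
scores = [float(query @ s) for s in db]
best = int(np.argmax(scores))  # -> 1 (the database image built around [1, 1])
```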

SLIDE 6

Image retrieval: Sometimes it works well…

SLIDE 7

Image retrieval

SLIDE 8

Image retrieval

SLIDE 9

Image retrieval: Sometimes it does not…

SLIDE 10

Image retrieval

SLIDE 11

Image retrieval

  • Many disparities may occur: points of view, illumination conditions, focal length, scene objects (cars, people, vegetation…)

  • Visual matching methods are sensitive to these disparities


 How can we make visual localization more robust ?

SLIDE 12

Our approach (1/2)

  • Combine visual information with odometric measurements (hybrid system)

SLIDE 13

Our approach (2/2)

  • Goal: find the trajectory that best explains the observations
  • The trajectory can be seen as a sequence of states
  • Each state is associated with a geo-referenced image
  • Means: a Hidden Markov Model (HMM)
  • Hidden states are places

SLIDE 14

HMM: ‘Π’ vector

  • Π is the initial state distribution vector: the probability that the first state/position is a particular state/position

  • Example: If we are in state S3 with the uniform uncertainty hypothesis, then Π = [0; 1/3; 1/3; 1/3; 0; 0]

  • ‘Π’ enables taking into account the initial estimated position and the initial localization uncertainty
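A minimal sketch of how such a Π vector could be built, reproducing the slide's example. The function name and the 0-based state indexing are assumptions for illustration, not the presentation's code:

```python
import numpy as np

def initial_distribution(n_states, estimated_state, uncertainty):
    """Uniform prior over the states within `uncertainty` steps of the
    estimated state (0-based index); zero probability elsewhere."""
    pi = np.zeros(n_states)
    lo = max(0, estimated_state - uncertainty)
    hi = min(n_states - 1, estimated_state + uncertainty)
    pi[lo:hi + 1] = 1.0
    return pi / pi.sum()

# Slide's example: 6 states, estimated state S3 (index 2), +/- 1 state.
pi = initial_distribution(6, 2, 1)
# -> [0, 1/3, 1/3, 1/3, 0, 0]
```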

SLIDE 15

HMM: ‘A’ matrix

  • A = {aij} is the transition probability matrix between states: aij = P(qt+1 = Sj | qt = Si)

  • Example: From state S2, odometric measurements enable us to estimate state S4. If we consider uniform uncertainty about the odometric measurements, then a2,j = [0; 0; 1/3; 1/3; 1/3; 0]

  • ‘A’ enables taking into account odometric measurements and odometric uncertainty (a priori information)
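A matching sketch for building ‘A’ from the odometry-predicted displacement, again reproducing the slide's example; the function name and 0-based indexing are illustrative assumptions:

```python
import numpy as np

def transition_matrix(n_states, displacement, uncertainty):
    """Row i gives P(q_{t+1}=S_j | q_t=S_i): uniform over the states
    within `uncertainty` of the odometry-predicted state i + displacement."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        lo = max(0, i + displacement - uncertainty)
        hi = min(n_states - 1, i + displacement + uncertainty)
        if lo <= hi:  # guard against rows whose window falls off the end
            A[i, lo:hi + 1] = 1.0
            A[i] /= A[i].sum()
    return A

# Slide's example: from S2 (index 1), odometry predicts S4 (index 3),
# with +/- 1 state of uncertainty.
A = transition_matrix(6, 2, 1)
# Row for S2 -> [0, 0, 1/3, 1/3, 1/3, 0]
```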

SLIDE 16

HMM: ‘B’ matrix

  • B is the likelihood of a specific observation given the state: P(O|S)

  • It enables taking into account the visual similarity between observations and geo-referenced database images

  • Visual similarity is computed from the number of matching descriptors between each request image and the database images
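With Π, A, and B in hand, the trajectory that best explains the observations can be decoded with the standard Viterbi algorithm. The sketch below is a generic textbook implementation with toy numbers (4 places, a forward-or-stay motion model, and made-up observation likelihoods), not the presentation's code:

```python
import numpy as np

def viterbi(pi, A, B_cols):
    """Most likely state sequence given initial distribution `pi`,
    transition matrix `A`, and one observation-likelihood column
    P(O_t | S) per time step (`B_cols`, shape T x n_states)."""
    pi, A, B_cols = (np.asarray(x, dtype=float) for x in (pi, A, B_cols))
    T, n = B_cols.shape
    delta = pi * B_cols[0]                  # best score of each state at t=0
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] * A          # score of every i -> j move
        back[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) * B_cols[t]
    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy scenario: 4 places, the robot moves forward one place or stays,
# and each observation looks most similar to the true current place.
pi = np.full(4, 0.25)
A = np.array([[.5, .5, 0, 0], [0, .5, .5, 0], [0, 0, .5, .5], [0, 0, 0, 1.]])
B_cols = np.array([[.7, .1, .1, .1], [.1, .7, .1, .1], [.1, .1, .7, .1]])
path = viterbi(pi, A, B_cols)
# -> [0, 1, 2]
```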

SLIDE 17

Qualitative results: Example

SLIDE 18

Qualitative results: Example

SLIDE 19

Experiments / Evaluation

Request images: Google Street View | 1105 images | 1 image every ~10 m
Geo-referenced database images: ‘Pittsburgh’ database (provided by Google company) | 2215 images | 1 image every ~5 m

  • 11 km trajectory
  • 640x480 images

SLIDE 20

Results: Mean localization error vs. initial localization uncertainty

  • Odometric uncertainty: 10 m
  • 5 observations

(*) A. Zamir and M. Shah, “Accurate image localization based on Google Maps Street View,” in Proceedings of the European Conference on Computer Vision. IEEE, 2010, pp. 255–268.

SLIDE 21

Results: Mean localization error vs. number of observations

  • Sensitivity of our solution to the number of past observations used, according to the initial localization uncertainty U

  • Odometric uncertainty: 10 m

=> The higher U, the more observations have to be considered to keep the mean localization error under a threshold.

SLIDE 22

Conclusions & Perspectives

Conclusion

  • We proposed a hybrid method that combines odometric measurements with visual similarity measurements.

  • Advantages

 Improves image retrieval / reduces localization error
 Complete re-estimation of the last part of the trajectory when required (long-term navigation)
 The developed framework can easily integrate new IR solutions
 Can be used for the kidnapped robot problem

Improvements

  • Learn HMM parameters (‘A’ matrix)
  • Improve visual similarity measurements (‘B’ matrix)

 Find discriminative and representative elements thanks to learning

  • Improve localization accuracy thanks to pose estimation

SLIDE 23

Thank you for your attention. Questions?
