16-311-Q INTRODUCTION TO ROBOTICS, FALL '17
LECTURE 23: SLAM PROPERTIES, DATA ASSOCIATION PROBLEM
INSTRUCTOR: GIANNI A. DI CARO

EKF FOR SIMULTANEOUS LOCALIZATION AND MAPPING
The SLAM EKF equations.

Prediction, at every time step k:

$$\hat{\xi}_{k+1|k} = f_k(\hat{\xi}_{k|k}, \mathbf{0};\, \Delta S_k, \Delta\theta_k)$$
$$P_{k+1|k} = F^k_\xi P_{k|k} {F^k_\xi}^T + F^k_\nu V_k {F^k_\nu}^T$$

When a new landmark i is observed (state augmentation):

$$\hat{\xi}^{\,*}_{k+1|k} = q_k(\hat{\xi}_{k|k}, z_{k+1})$$
$$P^{\,*}_{k+1|k} = Q^k_\xi \begin{bmatrix} P_{k|k} & \\ & W_k \end{bmatrix} {Q^k_\xi}^T$$

Update, when at time step k+1 a landmark i is observed:

$$\hat{\xi}_{k+1} = \hat{\xi}_{k+1|k} + G_{k+1}\big(z_{k+1} - o_k(\hat{\xi}_{k+1|k}, \mathbf{0})\big)$$
$$P_{k+1} = P_{k+1|k} - G_{k+1} O^k_\xi P_{k+1|k}$$
$$G_{k+1} = P_{k+1|k} {O^k_\xi}^T S^{-1}_{k+1}$$
$$S_{k+1} = O^k_\xi P_{k+1|k} {O^k_\xi}^T + O^k_w W_{k+1} {O^k_w}^T$$

The Kalman gain matrix G multiplies the innovation from the landmark observation, a 2-vector, so as to update every element of the state vector: the pose of the vehicle and the position of every map feature.
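The update equations above can be sketched numerically. A minimal NumPy sketch, assuming the observation is a 2-vector (range, bearing) and that the Jacobian O and observation-noise covariance W are supplied by the caller; the function and variable names are illustrative, not from any lecture code.

```python
import numpy as np

def ekf_slam_update(xi, P, z, z_pred, O, W):
    """One EKF-SLAM correction step, following the slide equations.

    xi     : (d,)  full state estimate [robot pose; landmark positions]
    P      : (d,d) state covariance
    z      : (2,)  actual landmark observation (range, bearing)
    z_pred : (2,)  predicted observation o_k(xi, 0)
    O      : (2,d) Jacobian of the observation model w.r.t. the state
    W      : (2,2) observation-noise covariance
    """
    S = O @ P @ O.T + W             # innovation covariance S_{k+1}
    G = P @ O.T @ np.linalg.inv(S)  # Kalman gain G_{k+1}
    xi = xi + G @ (z - z_pred)      # the 2-vector innovation touches EVERY entry
    P = P - G @ O @ P               # covariance shrinks everywhere (correlations)
    return xi, P
```

Note how a single 2-dimensional innovation, propagated through G, updates the whole (3 + 2n)-dimensional state, which is exactly the "elastic bands" effect discussed on the next slides.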
2
SLAM PERFORMANCE: COVARIANCE MATRIX
$$P = \begin{bmatrix} \Sigma_{RR} & \Sigma_{R\lambda} \\ \Sigma_{R\lambda}^T & \Sigma_{\lambda\lambda} \end{bmatrix}$$

(Σ_RR: robot pose covariance, Σ_λλ: landmark covariances, Σ_Rλ: robot-landmark cross-covariances)
3
IN THE LIMIT, ALL ESTIMATES BECOME FULLY CORRELATED
From the moment of initialization, the location of a landmark is a function of the robot location, such that errors in the robot location also appear as errors in the feature location. Every observation of a feature affects the estimate of every other feature in the map. It is as if they were all tied together with elastic bands: pulling on one pulls on the others in turn.
4
- In (d) the robot senses the first landmark again, whose position was known with relatively little uncertainty
5
CORRELATION HELPS TO REDUCE UNCERTAINTY
- Landmark position is either known (from a given map) or has been previously estimated (from the map being built during SLAM)
To reduce the pose uncertainty, and fully exploit the correlations in the covariance matrix, it is necessary to observe the same landmark multiple times
LOOP CLOSURE
Recognizing an already mapped area → (all) uncertainties collapse
Landmark uncertainties shrink differently according to their initial value and to their distance from the loop-closure point
Data association is potentially highly ambiguous
Environment symmetries make the problem even more difficult
A wrong loop closure is catastrophic and leads to divergence
6
ANOTHER SLAM SIMULATION MOVIE, MORE REALISTIC
7
THE EVOLUTION OF THE CORRELATION STRUCTURE
Known number of landmarks, normalized covariance (correlation) matrix
Checkerboard structure (dark and white alternating regularly): the x coordinates are correlated among themselves, and similarly for the y coordinates, but there is no cross-correlation between the two spatial dimensions
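The normalized covariance is just the correlation matrix. A small sketch (function name is mine) that produces it; with the state ordered x1, y1, x2, y2, …, plotting this matrix would show the checkerboard described above:

```python
import numpy as np

def correlation_matrix(P):
    """Normalize a covariance matrix so entries lie in [-1, 1]."""
    sigma = np.sqrt(np.diag(P))        # per-coordinate standard deviations
    return P / np.outer(sigma, sigma)  # C_ij = P_ij / (sigma_i * sigma_j)

# Toy landmark covariance ordered (x1, y1, x2, y2): x's correlate with x's,
# y's with y's, but there is no x-y cross-correlation.
P = np.array([[1.0, 0.0, 0.8, 0.0],
              [0.0, 1.0, 0.0, 0.9],
              [0.8, 0.0, 1.0, 0.0],
              [0.0, 0.9, 0.0, 1.0]])
C = correlation_matrix(P)
```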
8
When a new landmark is initialized, its uncertainty is maximal
The determinant of any landmark sub-matrix of the map covariance matrix decreases monotonically → landmark uncertainties never increase (the landmarks do not move, so the prediction step does not increase their covariance)
The robot position uncertainty can grow over time if no landmarks are observed
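The monotone decrease follows because the EKF update subtracts a positive semi-definite term from P, so every landmark sub-block can only shrink (in the Loewner order), and so can its determinant. A quick numerical check, using a random prior covariance and an assumed random observation Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 7                                 # pose (3) + two landmarks (2 + 2)
A = rng.normal(size=(d, d))
P = A @ A.T + d * np.eye(d)           # random symmetric positive-definite prior
O = rng.normal(size=(2, d))           # Jacobian of one 2-D observation
W = np.eye(2)                         # observation-noise covariance

S = O @ P @ O.T + W                   # innovation covariance
P_post = P - P @ O.T @ np.linalg.inv(S) @ O @ P   # EKF covariance update

# Each 2x2 landmark block has a non-increasing determinant.
for sl in (slice(3, 5), slice(5, 7)):
    assert np.linalg.det(P_post[sl, sl]) <= np.linalg.det(P[sl, sl]) + 1e-9
```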
9
UNCERTAINTY DECREASE FOR THE LANDMARKS
BOUNDS ON THE UNCERTAINTIES
The landmark position uncertainties (in the global reference frame) never drop below the initial uncertainty of the robot. In the limit, the covariance associated with any single landmark location estimate is determined only by the initial covariance in the robot location estimate. However, the relative uncertainties can asymptotically go to zero
10
Convergence is guaranteed when everything is linear. The filter can diverge if the nonlinearities are large . . .
11
COMPUTATIONAL ISSUES
Cost per step: quadratic in the number n of landmarks, O(n²)
Total cost to build a map with n landmarks: O(n³)
Memory: O(n²)
Problem: EKF-SLAM becomes computationally intractable for large maps!
Approaches exist that reduce the cost of EKF-SLAM by one order
12
SLAM IN ROS (GMAPPING PACKAGE)
Hokuyo URG laser scanner + ROS gmapping package
13
THE VICTORIA PARK DATASET/STUDY
The data-gathering procedure used a laser mounted on a car
The "reference" paper / dataset for SLAM
Full paper + videos: http://www-personal.acfr.usyd.edu.au/nebot/victoria_park.htm
14
THE “GOOD” TREES TO BE USED FOR LANDMARKS
It's not easy to single out the trees, distinguishing them from other environment features, passing people, and cars …
Loop closure is problematic here …
The complexity of the environment: distinguishing between people and trees; "complex" trees, not truly cylindrical ones
THE LANDMARKS (REAL AND WRONG ONES)
15
THE TIME EVOLUTION AND THE FINAL OUTPUT
Animation for the final trajectory and navigation map: http://www-personal.acfr.usyd.edu.au/nebot/publications/slam/video/slam1.gif Evolution of the error on the state: http://www-personal.acfr.usyd.edu.au/nebot/publications/slam/video/dev4.gif
16
THE PROBLEM OF CLASSIFYING LANDMARKS
17
18
DATA ASSOCIATION
So far, it was assumed that landmark correspondences can be determined with certainty: the identity of the landmark was returned by the sensor
Data association problem: the identity has to be determined based on the raw sensor data, or from the extracted features
- Which landmark in the map am I observing?
- Is the landmark a new one or an existing one?
- The extracted features: to what map element do they correspond?
19
Wrong data association can be catastrophic! DA is a very general classification problem, with many different solution approaches, such as: Nearest Neighbors, Joint Compatibility Branch and Bound (JCBB), Interpretation Tree, Joint Probabilistic Data Association filter (JPDA), … A good choice of the landmarks/features is essential to minimize errors:
- Should be easily re-observable
- Should be distinguishable from each other
- Should be plentiful in the environment
- Should be stationary
Maximum likelihood
20
ML DATA ASSOCIATION FOR EKF WITH UNKNOWN LANDMARK CORRESPONDENCES
Scenario: the number n of landmarks is known, so the data association algorithm only has to find the right association between the observed data and one of the n landmarks in the map, based on their estimated positions λ_i
At each step k, the state equation for the robot motion:

$$\xi_{k+1} = \begin{bmatrix} x_k \\ y_k \\ \theta_k \\ \lambda^1_k \\ \vdots \\ \lambda^n_k \end{bmatrix} + \begin{bmatrix} (\Delta S_k + \nu^s_k)\cos\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\ (\Delta S_k + \nu^s_k)\sin\!\big(\theta_k + \tfrac{\Delta\theta_k}{2} + \nu^\theta_k\big) \\ \Delta\theta_k + \nu^\theta_k \\ 0 \\ \vdots \\ 0 \end{bmatrix} = f_k(\xi_k, \nu_k;\, \Delta S_k, \Delta\theta_k)$$

Observation equation for landmarks / features (the identity i is not returned by the sensor):

$$z_{k+1} = \begin{bmatrix} \rho^i_{k+1} \\ \beta^i_{k+1} \end{bmatrix} = \begin{bmatrix} \sqrt{(\lambda^i_{k,x} - x_k)^2 + (\lambda^i_{k,y} - y_k)^2} \\ \arctan\!\big((\lambda^i_{k,y} - y_k)/(\lambda^i_{k,x} - x_k)\big) - \theta_k \end{bmatrix} + \begin{bmatrix} w^\rho_k \\ w^\beta_k \end{bmatrix} = o_k(\xi_k, w_k)$$
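Both models translate directly into code. A minimal NumPy sketch, assuming the state layout [x, y, θ, λ¹_x, λ¹_y, …] and using arctan2 in place of the arctan of the ratio so the bearing lands in the right quadrant (function names are mine):

```python
import numpy as np

def motion_model(xi, dS, dtheta, nu_s=0.0, nu_theta=0.0):
    """f_k from the slide: odometry-based prediction of the full SLAM state.

    xi = [x, y, theta, lm1_x, lm1_y, ...]; landmarks are static, so only
    the first three entries change.
    """
    xi = np.asarray(xi, dtype=float).copy()
    x, y, theta = xi[:3]
    heading = theta + dtheta / 2.0 + nu_theta   # midpoint heading
    xi[0] = x + (dS + nu_s) * np.cos(heading)
    xi[1] = y + (dS + nu_s) * np.sin(heading)
    xi[2] = theta + dtheta + nu_theta
    return xi

def observation_model(xi, i, w_rho=0.0, w_beta=0.0):
    """o_k from the slide: range-bearing observation of landmark i.

    Landmark i occupies entries [3 + 2*i : 5 + 2*i] of the state vector.
    """
    x, y, theta = xi[:3]
    lx, ly = xi[3 + 2 * i: 5 + 2 * i]
    rho = np.hypot(lx - x, ly - y) + w_rho            # range
    beta = np.arctan2(ly - y, lx - x) - theta + w_beta  # bearing
    return np.array([rho, beta])
```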
21
SENSOR FRAME, ROBOT FRAME, MAP FRAME

z_k = [ 𝝇 𝜸 ]ᵀ is the measurement returned by the sensor → measured position of the landmark in the sensor frame {S}

𝝁_i = [ x_i y_i ]ᵀ is the estimated position of landmark i (based on observations) in map M → landmark position in the map frame {M}

- We have assumed (and will keep assuming) that the sensor frame {S} and the robot frame (base_link) {R} coincide
- The robot pose is in the map frame
- The observation equation o_k() transforms landmark map coordinates into the sensor frame in order to compare them (and accounts for noise)
22
ML CRITERION FOR LANDMARK ASSOCIATION
Innovation, if the observed landmark is λ_j:
- Each feature of the observation vector provides a contribution
- Which one is the best landmark? ML: the one that best explains the observation
- → The one with the minimal distance from the observation, accounting for the uncertainties
23
MAHALANOBIS DISTANCE
The distance between the measured and the predicted observation is a Gaussian random variable, being the difference of two Gaussian random variables:

$$\hat{\epsilon}^{\,j}_{k+1} = z_{k+1} - \hat{z}^{\,j}_{k+1}$$

The distance between two Gaussian random variables is well captured by the Mahalanobis distance
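The resulting ML association rule can be sketched as follows, assuming each candidate landmark j comes with its predicted observation ẑʲ and innovation covariance Sʲ; the default gate value 9.21 is an assumption on my part (the 99% chi-square quantile for 2 degrees of freedom), anticipating the validation-gate slides:

```python
import numpy as np

def mahalanobis_sq(z, z_pred, S):
    """Squared Mahalanobis distance between observation and prediction."""
    eps = z - z_pred                          # innovation
    return float(eps @ np.linalg.solve(S, eps))

def ml_associate(z, predictions, gate=9.21):
    """ML data association: pick the landmark that best explains z.

    predictions : list of (z_pred_j, S_j) pairs, one per map landmark.
    Returns the index j minimizing the Mahalanobis distance, or None if
    no candidate falls inside the validation gate.
    """
    best_j, best_d = None, gate
    for j, (z_pred, S) in enumerate(predictions):
        d = mahalanobis_sq(z, z_pred, S)
        if d < best_d:
            best_j, best_d = j, d
    return best_j
```

The gate matters: without it, an observation of a brand-new (or spurious) landmark would always be forced onto the nearest existing one, which is exactly the catastrophic wrong-association case discussed earlier.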
24
25
26
27
ML CRITERION FOR LANDMARK SELECTION
28
VALIDATION GATES
29
WHAT VALUE FOR 𝝳?
30
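One common answer (stated here as an assumption, since the slide content itself is not recoverable): the squared Mahalanobis distance of a correct association is chi-square distributed with m degrees of freedom, where m is the observation dimension (m = 2 for a range-bearing measurement), so 𝝳 is chosen as a chi-square quantile for a desired confidence level. For m = 2 the quantile has a closed form, since a chi-square with 2 d.o.f. is an exponential with mean 2:

```python
import math

def gate_threshold(confidence):
    """Validation-gate value delta for a 2-dof (range, bearing) observation.

    Solves P(d^2 <= delta) = confidence, where d^2 ~ chi-square with 2
    d.o.f. (an exponential with mean 2, so the quantile is closed-form).
    """
    return -2.0 * math.log(1.0 - confidence)

# gate_threshold(0.95) ≈ 5.99, gate_threshold(0.99) ≈ 9.21
```

For higher-dimensional observations one would instead use a generic chi-square quantile function (e.g. scipy.stats.chi2.ppf).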