
Analysis and Control of Multi-Robot Systems: Multi-Robot Localization



  1. Elective in Robotics 2011/2012. Analysis and Control of Multi-Robot Systems: Multi-Robot Localization. Dr. Paolo Stegagno, Dipartimento di Ingegneria Informatica, Automatica e Gestionale

  2. Introduction
  • multi-robot localization: estimation of the state (pose) of the components of a team of robots
  • required in most multi-robot tasks, such as:
    • coordinated control
    • motion in formation
    • cooperative map building
    • cooperative estimation
    • in general, all algorithms that imply cooperation and/or exchange of data
  • not required in some low-level swarm behaviors based on instantaneous perception and no communication among robots

  3. Example of multi-robot application
  • task: run evenly spaced on a circle centered on a target/teammate
  • each robot is modeled as an integrator in polar coordinates:
    $\dot q_i = \begin{pmatrix} \dot\rho_i \\ \dot\phi_i \end{pmatrix} = \begin{pmatrix} v_{\rho_i} \\ v_{\phi_i} \end{pmatrix}$
    and controlled with the law:
    $\dot\rho_i = K_\rho (R - \rho_i)$
    $\dot\phi_i = \Omega + K_\phi (\bar\phi_i - \phi_i), \quad \bar\phi_i = \mathrm{mean}(\phi_{i+1}, \phi_{i-1})$
  • robot i needs to know the phases of robots i+1 and i-1 to perform the control
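A minimal sketch of this control law; it assumes robot i already knows its own polar coordinates and the phases of robots i+1 and i-1 (all function and parameter names are illustrative, not taken from the slides):

```python
import numpy as np

def circle_control(rho_i, phi_i, phi_next, phi_prev,
                   R=1.0, Omega=0.2, K_rho=1.0, K_phi=0.5):
    """Control for robot i, modeled as an integrator in polar coordinates.

    phi_next and phi_prev are the phases of robots i+1 and i-1, which robot i
    must obtain through communication or relative localization. Angle wrapping
    is ignored in this sketch."""
    phi_bar = np.mean([phi_next, phi_prev])      # desired phase: midpoint of the two neighbours
    rho_dot = K_rho * (R - rho_i)                # converge to the circle of radius R
    phi_dot = Omega + K_phi * (phi_bar - phi_i)  # rotate at Omega while spacing evenly
    return rho_dot, phi_dot
```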

  4. World Localization
  • World Localization (WL): an external system localizes the components of a multi-robot team in a world frame
  • Global Positioning System (GPS)
    • at least 4 satellites in line-of-sight
    • not available indoors
    • not suited for accurate localization
    • sensitive to environmental conditions
    • not available on the Moon!
    • centralized
  • Camera System
    • frequent recalibration
    • only in structured environments
    • centralized

  5. Map Localization
  • Map Localization: each agent localizes itself in the frame of a common map of the environment
  • no external measurement system: the agents have to rely on their own sensory perceptions
    • proprioceptive: encoders, IMU
    • exteroceptive: laser scan, camera, Kinect
  • implies a priori knowledge; the map can be specified as:
    • a topological description of the environment (occupancy grid, polygonal representation of the free space, ...)
    • a set of landmarks (beacons) at known positions
  • usually the information coming from the two types of sensors (proprioceptive and exteroceptive) is fused by the use of filters

  6. Kalman Filter
  • given a system whose model is
    state: $q_k = A q_{k-1} + B (u_{k-1} + \nu_{u_{k-1}})$
    measure: $z_k = C q_k + \nu_{z_k}$
    with $\nu_{u_{k-1}} \sim \mathcal{N}(0, Q)$, $\nu_{z_k} \sim \mathcal{N}(0, R)$
  • the prediction step, based on the model and the prior knowledge of the state, is
    state: $\hat q_k^- = A \hat q_{k-1} + B u_{k-1}$
    covariance: $P_k^- = A P_{k-1} A^T + B Q B^T$
  • the update step, which compares the prediction to the measurement, is
    predicted measure: $\hat z_k = C \hat q_k^-$
    innovation: $\bar y_k = z_k - \hat z_k$
    innovation covariance: $S_k = C P_k^- C^T + R$
    Kalman gain: $K_k = P_k^- C^T S_k^{-1}$
    state: $\hat q_k = \hat q_k^- + K_k \bar y_k$
    covariance: $P_k = (I - K_k C) P_k^-$
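A compact NumPy sketch of one predict/update cycle of the linear Kalman filter above; the matrix names follow the slide, everything else is illustrative:

```python
import numpy as np

def kf_step(q_est, P, u_prev, z, A, B, C, Q, R):
    """One KF cycle for q_k = A q_{k-1} + B (u_{k-1} + nu_u), z_k = C q_k + nu_z."""
    # prediction, based on the model
    q_pred = A @ q_est + B @ u_prev
    P_pred = A @ P @ A.T + B @ Q @ B.T
    # update, comparing the prediction to the measurement
    z_pred = C @ q_pred                      # predicted measure
    y = z - z_pred                           # innovation
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    q_new = q_pred + K @ y
    P_new = (np.eye(P.shape[0]) - K @ C) @ P_pred
    return q_new, P_new
```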

  7. Extended Kalman Filter
  • the equivalent of the KF for nonlinear systems
    state: $q_k = f(q_{k-1}, u_{k-1} + \nu_{u_{k-1}})$
    measure: $z_k = h(q_k) + \nu_{z_k}$
    with $\nu_{u_{k-1}} \sim \mathcal{N}(0, Q)$, $\nu_{z_k} \sim \mathcal{N}(0, R)$
  • let
    $F_{q_{k-1}} = \left.\frac{\partial f}{\partial q}\right|_{\{q,u\}=\{q_{k-1},u_{k-1}\}}, \quad F_{u_{k-1}} = \left.\frac{\partial f}{\partial u}\right|_{\{q,u\}=\{q_{k-1},u_{k-1}\}}, \quad H_k = \left.\frac{\partial h}{\partial q}\right|_{q=q_k}$
  • prediction (1)
    state: $\hat q_k^- = f(\hat q_{k-1}, u_{k-1})$
    covariance: $P_k^- = F_{q_{k-1}} P_{k-1} F_{q_{k-1}}^T + F_{u_{k-1}} Q F_{u_{k-1}}^T$
  • update (2)
    predicted measure: $\hat z_k = h(\hat q_k^-)$
    innovation: $\bar y_k = z_k - \hat z_k$
    innovation covariance: $S_k = H_k P_k^- H_k^T + R$
    Kalman gain: $K_k = P_k^- H_k^T S_k^{-1}$
    state: $\hat q_k = \hat q_k^- + K_k \bar y_k$
    covariance: $P_k = (I - K_k H_k) P_k^-$
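The same cycle for the EKF, written generically: the model f, the measure function h and the Jacobians are passed in as callables (an illustrative interface, not one prescribed by the course):

```python
import numpy as np

def ekf_step(q_est, P, u_prev, z, f, h, F_q_fn, F_u_fn, H_fn, Q, R):
    """One EKF cycle for q_k = f(q_{k-1}, u_{k-1} + nu_u), z_k = h(q_k) + nu_z."""
    F_q = F_q_fn(q_est, u_prev)              # df/dq at (q_{k-1}, u_{k-1})
    F_u = F_u_fn(q_est, u_prev)              # df/du at (q_{k-1}, u_{k-1})
    # prediction (1)
    q_pred = f(q_est, u_prev)
    P_pred = F_q @ P @ F_q.T + F_u @ Q @ F_u.T
    # update (2)
    H = H_fn(q_pred)                         # dh/dq evaluated at the prediction
    y = z - h(q_pred)                        # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    q_new = q_pred + K @ y
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return q_new, P_new
```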

  8. Example 1: Map Localization
  • consider now the problem of map localization in 2-D of a single robot R with state q, whose evolution follows the unicycle model
    $\dot q = \begin{pmatrix} \dot x \\ \dot y \\ \dot\theta \end{pmatrix} = \begin{pmatrix} (v+\nu_v)\cos\theta \\ (v+\nu_v)\sin\theta \\ \omega+\nu_\omega \end{pmatrix}, \quad u = \begin{pmatrix} v \\ \omega \end{pmatrix}, \quad \nu_u = \begin{pmatrix} \nu_v \\ \nu_\omega \end{pmatrix} \sim \mathcal{N}(0, Q)$
  • $u = (v\ \ \omega)^T$ is the control input of the system
  • the map is composed of a single landmark whose position $p_b = (x_b\ \ y_b)$ is known w.r.t. a world frame W
  • R is able to gather relative measurements of the position of the landmark w.r.t. a frame of reference attached to itself, whose measure function is
    $z = R^T(\theta)(p_b - p) + \nu_z, \quad \nu_z \sim \mathcal{N}(0, R)$
  • the final goal of R is to estimate its own pose in W (i.e.: to localize itself in the map)

  9. Example 1: Map Localization
  • the system
    $\dot q = \begin{pmatrix} \dot x \\ \dot y \\ \dot\theta \end{pmatrix} = \begin{pmatrix} (v+\nu_v)\cos\theta \\ (v+\nu_v)\sin\theta \\ \omega+\nu_\omega \end{pmatrix}$
    must be discretized in order to compute the prediction step of the EKF
  • discretization ($a \to a_{k-1}$, $\dot a \to \frac{a_k - a_{k-1}}{\Delta T}$):
    $q_k = \begin{pmatrix} x_k \\ y_k \\ \theta_k \end{pmatrix} = q_{k-1} + \begin{pmatrix} \Delta T \cos\theta_{k-1} & 0 \\ \Delta T \sin\theta_{k-1} & 0 \\ 0 & \Delta T \end{pmatrix} (u_{k-1} + \nu_{u_{k-1}})$
  • then the Jacobians of f w.r.t. $q_{k-1}$ and $u_{k-1}$ are:
    $F_{q_{k-1}} = \begin{pmatrix} 1 & 0 & -\Delta T \sin\theta_{k-1}\, v_{k-1} \\ 0 & 1 & \Delta T \cos\theta_{k-1}\, v_{k-1} \\ 0 & 0 & 1 \end{pmatrix}, \quad F_{u_{k-1}} = \begin{pmatrix} \Delta T \cos\theta_{k-1} & 0 \\ \Delta T \sin\theta_{k-1} & 0 \\ 0 & \Delta T \end{pmatrix}$
  • prediction
    state: $\hat q_k^- = f(\hat q_{k-1}, u_{k-1})$
    covariance: $P_k^- = F_{q_{k-1}} P_{k-1} F_{q_{k-1}}^T + F_{u_{k-1}} Q F_{u_{k-1}}^T$
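The discretized model and its Jacobians translate directly into code; a sketch, assuming the state ordering (x, y, theta), the input (v, omega) and a sampling time dT as in the slides:

```python
import numpy as np

def f_unicycle(q, u, dT):
    """Discretized unicycle: q_k = q_{k-1} + G(theta_{k-1}) u_{k-1}."""
    x, y, theta = q
    v, w = u
    return np.array([x + dT * np.cos(theta) * v,
                     y + dT * np.sin(theta) * v,
                     theta + dT * w])

def F_q_unicycle(q, u, dT):
    """Jacobian of f w.r.t. the state q_{k-1}."""
    theta = q[2]
    v = u[0]
    return np.array([[1.0, 0.0, -dT * np.sin(theta) * v],
                     [0.0, 1.0,  dT * np.cos(theta) * v],
                     [0.0, 0.0,  1.0]])

def F_u_unicycle(q, u, dT):
    """Jacobian of f w.r.t. the input u_{k-1}."""
    theta = q[2]
    return np.array([[dT * np.cos(theta), 0.0],
                     [dT * np.sin(theta), 0.0],
                     [0.0,                dT]])
```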

  10. Example 1: Map Localization
  • update
    $z = h(q) + \nu_z = R^T(\theta)(p_b - p) + \nu_z$
  • we need to compute the Jacobian of h(q) to completely determine the equations of the EKF
  • we take the derivative of h w.r.t. p
    $\frac{\partial h}{\partial p} = -R^T(\theta)$
  • and the derivative of h w.r.t. $\theta$
    $\frac{\partial h}{\partial \theta} = \begin{pmatrix} -\sin\theta & \cos\theta \\ -\cos\theta & -\sin\theta \end{pmatrix} (p_b - p)$
  • putting the previous results together
    $H_k = \left.\frac{\partial h}{\partial q}\right|_{q=q_k} = \begin{pmatrix} -R^T(\theta_k) & \begin{pmatrix} -\sin\theta_k & \cos\theta_k \\ -\cos\theta_k & -\sin\theta_k \end{pmatrix} (p_b - p) \end{pmatrix}$
  • to conclude, just plug $H_k$ into the update equations (2) of the EKF
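A sketch of the measure function and of H_k, matching the expressions above (p_b is the known landmark position; helper names are illustrative):

```python
import numpy as np

def rot(theta):
    """Planar rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def h_landmark(q, p_b):
    """Relative landmark position in the robot frame: h(q) = R(theta)^T (p_b - p)."""
    p, theta = q[:2], q[2]
    return rot(theta).T @ (p_b - p)

def H_landmark(q, p_b):
    """Jacobian of h w.r.t. q = (x, y, theta): H = [ -R(theta)^T | dh/dtheta ]."""
    p, theta = q[:2], q[2]
    c, s = np.cos(theta), np.sin(theta)
    dh_dtheta = np.array([[-s,  c],
                          [-c, -s]]) @ (p_b - p)
    return np.hstack([-rot(theta).T, dh_dtheta.reshape(2, 1)])
```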

  11. Example 2: Map Localization with Uncertainty
  • assume now that we also want to estimate the position of the landmark, which is known only with some uncertainty
  • we define the new state of the filter $\xi = (q^T\ \ p_b^T)^T$, whose evolution is
    $\dot\xi = \begin{pmatrix} \dot x \\ \dot y \\ \dot\theta \\ \dot x_b \\ \dot y_b \end{pmatrix} = \begin{pmatrix} (v+\nu_v)\cos\theta \\ (v+\nu_v)\sin\theta \\ \omega+\nu_\omega \\ 0 \\ 0 \end{pmatrix} \;\Rightarrow\; \tilde F_{\xi_{k-1}} = \begin{pmatrix} F_{q_{k-1}} & 0_{3\times2} \\ 0_{2\times3} & I_2 \end{pmatrix}, \quad \tilde F_{u_{k-1}} = \begin{pmatrix} F_{u_{k-1}} \\ 0_{2\times2} \end{pmatrix}$
  • the prediction for the first three components of $\xi$ is the same as in the case of perfect knowledge of the position of the landmark, while the prediction for the landmark is trivially
    $\begin{pmatrix} \hat x_{b,k}^- \\ \hat y_{b,k}^- \end{pmatrix} = \begin{pmatrix} \hat x_{b,k-1} \\ \hat y_{b,k-1} \end{pmatrix}$
  • the covariance of the prediction is
    $P_k^- = \begin{pmatrix} F_{q_{k-1}} & 0_{3\times2} \\ 0_{2\times3} & I_2 \end{pmatrix} P_{k-1} \begin{pmatrix} F_{q_{k-1}} & 0_{3\times2} \\ 0_{2\times3} & I_2 \end{pmatrix}^T + \begin{pmatrix} F_{u_{k-1}} \\ 0_{2\times2} \end{pmatrix} Q \begin{pmatrix} F_{u_{k-1}} \\ 0_{2\times2} \end{pmatrix}^T$
    so that the covariance of the position of the landmark is not modified by the motion of the robot
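The augmented prediction can reuse the Jacobians of Example 1, padding them with zero blocks and an identity as in the formulas above; a sketch under those definitions:

```python
import numpy as np

# reuses f_unicycle, F_q_unicycle and F_u_unicycle from the Example 1 sketch above
def predict_augmented(xi_est, P, u_prev, dT, Q):
    """EKF prediction for xi = (x, y, theta, x_b, y_b): the landmark part is static."""
    q, p_b = xi_est[:3], xi_est[3:]
    # state prediction: robot propagated with the unicycle model, landmark unchanged
    xi_pred = np.concatenate([f_unicycle(q, u_prev, dT), p_b])
    # block Jacobians: F_xi = [[F_q, 0], [0, I_2]] and F_u_tilde = [[F_u], [0]]
    F_xi = np.block([[F_q_unicycle(q, u_prev, dT), np.zeros((3, 2))],
                     [np.zeros((2, 3)),            np.eye(2)]])
    F_u_t = np.vstack([F_u_unicycle(q, u_prev, dT), np.zeros((2, 2))])
    P_pred = F_xi @ P @ F_xi.T + F_u_t @ Q @ F_u_t.T
    return xi_pred, P_pred
```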

  12. Example 2: Map Localization with Uncertainty
  • the measure equation is the same as in the previous case
    $z = h(q) + \nu_z = R^T(\theta)(p_b - p) + \nu_z$
  • but now we also need to take the derivative w.r.t. $p_b$
    $\frac{\partial h}{\partial p_b} = R^T(\theta)$
  • we finally get
    $\tilde H_k = \left.\frac{\partial h}{\partial \xi}\right|_{\xi=\xi_k} = \begin{pmatrix} H_k|_{p_b = p_{b,k}} & R^T(\theta_k) \end{pmatrix}$
  • remarks:
    • the covariance of the position of the landmark is not modified by the prediction step
    • the covariance of the position of the landmark is reduced by the update step
    • for $k \to \infty$ the uncertainty on the position of the landmark goes to zero
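The augmented measurement Jacobian simply appends the new block R^T(theta) to the H_k of Example 1; a small sketch reusing the helpers from the Example 1 measurement sketch:

```python
import numpy as np

# reuses rot() and H_landmark() from the Example 1 sketch above
def H_augmented(xi):
    """Jacobian of h w.r.t. xi = (q, p_b): H_tilde = [ H_k | R(theta)^T ]."""
    q, p_b = xi[:3], xi[3:]
    return np.hstack([H_landmark(q, p_b), rot(q[2]).T])
```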

  13. Cooperative Positioning
  • idea: use the other robots of the team as movable landmarks
  • robots are divided into two groups, A and B, and track their positions by repeating move-and-stop actions as explained in the following procedure:
    1. group A remains stationary at a known position; group B moves and positions itself relative to group A using information from its internal sensors (i.e.: dead reckoning, encoders)
    2. stop group B after it has traveled an appropriate distance, and accurately measure its positions relative to the group-A robots
    3. exchange the roles of groups A and B and repeat the steps above
    4. repeat this process until the robots reach the target positions
  reference: R. Kurazume, S. Nagata, and S. Hirose. Cooperative positioning with multiple robots. In Proc. IEEE International Conference on Robotics and Automation, pages 1250-1257, vol. 2, 1994.
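A high-level sketch of the move-and-stop loop described above; the motion, measurement and stopping primitives are placeholder callables, not functions from the original paper:

```python
def cooperative_positioning(group_A, group_B, move, relative_fix, at_target):
    """Move-and-stop loop: one group stays put as landmarks while the other moves
    and is then re-localized relative to it. `move`, `relative_fix` and `at_target`
    stand in for the real motion, sensing and termination logic of the robots."""
    while not at_target(group_A + group_B):
        # 1. group A remains stationary at known positions; group B moves using
        #    its internal sensors only (dead reckoning, encoders)
        for robot in group_B:
            move(robot)
        # 2. stop group B and accurately measure its positions relative to group A
        for robot in group_B:
            robot["pose"] = relative_fix(robot, group_A)
        # 3. exchange the roles of the two groups and repeat
        group_A, group_B = group_B, group_A
    return group_A + group_B
```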
