
slide-1
SLIDE 1

Elective in Robotics 2011/2012

Analysis and Control of Multi-Robot Systems

Multi-Robot Localization

  • Dr. Paolo Stegagno

Dipartimento di Ingegneria Informatica, Automatica e Gestionale

1 / 73

slide-2
SLIDE 2

Introduction

  • multi-robot localization: estimation of the state (pose) of the components of a team of robots
  • required in most multi-robot tasks, such as:
  • coordinated control
  • motion in formation
  • cooperative map building
  • cooperative estimation
  • in general, all algorithms that imply cooperation and/or exchange of data
  • not required in some low-level swarm behaviors based on instantaneous perception and no communication among robots

2 / 73

slide-3
SLIDE 3

Example of multi-robot application

  • task: run evenly spaced on a circle centered on a target/teammate
  • each robot is modeled as an integrator in polar coordinates:
      q̇_i = (ρ̇_i, φ̇_i)ᵀ = (v_{ρi}, v_{φi})ᵀ
  • and controlled with the law:
      ρ̇_i = K_ρ (R − ρ_i)
      φ̇_i = Ω + K_φ (φ̄_i − φ_i),  φ̄_i = mean(φ_{i+1}, φ_{i−1})
  • robot i needs to know the phase of robots i + 1 and i − 1 to perform the control

  [figure: robots evenly spaced on a circle around the target]
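The control law above can be sketched in a few lines of simulation. This is an illustrative sketch, not code from the lecture: the gains, time step, and initial conditions are assumed, and the angle-wrapping helper `circular_mean` is an implementation detail added here.

```python
import math

# Sketch of the circle controller from the slide (gains and dt are assumed):
#   rho_dot_i = K_rho * (R - rho_i)
#   phi_dot_i = Omega + K_phi * (mean(phi_{i+1}, phi_{i-1}) - phi_i)

def circular_mean(a, b):
    """Mean of two angles, robust to wrap-around."""
    return math.atan2((math.sin(a) + math.sin(b)) / 2.0,
                      (math.cos(a) + math.cos(b)) / 2.0)

def step(rhos, phis, R=1.0, Omega=0.2, K_rho=1.0, K_phi=1.0, dt=0.01):
    """One Euler integration step of the polar-coordinate integrators."""
    n = len(rhos)
    new_rhos, new_phis = [], []
    for i in range(n):
        # robot i uses the phases of robots i+1 and i-1 (ring topology)
        phi_bar = circular_mean(phis[(i + 1) % n], phis[(i - 1) % n])
        rho_dot = K_rho * (R - rhos[i])
        # wrap the phase error into (-pi, pi] before applying the gain
        err = math.atan2(math.sin(phi_bar - phis[i]), math.cos(phi_bar - phis[i]))
        phi_dot = Omega + K_phi * err
        new_rhos.append(rhos[i] + dt * rho_dot)
        new_phis.append(phis[i] + dt * phi_dot)
    return new_rhos, new_phis

# simulate 3 robots: all radii converge to R
rhos, phis = [0.5, 1.5, 2.0], [0.0, 0.3, 2.0]
for _ in range(5000):
    rhos, phis = step(rhos, phis)
print([round(r, 3) for r in rhos])
```

The radial dynamics are decoupled and linear, so each ρ_i converges to R exponentially regardless of the phases.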

3 / 73

slide-4
SLIDE 4

World Localization

  • World Localization (WL): an external system localizes the components of a multi-robot team in a world frame
  • Global Positioning System (GPS)
  • at least 4 satellites in line-of-sight
  • not available indoors
  • not suited for accurate localization
  • sensitive to environmental conditions
  • not available on the Moon!
  • centralized
  • Camera System
  • frequent recalibration
  • only in structured environments
  • centralized

  [figure: robots R_i, R_j localized by an external system in the world frame W]

4 / 73

slide-5
SLIDE 5

Map Localization

  • Map Localization: each agent localizes itself in the frame of a common map of the environment
  • no external measurement system
  • the agents have to rely on their own sensory perceptions
  • proprioceptive: encoders, IMU
  • exteroceptive: laser scan, camera, kinect
  • implies a priori knowledge
  • the map can be specified as:
  • a topological description of the environment (occupancy grid, polygonal representation of the free space, ...)
  • a set of landmarks (beacons) at known positions
  • usually the information coming from the two types of sensors (proprioceptive and exteroceptive) is fused by the use of filters

5 / 73

slide-6
SLIDE 6

Kalman Filter

[diagram: prior knowledge of the state (q̂_{k−1}, P_{k−1}) → prediction step based on the model → update step comparing the prediction to the measurement (inputs u_k, z_k)]

  • given a system whose model is
      state:    q_k = A q_{k−1} + B (u_{k−1} + ν_{u,k−1})
      measure:  z_k = C q_k + ν_{z,k}
    with ν_{u,k−1} ∼ N(0, Q), ν_{z,k} ∼ N(0, R)
  • the prediction step is
      state:       q̂⁻_k = A q̂_{k−1} + B u_{k−1}
      covariance:  P⁻_k = A P_{k−1} Aᵀ + B Q Bᵀ
  • the update step is
      pred. measure:    ẑ_k = C q̂⁻_k
      innovation:       ȳ_k = z_k − ẑ_k
      innovation cov.:  S_k = C P⁻_k Cᵀ + R
      Kalman gain:      K_k = P⁻_k Cᵀ S_k⁻¹
      state:            q̂_k = q̂⁻_k + K_k ȳ_k
      covariance:       P_k = (I − K_k C) P⁻_k
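The prediction and update equations above translate almost line-by-line into code. The following is a minimal sketch, not from the lecture: the 1-D constant-velocity system, noise levels, and trajectory are illustrative assumptions chosen only to exercise the filter.

```python
import numpy as np

# Minimal linear Kalman filter implementing the slide's equations.

def kf_step(q_hat, P, u, z, A, B, C, Q, R):
    # prediction step based on the model
    q_pred = A @ q_hat + B @ u
    P_pred = A @ P @ A.T + B @ Q @ B.T
    # update step: compare prediction to the measurement
    z_pred = C @ q_pred                      # predicted measure
    y = z - z_pred                           # innovation
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    q_new = q_pred + K @ y
    P_new = (np.eye(len(q_hat)) - K @ C) @ P_pred
    return q_new, P_new

# illustrative example: estimate position/velocity from noisy position measures
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])          # acceleration input enters the velocity
C = np.array([[1.0, 0.0]])           # only the position is measured
Q = np.array([[0.1]])
R = np.array([[0.5]])

q_hat, P = np.zeros(2), 10.0 * np.eye(2)
rng = np.random.default_rng(0)
true_q = np.array([0.0, 1.0])        # constant velocity 1.0
for _ in range(200):
    u = np.array([0.0])
    true_q = A @ true_q
    z = C @ true_q + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    q_hat, P = kf_step(q_hat, P, u, z, A, B, C, Q, R)
print(q_hat)  # close to the true state
```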

6 / 73

slide-7
SLIDE 7

Extended Kalman Filter

  • the equivalent of the KF for nonlinear systems
  • system
      state:    q_k = f(q_{k−1}, u_{k−1} + ν_{u,k−1})
      measure:  z_k = h(q_k) + ν_{z,k}
    with ν_{u,k−1} ∼ N(0, Q), ν_{z,k} ∼ N(0, R); let
      F_{q,k−1} = ∂f/∂q |_{{q,u}={q_{k−1},u_{k−1}}}
      F_{u,k−1} = ∂f/∂u |_{{q,u}={q_{k−1},u_{k−1}}}
      H_k = ∂h/∂q |_{q=q_k}
  • prediction
      state:       q̂⁻_k = f(q̂_{k−1}, u_{k−1})
      covariance:  P⁻_k = F_{q,k−1} P_{k−1} F_{q,k−1}ᵀ + F_{u,k−1} Q F_{u,k−1}ᵀ     (1)
  • update
      pred. measure:    ẑ_k = h(q̂⁻_k)
      innovation:       ȳ_k = z_k − ẑ_k
      innovation cov.:  S_k = H_k P⁻_k H_kᵀ + R
      Kalman gain:      K_k = P⁻_k H_kᵀ S_k⁻¹
      state:            q̂_k = q̂⁻_k + K_k ȳ_k
      covariance:       P_k = (I − K_k H_k) P⁻_k     (2)

7 / 73

slide-8
SLIDE 8

Example 1: Map Localization

  • consider now the problem of map localization in 2-D of a single robot R with state q, whose evolution follows the unicycle model
      q̇ = (ẋ, ẏ, θ̇)ᵀ = ((v + ν_v) cos θ, (v + ν_v) sin θ, ω + ν_ω)ᵀ,
      u = (v, ω)ᵀ,  ν_u = (ν_v, ν_ω)ᵀ ∼ N(0, Q)
  • u = (v ω)ᵀ is the control input of the system
  • the map is composed of a single landmark whose position p_b = (x_b, y_b) is known w.r.t. a world frame W
  • R is able to gather relative measurements of the position of the landmark w.r.t. a frame of reference attached to itself, whose measure function is
      z = Rᵀ(θ)(p_b − p) + ν_z,  ν_z ∼ N(0, R)
  • the final goal of R is to estimate its own pose in W (i.e.: to localize itself in the map)

  [figure: robot poses p(t_1), p(t_2) and inputs u(t_1), u(t_2) w.r.t. the landmark p_b = (x_b, y_b) in the world frame W]

8 / 73

slide-9
SLIDE 9

Example 1: Map Localization

  • system
      q̇ = (ẋ, ẏ, θ̇)ᵀ = ((v + ν_v) cos θ, (v + ν_v) sin θ, ω + ν_ω)ᵀ
  • the system must be discretized in order to compute the prediction step of the EKF
  • discretization (a → a_{k−1}, ȧ → (a_k − a_{k−1})/∆T):
      q_k = (x_k, y_k, θ_k)ᵀ = q_{k−1} + [∆T cos θ_{k−1}  0; ∆T sin θ_{k−1}  0; 0  ∆T] (u_{k−1} + ν_{u,k−1})
  • then the Jacobians of f w.r.t. q_{k−1} and u_{k−1} are:
      F_{q,k−1} = [1  0  −∆T v_{k−1} sin θ_{k−1}; 0  1  ∆T v_{k−1} cos θ_{k−1}; 0  0  1]
      F_{u,k−1} = [∆T cos θ_{k−1}  0; ∆T sin θ_{k−1}  0; 0  ∆T]

  [figure: robot motion from (x_{k−1}, y_{k−1}) to (x_k, y_k) under the input u ∆T, with the landmark (x_b, y_b) in W]

  • prediction
      state:       q̂⁻_k = f(q̂_{k−1}, u_{k−1})
      covariance:  P⁻_k = F_{q,k−1} P_{k−1} F_{q,k−1}ᵀ + F_{u,k−1} Q F_{u,k−1}ᵀ

9 / 73

slide-10
SLIDE 10

Example 1: Map Localization

  • update
      z = h(q) + ν_z = Rᵀ(θ)(p_b − p) + ν_z
  • we need to compute the Jacobian of h(q) to completely determine the equations of the EKF
  • we take the derivative of h w.r.t. p
      ∂h/∂p = −Rᵀ(θ)
  • and the derivative of h w.r.t. θ
      ∂h/∂θ = [−sin θ  cos θ; −cos θ  −sin θ] (p_b − p)
  • putting the previous results together
      H_k = ∂h/∂q |_{q=q_k} = [ −Rᵀ(θ_k)   [−sin θ_k  cos θ_k; −cos θ_k  −sin θ_k] (p_b − p) ]
  • to conclude, just plug H_k into the update equations (2) of the EKF

10 / 73

slide-11
SLIDE 11

Example 2: Map Localization with Uncertainty

  • assume now that we want to estimate also the position of the landmark, which is known only with some uncertainty
  • we define the new state of the filter ξ = (qᵀ p_bᵀ)ᵀ, whose evolution is
      ξ̇ = (ẋ, ẏ, θ̇, ẋ_b, ẏ_b)ᵀ = ((v + ν_v) cos θ, (v + ν_v) sin θ, ω + ν_ω, 0, 0)ᵀ
      ⇒ F̃_{ξ,k−1} = [F_{q,k−1}  0_{3×2}; 0_{2×3}  I_2],  F̃_{u,k−1} = [F_{u,k−1}; 0_{2×2}]
  • the prediction for the first three components of ξ is the same as in the case of perfect knowledge of the position of the landmark, while the prediction for (x_b, y_b)ᵀ is trivially
      (x̂⁻_{b,k}, ŷ⁻_{b,k})ᵀ = (x̂_{b,k−1}, ŷ_{b,k−1})ᵀ
  • the covariance of the prediction is
      P⁻_k = [F_{q,k−1}  0_{3×2}; 0_{2×3}  I_2] P_{k−1} [F_{q,k−1}ᵀ  0_{3×2}; 0_{2×3}  I_2] + [F_{u,k−1}; 0_{2×2}] Q [F_{u,k−1}ᵀ  0_{2×2}]
  • so that the covariance of the position of the landmark is not modified by the motion of the robot

11 / 73

slide-12
SLIDE 12

Example 2: Map Localization with Uncertainty

  • the measure equation is the same as in the previous case
      z = h(q) + ν_z = Rᵀ(θ)(p_b − p) + ν_z
  • but now we also need to take the derivative w.r.t. p_b
      ∂h/∂p_b = Rᵀ(θ)
  • we finally get
      H̃_k = ∂h/∂ξ |_{ξ=ξ_k} = [ H_k|_{p_b=p_{b,k}}   Rᵀ(θ_k) ]
  • remarks:
  • the covariance of the position of the landmark is not modified by the prediction step
  • the covariance of the position of the landmark is reduced by the update step
  • for k → ∞ the uncertainty on the position of the landmark goes to zero

12 / 73

slide-13
SLIDE 13

Cooperative Positioning

  • idea: use the other robots of the team as movable landmarks
  • robots are divided into two groups, A and B, and track their positions by repeating move-and-stop actions as explained in the following procedure:
  • 1. group A remains stationary at a known position; group B moves and positions itself relative to group A using information from its internal sensors (i.e.: dead reckoning, encoders)
  • 2. stop group B after it has traveled an appropriate distance, and accurately measure its positions relative to the group-A robots
  • 3. exchange the roles of groups A and B and repeat the steps above
  • 4. repeat this process until the robots reach the target positions

  [figure: groups A and B alternately moving and stopping in the world frame W]

reference: R. Kurazume, S. Nagata, and S. Hirose. Cooperative positioning with multiple robots. In Proc. IEEE International Conference on Robotics and Automation, pages 1250-1257 vol.2, 1994.

13 / 73

slide-14
SLIDE 14

Cooperative Localization

  • Cooperative Localization (CL) is the problem of estimating the poses of the agents in a common fixed frame using relative observations among the robots
  • in general, the ability of sensing each other improves the localization of the entire system obtained by simple dead reckoning

  [figure: robots R_i, R_j with frames F_i, F_j and a common fixed frame F; the relative poses are measured or partially measured]

  • the fusion of proprioceptive and exteroceptive sensory perceptions is usually performed through the use of filters
  • EKF
  • particle filters (not covered by this class)
  • agreeing on a common fixed frame implies some centralization, since the robots must have a prior common knowledge

14 / 73

slide-15
SLIDE 15

Cooperative Localization

  • one EKF with state q = (q_1ᵀ q_2ᵀ ... q_nᵀ)ᵀ and covariance matrix
      P = [P_11  P_12  ...  P_1n; P_21  P_22  ...  P_2n; ...; P_n1  P_n2  ...  P_nn],  P_ij = cov(q_i, q_j)
  • the control input of the system is u = (u_1ᵀ u_2ᵀ ... u_nᵀ)ᵀ

state

  • we assume the evolution of the state of the generic robot R_i to be independent from the states and the control inputs of R_j, ∀ j ≠ i:
      q_{i,k} = f(q_{i,k−1}, u_{i,k−1} + ν_{u_i,k−1}),  ν_{u_i,k−1} ∼ N(0, Q_i)
  • hence, the Jacobian matrices of f are in the form
      F_{q,k−1} = blkdiag(F_{q_1,k−1}, ..., F_{q_n,k−1}),  F_{u,k−1} = blkdiag(F_{u_1,k−1}, ..., F_{u_n,k−1})
  • F_{q,k−1} is a diagonal 3 × 3-block matrix, while F_{u,k−1} is a 3n × mn matrix, with m the dimension of the control input (m = 2 if unicycle)

15 / 73

slide-16
SLIDE 16

Cooperative Localization

prediction

  • we apply the first of the (1) to compute the prediction of the state of R_i
      q̂⁻_{i,k} = f(q̂_{i,k−1}, u_{i,k−1})
  • the second of the (1) is used to compute the covariance
      P⁻_k = F_{q,k−1} P_{k−1} F_{q,k−1}ᵀ + F_{u,k−1} Q F_{u,k−1}ᵀ
           = blkdiag(F_{q_1,k−1}, ..., F_{q_n,k−1}) [P_{11,k−1} ... P_{1n,k−1}; ...; P_{n1,k−1} ... P_{nn,k−1}] blkdiag(F_{q_1,k−1}ᵀ, ..., F_{q_n,k−1}ᵀ)
             + blkdiag(F_{u_1,k−1}, ..., F_{u_n,k−1}) blkdiag(Q_1, ..., Q_n) blkdiag(F_{u_1,k−1}ᵀ, ..., F_{u_n,k−1}ᵀ)
    whose generic (i, j) block of the first term is F_{q_i,k−1} P_{ij,k−1} F_{q_j,k−1}ᵀ, while the second term is blkdiag(F_{u_1,k−1} Q_1 F_{u_1,k−1}ᵀ, ..., F_{u_n,k−1} Q_n F_{u_n,k−1}ᵀ)

16 / 73

slide-17
SLIDE 17

Cooperative Localization

  • the final equations for the covariance prediction are:
      P⁻_{ii,k} = F_{q_i,k−1} P_{ii,k−1} F_{q_i,k−1}ᵀ + F_{u_i,k−1} Q_i F_{u_i,k−1}ᵀ
      P⁻_{ij,k} = F_{q_i,k−1} P_{ij,k−1} F_{q_j,k−1}ᵀ
  • note: the noise on the motion of R_i influences only the block P_ii

update

  • assume that at step k, R_i observes R_j through its exteroceptive sensor; the measure function in this case is
      z_{ij,k} = h(q_{i,k}, q_{j,k}) + ν_{z_{ij},k},  ν_{z_{ij},k} ∼ N(0, R_ij)
  • from (2) it is possible to compute the update rule for the state of R_l as
      q̂_{l,k} = q̂⁻_{l,k} + (P⁻_{li} H_iᵀ + P⁻_{lj} H_jᵀ) S_k⁻¹ ȳ_{ij,k}
  • and the update rule for the component P_{lf} in the covariance matrix
      P_{lf,k} = P⁻_{lf,k} − (P⁻_{li} H_iᵀ + P⁻_{lj} H_jᵀ) S_k⁻¹ (H_i P⁻_{if} + H_j P⁻_{jf})
  • where H_i, H_j are the Jacobians of h w.r.t. q_i and q_j, computed in (q_{i,k}, q_{j,k})
  • note: in general, the state and covariance of R_l depend on the observation that R_i makes of R_j through the mixed terms of the covariance matrix

17 / 73

slide-18
SLIDE 18

Cooperative Localization

  • to conclude the filter we need to compute the Jacobians H_i and H_j

three possible types of relative measurement in 2-D:

  • 1. relative bearing
  • 2. relative distance
  • 3. relative orientation
  • other types of measurements can be taken into account as combinations of the aforementioned (e.g.: relative position = relative bearing + relative distance)

  [figure: relative bearing, relative distance and relative orientation between R_i and R_j in W]

18 / 73

slide-19
SLIDE 19

Cooperative Localization

  • let ∆x = x_{j,k} − x_{i,k} and ∆y = y_{j,k} − y_{i,k}
  • relative bearing
      h(q̂_{i,k}, q̂_{j,k}) = tan⁻¹[(−sin θ_{i,k} ∆x + cos θ_{i,k} ∆y) / (cos θ_{i,k} ∆x + sin θ_{i,k} ∆y)]     (3)
      H_i = [ ∆y/(∆x² + ∆y²)   −∆x/(∆x² + ∆y²)   −1 ],  H_j = [ −∆y/(∆x² + ∆y²)   ∆x/(∆x² + ∆y²)   0 ]
  • relative distance
      h(q̂_{i,k}, q̂_{j,k}) = √(∆x² + ∆y²)     (4)
      H_i = [ −∆x/√(∆x² + ∆y²)   −∆y/√(∆x² + ∆y²)   0 ],  H_j = [ ∆x/√(∆x² + ∆y²)   ∆y/√(∆x² + ∆y²)   0 ]
  • relative orientation
      h(q̂_{i,k}, q̂_{j,k}) = θ_{j,k} − θ_{i,k}     (5)
      H_i = ( 0  0  −1 ),  H_j = ( 0  0  1 )
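The bearing and distance Jacobians above can be verified numerically. This sketch implements the measurement functions and their analytical Jacobians and checks them against central finite differences; the test poses are arbitrary values chosen here.

```python
import numpy as np

# Relative-measurement Jacobians, checked by finite differences.
# State of each robot: q = (x, y, theta).

def h_bearing(qi, qj):
    dx, dy = qj[0] - qi[0], qj[1] - qi[1]
    th = qi[2]
    return np.arctan2(-np.sin(th) * dx + np.cos(th) * dy,
                       np.cos(th) * dx + np.sin(th) * dy)

def H_bearing(qi, qj):
    dx, dy = qj[0] - qi[0], qj[1] - qi[1]
    D = dx**2 + dy**2
    return (np.array([dy / D, -dx / D, -1.0]),
            np.array([-dy / D, dx / D, 0.0]))

def h_distance(qi, qj):
    return np.hypot(qj[0] - qi[0], qj[1] - qi[1])

def H_distance(qi, qj):
    dx, dy = qj[0] - qi[0], qj[1] - qi[1]
    r = np.hypot(dx, dy)
    return (np.array([-dx / r, -dy / r, 0.0]),
            np.array([dx / r, dy / r, 0.0]))

def num_jac(h, qi, qj, eps=1e-6):
    """Central finite-difference Jacobians of h w.r.t. qi and qj."""
    Hi = np.array([(h(qi + eps * e, qj) - h(qi - eps * e, qj)) / (2 * eps)
                   for e in np.eye(3)])
    Hj = np.array([(h(qi, qj + eps * e) - h(qi, qj - eps * e)) / (2 * eps)
                   for e in np.eye(3)])
    return Hi, Hj

qi, qj = np.array([0.5, -0.2, 0.7]), np.array([2.0, 1.5, -0.3])
for h, H in [(h_bearing, H_bearing), (h_distance, H_distance)]:
    Hi, Hj = H(qi, qj)
    Hi_n, Hj_n = num_jac(h, qi, qj)
    print(np.allclose(Hi, Hi_n, atol=1e-5), np.allclose(Hj, Hj_n, atol=1e-5))
```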

19 / 73

slide-20
SLIDE 20

Cooperative Localization

  • observability
  • an observability analysis of the system shows that if no robot has absolute localization capabilities the system is unobservable, i.e.: the error will increase indefinitely, and the estimates of the poses q_i in W will eventually diverge
  • however, even if the whole team is lost in W, the error on the relative poses q_j ⊖ q_i, ∀(i, j), among the robots goes to zero
  • if at least one robot has absolute localization capabilities (e.g.: because it receives measures by a GPS, or is able to detect a landmark with known position), the system becomes observable, and the error converges to zero
  • this happens because one robot is able to estimate its pose in W and the other robots are able to localize themselves w.r.t. it

  [figure: estimated vs. actual poses, without and with a robot with absolute localization]

20 / 73

slide-21
SLIDE 21

Cooperative Localization

  • remarks
  • the filter can be performed in a distributed fashion (i.e.: each robot performs only a small part of the needed computations)
  • some authors have also studied the situation of limited communication range and bandwidth
  • references
  • A. Martinelli, F. Pont, and R. Siegwart. Multi-robot localization using relative observations. In 2005 IEEE Int. Conf. on Robotics and Automation, pages 2797-2802, Barcelona, Spain, Apr. 2005.
  • A. Martinelli and R. Siegwart. Observability analysis for mobile robot localization. In 2005 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 1471-1476, Edmonton, Canada, Aug. 2005.
  • S. Roumeliotis and G. A. Bekey. Distributed multirobot localization. IEEE Trans. on Robotics and Automation, vol. 18, no. 5, Oct. 2002.

21 / 73

slide-22
SLIDE 22

Relative Mutual Localization

  • how can we avoid the observability problem of CL?
  • we can provide R_i the ability of localizing itself in W (structured and centralized solution)
  • or we can provide each R_i a frame of reference in which it cannot get lost: we define an attached moving frame F_i for each R_i
  • Relative Mutual Localization (RML) is the problem of estimating the relative poses among the moving frames
  • each robot computes an estimate of the state of the system in its frame of reference, such that each robot considers itself always in (0 0 0)ᵀ
  • this approach is also called robo-centric (or ego-centric)

  [figure: robots R_i, R_j, R_k with moving frames F_i, F_j, F_k; the relative poses are measured or partially measured]

22 / 73

slide-23
SLIDE 23

Relative Mutual Localization

prediction

  • which model for the motion in the robo-centric approach?
  • in the following, for simplicity and without loss of generality, we assume to carry out the filtering from the point of view of R_1
  • the state of the filter is the 3(n − 1)-vector (q_2ᵀ ... q_nᵀ)ᵀ, where q_j is the change of coordinates between the frames F_j and F_1 (i.e.: the relative pose of R_j w.r.t. R_1)
  • in the following, we will derive directly the discretized model of the system
  • as in the CL, the motion of R_h, h ∉ {j, 1}, does not affect the state of R_j, and we can compute the prediction separately for each q_j = (x_j y_j θ_j), j = 2, ..., n
  • on the other hand, the system model for q_j needs to take into account the motion of R_1

  [figure: estimated and actual formation of R_i and R_j between steps k − 1 and k under the inputs u_{i,k−1}, u_{j,k−1}]

23 / 73

slide-24
SLIDE 24

Relative Mutual Localization

  • in this case, we assume to preliminarily integrate the wheel velocities provided by the encoders, such that u_{j,k−1} = (r_{j,k−1}ᵀ, φ_{j,k−1})ᵀ represents the spatial and angular displacements between k − 1 and k, not the linear and angular velocities (i.e.: it represents the change of coordinates between F_{j,k} and F_{j,k−1})
  • consider the simplified model in the figure, where the generic ᴹq_N = (ᴹp_N, ᴹθ_N), M, N ∈ {A, B, C, D}, is the change of coordinates between the frame N and the frame M
  • then we can write
      ᴬp_B + R_B^A ᴮp_D = ᴬp_C + R_C^A ᶜp_D  ⇒  ᴮp_D = R_A^B (ᴬp_C − ᴬp_B + R_C^A ᶜp_D)     (6)
    where R_N^M is the 2-D rotation matrix R(ᴹθ_N)
  • for the orientations
      ᴮθ_D = ᴬθ_C + ᶜθ_D − ᴬθ_B     (7)

  [figure: frames A, B, C, D with the changes of coordinates ᴬq_B, ᴬq_C, ᶜq_D, ᴮq_D]

24 / 73

slide-25
SLIDE 25

Relative Mutual Localization

  • the application of equations (6–7) to the case of moving robots eventually gives the discretized model of the system
      p_{j,k} = R(φ_{1,k−1})ᵀ [p_{j,k−1} − r_{1,k−1} + R(θ_{j,k−1}) r_{j,k−1}]
      θ_{j,k} = θ_{j,k−1} + φ_{j,k−1} − φ_{1,k−1}
  • the prediction step can be applied after the computation of F_{q_j,k−1} and F_{u_j,k−1} (exercise)

update

  • we distinguish three types of relative (bearing/distance/orientation) measures:
      (a) measures of R_j gathered by R_1
      (b) measures of R_1 gathered by R_j
      (c) measures of R_h gathered by R_j
  • the measurement functions of the first and second types are a simplification of eq. (3–5), with q_1 = (0 0 0)ᵀ
  • the measurement functions of the third type are exactly the same equations (3–5)
  • to conclude the design of the filter, compute h and H_k for all the types of measures (exercise)
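The discretized robo-centric model above can be checked against plain world-frame geometry. This sketch propagates the relative pose with the slide's equations and compares the result with the relative pose recomputed from the absolute poses; the poses and displacements are arbitrary values chosen here.

```python
import numpy as np

# Robo-centric prediction: propagate the relative pose q_j = (p_j, theta_j)
# of R_j w.r.t. R_1 given the displacement inputs (r_1, phi_1), (r_j, phi_j).

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def predict_relative(p_j, th_j, r1, phi1, rj, phij):
    p_new = rot(phi1).T @ (p_j - r1 + rot(th_j) @ rj)
    th_new = th_j + phij - phi1
    return p_new, th_new

def compose(q, r, phi):
    """Apply a displacement (r, phi), expressed in the robot frame, to pose q."""
    p = q[:2] + rot(q[2]) @ r
    return np.array([p[0], p[1], q[2] + phi])

# absolute poses of R_1 and R_j, and their relative pose in F_1
q1 = np.array([1.0, 2.0, 0.3])
qj = np.array([-0.5, 1.0, -1.2])
p_rel = rot(q1[2]).T @ (qj[:2] - q1[:2])
th_rel = qj[2] - q1[2]

# both robots move; predict the new relative pose robo-centrically
r1, phi1 = np.array([0.2, 0.1]), 0.25
rj, phij = np.array([0.3, -0.1]), -0.15
q1n, qjn = compose(q1, r1, phi1), compose(qj, rj, phij)
p_pred, th_pred = predict_relative(p_rel, th_rel, r1, phi1, rj, phij)

# ground truth from the absolute poses
p_true = rot(q1n[2]).T @ (qjn[:2] - q1n[:2])
print(np.allclose(p_pred, p_true), np.isclose(th_pred, qjn[2] - q1n[2]))
```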

25 / 73

slide-26
SLIDE 26

Relative Mutual Localization

  • the computational cost of the filter is O(n²), since we need to compute the covariance matrix composed of 3(n − 1) × 3(n − 1) elements

question

  • do we really need to estimate the whole system all together?
  • let us assume that the robot R_1 maintains n − 1 separate filters, one for each R_j, j ≠ 1, with state q_j
  • this would simplify the EKF equations and reduce the complexity to O(n)
  • the separate estimation of q_2, ..., q_n is equivalent to assuming that the cross-correlation blocks of the covariance matrix are always null
      P = blkdiag(P_22, ..., P_nn)
  • we are neglecting the dependency between the estimates of the states of different robots

26 / 73

slide-27
SLIDE 27

Relative Mutual Localization

  • equivalently, we are assuming that q_2, ..., q_n are conditionally independent:
      p(q_2, ..., q_n) = ∏_{j=2,...,n} p(q_j)
    with p(q_j) the probability density function of q_j
  • a measure z_{1j} of type (a) or z_{j1} of type (b) can be directly used to update the state of the filter on q_j
  • a measure z_{jh} of type (c) can not be directly used to update the filter on q_h; in fact, it is expressed in the frame of R_j (z_{jh} = ᶠʲz_{jh}), and must first be expressed in the frame of R_1
  • this can be simply done by the application of an appropriate function
      ᶠ¹z_{jh} = g(ᶠʲz_{jh}, q_j)
  • however, we do not know the actual value of q_j, and we can use only its current estimate
      ᶠ¹ẑ_{jh} = g(ᶠʲz_{jh}, q̂_j)
  • this makes the estimate q̂_h dependent on the estimate q̂_j

27 / 73

slide-28
SLIDE 28

Relative Mutual Localization

  • in the same way, whenever a new measure ᶠʰz_{hj} arrives, its transformed value ᶠ¹ẑ_{hj} = g(ᶠʰz_{hj}, q̂_h) will depend on q̂_h, which itself depends on a previous estimate q̂_j
  • this is sufficient to say that ᶠ¹ẑ_{hj} depends on q̂_j, which is a clear violation of the Kalman filter hypotheses, which assume conditional independence of the measures from the current and the previous estimates
  • remarks
  • not all the cases of dependency are so easy to detect (and avoid)
  • the single-filter approach takes these dependencies into account by computing the complete covariance matrix and using it in the update step
  • neglecting the cross-covariance terms in P results in a reduction of the area of the covariance ellipse, hence the filter becomes overconfident on wrong values (inconsistent) and eventually diverges

28 / 73

slide-29
SLIDE 29

Relative Mutual Localization

  • sensor network localization is a particular case of RML in which the robots are not able to move (static agents)
  • the estimation of the relative poses of the agents is not conducted by the use of recursive filters, but is usually carried out using geometrical algorithms and/or least-squares estimators
  • the network they compose can be modeled as a measurement graph in which the nodes are the agents and the edges represent the fact that an agent measures another agent
  • the main problems of network localization are:
  • check if a measurement graph allows a unique solution (rigid graph)
  • find a minimal set of measures such that the graph is rigid
  • find the measures needed to make a non-rigid graph rigid
  • produce an estimate of the formation

29 / 73

slide-30
SLIDE 30

Absolute Mutual Localization

  • each robot R_i has
  • a fixed frame F⋆_i
  • an attached moving frame F_i
  • Absolute Mutual Localization (AML) is the problem of estimating the relative poses among the fixed frames
  • applications:
  • map merging
  • cooperative exploration
  • multi-robot SLAM

  [figure: fixed frames F⋆_i (known by R_i) and F⋆_j (known by R_j) and moving frames F_i, F_j; the relative poses are measured or partially measured]

  • remark: AML is solved if RML is solved and the agents are localized in their fixed frames

30 / 73

slide-31
SLIDE 31

Summary of the Multi-Robot Localization Problems

  • World Localization: an external agent estimates the state of the robots in a world frame
  • Map Localization: the robots localize themselves in a common map using observations of the environment
  • Cooperative Localization: the robots estimate their poses in a common fixed frame using relative observations
  • Relative Mutual Localization: the robots estimate the change of coordinates among their attached frames using relative observations
      ↓ network localization: particular case of static agents
  • Absolute Mutual Localization: the robots estimate the change of coordinates among their fixed frames using relative observations

31 / 73

slide-32
SLIDE 32

Elective in Robotics 2011/2012

Analysis and Control of Multi-Robot Systems

Data Association in Multi-Robot Localization

  • Dr. Paolo Stegagno

Dipartimento di Ingegneria Informatica, Automatica e Gestionale

32 / 73


slide-34
SLIDE 34

Introduction

  • human beings usually use their eyes to gather relative measures of other human beings, and their ears/voice to communicate with them
  • human beings have the ability to recognize faces and voices, associating them to a given person
  • example: can you recognize this guy? → Homer Simpson
  • human beings can associate a unique identifier (the name) to a face or a voice
  • robots do not have this ability unless we program them to
  • the ears/voice of the robots are communication devices such as WLAN, XBee, ...; in order to make other robots "recognize its voice", each robot needs only to sign every communication packet with its unique identifier (ID)

34 / 73

slide-35
SLIDE 35

Introduction

  • the eyes of the robots are cameras, laser scanners, sonars, ...; the recognition of the identities of the other robots with these devices is more complicated and can not always be performed
  • but multi-robot localization as explained in the previous lecture implicitly assumes that the robots are able to recognize each other's identity!
  • example: if R_i measures its distance d from R_j, it knows that d is referred to R_j, and, in the equations of the Kalman filter, is able to use the Jacobian of the measurement function h w.r.t. the variable q_j
  • different possibilities to overcome the problem:
  • give the robots the ability to recognize the identity of other robots, so that when measures are available they are already associated to the correct robot (tagging)
  • develop algorithms that associate the measures to the robots before using them in the filter (data association)
  • design new localization methods expressly for the case of measures without identities (localization with anonymous measures)

35 / 73

slide-36
SLIDE 36

Tagging

  • the ability of human beings to associate a name to a face relies on the a priori assumption that each human being has their own unique face; this assumption is sufficiently reasonable, because each human being is the result of a unique sequence of biological processes
  • on the contrary, each robot is the result of industrial serial-production processes, hence all robots in a homogeneous team look the same
  • a feasible solution is tagging, which consists of two main aspects:
  • 1. endow each object with a distinctive feature which can be detected by the sensors of the other robots
  • 2. store in each robot the same transcoding table

36 / 73

slide-37
SLIDE 37

Tagging

  • in general, tagging needs specific hardware and sensors, is sensitive to environmental conditions, centralized and not robust
  • tagging can be inapplicable
  • tagging implies centralization: every robot must store the same table (e.g.: yellow is R_1, red is R_2, green is R_3, blue is R_4)
  • tagging can affect the task (e.g.: if the task is disguising, "the red one is the target!")

37 / 73

slide-38
SLIDE 38

Nearest Neighbor Data Association

  • let q_i, q̂_i, P_ii, i = 1, ..., n, be respectively the current (unknown) states, the estimated states and the covariance matrices of n robots R_1, ..., R_n
  • let z̄ be a measure obtained by a sensor whose measurement function is
      z = h(q_j) + ν_z,  ν_z ∼ N(0, R)
    where j is unknown
  • problem: find j
    let ẑ_i = h(q̂_i), i = 1, ..., n; then
      j = argmin_{j∈{1,...,n}} [h(q̂_j) − z̄]ᵀ P_jj⁻¹ [h(q̂_j) − z̄]     (8)
  • ambiguous situations can happen when more than one j minimizes the objective function of equation (8) (this happens usually when some robots are 'close' to each other, or when the P_ii are too spread out)
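The criterion of equation (8) can be sketched in a few lines. This is an illustrative sketch: the identity measurement function and the toy estimates below are assumptions made here so that the Mahalanobis cost in (8) is dimensionally consistent.

```python
import numpy as np

# Nearest-neighbor association, eq. (8): pick the robot j whose predicted
# measure is closest to z_bar in Mahalanobis distance.

def nearest_neighbor(z_bar, q_hats, P_list, h):
    """Return argmin_j [h(q_j) - z_bar]^T P_jj^{-1} [h(q_j) - z_bar]."""
    costs = []
    for q_hat, P in zip(q_hats, P_list):
        e = h(q_hat) - z_bar
        costs.append(e @ np.linalg.inv(P) @ e)
    return int(np.argmin(costs))

# toy example: measures are positions, h is the identity on (x, y)
h = lambda q: q[:2]
q_hats = [np.array([0.0, 0.0]), np.array([5.0, 1.0]), np.array([2.0, 4.0])]
P_list = [0.2 * np.eye(2), 0.2 * np.eye(2), 0.2 * np.eye(2)]
z_bar = np.array([4.6, 1.3])
print(nearest_neighbor(z_bar, q_hats, P_list, h))  # -> 1
```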

38 / 73

slide-39
SLIDE 39

Nearest Neighbor Data Association

  • the problem becomes more complicated if the sensor gathers m ≤ n measures z̄_1, ..., z̄_m simultaneously; this situation is likely to happen when the sensors are cameras or range finders
  • the problem defined for the single-measure case must be generalized as
      (j_1, ..., j_m) = argmin_{(j_1,...,j_m)∈D_{m,n}} Σ_{i=1}^{m} [h(q̂_{j_i}) − z̄_i]ᵀ P_{j_i j_i}⁻¹ [h(q̂_{j_i}) − z̄_i]     (9)
  • note that this problem is combinatorial and its solution in closed form implies the enumeration and evaluation of all possible combinations, leading to a complexity exponential in the number of measures
  • sub-optimal solutions of problem (9) can be found in time linear in the number of measures by relaxing the constraints (in particular the mutual exclusion, i.e.: allowing a robot to be associated to more than one measure, and vice versa), but this could lead to wrong data associations
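The exhaustive solution of problem (9) can be sketched by enumerating all injective assignments. This is an illustrative sketch with the same assumed identity measurement function as before; as the slide notes, the enumeration is exponential and only viable for small teams.

```python
import itertools
import numpy as np

# Brute-force joint association, eq. (9): enumerate all injective assignments
# of m measures to n robots and keep the one with minimum total cost.

def joint_nn(z_bars, q_hats, P_list, h):
    n, m = len(q_hats), len(z_bars)
    best_cost, best = np.inf, None
    # permutations enforce mutual exclusion: one robot per measure at most
    for assignment in itertools.permutations(range(n), m):
        cost = 0.0
        for i, j in enumerate(assignment):
            e = h(q_hats[j]) - z_bars[i]
            cost += e @ np.linalg.inv(P_list[j]) @ e
        if cost < best_cost:
            best_cost, best = cost, assignment
    return best

# toy example: measure 0 is near robot 2, measure 1 is near robot 0
h = lambda q: q[:2]
q_hats = [np.array([0.0, 0.0]), np.array([5.0, 1.0]), np.array([2.0, 4.0])]
P_list = [0.3 * np.eye(2)] * 3
z_bars = [np.array([1.9, 4.2]), np.array([0.2, -0.1])]
print(joint_nn(z_bars, q_hats, P_list, h))  # -> (2, 0)
```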

39 / 73

slide-40
SLIDE 40

Nearest Neighbor Data Association

  • remarks:
  • the nearest-neighbor criterion can be modified to take into account other factors, such as the probability of each robot to be detected and the possibility of false-positive measures (e.g.: objects detected as robots by mistake)
  • the nearest-neighbor criterion works well when the robots obtain few or no measures at each step and the robots are sufficiently spaced, so that there is little ambiguity on the identity of the measured robot
  • if the density of robots is such that ambiguous situations happen frequently, nearest neighbor is not reliable and its application could lead to errors in the data association
  • if the mean number of measures at each step is high, the application of nearest neighbor becomes unfeasible for computational reasons
  • the application of nearest neighbor assumes the existence of reliable a priori knowledge on the state of the robots, and is not feasible in the initialization of a system if no a priori knowledge is given

40 / 73

slide-41
SLIDE 41

Mutual Localization from Anonymous Measures

  • in a densely populated environment, with no reliable sensors, neither tagging nor nearest neighbor is a viable choice, and a custom algorithm must be implemented to perform the localization
  • this choice brings more versatility, decentralization and robustness to the implemented system, since it requires no prior knowledge, lower-level sensors and no difficult (and computationally expensive) image-processing algorithms
  • the main drawbacks are the time needed to develop the algorithm and its specificity (i.e.: the algorithm is designed for a specific type of measures, hence changing the type of the measures implies starting the design from scratch)
  • we will now explain a custom algorithm to solve the RML problem in the case of anonymous position measurements, in a 2-D scenario

41 / 73

slide-42
SLIDE 42

Problem Formulation

[figure: robots R_i, R_j, R_k with frames F_i, F_j, F_k; detection sets D at times t − 1 and t, with false positives and false negatives]

  • equipment: odometer, robot detector, communication module
  • q_j^t = (p_j, θ_j): relative pose of F_j w.r.t. F_i at time t
  • δ_j^t: noisy measure of F_j's displacement between t − 1 and t
  • O_j^t = {o_{j,1}^t, ...}: set of anonymous position measures gathered by R_j at time t
  • C_j^t: set of robots communicating with R_j at time t

42 / 73

slide-43
SLIDE 43

Feature Extraction vs. Tagging

  • data association without perception tagging:
  • each perceived robot is indistinguishable from the others
  • parts of the perceived environment may be indistinguishable from a robot (e.g., pillars yes, walls no)
  • the received packets are signed but not directly associable to the perceived objects

  [figure: reality → feature extraction → untagged measures]

43 / 73

slide-44
SLIDE 44

Anonymity Causes Loss of Unique Solvability

[figure: ambiguous configurations — for each configuration, the anonymous measures admit multiple consistent solutions]

  • main questions:
  • which are the configurations generating ambiguity?
  • how many solutions are there in those configurations?
  • how to solve ambiguous configurations during the estimation process?

44 / 73

slide-45
SLIDE 45

Static RML Problem with Anonymous Measures

  • consider a simplified deterministic static scenario:
  • perfect position measurements (no noise)
  • unlimited field of view (all measure all)
  • no false positives/negatives
  • let P = {p1, . . . , pn} be the set of the positions of

the formation, and c its center of mass, then

Theorem (necessary and sufficient condition for unique solvability)

Unique solvability ⇔ non rotational symmetry of P

Theorem (maximum number of solutions)

For a given n, the maximum number of possible solutions of the mutual localization problem is (n − 1)!

  • Remark: the maximum number of possible solutions is attained by a regular n-gon if c ∉ P, and by a regular (n − 1)-gon plus one robot at c if c ∈ P
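The solvability condition can be checked numerically. Below is a sketch (assuming exact, noise-free positions) that tests whether P admits a nontrivial rotational symmetry about its center of mass; the function name and tolerance are illustrative:

```python
import math

def is_rotationally_symmetric(points, tol=1e-6):
    """True if the point set maps onto itself under some nontrivial
    rotation about its center of mass."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    centered = [(x - cx, y - cy) for x, y in points]

    def rotate(p, a):
        c, s = math.cos(a), math.sin(a)
        return (c * p[0] - s * p[1], s * p[0] + c * p[1])

    def same_set(a, b):
        # greedy matching of points within tolerance
        used = [False] * len(b)
        for p in a:
            hit = False
            for i, q in enumerate(b):
                if not used[i] and math.hypot(p[0] - q[0], p[1] - q[1]) < tol:
                    used[i] = True
                    hit = True
                    break
            if not hit:
                return False
        return True

    # a symmetry group of order m has generator angle 2*pi/m; m <= n
    for m in range(2, n + 1):
        a = 2 * math.pi / m
        if same_set([rotate(p, a) for p in centered], centered):
            return True
    return False
```

By the theorem, the mutual localization problem is uniquely solvable exactly when this test returns False.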

45 / 73

slide-46
SLIDE 46

System Architecture

  • two-phase approach:
  • a memoryless registration algorithm (PMR) solves the anonymity

problem by enumerating all the geometrically feasible solutions

  • a bank of particle filters resolves the ambiguity and filters out the noise

[block diagram: the robot detector provides O_i^t and the odometer δ_i^t; together with {O_j^t}_{j∈C_i^t} and {δ_j^1:t}_{j∈C_i^t} received from the neighbors, the multiple registration stage (PMR) computes the relative pose estimates {q̂_j^t}_{j∈C_i^t}; the filtering stage (motion models + measure updates) fuses them into the beliefs {bel(q_j^t)}_{j∈C_i^1:t}]

46 / 73

slide-47
SLIDE 47

Binary Registration

  • a binary registration is an algorithm that estimates the 'most likely' change of coordinates between the viewpoints of two observations O1 and O2

[figure: two observations O1 and O2]

  • it admits multiple solutions
  • each solution identifies a new observation that is the union of the initial observations
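As a toy illustration of why a binary registration admits multiple solutions, one can enumerate all rigid transforms that superimpose a pair of anonymous points of O2 onto a pair of points of O1. This brute-force sketch is not the actual registration algorithm of the system, only a demonstration of the ambiguity:

```python
import math

def candidate_transforms(obs1, obs2, tol=1e-6):
    """Enumerate rigid transforms (tx, ty, theta) mapping at least two
    points of obs2 onto two points of obs1 (2-D, anonymous points)."""
    sols = []
    for a1 in obs1:
        for b1 in obs1:
            if a1 is b1:
                continue
            for a2 in obs2:
                for b2 in obs2:
                    if a2 is b2:
                        continue
                    # a rigid transform preserves distances
                    if abs(math.dist(a1, b1) - math.dist(a2, b2)) > tol:
                        continue
                    # rotation aligning segment (a2, b2) with (a1, b1)
                    th = math.atan2(b1[1] - a1[1], b1[0] - a1[0]) - \
                         math.atan2(b2[1] - a2[1], b2[0] - a2[0])
                    c, s = math.cos(th), math.sin(th)
                    tx = a1[0] - (c * a2[0] - s * a2[1])
                    ty = a1[1] - (s * a2[0] + c * a2[1])
                    sols.append((tx, ty, th))
    return sols
```

Even for two points per observation, several candidate transforms survive, which is exactly the multiplicity of solutions mentioned above.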

47 / 73

slide-48
SLIDE 48

Irreconcilable Solutions

  • two solutions are said to be irreconcilable if, in their union:
  • two different robots are associated with the same feature, or
  • the same robot is associated with two different features
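A sketch of this irreconcilability test, encoding a solution as a map from perceived features to robot identities (an illustrative encoding, not the system's actual data structure):

```python
def irreconcilable(sol_a, sol_b):
    """Solutions are dicts mapping feature id -> robot id.
    Irreconcilable if the union assigns two robots to one feature
    or one robot to two features."""
    merged = {}
    for sol in (sol_a, sol_b):
        for feat, robot in sol.items():
            if feat in merged and merged[feat] != robot:
                return True   # same feature, two different robots
            merged[feat] = robot
    seen = {}
    for feat, robot in merged.items():
        if robot in seen and seen[robot] != feat:
            return True       # same robot, two different features
        seen[robot] = feat
    return False
```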

48 / 73

slide-49
SLIDE 49

PMR Example

[figure: real configuration of R1, R2, R3]

49 / 73

slide-50
SLIDE 50

PMR Example

[figure: real configuration and raw observations Õ1 (= O1), O2, O3, each containing the anonymous detections of the other two robots]

50 / 73

slide-51
SLIDE 51

PMR Example

[figure: real configuration, raw observations, and step 1 of the PMR example, starting from O1 = Õ1]

51 / 73

slide-52
SLIDE 52

PMR Example

[figure: step 1; binary registration of Õ1 and O2, yielding the candidate solutions Oγ1 and Oγ2]

52 / 73

slide-53
SLIDE 53

PMR Example

[figure: binary registrations of Õ1 with O2 and with O3, yielding the candidate solutions Oγ1, Oγ2, Oγ3, Oγ4]

53 / 73

slide-54
SLIDE 54

PMR Example

[figure: binary registrations of Õ1 with O2 and with O3, followed by the selection of irreconcilable solutions among Oγ1, Oγ2, Oγ3, Oγ4]

54 / 73

slide-55
SLIDE 55

PMR Example

[figure: the irreconcilable candidate solutions are compared with the current beliefs bel(x)]

55 / 73

slide-56
SLIDE 56

PMR Example

[figure: selection of the solutions consistent with the current belief]

56 / 73

slide-57
SLIDE 57

PMR Example

[figure: step 2; binary registration of Õ2 and O3 on the solution retained from step 1]

57 / 73

slide-58
SLIDE 58

PMR Example

  • when more than one partial solution is compatible with the current belief, the algorithm expands a branch for each of them

[figure: a second branch is expanded via an additional binary registration, leading to a different arrangement of R1, R2, R3]

58 / 73

slide-59
SLIDE 59

Multiple Registration with PMR

input: feature sets O_i^t, {O_j^t}_{j∈C_i^t}; beliefs {bel(q_j^t)}_{j∈C_i^t}
output: estimated relative poses {q̂_j^t}_{j∈C_i^t}

for k ← 1 to |C_i^t| do
  • perform binary registrations between O_i^t and all the O_j^t, j ∈ C_i^t, not yet registered
  • select a maximal subset of irreconcilable solutions
  • select the partial solutions whose metric ∫ p(q̂_j^t | q_j^t) bel(q_j^t) dq_j^t is above a certain threshold
  • foreach selected solution S:
  • expand the aggregated feature set with S
  • tune all the already-estimated relative poses taking into account the new correspondences
  • create a new branch of the algorithm if more than one solution is selected

59 / 73

slide-60
SLIDE 60

Computational Cost of PMR

  • Ω(n²): (n − 1) + (n − 2) + . . . + 2 + 1 = n(n − 1)/2 binary registrations for each branch of the algorithm
  • the number of branches is linear in the number of solutions: O(k), with k the number of solutions
  • k increases factorially with n iff:
  • the system has no current belief and is in a symmetric configuration
  • one or more robots are kidnapped and placed in a symmetric configuration
  • PMR has quadratic computational cost when the system reaches an ambiguous configuration with a 'good' belief
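The per-branch count above can be checked directly; the sum matches the closed form n(n − 1)/2:

```python
def registrations_per_branch(n):
    """Number of binary registrations in one branch of PMR:
    (n-1) + (n-2) + ... + 2 + 1."""
    return sum(range(1, n))

# the sum agrees with the closed form n(n-1)/2 for every team size
for n in range(2, 20):
    assert registrations_per_branch(n) == n * (n - 1) // 2
```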

60 / 73

slide-61
SLIDE 61

System Architecture

  • two-phase approach:
  • a memoryless registration algorithm (PMR) solves the anonymity

problem by enumerating all the geometrically feasible solutions

  • a bank of particle filters resolves the ambiguity and filters out the noise

[block diagram: same architecture diagram as slide 46]

61 / 73

slide-62
SLIDE 62

Particle Filters

  • a particle filter for each R_j 'met' by R_i computes the marginal pdf p(q_j^t) instead of the single joint belief p({q_j^t}_{j∈C_i^1:t}), where:
  • C_i^t: set of robots communicating with R_i at time t
  • C_i^1:t: set of robots with which R_i has communicated at least once up to time t
  • this choice implies the independence assumption

p({q_j^t}_{j∈C_i^1:t}) = ∏_{j∈C_i^1:t} p(q_j^t)

  • maintaining the joint p({q_j^t}_{j∈C_i^1:t}) is not feasible for computational reasons
  • input of the j-th filter at time t:
  • the measure δ_i^t from the motion detector
  • the measures δ_j^1:t from the communication
  • the set of relative pose estimates {q̂_j^t} of R_j in F_i computed by PMR (approximated as a set of Gaussian measurements)

62 / 73

slide-63
SLIDE 63

Particle Filters

  • R_i motion update: p(q_j^t | δ_i^t) = N_i ∫ p(δ′ | δ_i^t) p(q_j^{t−1} ⊕ δ′) dδ′
  • update equation for the single particle: q_j^t = q_j^{t−1} ⊖ (δ_i^t ⊕ n_δ)
  • R_j motion update: p(q_j^t | δ_j^t) = N_j ∫ p(δ′ | δ_j^t) p(q_j^{t−1} ⊖ δ′) dδ′
  • update equation for the single particle: q_j^t = q_j^{t−1} ⊕ (δ_j^t ⊕ n_δ)
  • both motion updates cause a translation and a blur of p(q_j^{t−1})
  • measurement update:
  • if |{q̂_j^t}| = 1 then p(q_j | q̂_j^t) = N p(q̂_j^t | q_j) p(q_j^t)
  • if |{q̂_j^t}| > 1 then p(q_j | q̂_j^t) = N Σ_{{q̂_j^t}} p(q̂_j^t | q_j) p(q_j^t)
  • with N_i, N_j, N normalization factors; p(δ′ | δ) the motion detector model; p(q̂_j | q_j) the measurement model; n_δ a sample drawn from p(δ′ | δ)
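A simplified sketch of the two update steps for one particle filter. Gaussian noise and likelihoods are assumed purely for illustration; the actual system uses the motion-detector and measurement models p(δ′|δ) and p(q̂_j|q_j) described above:

```python
import math
import random

def compose(a, b):
    """Pose composition a ⊕ b for planar poses (x, y, theta)."""
    c, s = math.cos(a[2]), math.sin(a[2])
    return (a[0] + c * b[0] - s * b[1],
            a[1] + s * b[0] + c * b[1],
            a[2] + b[2])

def motion_update_j(particles, delta_j, sigma=0.01):
    """R_j motion update: each particle is composed with the noisy
    displacement delta_j^t, drawing a fresh noise sample n_delta."""
    out = []
    for q in particles:
        n = (random.gauss(0, sigma), random.gauss(0, sigma),
             random.gauss(0, sigma))
        out.append(compose(q, compose(delta_j, n)))
    return out

def measurement_weights(particles, estimates, sigma=0.05):
    """Weight each particle by summing a Gaussian likelihood over all
    PMR estimates (the |{q_hat}| > 1 case on the slide)."""
    w = []
    for q in particles:
        lik = sum(math.exp(-((q[0] - e[0])**2 + (q[1] - e[1])**2)
                           / (2 * sigma**2))
                  for e in estimates)
        w.append(lik)
    tot = sum(w)
    return [wi / tot for wi in w] if tot > 0 else [1 / len(w)] * len(w)
```

The motion step translates and blurs the particle cloud, matching the remark above; the weight step concentrates it around the PMR estimates.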

63 / 73

slide-64
SLIDE 64

Experimental setup

  • 4 Khepera III robots:
  • diameter 130 mm
  • differential drive
  • feature extraction on the scans of a Hokuyo URG range finder used as robot detector:
  • angular range 240°
  • linear range artificially limited to 2 m
  • error ±10 mm
  • precision affected by the color of the objects, optimized for white
  • ground truth from an external camera system

64 / 73

slide-65
SLIDE 65

Experiment: video

65 / 73

slide-66
SLIDE 66

Experiment: estimate errors

  • distance error: {‖p_j − p̄_j‖}_{j=2,3,4}
  • orientation error: {θ_j − θ̄_j}_{j=2,3,4}

[figure: robot trajectories in the x–y plane [m], and plots of distance error [m] and orientation error [deg] versus time [s]]

66 / 73

slide-67
SLIDE 67

Experiment: video

67 / 73

slide-68
SLIDE 68

Experiment: estimate errors (first phase)

  • distance error: {‖p_j − p̄_j‖}_{j=2,3,4}
  • orientation error: {θ_j − θ̄_j}_{j=2,3,4}

[figure: robot trajectories in the x–y plane [m], and plots of distance error [m] and orientation error [deg] versus time [s], over about 175 s]

68 / 73

slide-69
SLIDE 69

Experiment: estimate errors (kidnapping)

  • distance error: {‖p_j − p̄_j‖}_{j=2,3,4}
  • orientation error: {θ_j − θ̄_j}_{j=2,3,4}

[figure: robot trajectories in the x–y plane [m], and plots of distance error [m] and orientation error [deg] versus time [s]]

69 / 73

slide-70
SLIDE 70

Extensions

  • bearing-only measurements extension:
  • new multiple registration algorithm
  • the filter has to recover the scale
  • 3-D bearing-only extension:
  • assumes an IMU instead of the odometer
  • new multiple registration algorithm

70 / 73

slide-71
SLIDE 71

Spin-offs

  • anti-symmetry control law: keeps the system away from rotational symmetry by the use of a symmetry metric function
  • encirclement of a target: uses the mutual localization filter as the source feeding the controller

71 / 73

slide-72
SLIDE 72

Conclusions

  • we have presented the problem of reconstructing the identity of the robot to which a given measure refers
  • there are three main ways to solve this problem:
  • tagging: a hardware solution that is feasible only in safe environmental conditions; it overcomes the problem by allowing each robot to recognize the identity of the others
  • nearest neighbor: a software solution that requires good a priori knowledge and is not feasible in crowded environments; it associates each measure to the 'best choice' over the robot set, based on the current knowledge of the robot
  • RML with anonymous position measures: a robust, decentralized and reliable localization system that resolves the anonymity of the measures using a custom geometric probabilistic algorithm; its generalization to different types of measures can be tricky and time-consuming

72 / 73

slide-73
SLIDE 73

References

  • Prof. Giuseppe Oriolo, Antonio Franchi, Marco Cognetti

  • M. Cognetti, P. Stegagno, A. Franchi, G. Oriolo and H. H. Bulthoff, "3-D Mutual Localization with Anonymous Bearing Measurements." To appear in 2012 IEEE Int. Conf. on Robotics and Automation, Saint Paul, MN, USA, May 2012.
  • P. Stegagno, M. Cognetti, A. Franchi and G. Oriolo, "Mutual Localization using Anonymous Bearing Measurements." In 2011 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, San Francisco, CA, USA, Sep. 2011.
  • A. Franchi, G. Oriolo and P. Stegagno, "Probabilistic Mutual Localization in Multi-Agent Systems from Anonymous Position Measures." In 49th IEEE Conf. on Decision and Control, Atlanta, GA, USA, Dec. 2010.
  • A. Franchi, G. Oriolo and P. Stegagno, "On the Solvability of the Mutual Localization Problem with Anonymous Position Measures." In 2010 IEEE Int. Conf. on Robotics and Automation, Anchorage, AK, USA, May 2010.

73 / 73