Distributed motion coordination of robotic networks, Lecture 4 (PowerPoint PPT presentation)


slide-1
SLIDE 1

Distributed motion coordination of robotic networks

Lecture 4 – deployment

Jorge Cortés
Applied Mathematics and Statistics, Baskin School of Engineering, University of California at Santa Cruz
http://www.ams.ucsc.edu/~jcortes

Summer School on Geometry, Mechanics and Control, Centro Internacional de Encuentros Matemáticos, Castro Urdiales, June 25-29, 2007

slide-2
SLIDE 2


Roadmap

Lecture 1: Introduction, examples, and preliminary notions
Lecture 2: Models for cooperative robotic networks
Lecture 3: Rendezvous
Lecture 4: Deployment
Lecture 5: Agreement

slide-3
SLIDE 3


Today

1 Deployment – basic motion coordination capability
2 Non-deterministic continuous-time dynamical systems – nonsmooth stability analysis
3 Robustness – against agents’ arrivals and departures

slide-4
SLIDE 4


Outline

1 Deployment
  Expected-value deployment
  Area deployment
  Expected-value deployment with limited-range interactions
2 Deployment: basic behaviors
  Nonsmooth stability analysis
  Multi-center disk-covering and sphere-packing
3 Conclusions

slide-5
SLIDE 5


Deployment

Objective: optimal task allocation and space partitioning

  • optimal placement and tuning of sensors

Constraints: algorithms amenable to implementation in a network

  • adaptive versus static
  • distributed versus centralized
  • formal validation versus heuristics – increasingly important with the complexity of the network, task, environment, and constraints
  • truly implementable on experimental testbeds – asynchronism, delays, limited bandwidth, limited energy, interference

slide-6
SLIDE 6


Coverage optimization

DESIGN of performance metrics

1 how to cover a region with n minimum-radius overlapping disks?
2 how to design a minimum-distortion (fixed-rate) vector quantizer?

(Lloyd ’57)

3 where to place mailboxes in a city / cache servers on the internet?

ANALYSIS of cooperative distributed behaviors

4 how do animals share territory?

what if every fish in a swarm goes toward center of own dominance region?

Barlow, Hexagonal territories, Animal Behavior, 1974

5 what if each vehicle goes to the center of mass of its own Voronoi cell?
6 what if each vehicle moves away from the closest vehicle?

slide-7
SLIDE 7


Top-down: expected-value deployment

Objective: given sensors/nodes/robots/sites (p_1, . . . , p_n) moving in environment Q, achieve optimal coverage defined according to

Scenario 1 (expected-value performance measure): given a distribution density function φ,

minimize   H_C(p_1, . . . , p_n) = E_φ[ min_i ‖q − p_i‖² ]
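As a concrete check on this objective, H_C can be estimated by Monte Carlo. This is an illustrative sketch only: the unit square Q and uniform φ are assumptions for the example, not part of the slides.

```python
import random

def h_expected(sites, n_samples=200_000, seed=0):
    """Monte Carlo estimate of H_C = E_phi[min_i ||q - p_i||^2]
    on the unit square with uniform density phi (an assumed,
    illustrative setup; the lecture allows general Q and phi)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        qx, qy = rng.random(), rng.random()
        # distance squared to the closest site, i.e. min_i ||q - p_i||^2
        total += min((qx - px) ** 2 + (qy - py) ** 2 for px, py in sites)
    return total / n_samples
```

For a single site at the center of the square, the exact value is 1/6, and splitting into two well-placed sites strictly lowers the cost.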

slide-9
SLIDE 9


Scenario 1: coverage algorithm

Name: coverage behavior
Goal: distributed optimal agent deployment
Requires: (i) own Voronoi cell computation; (ii) centroid computation

At each communication round, each agent:
1: acquires neighbors’ positions
2: computes its own dominance region (Voronoi cell)
3: follows the gradient – moves towards the centroid

Caveat: convergence only to a local minimum of H_C
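The round above can be sketched as a Lloyd-type iteration. In this sketch the exact Voronoi-cell and centroid computations are replaced by sampling, and a unit square with uniform density is assumed for illustration.

```python
import random

def coverage_round(sites, step=1.0, n_samples=100_000, seed=1):
    """One synchronous round of the coverage behavior on the unit square
    with uniform density (assumed setup): assign sample points to the
    nearest site (a sampled stand-in for the exact Voronoi cell), then
    move each site toward the centroid of its cell. step=1.0 jumps all
    the way to the centroid, i.e. a Lloyd iteration."""
    rng = random.Random(seed)
    acc = [[0.0, 0.0, 0] for _ in sites]   # [sum_x, sum_y, count] per cell
    for _ in range(n_samples):
        q = (rng.random(), rng.random())
        i = min(range(len(sites)),
                key=lambda k: (q[0] - sites[k][0]) ** 2 + (q[1] - sites[k][1]) ** 2)
        acc[i][0] += q[0]; acc[i][1] += q[1]; acc[i][2] += 1
    new_sites = []
    for (px, py), (sx, sy, cnt) in zip(sites, acc):
        if cnt == 0:                       # empty cell: stay put
            new_sites.append((px, py)); continue
        cx, cy = sx / cnt, sy / cnt        # centroid of the sampled cell
        new_sites.append((px + step * (cx - px), py + step * (cy - py)))
    return new_sites
```

With a single agent, one round moves it to the centroid of the whole square.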

slide-10
SLIDE 10


Simulation

[Figure panels: initial configuration, gradient descent, final configuration]

slide-11
SLIDE 11


Scenario 1: technical approach

1 Alternative formulation (f : R₊ → R₊, differentiable, non-decreasing):

E_φ[ min_i f(‖q − p_i‖) ] = Σ_{i=1}^n ∫_{V_i(P)} f(‖q − p_i‖) φ(q) dq ≤ Σ_{i=1}^n ∫_{W_i} f(‖q − p_i‖) φ(q) dq

(the equality holds for the Voronoi partition V(P); the inequality holds for any other partition {W_1, . . . , W_n} of Q)

2 Compute the decentralized gradient. Differentiating, the motion of the cell boundary contributes boundary terms in addition to the interior term:

∂H_C/∂p_i (P) = ∫_{V_i(P)} ∂/∂p_i f(‖q − p_i‖) φ(q) dq
  + ∫_{∂V_i(P)} f(‖q − p_i‖) ⟨n_i(q), ∂q/∂p_i⟩ φ(q) dq
  + Σ_{j neigh i} ∫_{V_j(P) ∩ V_i(P)} f(‖q − p_j‖) ⟨n_{ji}(q), ∂q/∂p_i⟩ φ(q) dq

On a shared boundary V_i(P) ∩ V_j(P), one has ‖q − p_i‖ = ‖q − p_j‖ and n_{ji}(q) = −n_i(q), so the boundary terms cancel pairwise, leaving

∂H_C/∂p_i (P) = ∫_{V_i(P)} ∂/∂p_i f(‖q − p_i‖) φ(q) dq = 2 M_{V_i(P)} (p_i − C_{V_i(P)})   for f(x) = x²

with M_{V_i(P)} and C_{V_i(P)} the mass and centroid of V_i(P) with respect to φ.

critical points for H_C are centroidal Voronoi configurations
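The f(x) = x² formula can be sanity-checked in the single-agent case, where V_1 = Q. With φ = 1 on the unit square (an assumed setup for the example), the mass is M = 1 and the centroid is C = (0.5, 0.5), and a finite difference of the closed-form H matches 2M(p − C):

```python
def grad_formula_check():
    # For one agent on the unit square with phi = 1 and f(x) = x^2,
    # H(p) = int_Q ||q - p||^2 dq = 2/3 - (px + py) + px^2 + py^2
    # (integrating each coordinate separately), so the slide's formula
    # dH/dp = 2 M (p - C) = 2 (p - (0.5, 0.5)) can be compared against
    # a central finite difference of H.
    def H(px, py):
        return 2.0 / 3.0 - (px + py) + px * px + py * py
    px, py, h = 0.3, 0.8, 1e-6
    fd = ((H(px + h, py) - H(px - h, py)) / (2 * h),
          (H(px, py + h) - H(px, py - h)) / (2 * h))
    formula = (2 * (px - 0.5), 2 * (py - 0.5))
    return fd, formula
```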

slide-19
SLIDE 19


Correctness of dispersion laws

Distributed: over the Delaunay graph
Adaptive: changing environment, agent arrivals and departures
Verifiably correct: convergence to centroidal Voronoi configurations via the LaSalle Invariance Principle

Asynchronous implementation: upon waking up, each agent
1: determines its local Voronoi diagram (with outdated information)
2: determines the centroid of its own Voronoi region
3: takes a step in that direction
and goes back to sleep

slide-20
SLIDE 20


Top-down: area deployment

Objective: given sensors/nodes/robots/sites (p_1, . . . , p_n) moving in environment Q, achieve optimal coverage defined according to

Scenario 2 (area, with limited-range sensing or communication radius r): given a distribution density function φ,

maximize   area_φ(∪_{i=1}^n B_{r/2}(p_i)) = ∫_Q max_i 1_{B_{r/2}(p_i)}(q) φ(q) dq
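The area objective can also be estimated by Monte Carlo. As before, the unit square Q and φ = 1 are assumptions made for this sketch.

```python
import random

def covered_area(sites, r, n_samples=200_000, seed=2):
    """Monte Carlo estimate of area(U_i B_{r/2}(p_i)) inside the unit
    square, i.e. the slide's area_phi objective with phi = 1 (an
    assumed, illustrative setup)."""
    rng = random.Random(seed)
    rr = (r / 2) ** 2        # squared disk radius
    hit = 0
    for _ in range(n_samples):
        qx, qy = rng.random(), rng.random()
        # q is covered iff it lies in at least one disk B_{r/2}(p_i)
        if any((qx - px) ** 2 + (qy - py) ** 2 <= rr for px, py in sites):
            hit += 1
    return hit / n_samples
```

One interior disk of radius 0.2 covers π·0.04 of the square, and spreading two agents apart covers more area than stacking them.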
slide-21
SLIDE 21


Scenario 2: weighted normal

Take the density function constant, φ = 1, and consider the weighted normal ∫_{arc(r)} n_{B_{r/2}(p)} φ.

If arc(r) is described by [θ−, θ+] ∋ θ ↦ p + (r/2)(cos θ, sin θ) ∈ R², then

(r/2) ∫_{θ−}^{θ+} (cos θ, sin θ) dθ = r sin((θ+ − θ−)/2) ( cos((θ+ + θ−)/2), sin((θ+ + θ−)/2) )
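The closed form above can be verified against direct numerical integration; the values of θ−, θ+, and r below are arbitrary test inputs.

```python
import math

def weighted_normal(theta_minus, theta_plus, r, n=10_000):
    """Midpoint-rule integration of (r/2) * int (cos t, sin t) dt over
    [theta-, theta+], compared with the closed form
    r * sin((theta+ - theta-)/2) * (cos(mid), sin(mid))."""
    dt = (theta_plus - theta_minus) / n
    ix = iy = 0.0
    for k in range(n):
        t = theta_minus + (k + 0.5) * dt
        ix += math.cos(t) * dt
        iy += math.sin(t) * dt
    numeric = ((r / 2) * ix, (r / 2) * iy)
    half = (theta_plus - theta_minus) / 2
    mid = (theta_plus + theta_minus) / 2
    closed = (r * math.sin(half) * math.cos(mid),
              r * math.sin(half) * math.sin(mid))
    return numeric, closed
```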

slide-24
SLIDE 24


Scenario 2: area coverage algorithm

Name: coverage behavior
Goal: distributed optimal agent deployment
Requires: (i) own cell computation; (ii) weighted normal computation

For all i, agent i synchronously performs:
1: determine own cell V_i ∩ B_{r/2}(p_i)
2: determine the weighted normal ∫_{arc(r)} n_{B_{r/2}(p)} φ
3: move in the direction of the weighted normal

Caveat: convergence only to a local maximum of area_φ(∪_{i=1}^n B_{r/2}(p_i))

slide-25
SLIDE 25


Simulation

[Figure panels: initial configuration, gradient descent, final configuration]

slide-26
SLIDE 26


Correctness and complexity of dispersion laws

Distributed: over the r-limited Delaunay graph
Adaptive: changing environment, agent arrivals and departures
Convergence: gradient flow + LaSalle Invariance Principle
Complexity: for d = 1, first-order agents with the r-limited Delaunay graph,
TC(T_{(r,ε)}-deployment, CC_centroid) ∈ O(n³ log(n ε⁻¹))

slide-27
SLIDE 27


Expected-value deployment with limited-range interactions

Objective: given sensors/nodes/robots/sites (p_1, . . . , p_n) moving in environment Q, achieve optimal coverage defined according to

Expected value (with limited-range sensing radius r): given a distribution density function φ,

minimize   H_C(p_1, . . . , p_n) = E_φ[ min_i f(‖q − p_i‖) ]

  • the gradient of H_C is not spatially distributed over G_disk
slide-28
SLIDE 28


Tuning the optimization problem

Let f_{r/2}(x) = f(x) 1_{[0, r/2)}(x) + f(diam(Q)) 1_{[r/2, +∞)}(x), and define

H_{r/2}(p_1, . . . , p_n) = E_φ[ min_i f_{r/2}(‖q − p_i‖) ]

1 (Conservative) constant-factor approximation:

β H_{r/2}(P) ≤ H_C(P) ≤ H_{r/2}(P),   β = ( r / (2 diam(Q)) )²

2 The gradient of H_{r/2} is distributed over the r-limited Delaunay graph; for f(x) = x²,

∂H_{r/2}/∂p_i = 2 M_{V_i(P) ∩ B_{r/2}(p_i)} ( C_{V_i(P) ∩ B_{r/2}(p_i)} − p_i ) − ( (r/2)² − diam(Q)² ) Σ_{k=1}^{M_i(r)} ∫_{arc_{i,k}(r)} n_{B_{r/2}(p_i)} φ
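For f(x) = x² the sandwich bound holds sample-by-sample, which makes it easy to check numerically. The unit square Q (so diam(Q) = √2) and uniform φ are assumptions made for this sketch.

```python
import math
import random

def truncation_bounds(sites, r, n_samples=100_000, seed=3):
    """Estimate beta * H_{r/2}, H_C, and H_{r/2} on the same Monte Carlo
    samples, for the unit square (diam(Q) = sqrt(2)), uniform phi, and
    f(x) = x^2 -- an assumed, illustrative setup."""
    diam = math.sqrt(2.0)
    rng = random.Random(seed)
    hc = hr = 0.0
    for _ in range(n_samples):
        qx, qy = rng.random(), rng.random()
        d = min(math.hypot(qx - px, qy - py) for px, py in sites)
        hc += d * d
        # f_{r/2} truncation: f(d) inside radius r/2, f(diam(Q)) outside
        hr += d * d if d < r / 2 else diam * diam
    hc /= n_samples
    hr /= n_samples
    beta = (r / (2 * diam)) ** 2
    return beta * hr, hc, hr
```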

slide-29
SLIDE 29


Simulations

Limited-range run #1: 16 agents, density φ is a sum of 4 Gaussians, time-invariant, first-order dynamics

[Figure panels: initial configuration, gradient descent of H_{r/2}, final configuration]

Unlimited-range run #2: 16 agents, density φ is a sum of 4 Gaussians, time-invariant, first-order dynamics

[Figure panels: initial configuration, gradient descent of H_C, final configuration]

slide-30
SLIDE 30


Most general result: distributed gradient

For a general non-decreasing f : R_{≥0} → R, piecewise differentiable with finite-jump discontinuities at R_1 < · · · < R_m,

H_C(P) = ∫_Q min_i f(‖q − p_i‖) φ(q) dq

Theorem.

∂H_C/∂p_i (p_1, . . . , p_n) = ∫_{V_i} ∂/∂p_i f(‖q − p_i‖) φ(q) dq + Σ_{α=1}^m Δf_α(R_α) Σ_{k=1}^{M_i(2R_α)} ∫_{arc_{i,k}(2R_α)} n_{B_{R_α}(p_i)} φ

  • = integral over V_i + integrals along arcs in V_i
slide-31
SLIDE 31


Outline

1 Deployment
  Expected-value deployment
  Area deployment
  Expected-value deployment with limited-range interactions
2 Deployment: basic behaviors
  Nonsmooth stability analysis
  Multi-center disk-covering and sphere-packing
3 Conclusions

slide-32
SLIDE 32


Deployment: basic behaviors

“move away from closest” / “move towards furthest”
Equilibria? Asymptotic behavior? Optimizing a network-wide function?

slide-33
SLIDE 33


Deployment: 1-center optimization problems

sm_Q(p) = min{ ‖p − q‖ | q ∈ ∂Q }   (Lipschitz);   0 ∈ ∂sm_Q(p) ⇔ p ∈ IC(Q)
lg_Q(p) = max{ ‖p − q‖ | q ∈ ∂Q }   (Lipschitz);   0 ∈ ∂lg_Q(p) ⇔ p = CC(Q)

Locally Lipschitz functions V are differentiable a.e. The generalized gradient of V is

∂V(x) = co{ lim_{i→∞} ∇V(x_i) | x_i → x, x_i ∉ Ω_V ∪ S }

where Ω_V is the measure-zero set where V fails to be differentiable and S is any other set of measure zero.

slide-34
SLIDE 34


Deployment: 1-center optimization problems

+ gradient flow of sm_Q: ṗ_i = + Ln[∂sm_Q](p_i)   (“move away from closest”)
− gradient flow of lg_Q: ṗ_i = − Ln[∂lg_Q](p_i)   (“move toward furthest”)

For X essentially locally bounded, a Filippov solution of ẋ = X(x) is an absolutely continuous function t ∈ [t_0, t_1] ↦ x(t) verifying

ẋ ∈ K[X](x) = co{ lim_{i→∞} X(x_i) | x_i → x, x_i ∉ S }

For V locally Lipschitz, the nonsmooth gradient flows are ẋ = ± Ln[∂V](x), with Ln the least-norm operator (Ln[∂V](x) is the least-norm element of ∂V(x)).

slide-35
SLIDE 35


Nonsmooth LaSalle Invariance Principle

The evolution of V along a Filippov solution, t ↦ V(x(t)), is differentiable a.e., with

d/dt V(x(t)) ∈ L_X V(x(t)) = { a ∈ R | ∃ v ∈ K[X](x) s.t. ζ · v = a, ∀ ζ ∈ ∂V(x) }   (the set-valued Lie derivative)

LaSalle Invariance Principle: for S compact and strongly invariant with max L_X V(x) ≤ 0, any Filippov solution starting in S converges to the largest weakly invariant set contained in

{ x ∈ S | 0 ∈ L_X V(x) }

E.g., the nonsmooth gradient flow ẋ = − Ln[∂V](x) converges to the critical set.
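A tiny instance of the Ln operator: when ∂V is the convex hull of two gradients (as for V = max(f₁, f₂) at a point where both functions are active), the least-norm element is the projection of the origin onto a segment. This helper and its inputs are illustrative, not part of the lecture.

```python
def least_norm_segment(g1, g2):
    """Least-norm element of co{g1, g2} in R^2: minimize
    ||g1 + t*(g2 - g1)|| over t in [0, 1] (projection of the origin
    onto the segment), a minimal instance of the Ln operator used in
    the nonsmooth gradient flow x' = -Ln[dV](x)."""
    dx = (g2[0] - g1[0], g2[1] - g1[1])
    denom = dx[0] ** 2 + dx[1] ** 2
    if denom == 0:                 # g1 == g2: the hull is a single point
        return g1
    t = max(0.0, min(1.0, -(g1[0] * dx[0] + g1[1] * dx[1]) / denom))
    return (g1[0] + t * dx[0], g1[1] + t * dx[1])
```

Note that when the hull contains the origin (e.g. gradients (−1, 0) and (1, 0)), the least-norm element is 0 and the flow stops, matching the critical-point condition 0 ∈ ∂V(x).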

slide-36
SLIDE 36


Deployment: multi-center optimization

Sphere packing and disk covering:

“move away from closest”: ṗ_i = + Ln(∂sm_{V_i(P)})(p_i)   (at fixed V_i(P))
“move towards furthest”: ṗ_i = − Ln(∂lg_{V_i(P)})(p_i)   (at fixed V_i(P))

Aggregate objective functions:

H_SP(P) = min_i sm_{V_i(P)}(p_i) = min_{i≠j} { (1/2) ‖p_i − p_j‖, dist(p_i, ∂Q) }

H_DC(P) = max_i lg_{V_i(P)}(p_i) = max_{q∈Q} min_i ‖q − p_i‖
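Both aggregate functions are easy to evaluate for a concrete configuration. This sketch assumes the unit square as Q and approximates the max over Q in H_DC by a grid.

```python
import math

def multicenter_costs(sites, n_grid=200):
    """Evaluate the two aggregate objectives on the unit square
    (an assumed Q for illustration):
    H_SP = min_i min( (1/2) min_{j != i} ||p_i - p_j||, dist(p_i, bd Q) )
    H_DC = max_{q in Q} min_i ||q - p_i||   (grid-sampled)."""
    h_sp = float('inf')
    for i, (px, py) in enumerate(sites):
        to_boundary = min(px, py, 1 - px, 1 - py)
        to_others = min((0.5 * math.hypot(px - qx, py - qy)
                         for k, (qx, qy) in enumerate(sites) if k != i),
                        default=float('inf'))
        h_sp = min(h_sp, to_boundary, to_others)
    h_dc = 0.0
    for a in range(n_grid + 1):
        for b in range(n_grid + 1):
            qx, qy = a / n_grid, b / n_grid
            h_dc = max(h_dc, min(math.hypot(qx - px, qy - py)
                                 for px, py in sites))
    return h_sp, h_dc
```

For one agent at the center, H_SP is the inradius-style distance 0.5 and H_DC is the distance to a corner, √0.5.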

slide-37
SLIDE 37


Deployment: multi-center optimization

Critical points of H_SP and H_DC (both locally Lipschitz):

  • if 0 ∈ int ∂H_SP(P), then P is a strict local maximum, all agents have the same cost, and P is an incenter Voronoi configuration
  • if 0 ∈ int ∂H_DC(P), then P is a strict local minimum, all agents have the same cost, and P is a circumcenter Voronoi configuration

The aggregate functions are monotonically optimized along the evolutions:

min L_{+Ln(∂sm_V)} H_SP(P) ≥ 0,   max L_{−Ln(∂lg_V)} H_DC(P) ≤ 0

Asymptotic convergence to center Voronoi configurations via the nonsmooth LaSalle Invariance Principle. Complexity characterization in 1-d; more in progress.

slide-38
SLIDE 38


Deployment: visibility-based deployment

Objective: achieve complete visibility of a nonconvex environment (a non-self-intersecting polygon). Partition-based.

At each communication round, each agent:
1: acquires neighbors’ positions
2: computes its own dominance region
3: moves towards the furthest point of its own region
slide-39
SLIDE 39


Summary and conclusions

Deployment
1 Top-down: expected-value, area
2 Bottom-up: disk-covering, sphere-packing

Technical tools
1 Geometric optimization
2 Nonsmooth stability analysis
3 Proximity graphs, spatially-distributed maps
4 Computational geometry

slide-40
SLIDE 40


References

Deployment scenarios and algorithms: Nonsmooth stability analysis: Geometric and combinatorial optimization:

slide-41
SLIDE 41


Voronoi partitions

Let (p_1, . . . , p_n) ∈ Q^n denote the positions of n points. The Voronoi partition V(P) = {V_1, . . . , V_n} generated by (p_1, . . . , p_n) is

V_i = { q ∈ Q | ‖q − p_i‖ ≤ ‖q − p_j‖, ∀ j ≠ i } = Q ∩ ( ∩_{j≠i} HP(p_i, p_j) )

where HP(p_i, p_j) is the half-plane of points at least as close to p_i as to p_j.

[Figure panels: Voronoi partitions with 3, 5, and 50 generators]
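The half-plane characterization gives a direct membership test. This is an illustrative helper (the `sites` list and cell index are hypothetical inputs, not from the lecture).

```python
def in_voronoi_cell(q, i, sites):
    """Membership test for the Voronoi cell V_i, i.e. the intersection
    of the half-planes HP(p_i, p_j): q is in V_i iff it is at least as
    close to p_i as to every other generator."""
    qx, qy = q
    px, py = sites[i]
    di = (qx - px) ** 2 + (qy - py) ** 2   # squared distance to p_i
    return all(di <= (qx - sx) ** 2 + (qy - sy) ** 2
               for k, (sx, sy) in enumerate(sites) if k != i)
```

Points on a bisector belong to both adjacent cells, which is why the partition uses ≤ rather than <.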

Return

slide-42
SLIDE 42


Distributed Voronoi computation

Assume: each agent i has a sensing/communication radius R_i
Objective: the smallest R_i which provides sufficient information to compute V_i

For all i, agent i performs:
1: initialize R_i and compute the tentative cell V̂_i = ∩_{j : ‖p_i − p_j‖ ≤ R_i} HP(p_i, p_j)
2: while R_i < 2 max_{q ∈ V̂_i} ‖p_i − q‖ do
3:   R_i := 2 R_i
4:   detect vehicles p_j within radius R_i, recompute V̂_i

Return
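The radius-doubling loop can be sketched as follows. The stopping rule R_i ≥ 2 max_{q ∈ V̂_i} ‖p_i − q‖ is the one on the slide; the exact cell and its farthest point are approximated here by sampling the unit square, which (like the specific site layout in the usage below) is an assumption made for the example.

```python
import math
import random

def sufficient_radius(i, sites, r0=0.25, n_samples=20_000, seed=4):
    """Radius-doubling sketch: compute the tentative cell from in-range
    neighbors only (approximated by sampling the unit square), and
    double R_i until R_i >= 2 * max_{q in tentative cell} ||p_i - q||,
    which certifies the tentative cell equals the true V_i."""
    rng = random.Random(seed)
    samples = [(rng.random(), rng.random()) for _ in range(n_samples)]
    px, py = sites[i]
    r = r0
    while True:
        nbrs = [(sx, sy) for k, (sx, sy) in enumerate(sites)
                if k != i and math.hypot(px - sx, py - sy) <= r]
        # farthest sampled point of the tentative cell from p_i
        far = 0.0
        for qx, qy in samples:
            dq = math.hypot(qx - px, qy - py)
            if all(dq <= math.hypot(qx - sx, qy - sy) for sx, sy in nbrs):
                far = max(far, dq)
        if r >= 2 * far:
            return r, far
        r *= 2
```

For an agent at (0.5, 0.5) surrounded by four neighbors at distance 0.1, the tentative cell is a small square whose corners lie at distance √0.005 ≈ 0.071, so the initial radius 0.25 already suffices.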