Distributed Algorithms for Guiding Navigation across a Sensor Network (PowerPoint PPT Presentation)



SLIDE 1


Distributed Algorithms for Guiding Navigation across a Sensor Network

Qun Li Michael de Rosa Daniela Rus

Department of Computer Science Dartmouth College MobiCom 2003

ICS280 – Winter'05 Presenter: Daniel Massaguer

SLIDE 2

A static sensor network guides a mobile device towards a target while maintaining the safest distance from the danger areas. The mobile device has no global map: the routing of the mobile device happens in-network.

SLIDE 3

Distributed approach built from 3 algorithms:

  • Motion planning: artificial potential field
  • Safest path: dynamic programming
  • Navigation: retrieval of the safest path

SLIDE 4

Algorithm 1 — Artificial Potential Field

[Figure: sensor field with Source and Target marked and dangers shown as +]

SLIDE 5

for all sensors si in the network do
    poti = 0; hopsj = ∞ for any danger j
    if sensed-value = danger then
        hopsi = 0
        Broadcast message(i, hops = 0)
    if receive(j, hops) then
        if hopsj > hops + 1 then
            hopsj = hops + 1
            Broadcast message(j, hopsj)
    for all received j do
        Compute the potential potj of j using potj = 1/hopsj
    Compute the potential at si using all potj: poti = poti + potj

Algorithm 1

*Distributed Bellman-Ford: sources=danger, metric=hops
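The distributed computation above can be sketched centrally: in the paper each sensor runs a distributed Bellman-Ford, but the same hop counts and potentials can be reproduced with one BFS per danger. The topology, node ids, and danger placement below are illustrative, not from the paper:

```python
from collections import deque

def hops_from(graph, source):
    """BFS hop counts from one danger node to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def potentials(graph, dangers):
    """pot_i = sum over dangers j of 1/hops_ij (the danger itself is skipped)."""
    pot = {n: 0.0 for n in graph}
    for d in dangers:
        for n, h in hops_from(graph, d).items():
            if h > 0:
                pot[n] += 1.0 / h
    return pot

# A six-node line topology with dangers at both ends.
line = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(potentials(line, dangers=[0, 5]))
```

On this line, the node next to a danger gets pot = 1/1 + 1/4 = 5/4, the same value that appears in the example on the next slide.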

SLIDE 6

[Figure: six nodes (i = 0 … 5); each node's Hops vector (hop count to each danger) and resulting Pot vector as Algorithm 1 runs]

Algorithm 1 — Example

*Distributed Bellman-Ford: sources=danger, metric=hops

SLIDE 7

Let G be a goal sensor
G broadcasts msg = (Gid, myid(G), hops = 0, potential = 0)
for all sensors si do
    Initially hopsg = ∞ and Pg = ∞
    if receive(g, k, hops, potential) then
        Compute the potential integration from the goal to here:
        if Pg > potential + poti then
            Pg = potential + poti
            hopsg = hops + 1
            priorg = k
            Broadcast (Gid, myid(si), hopsg, Pg)

Algorithm 2

*Distributed Bellman-Ford: source=Target, metric=potential (e.g., danger level)
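The fixed point that the distributed Bellman-Ford above converges to can be sketched centrally as a Dijkstra from the goal, where a path's cost is the sum of the node potentials it crosses (the "potential integration"). The graph and potential values below are illustrative:

```python
import heapq

def safest_paths(graph, pot, goal):
    """Dijkstra from the goal; edge cost = potential of the node entered.
    prior[v] points one hop back toward the goal, as in Algorithm 2."""
    P = {n: float("inf") for n in graph}
    prior = {n: None for n in graph}
    P[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > P[u]:
            continue  # stale heap entry
        for v in graph[u]:
            new_cost = cost + pot[v]
            if new_cost < P[v]:
                P[v] = new_cost
                prior[v] = u
                heapq.heappush(heap, (new_cost, v))
    return P, prior

# Illustrative six-node line with made-up potentials; goal at node 2.
line = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
pot = {0: 1.0, 1: 0.5, 2: 0.0, 3: 0.5, 4: 1.0, 5: 2.0}
P, prior = safest_paths(line, pot, goal=2)
```

Following the prior pointers from any node then walks the minimum-potential path to the goal.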

SLIDE 8

[Figure: six nodes with Source and Target marked; each node's Pot, Hops, and Prior vectors after running Algorithm 2]

Algorithm 2 — Example

*Distributed Bellman-Ford: source=Target, metric=potential (e.g., danger level)

SLIDE 9

if si is a user sensor then
    while not at the goal G do
        Broadcast inquiry message (Gid)
        for all received messages m = (Gid, myid(sk), hops, potential, prior) do
            Choose the message m with minimal potential, then minimal hops
            Let myid(sk) be the id of the sender of this message
            Move towards myid(sk) and prior
if si is an information sensor then
    if receive (Gid) inquiry message then
        Reply with (Gid, myid(si), hopsg, Pg, priorg)

Algorithm 3
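The user's selection rule (minimal potential, ties broken by minimal hops) is a one-liner; the reply tuples below reuse the values from the slides' example (goal = node 2):

```python
def choose_next(replies):
    """Algorithm 3's selection rule: minimal potential, then minimal hops.
    A reply is a tuple (goal_id, sender_id, hops, potential, prior)."""
    return min(replies, key=lambda r: (r[3], r[2]))

# The three replies from the example slides (goal = node 2):
replies = [(2, 4, 1, 5 / 4, 2), (2, 0, 2, 9 / 4, 1), (2, 1, 1, 2, 2)]
best = choose_next(replies)
# The user moves toward the chosen sender and then toward its prior node.
```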

SLIDE 10

[Figure: six nodes with Source and Target marked; the user sensor's inquiry draws three replies:
  <goal=2, id=4, hops=1, pot=5/4, prior=2>
  <goal=2, id=0, hops=2, pot=9/4, prior=1>
  <goal=2, id=1, hops=1, pot=2, prior=2>]

Algorithm 3 — Example

SLIDE 11

[Figure: the user sensor moves toward the chosen sender and repeats the inquiry until it reaches the Target]

Algorithm 3 — Example

SLIDE 12

  • Profiling of neighbors: use information only from stable one-hop neighbors, i.e., those from which we have received packets more than (1/5)M times; neighbors exchange frequency information.
    → eliminates asymmetric and transient links
  • Delaying broadcasts: wait a preventive time to see if a better value arrives, so only one packet is broadcast (Dijkstra?).
    → fewer packets transmitted
  • Random delays.
    → reduce congestion
  • Retransmissions.
    → reliability
  • Route cache flushing.
    → adaptability

Implementation Issues: Performance Optimization

SLIDE 13

THEOREM 1: Algorithm 3 always gives the user sensor a path to the goal.

PROOF: After Algorithm 2, each prior link points to a node with a smaller potential value. Thus, apart from the goal node, every node has a neighboring node with a smaller potential value.

→ no local minimum exists in the network.
→ a user sensor can always find a node among its neighbors that leads to a smaller potential value. If the process continues, it ends up at the goal, which has the smallest potential value, 0. Therefore, Algorithm 3 always gives the user sensor a path to the goal. QED

Analysis — Correctness

SLIDE 14

Analysis — Hop Distance Model

Idealistic situation: Hops = L/R

[Figure: a path of length L covered in hops of maximal radio range R]

SLIDE 15

Analysis — Hop Distance Model

Realistic situation: Hops = L/avg(R'i), with each R'i ≤ R, so avg(R'i) ≤ R. The evaluated distance is then (R/avg(R'i))·L.

[Figure: a path of length L covered in hops of varying effective ranges R'1, R'2, R'3, R'4]

SLIDE 16

Analysis — Hop Distance Model

Empirically determine E[R'] and stdev[R']. For an n-hop path:

E[nR'] = n·E[R']
stdev[nR'] = sqrt(n)·stdev[R']

Since n·E[R'] >> sqrt(n)·stdev[R'], the relative variation is small → robustness.

→ for large paths, hop count can be used as an estimate of the distance.
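The robustness claim can be checked numerically: summing n independent hop ranges, the relative spread of the total falls off like 1/sqrt(n). The uniform distribution chosen for R' below is an assumption for illustration only:

```python
import random
import statistics

def relative_spread(n, trials=10_000, R=1.0, seed=1):
    """Monte Carlo check of the slide's argument: sum n effective hop
    ranges R'_i (assumed uniform on (R/2, R)) and return stdev/mean
    of the total path length."""
    rng = random.Random(seed)
    totals = [sum(rng.uniform(R / 2, R) for _ in range(n))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

s10, s100 = relative_spread(10), relative_spread(100)
# Longer paths concentrate more tightly, so for large paths hop count
# is a usable estimate of distance.
```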
SLIDE 17

Real distance vs. hops (7×7 grid): linear relationship.

Experiments — Correctness

SLIDE 18

When a node broadcasts, its k neighbors remain silent. On average, each node processes information regarding o obstacles, and the transmission rate of each node is b packets/s.

→ The propagation time from an obstacle to a node is o·(l·(k/b)), where:

  k/b : waiting time to avoid broadcasting more than once
  l = min(L, l0) : number of hops
  L : distance at which the obstacle potential becomes 0
  l0 : distance from the node to the obstacle

For b = 40 packets/s and k = 8, k/b = 0.2 s; with o = 1 and l = 10, it takes 2 seconds to propagate the information 10 hops away.

Analysis — Propagation and Communication Capability
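The arithmetic of the estimate as a tiny helper (the function name and keyword parameters are mine; the formula and numbers are the slide's):

```python
def propagation_time(o, l, k, b):
    """Slide's estimate o·(l·(k/b)): o obstacles per node, l hops,
    k neighbors kept silent per broadcast, b packets/s."""
    return o * l * (k / b)

# The slide's numbers: b = 40 packets/s, k = 8, o = 1, l = 10.
t = propagation_time(o=1, l=10, k=8, b=40)  # 2.0 seconds for 10 hops
```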

SLIDE 19

Testbed of 50 MOT300 motes; each mote has a 'GPS' (videotaping) plus logging. Metrics:

  • time for danger information to propagate to the whole network
  • time for all nodes to find the shortest distance to the danger
  • time for the goal information to propagate to the whole network
  • time for all nodes to find their safest path

Experiments

SLIDE 20

Experiments — Measuring Adaptation

SLIDE 21

Response time = time from a topology change until the user finds a path. The route cache is flushed every 10 seconds; the average response time is 5 seconds (time until the cache is flushed + propagation time).

Experiments — Measuring Adaptation

SLIDE 22

7×7 grid (nodes are far apart). Obstacles at (1,1) and (7,7); goal at (1,7). Communication graph (links used at least once):

  • absence of expected links
  • presence of long links
  • irregular

Experiments — Measuring Adaptation
SLIDE 23

7×7 grid (nodes are far apart). Obstacles at (1,1) and (7,7); goal at (1,7). Propagation time:

  • irregular: a short time for some motes to stabilize, a long time for others

Experiments — Measuring Adaptation
SLIDE 24

7×7 grid (nodes are far apart). Obstacles at (1,1) and (7,7); goal at (1,7). Received and transmitted packets:

  • motes close to the obstacles and the goal transmit/receive more

Experiments — Measuring Adaptation

SLIDE 25

7×7 grid (nodes are far apart). Obstacles at (1,1) and (7,7); goal at (1,7). Received and transmitted packets:

  • motes close to the obstacles and the goal transmit/receive more

[Figure: packet counts over the grid with the Obstacle and Goal positions marked]

Experiments — Measuring Adaptation

SLIDE 26

Long (transient) links disappear; (1,7) appears.

Experiments — Performance Optimization

SLIDE 27

Obstacle and goal propagation times are faster and more even, due to less congestion.

Experiments — Performance Optimization

SLIDE 28

Fewer suppressed packets (every node has a higher probability of broadcasting its best found value), due to less congestion.

Experiments — Performance Optimization

SLIDE 29

More balanced; more packets during goal propagation due to active broadcasts to test network congestion.

Experiments — Performance Optimization

SLIDE 30

  • Data loss: due to congestion, transmission interference, and garbled messages.
  • Asymmetric connections.
  • Congestion: likely if the message rate is high; aggravated when nearby nodes try to transmit at the same time.
  • Transient links.

Experiments — Lessons Learned

SLIDE 31

On a dense sensor network, local information is the same for neighboring nodes, so not all nodes need to keep the same information. Each sensor keeps a given piece of information with probability p = m / sum(mi), where:

  m : local memory
  sum(mi) : all the information to be stored

The probability of that piece of information being found in the area is 1 − (1 − p)^#nodes.

Using Sensor Networks to Distribute Information
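The two formulas as code (the function names and parameter values are illustrative, the formulas are the slide's):

```python
def keep_probability(m, total):
    """p = m / sum(mi): probability a sensor stores a given item."""
    return m / total

def find_probability(p, n_nodes):
    """1 - (1 - p)^n_nodes: probability that at least one of the
    nodes in the area holds the item."""
    return 1 - (1 - p) ** n_nodes

# Illustrative numbers: each node can hold 10 of 100 items (p = 0.1).
p = keep_probability(10, 100)
found = find_probability(p, n_nodes=50)
# With 50 nodes in the area the item is found with probability ~0.995.
```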

SLIDE 32

Using Sensor Networks to Distribute Information

The greater the density, the smaller the probability each node keeps the information, and the faster the probability of finding the information converges to 1.

SLIDE 33

if I am the query sensor s then
    depth1 = depth2 = 1
    while true do
        Broadcast (s, query, depth1, depth2)
        Wait for time depth1·Δ
        if some reply arrives then stop
        else depth1++, depth2++
if I am not the query sensor then
    receive(s, query, depth1, depth2)
    if I have already received a message with prefix (s, query, depth1, *) then
        discard the message
    if I have the information queried then
        send the information to s; stop
    if depth2 − 1 == 0 then stop
    else broadcast(s, query, depth1, depth2 − 1)

Algorithm 4 — Retrieval of information
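A centralized sketch of Algorithm 4's expanding-ring behavior: the flooding depth grows until some node holding the information is reached. Timeouts, replies, and per-message duplicate suppression are abstracted away, and the topology is illustrative:

```python
from collections import deque

def ring_search(graph, source, has_info, max_depth=10):
    """Flood to increasing depths (the slide's depth1/depth2) until a
    node holding the queried information is reached."""
    for depth in range(1, max_depth + 1):
        seen = {source}
        frontier = deque([(source, depth)])
        while frontier:
            node, ttl = frontier.popleft()
            if has_info(node):
                return node, depth   # a reply would be sent back to source
            if ttl == 0:
                continue             # depth budget exhausted: stop forwarding
            for nb in graph[node]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, ttl - 1))
        # no reply within this depth: widen the ring and re-query
    return None, None

# Six-node line; only node 4 holds the queried item.
line = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
holder, rings = ring_search(line, source=0, has_info=lambda n: n == 4)
```

Each failed ring restarts the flood one hop wider, which is why the query sensor waits depth1·Δ before trying again.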

SLIDE 34

Conclusions / contributions:

1. an interesting application for sensor networks
2. an implementation and evaluation on a physical sensor network
3. a distance computation method that does not use node positions
4. analysis and hardware experimentation

dmassagu@uci.edu


SLIDE 36

THEOREM 2: The computed potential integration on the computed path is upper and lower bounded with respect to the actual potential integration on the path:

(R/2)·q2·P1 ≤ P2 ≤ R·q1·P1

THEOREM 3: The potential integration on the computed path is upper bounded with respect to the potential integration on the optimal sensor path:

P2 ≤ (2·q1/q0)·P0

P2 : ∫ of the potential along the computed sensor path
P1 : ∑ of the potentials at every node of the path
P0 : ∫ of the potential along the optimal sensor path

[Figure: sensor path s0 … si … sk vs. the optimal sensor path, with hop distances dj]

Analysis — Performance Bounds

SLIDE 37

54 experiments over 8 different topologies.

[Figure: per-topology comparison of the optimal, average, and worst paths]

Experiments — Correctness