  1. Distributed Algorithms for Guiding Navigation across a Sensor Network. Qun Li, Michael de Rosa, Daniela Rus. Department of Computer Science, Dartmouth College. MobiCom 2003. ICS280 – Winter'05. Presenter: Daniel Massaguer

  2. A static sensor network guides a mobile device towards a target while maintaining the safest distance from danger areas. The mobile device has no global map; routing to the target is done in-network.

  3. The distributed algorithm consists of 3 algorithms: motion planning (artificial potential field); safest path (dynamic programming); navigation (retrieval of the safest path).

  4. Algorithm 1 — Artificial Potential Field [Figure: example network with Source and Target nodes; danger areas exert repulsive (−) potential, the target attracts (+)]

  5. Algorithm 1
     for all sensors s_i in the network do
         pot_i = 0; hops_j = ∞ for every danger j
         if sensed-value = danger then
             hops_i = 0
             broadcast message(i, hops = 0)
         if receive(j, hops) then
             if hops_j > hops + 1 then
                 hops_j = hops + 1
                 broadcast message(j, hops_j)
         for all received j do
             compute the potential pot_j of j using pot_j = 1 / hops_j²
             compute the potential at s_i using all pot_j: pot_i = pot_i + pot_j
     *Distributed Bellman-Ford: sources = dangers, metric = hops
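The flooding above can be emulated centrally. A minimal Python sketch, assuming the network is given as an adjacency list (an assumption about the input format): BFS from each danger reproduces the hop-count Bellman-Ford, and potentials are summed as pot_j = 1/hops_j². The slide leaves the danger node's own potential unspecified; treating it as infinite is an assumption here.

```python
# Centralized sketch of Algorithm 1 (the real system computes this by flooding).
import math
from collections import deque

def potential_field(adj, dangers):
    """adj: {node: [neighbors]} (assumed format); dangers: list of danger nodes.
    Returns {node: potential} with potential = sum over dangers of 1/hops^2."""
    pot = {n: 0.0 for n in adj}
    for j in dangers:
        hops = {j: 0}                      # BFS = hop-count Bellman-Ford
        q = deque([j])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in hops:
                    hops[v] = hops[u] + 1
                    q.append(v)
        for n, h in hops.items():
            # Danger's own potential set to infinity (assumption, see lead-in).
            pot[n] = math.inf if h == 0 else pot[n] + 1.0 / h ** 2
    return pot
```

On a line 0–1–2–3 with a danger at node 0, this yields potentials 1, 1/4, 1/9 at nodes 1, 2, 3 respectively.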

  6. Algorithm 1 — Example [Figure: six-node network; each node stores a Hops vector (hop distance to each danger, ∞ where unknown) and the corresponding Pot vector of 1/hops² values, e.g. Hops=[2, ∞, ∞, ∞, ∞, 1] with Pot=[1/4, --, --, --, --, 1/1²]] *Distributed Bellman-Ford: sources = dangers, metric = hops

  7. Algorithm 2
     Let G be a goal sensor
     G broadcasts msg = (G_id, myid(G), hops = 0, potential = 0)
     for all sensors s_i do
         initially hops_g = ∞ and P_g = ∞
         if receive(g, k, hops, potential) then
             compute the potential integration from the goal to here:
             if P_g > potential + pot_i then
                 P_g = potential + pot_i
                 hops_g = hops + 1
                 prior_g = k
                 broadcast (G_id, myid(s_i), hops_g, P_g)
     *Distributed Bellman-Ford: source = target, metric = potential (e.g., danger level)
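A centralized stand-in for this step can be sketched with Dijkstra's algorithm instead of the distributed Bellman-Ford flooding (a deliberate substitution; the result is the same because potentials are non-negative). P accumulates the per-node potential along the path, and prior records the next hop towards the goal, as in the pseudocode.

```python
# Centralized sketch of Algorithm 2: minimal accumulated potential from the
# goal to every node, plus prior (next-hop) pointers.
import heapq

def safest_paths(adj, pot, goal):
    """Returns (P, prior): P[n] = minimal summed potential from goal to n,
    prior[n] = next hop from n towards the goal."""
    P = {n: float('inf') for n in adj}
    prior = {n: None for n in adj}
    P[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > P[u]:
            continue                        # stale queue entry
        for v in adj[u]:
            nd = d + pot[v]                 # P_g = potential + pot_i
            if nd < P[v]:
                P[v], prior[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return P, prior
```

With potentials {0: 0, 1: 1, 2: 5, 3: 0.5} on a 4-cycle and goal 0, node 3 routes through node 1 (total 1.5) rather than the high-potential node 2.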

  8. Algorithm 2 — Example [Figure: the same six-node network with Source and Target marked; each node now also stores a Prior vector giving its next hop towards the goal, alongside the accumulated potentials (e.g. Pot entries such as 9/4 and 13/4)] *Distributed Bellman-Ford: source = target, metric = potential (e.g., danger level)

  9. Algorithm 3
     if s_i is a user sensor then
         while not at the goal G do
             broadcast inquiry message (G_id)
             for all received messages m = (G_id, myid(s_k), hops, potential, prior) do
                 choose the message m with minimal potential, then minimal hops
                 let myid(s_k) be the id of the sender of this message
                 move towards myid(s_k) and prior
     if s_i is an information sensor then
         if receive (G_id) inquiry message then
             reply with (G_id, myid(s_i), hops_g, P_g, prior_g)
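Because no local minima exist (Theorem 1 below), the user's greedy step can be sketched as "move to the neighbor with the smallest accumulated potential". This sketch assumes the P values from the neighbors' replies are already collected into a dict; the tie-break on hop count from the pseudocode is omitted for brevity.

```python
def navigate(adj, P, start, goal):
    """Greedy user navigation: at each step, move to the neighbor with the
    minimal accumulated potential P. Terminates because P strictly
    decreases along the chosen sequence (no local minima)."""
    path = [start]
    node = start
    while node != goal:
        node = min(adj[node], key=lambda v: P[v])
        path.append(node)
    return path
```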

  10. Algorithm 3 — Example [Figure: the user at the Source broadcasts an inquiry for goal 2 and receives replies <goal=2, id=0, hops=2, pot=9/4, prior=1>, <goal=2, id=1, hops=1, pot=2, prior=2>, and <goal=2, id=4, hops=1, pot=5/4, prior=2>; it chooses the reply with minimal potential]

  11. Algorithm 3 — Example (continued) [Figure: same six-node network and node tables; the user moves towards the chosen minimal-potential sensor and repeats the inquiry from its new position]

  12. Implementation Issues: Performance Optimization
     Profiling of neighbors: use information only from stable one-hop neighbors. A stable neighbor is one from which we have received packets more than (1/5)·M times; neighbors exchange reception-frequency information. → eliminates asymmetric and transient links
     Delaying broadcasts: wait a preventive time to see if a better value arrives, so that only one packet is broadcast (Dijkstra-like). → fewer packets transmitted
     Random delays. → reduce congestion
     Retransmissions. → reliability
     Route cache flushing. → adaptability
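The neighbor-profiling filter can be sketched as follows. The (1/5)·M threshold is from the slide; the exact exchange mechanism and the function/argument names are assumptions for illustration.

```python
def stable_neighbors(rx_from, rx_at_peer, M):
    """Keep only symmetric, stable one-hop links.
    rx_from[j]: packets we received out of neighbor j's last M broadcasts.
    rx_at_peer[j]: packets j reports receiving from our last M broadcasts
    (neighbors exchange this frequency information, per the slide).
    A link survives only if BOTH directions exceed (1/5)*M, which filters
    out asymmetric and transient links."""
    thresh = M / 5
    return {j for j, c in rx_from.items()
            if c > thresh and rx_at_peer.get(j, 0) > thresh}
```

For example, with M = 10, a neighbor heard 8 times that only heard us once is dropped as asymmetric.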

  13. Analysis — Correctness
     THEOREM 1: Algorithm 3 always gives the user sensor a path to the goal.
     PROOF: After Algorithm 2, each prior link points to a node with a smaller potential value. Thus, apart from the goal node, every node has a neighboring node with a smaller potential value → no local minimum exists in the network → the user's sensor can always find a neighbor that leads to a smaller potential value. If the process continues, it ends at the node with the smallest potential value, 0: the goal. Therefore, Algorithm 3 always gives the user sensor a path to the goal. QED

  14. Analysis — Hop Distance Model [Figure: a straight path of length L covered in hops of radio range R] Idealized situation: Hops = L/R

  15. Analysis — Hop Distance Model [Figure: the same path covered with actual per-hop ranges R'_1, R'_2, R'_3, R'_4] Realistic situation: Hops = L/avg(R'_i). Since R'_i ≤ R, avg(R'_i) ≤ R, so the distance evaluated as Hops·R is (R/avg(R'_i))·L ≥ L.

  16. Analysis — Hop Distance Model
     Empirically determine E[R'] and stdev[R']. Then E[nR'] = n·E[R'] and stdev[nR'] = sqrt(n)·stdev[R']. Since n·E[R'] >> sqrt(n)·stdev[R'], the relative variation is small → robustness → for long paths, hop count can be used as an estimate of the distance.
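This concentration argument is easy to check numerically. A Monte-Carlo sketch, assuming (for illustration only) that per-hop ranges R' are i.i.d. uniform on [r_lo, r_hi]:

```python
# Monte-Carlo check of the slide's claim: the length of an n-hop path (a sum
# of n i.i.d. per-hop ranges) has mean n*E[R'] and stdev sqrt(n)*stdev[R'],
# so its relative spread shrinks like 1/sqrt(n). Uniform R' is an assumption.
import random
import statistics

def path_length_stats(n_hops, r_lo, r_hi, trials=20000, seed=0):
    rng = random.Random(seed)
    lengths = [sum(rng.uniform(r_lo, r_hi) for _ in range(n_hops))
               for _ in range(trials)]
    mu = statistics.fmean(lengths)          # ~ n * E[R']
    sd = statistics.stdev(lengths)          # ~ sqrt(n) * stdev[R']
    return mu, sd, sd / mu                  # relative spread
```

Comparing a 4-hop path against a 25-hop path shows the relative spread shrinking with path length, which is why hop count is a usable distance estimate for long paths.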

  17. Experiments — Correctness [Figure: real distance vs. hop count on a 7x7 grid; the relationship is linear]

  18. Analysis — Propagation and Communication Capability
     When a node broadcasts, its k neighbors remain silent. On average, each node processes information regarding o obstacles, and the transmission rate of each node is b packets/s. → Propagation time from an obstacle to a node is o·l·(k/b), where:
     k/b: waiting time to avoid broadcasting more than once
     l = min(L, l0) [number of hops]; L: distance at which the obstacle potential becomes 0; l0: distance from the node to the obstacle
     For b = 40 packets/s and k = 8, k/b = 0.2 s; with o = 1 and l = 10, it takes 2 seconds to propagate info 10 hops away.
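The formula above can be written out directly; this tiny sketch just reproduces the slide's arithmetic.

```python
def propagation_time(o, l, k, b):
    """Seconds for obstacle info to travel l hops: each hop waits k/b seconds
    (one broadcast slot while the node's k neighbors stay silent), once per
    each of the o obstacles: o * l * (k / b)."""
    return o * l * (k / b)
```

propagation_time(1, 10, 8, 40) gives the slide's 2-second example.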

  19. Experiments
     Testbed of 50 MOT300 Motes; each mote knows its position ('GPS'). Videotaping plus logging of metrics: time for danger information to propagate to the whole network; time for all nodes to find the shortest distance to the danger; time for the goal information to propagate to the whole network; time for all nodes to find their safest path.

  20. Experiments — Measuring Adaptation

  21. Experiments — Measuring Adaptation
     Response time = time from a topology change until the user finds a path. The route cache is flushed every 10 seconds; the average response time is 5 seconds (time until the cache is flushed + propagation time).

  22. Experiments — Measuring Adaptation
     7x7 grid (nodes far apart); obstacles at (1,1) and (7,7); goal at (1,7). Communication graph (links used at least once): absence of expected links; presence of long links; irregular.

  23. Experiments — Measuring Adaptation
     7x7 grid (nodes far apart); obstacles at (1,1) and (7,7); goal at (1,7). Propagation time is irregular: some motes stabilize quickly, others take a long time.

  24. Experiments — Measuring Adaptation
     7x7 grid (nodes far apart); obstacles at (1,1) and (7,7); goal at (1,7). Received and transmitted packets: motes close to the obstacles and the goal transmit/receive more.

  25. Experiments — Measuring Adaptation [Figure: the packet-count plot for the same 7x7 grid, with the Goal and Obstacle positions annotated; motes close to the obstacles and the goal transmit/receive more]

  26. Experiments — Performance Optimization With neighbor profiling, long (transient) links disappear and the link at (1,7) appears.

  27. Experiments — Performance Optimization Obstacle and goal propagation times are faster and more even, due to less congestion.

  28. Experiments — Performance Optimization Fewer suppressed packets (every node has a higher probability of broadcasting its best found value), due to less congestion.

  29. Experiments — Performance Optimization More balanced; more packets during goal propagation due to active broadcasts to test network congestion.

  30. Experiments — Lessons Learned
     Data loss: due to congestion, transmission interference, and garbled messages.
     Asymmetric connections.
     Congestion: likely when the message rate is high; aggravated when nearby nodes try to transmit at the same time.
     Transient links.

  31. Using Sensor Networks to Distribute Information
     In a dense sensor network, local information is the same for neighboring nodes, so not all nodes need to keep the same information. Each sensor keeps a given piece of information with probability p = m / Σ m_i, where m is the local memory and Σ m_i is all the information to be stored. The probability that the piece of information can be found in the area is 1 − (1 − p)^#nodes.
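The two formulas above combine into one short sketch (function and argument names are illustrative, not from the paper):

```python
def retrieval_probability(m, total, n_nodes):
    """p = m / total: probability that a given sensor stores a given item
    (m = local memory, total = all information to be stored). Returns the
    probability that at least one of n_nodes sensors in the area holds the
    item: 1 - (1 - p)**n_nodes."""
    p = m / total
    return 1 - (1 - p) ** n_nodes
```

For instance, with p = 0.1 and 10 nodes in range, the item is found with probability 1 − 0.9^10 ≈ 0.65.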
