SLIDE 1

Optimal sampling of multiple linear processes over a shared medium

Sebin Mathew (a), Karl H. Johannson (b), Aditya Mahajan (a)

(a) McGill University, (b) KTH Royal Institute of Technology

IEEE Conference on Decision and Control, 17 December 2018

SLIDES 2–7

Sampling over shared medium – (Mathew, Johannson, Mahajan)

Sensor Networks · Internet of Things · Smart Cities

Many remote estimation applications where:

  • Multiple sensors transmit over shared links
  • Link capacity varies exogenously

Salient features:

  • Adapt the transmission rate at the sensors to avoid congestion and, at the same time, minimize estimation errors
  • Adaptation should take place in a low-complexity and distributed manner

Show that such questions can be answered using dual decomposition theory

SLIDES 8–12

System Model

[Figure: Processes 1, …, n → Samplers → shared Network → Remote Estimator; states X1(t), …, Xn(t); estimates X̂1(t), …, X̂n(t).]

Process dynamics

  dXi(t) = ai Xi(t) dt + dWi(t), where {Wi(t)}t≥0 is stationary and independent across sensors.

Sampling process

  Sensor i samples process i at rate Ri = 1/Ti.

Network

  Rate region ℛ = { (R1, …, Rn) ∈ ℝ^n_{≥0} : ∑_{i=1}^{n} Ri ≤ C }.

Estimated process

  At a sampling time: X̂i(t) = Xi(t). At other times: dX̂i(t) = ai X̂i(t) dt.

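The estimator above is simply "reset to the true state at each sample, run the noiseless dynamics in between." A minimal Python sketch (not from the slides; the values of a, sigma, the rates, and the Euler step are illustrative choices) simulates one process and its remote estimate:

```python
import math
import random

def simulate(a=0.5, sigma=1.0, rate=2.0, horizon=10.0, dt=1e-3, seed=0):
    """Simulate dX = a*X dt + sigma dW (Euler-Maruyama) together with the
    remote estimate X-hat, which is reset to X at every sampling instant
    (period T = 1/rate) and follows dX-hat = a*X-hat dt in between."""
    rng = random.Random(seed)
    x, xhat = 0.0, 0.0
    next_sample = 0.0
    sq_err, steps = 0.0, 0
    t = 0.0
    while t < horizon:
        if t >= next_sample:              # sampling instant: estimator receives X exactly
            xhat = x
            next_sample += 1.0 / rate
        sq_err += (x - xhat) ** 2
        steps += 1
        x += a * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xhat += a * xhat * dt             # noiseless propagation between samples
        t += dt
    return sq_err / steps                 # empirical mean-square error

err_slow = simulate(rate=1.0)             # sample once per second
err_fast = simulate(rate=10.0)            # sample ten times per second
```

Sampling faster leaves the estimate less time to drift, so the empirical error shrinks with the rate; that trade-off against link capacity is what the rest of the talk optimizes.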
SLIDES 13–15

System Performance and Optimization Problem

Mean-square error

  Mi(Ri) = Ri ∫_0^{1/Ri} (Xi(t) − X̂i(t))² dt, when sensor i is sampling at rate Ri.

Example

  If the noise process is a Wiener process with variance σi², then the state process is a Gauss-Markov (or Ornstein-Uhlenbeck) process, and

  Mi(Ri) = (σi² / 2ai) [ (e^{2ai/Ri} − 1)/(2ai/Ri) − 1 ].

Assumptions

  (A1) For any sensor i and rate Ri > 0, Mi(Ri) is strictly decreasing and convex in Ri.
  (A2) Mi(Ri) is twice differentiable and there exists a positive constant ci such that Mi″(Ri) ≥ ci for all Ri > 0.

Problem formulation

  Find rates (R1, …, Rn) ∈ ℝ^n_{≥0} to minimize ∑_{i=1}^{n} Mi(Ri) such that ∑_{i=1}^{n} Ri ≤ C.

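The Gauss-Markov error curve and assumption (A1) can be spot-checked numerically. The sketch below (with illustrative parameters ai = 1 and σi² = 1, not tied to the slides) evaluates Mi(Ri) on a grid and verifies that it is decreasing and convex:

```python
import math

def M(R, a=1.0, sigma2=1.0):
    """Gauss-Markov mean-square error:
    M(R) = sigma^2/(2a) * [ (exp(2a/R) - 1)/(2a/R) - 1 ]."""
    x = 2.0 * a / R
    return sigma2 / (2.0 * a) * ((math.exp(x) - 1.0) / x - 1.0)

# Spot-check (A1) on a grid: values strictly decrease with the rate, and
# discrete second differences are non-negative (convexity).
grid = [0.5 + 0.1 * k for k in range(100)]
vals = [M(R) for R in grid]
decreasing = all(u > v for u, v in zip(vals, vals[1:]))
convex = all(vals[j - 1] + vals[j + 1] >= 2.0 * vals[j]
             for j in range(1, len(vals) - 1))
```

As a sanity check, M(2) with a = 1 and σ² = 1 equals (e − 1) − 1 over 2, roughly 0.359, and M(R) → 0 as the sampling rate grows.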
SLIDES 16–17

Solution approach

Proposition

  Under assumptions (A1) and (A2), the optimization problem has a unique solution, which we denote by 𝐒∗ = (R1∗, …, Rn∗).

How do we find 𝐒∗ in a distributed manner?

SLIDES 18–21

Distributed Solution via Dual Decomposition

Primal Problem

  min_{𝐒 ∈ ℝ^n_{≥0}} ∑_{i=1}^{n} Mi(Ri)   s.t.   ∑_{i=1}^{n} Ri ≤ C

Lagrangian Dual

  max_{λ ∈ ℝ≥0} min_{𝐒 ∈ ℝ^n_{≥0}} L(𝐒, λ), where L(𝐒, λ) = ∑_{i=1}^{n} [Mi(Ri) + λRi] − λC.

  [Kelly et al 1998] [Low Lapsley 1999] [Chiang et al 2007]

Dual Decomposition

  Decomposes into two parts: the network and each sensor i.

Synchronous Algorithm

  Network starts with a guess λ0. At each iteration k = 0, 1, …:

  At each sensor i: pick Ri,k to minimize Mi(Ri,k) + λk Ri,k.

  At the network: λk+1 = [ λk − αk ( C − ∑_{i=1}^{n} Ri,k ) ]⁺.

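The synchronous iteration above can be sketched end-to-end for the Gauss-Markov example. The slides do not prescribe an inner solver or a step-size schedule; in this sketch each sensor solves its subproblem by bisection on the optimality condition Mi′(R) + λ = 0, and the network uses a constant step size α (both are assumptions of the sketch, as are the parameter values):

```python
import math

def M_prime(R, a, sigma2):
    """Derivative of the Gauss-Markov error
    M(R) = sigma2/(2a)*((exp(x)-1)/x - 1), x = 2a/R, which
    simplifies to M'(R) = -sigma2*(exp(x)*(x-1) + 1)/(4*a*a)."""
    x = 2.0 * a / R
    return -sigma2 * (math.exp(x) * (x - 1.0) + 1.0) / (4.0 * a * a)

def best_response(lam, a, sigma2, lo=0.1, hi=1e3, iters=60):
    """Sensor subproblem: min_R M(R) + lam*R.  Since M is convex,
    g(R) = M'(R) + lam is increasing and crosses zero once; bisect."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if M_prime(mid, a, sigma2) + lam < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def synchronous(sensors, C, lam0=1.0, alpha=0.1, steps=500):
    """Dual subgradient iteration: sensors best-respond to lam,
    the network updates lam <- [lam - alpha*(C - sum_i R_i)]^+."""
    lam = lam0
    R = []
    for _ in range(steps):
        R = [best_response(lam, a, s2) for (a, s2) in sensors]
        lam = max(0.0, lam - alpha * (C - sum(R)))
    return R, lam

# Two sensors with a_i = 1 and sigma_i^2 = 1 and 2; capacity C = 4
# (values chosen only for illustration).
rates, lam = synchronous(sensors=[(1.0, 1.0), (1.0, 2.0)], C=4.0)
```

With these illustrative parameters the sum rate converges to the capacity C, and the noisier sensor (larger σi²) is allocated the higher rate.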
SLIDES 22–24

Properties of the synchronous algorithm

Theorem 1

  Under (A1) and (A2), for any initial guess λ0 and appropriately chosen step sizes αk,
  lim_{k→∞} 𝐒k ≔ lim_{k→∞} (R1,k, …, Rn,k) = 𝐒∗.

Implementation

  The synchronous algorithm can be implemented as part of the initial handshaking protocol when the sensors come online.

Drawbacks

  Large signaling overhead. The algorithm needs to be rerun when a sensor leaves, a new sensor comes on board, or the network capacity changes.

Salient feature: The network doesn't need to know Ri,k. It only needs an estimate of ∑ Ri, which it can infer from the received packets.

SLIDES 25–26

Asynchronous algorithm for choosing sampling rates

At the network

  Initialize λ > 0.
  Upon event ⟨new packet received⟩ do:
    Estimate the received sum rate Ĉk based on packets received in a sufficiently large sliding window of time.
    λk+1 = [λk − αk(C − Ĉk)]⁺.
    Broadcast λk+1.

At each sensor

  Upon event ⟨initialize⟩ or ⟨take new sample⟩ do:
    Observe λ.
    Pick Ri to minimize Mi(Ri) + λRi.
    Set next sampling time = current time + 1/Ri.

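An event-driven version of the two update rules above can be sketched in a few lines. To keep the sensor's best response in closed form, the sketch swaps the Gauss-Markov error for a toy convex curve Mi(R) = wi/R, so that argmin_R Mi(R) + λR = sqrt(wi/λ); the weights, step size, and window length are illustrative assumptions, not values from the slides:

```python
import heapq
from collections import deque

def async_sim(weights=(1.0, 2.0), C=1.0, alpha=0.05, window=50.0, horizon=2000.0):
    """Event-driven sketch of the asynchronous scheme with the toy error
    curve M_i(R) = w_i/R (illustrative only).  Sensors re-read the
    broadcast multiplier when they sample; the network re-estimates the
    received sum rate over a sliding window after every packet."""
    lam = 1.0
    recent = deque()                      # timestamps of recently received packets
    rates = [0.0] * len(weights)
    events = [(0.0, i) for i in range(len(weights))]   # (next sample time, sensor)
    heapq.heapify(events)
    while events:
        t, i = heapq.heappop(events)
        if t > horizon:
            break
        # Network side: sliding-window estimate of the received sum rate,
        # then nudge the multiplier and (conceptually) broadcast it.
        recent.append(t)
        while recent[0] < t - window:
            recent.popleft()
        if t > window:                    # wait until the window has filled up
            C_hat = len(recent) / window
            lam = max(1e-6, lam - alpha * (C - C_hat))
        # Sensor side: observe lam, best-respond, schedule the next sample.
        rates[i] = (weights[i] / lam) ** 0.5
        heapq.heappush(events, (t + 1.0 / rates[i], i))
    return rates, lam

rates, lam = async_sim()
```

The multiplier settles where the window estimate of the received sum rate matches C, mirroring the fixed point of the synchronous scheme, and the network never needs to know the individual rates.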
SLIDES 27–28

Properties of the asynchronous algorithm

Assumption (A3)

  The time between consecutive samples is bounded.

Theorem 2

  Under (A1)–(A3), for any initial guess λ0 and appropriately chosen step sizes αk,
  lim_{k→∞} 𝐒k ≔ lim_{k→∞} (R1,k, …, Rn,k) = 𝐒∗.

  Moreover, if the synchronous and the asynchronous algorithms use the same learning rates {αk}k≥0, then the corresponding Lagrange multipliers converge to the same value.

Example

  2 sensors: GaussMarkov(1, 1) and GaussMarkov(1, 2). Network capacity C = 1.

  [Plots: sampling rates R1, R2 vs iteration, and multiplier λ vs iteration, for SYNC and ASYNC.]

SLIDE 29

Robustness to packet drops and delays

  [Plot: λ vs iteration for packet-drop probabilities p = 0, p = 0.1, p = 0.3.]

SLIDES 30–31

Illustrative example

Changing network conditions

  Sensors arrive according to a Poisson process and stay in the system for an exponentially distributed amount of time. Sensor parameters are distributed randomly. Network capacity changes exogenously.

Network

  The network is not aware of the number of sensors. It adapts λ according to the asynchronous algorithm and broadcasts the value of λ.

Sensors

  Sensors don't know the network capacity. They run the asynchronous algorithm to adapt the rate Ri.

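The changing-network scenario can be reproduced by first generating a session list. This sketch (with a hypothetical arrival rate and mean sojourn time, not from the slides) draws Poisson arrivals and exponential lifetimes as described above:

```python
import random

def sensor_sessions(arrival_rate=0.1, mean_stay=100.0, horizon=400.0, seed=1):
    """Generate (arrival, departure) times for a changing sensor population:
    Poisson arrivals (exponential inter-arrival gaps) and exponentially
    distributed sojourn times.  All parameter values are illustrative."""
    rng = random.Random(seed)
    sessions = []
    t = 0.0
    while True:
        t += rng.expovariate(arrival_rate)     # next Poisson arrival
        if t >= horizon:
            break
        sessions.append((t, t + rng.expovariate(1.0 / mean_stay)))
    return sessions

sessions = sensor_sessions()
```

Feeding such a session list into the asynchronous scheme lets λ track the slowly varying population without any re-initialization.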
SLIDES 32–36

Illustrative example: System parameters vs time

  [Plots vs time T (sec): the sum rate ∑_{i∈N} Ri tracking the capacity C; the number of sensors N; the multiplier λ.]

  [Zoomed plots of Ri and λ around three events: a sensor leaving, a sensor coming on board, and the network capacity changing.]

SLIDES 37–38

Comparison with baseline schemes

  Scheme 1: Ri = C/30.
  Scheme 2: Ri = C/N(t).

  The performance of the asynchronous algorithm is better than the baselines, and significantly so when the network capacity is low.

  [Plot: total estimation error ∑_{i∈N} Mi vs time T (sec) for ASYNC, SCHEME 1, and SCHEME 2.]

SLIDES 39–45

Summary

  [Recap of earlier slides: system model, optimization problem, dual decomposition, asynchronous algorithm, and the illustrative example.]

Conclusion

  The asynchronous event-driven algorithm can adapt to slowly varying network conditions in a distributed manner. Asymptotically, the algorithm converges to the optimal rates.

  The sensors and the estimators don't need synchronized clocks! The algorithm is robust to packet drops, delays, and slow variation in system parameters.

  Dual decomposition does not ensure that the constraint ∑ Ri ≤ C is satisfied at all iterations. In practice, violation of this constraint will lead to congestion. To understand its impact, we need to consider a more elaborate network model where congestion leads to delay.