SLIDE 1

Performance Oriented Association in Cellular Networks with Technology Diversity

Abishek Sankararaman, Jeong-woo Cho, François Baccelli.

SLIDE 2

Outline

  • Motivation and Background
  • Main mathematical model
  • Summary of Results
  • Conclusions
SLIDE 3

Background

  • MVNOs (Mobile Virtual Network Operators) pool together various wireless technologies (e.g., 2G, 3G, 4G LTE) to create a single service.
  • A UE can dynamically choose among the technologies, depending on whichever yields the higher instantaneous benefit.

SLIDE 4

Background

A new model of cellular service, exemplified by Google Fi.

SLIDE 5

What the Paper is About: Technology Diversity

  • Leveraging the presence and control of multiple wireless technologies operating on non-overlapping bandwidths.
  • Google Fi - the different cellular operators and WiFi operate on separate bands.
  • MVNOs - multiple orthogonal technologies (e.g., 3G and 4G LTE).

SLIDE 6

What the Paper is About: Technology Diversity

  • Leveraging the presence and control of multiple wireless technologies operating on non-overlapping bandwidths.
  • Google Fi - the different cellular operators and WiFi operate on separate bands.
  • MVNOs - multiple orthogonal technologies (e.g., 2G, 3G and 4G LTE).
  • Technology Diversity - a framework to leverage and evaluate the benefits from the diversity of wireless technologies.

SLIDE 7

The Problem we study - Base Station Association

Which Base Station/Access Point must a UE associate with? We need a principled way to exploit the diversity in the network.

SLIDE 8

The "optimal" BS Association is not an obvious choice

First Guess - connect to the nearest BS, irrespective of the type of BS it is.
SLIDE 9

The "optimal" BS Association is not an obvious choice

First Guess - connect to the nearest BS, irrespective of the type of BS it is. The nearest BS offers the highest SNR (the basis for nearest-BS association), where

SINR = Signal / (Interference + Noise).

SLIDE 10

The "optimal" BS Association is not an obvious choice

First Guess - connect to the nearest BS, irrespective of the type of BS it is. The nearest BS offers the highest SNR (the basis for nearest-BS association), where

SINR = Signal / (Interference + Noise).

But the bandwidths are non-overlapping, so a UE sees interference only from Base Stations of the one technology it associates with.

SLIDE 11

The "optimal" BS Association is not an obvious choice

First Guess - connect to the nearest BS, irrespective of the type of BS it is. The nearest BS offers the highest SNR (the basis for nearest-BS association), where

SINR = Signal / (Interference + Noise).

But the bandwidths are non-overlapping, so a UE sees interference only from Base Stations of the one technology it associates with. Thus, in this example, the nearest BS has the higher SNR but the lower SINR.
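A toy numeric check of this reversal (a minimal sketch; the path-loss exponent, noise level, and BS distances are assumed illustrative values, not from the paper):

```python
# Two technologies on non-overlapping bands: a BS only sees interference
# from its own technology. All distances and powers below are assumed.
alpha = 4.0      # path-loss exponent, l(r) = r^-alpha
noise = 1e-6     # noise power

def rx_power(r, p=1.0):
    return p * r ** (-alpha)

# Technology 1: nearest BS at r = 1.0, a same-band interferer at r = 1.2.
# Technology 2: nearest BS at r = 2.0, nearest same-band interferer at r = 10.0.
snr1 = rx_power(1.0) / noise
snr2 = rx_power(2.0) / noise
sinr1 = rx_power(1.0) / (noise + rx_power(1.2))
sinr2 = rx_power(2.0) / (noise + rx_power(10.0))

print(snr1 > snr2)    # True: technology 1's BS is closer, so higher SNR
print(sinr1 < sinr2)  # True: its dense same-band interference lowers SINR
```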

SLIDE 12

Mathematical Framework - Network Model

Consider a network comprised of T different technologies. The BSs/APs of technology i are distributed as an independent Poisson Point Process φi ⊂ R² with intensity λi.

SLIDE 13

Mathematical Framework - Network Model

Consider a network comprised of T different technologies. The BSs/APs of technology i are distributed as an independent Poisson Point Process φi ⊂ R² with intensity λi. Denote by X^i_j ∈ φi the jth closest point of φi to the origin, and by r^i_j = ||X^i_j|| its distance.

SLIDE 14

Mathematical Framework - Network Model

There is a typical user at the origin of the Euclidean plane who wishes to associate to a BS. As before, technology i's BSs form an independent PPP φi ⊂ R² with intensity λi, with X^i_j the jth closest point of φi to the origin and r^i_j = ||X^i_j||.

Palm theory connects the viewpoint of this single user to the average performance experienced by the users in the network.

SLIDE 15

Mathematical Framework - Network Model

[Figure: an example with T = 2; the two point processes φ1 (intensity λ1) and φ2 (intensity λ2) are shown, with X^1_1 and X^2_1 the nearest BS of each technology.]
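The network model above is easy to simulate; a minimal sketch (the window radius and intensities are assumed values) that samples the PPP of each technology and reads off the ordered distances r^i_1 ≤ r^i_2 ≤ ···:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp_distances(intensity, radius):
    """Sorted distances from the origin to a homogeneous PPP on a disk."""
    n = rng.poisson(intensity * np.pi * radius**2)
    r = radius * np.sqrt(rng.uniform(size=n))  # sqrt gives uniform points in area
    return np.sort(r)                          # r_1 <= r_2 <= ... (the r_j^i)

lambdas = [1.0, 0.5]   # intensities lambda_i, one per technology (assumed values)
for i, lam in enumerate(lambdas, start=1):
    r = sample_ppp_distances(lam, radius=10.0)
    print(f"technology {i}: r_1 = {r[0]:.3f}, r_2 = {r[1]:.3f}")
```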

SLIDE 16

Mathematical Framework - Signal Model

  • A BS of technology i transmits at power Pi.
  • The signal from a BS of technology i is attenuated with distance as given by the path-loss function li(·) : R+ → R+.
  • H^i_j is the independent fading from the jth nearest BS of technology i to the typical user.

SLIDE 17

Mathematical Framework - Signal Model

  • A BS of technology i transmits at power Pi.
  • The signal from a BS of technology i is attenuated with distance as given by the path-loss function li(·) : R+ → R+.
  • H^i_j is the independent fading from the jth nearest BS of technology i to the typical user.

Signal power from the jth BS of technology 1: P1 H^1_j l1(r^1_j). Signal power from the jth BS of technology 2: P2 H^2_j l2(r^2_j).
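The slides do not display the full SINR expression; a standard stochastic-geometry form consistent with the model above (N_i, the noise power on technology i's band, is an assumed symbol) would be:

```latex
% Assumed reconstruction, not copied from the paper:
\[
  \mathrm{SINR}^{i,j}_{0}
  \;=\;
  \frac{P_i H^i_j\, l_i(r^i_j)}
       {N_i \;+\; \sum_{k \ge 1,\, k \ne j} P_i H^i_k\, l_i(r^i_k)}.
\]
% Only BSs of the same technology i interfere, because the T technologies
% occupy non-overlapping bands.
```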

SLIDE 18

Performance Metrics

(Non-overlapping bandwidths ⇒ interference from only one technology.)

  • SINR^{i,j}_0 - the SINR of the typical UE when it associates to the jth nearest BS of technology i.
  • For each technology i, denote by pi(·) : R+ → R+ a bounded, non-decreasing reward function.

SLIDE 19

Performance Metrics

(Non-overlapping bandwidths ⇒ interference from only one technology.)

  • SINR^{i,j}_0 - the SINR of the typical UE when it associates to the jth nearest BS of technology i.
  • For each technology i, denote by pi(·) : R+ → R+ a bounded, non-decreasing reward function.
  • If the typical UE connects to the jth nearest BS of technology i, then it receives a reward of pi(SINR^{i,j}_0).

SLIDE 20

Performance Metrics

[Figure: the T = 2 example with φ1 (intensity λ1) and φ2 (intensity λ2); the UE associates to X^2_2.] The reward received by the UE in this example is p2(SINR^{2,2}_0).

SLIDE 21

Performance Metrics

[Figure: as before, the UE associates to X^2_2 and receives reward p2(SINR^{2,2}_0).]

Examples of common reward functions:

  • Coverage: pi(x) = 1(x ≥ βi)
  • Average Achievable Rate: pi(x) = Bi log2(1 + x)
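These two reward functions are straightforward to code; a minimal sketch (the threshold βi and bandwidth Bi values are assumed):

```python
import math

# Minimal sketches of the two reward functions; beta (coverage threshold)
# and bandwidth B are per-technology parameters with assumed values here.
def coverage_reward(sinr: float, beta: float = 1.0) -> float:
    """p_i(x) = 1(x >= beta_i)."""
    return 1.0 if sinr >= beta else 0.0

def rate_reward(sinr: float, bandwidth: float = 10e6) -> float:
    """p_i(x) = B_i * log2(1 + x), in bits/s."""
    return bandwidth * math.log2(1.0 + sinr)

print(coverage_reward(2.0), rate_reward(2.0))  # 1.0  ~1.585e7
```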

SLIDE 22

Information at the UE

Goal - design association schemes that exploit the network "information" available at the UE and maximize the expected reward of a typical UE.

SLIDE 23

Information at the UE

Goal - design association schemes that exploit the network "information" available at the UE and maximize the expected reward of a typical UE.

Examples of information that a UE can know:

  • Nearest BS of all technologies.
  • k nearest BSs of all technologies.
  • Instant fading and the distances to the k nearest BSs.
  • Noisy estimate of the instant fading from the k nearest BSs of each technology.

SLIDE 24

How Information affects Optimal Association

[Figure: two candidate BSs; when averaged across fading, one BS offers the lower SINR.]


SLIDE 26

How Information affects Optimal Association

[Figure: when averaged across fading, one BS offers the lower SINR; but it momentarily has a sudden very good signal which the UE can sense.]

SLIDE 27

How Information affects Optimal Association

[Figure: when averaged across fading, one BS offers the lower SINR; but it momentarily has a sudden very good signal which the UE can sense. In this case, the UE should associate to that BS.]

SLIDE 28

Information at the UE

The notion of information at the typical UE is formalized through filtrations of a sigma-algebra.

SLIDE 29

Information at the UE - Formalization

The notion of information at the typical UE is formalized through filtrations of a sigma-algebra.

  • (Ω, F, P) - the probability space containing the independent RVs {φi}_{i=1}^T and {H^i_j}_{i∈[1,T], j∈N}.
SLIDE 30

Information at the UE - Formalization

The notion of information at the typical UE is formalized through filtrations of a sigma-algebra.

  • (Ω, F, P) - the probability space containing the independent RVs {φi}_{i=1}^T and {H^i_j}_{i∈[1,T], j∈N}.
  • FI ⊆ F - the information sigma-algebra.
SLIDE 31

Information at the UE - Formalization

The notion of information at the typical UE is formalized through filtrations of a sigma-algebra.

  • (Ω, F, P) - the probability space containing the independent RVs {φi}_{i=1}^T and {H^i_j}_{i∈[1,T], j∈N}.
  • FI ⊆ F - the information sigma-algebra.
  • π : Ω → [1, T] × N - an FI-measurable function denoting the association scheme.
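One loose computational reading of FI-measurability (an analogy, not the paper's construction): a policy is FI-measurable when it can be computed from only the variables that generate FI. For instance, with nearest-BS distances as the information (the first example on the next slides):

```python
from typing import List, Tuple

# Loose analogy: an F_I-measurable policy may read only the data that
# generates F_I. Here the information is the nearest-BS distances
# (r_1^1, ..., r_1^T), one per technology.
def nearest_bs_policy(r1: List[float]) -> Tuple[int, int]:
    """Map the information vector to a decision (technology, BS index)."""
    i_star = min(range(len(r1)), key=lambda i: r1[i])
    return i_star + 1, 1   # 1-based technology index; BS index 1 = nearest

print(nearest_bs_policy([0.8, 1.4]))  # -> (1, 1)
```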
SLIDE 32

Information at the UE - Examples

Examples of information that a UE can know:

  • Nearest BS of all technologies.
  • k nearest BSs of all technologies.
  • Instant fading and the distances to the k nearest BSs.
  • Noisy estimate of the instant fading from the k nearest BSs of each technology.

SLIDE 33

Information at the UE - Examples

  • Nearest BS of all technologies: FI = σ({r^i_1}_{i=1}^T).

SLIDE 34

Information at the UE - Examples

  • Nearest BS of all technologies: FI = σ({r^i_1}_{i=1}^T).
  • k nearest BSs of all technologies: FI = σ({(r^i_1, ···, r^i_k)}_{i=1}^T).

SLIDE 35

Information at the UE - Examples

  • Nearest BS of all technologies: FI = σ({r^i_1}_{i=1}^T).
  • k nearest BSs of all technologies: FI = σ({(r^i_1, ···, r^i_k)}_{i=1}^T).
  • Instant fading and the distances to the k nearest BSs: FI = σ({(r^i_1, ···, r^i_k)}_{i=1}^T, {H^i_j}_{j∈[1,k], i∈[1,T]}).

SLIDE 36

Information at the UE - Examples

  • Nearest BS of all technologies: FI = σ({r^i_1}_{i=1}^T).
  • k nearest BSs of all technologies: FI = σ({(r^i_1, ···, r^i_k)}_{i=1}^T).
  • Instant fading and the distances to the k nearest BSs: FI = σ({(r^i_1, ···, r^i_k)}_{i=1}^T, {H^i_j}_{j∈[1,k], i∈[1,T]}).
  • Noisy estimates of the instant fading: FI = σ( σ({(r^i_1, ···, r^i_k)}_{i=1}^T) ∪ F′ ), where F′ ⊆ σ({H^i_j}_{j∈[1,k], i∈[1,T]}).

SLIDE 37

Performance Metric for Association

Recall:

  • (Ω, F, P) - the probability space containing the independent RVs {φi}_{i=1}^T and {H^i_j}_{i∈[1,T], j∈N}.
  • FI ⊆ F - the information sigma-algebra.
  • π : Ω → [1, T] × N - an FI-measurable function denoting the association scheme.
  • pi(·) : R+ → R+ - the non-decreasing reward function of technology i.

SLIDE 38

Performance Metric for Association

Recall:

  • (Ω, F, P) - the probability space containing the independent RVs {φi}_{i=1}^T and {H^i_j}_{i∈[1,T], j∈N}.
  • FI ⊆ F - the information sigma-algebra.
  • π : Ω → [1, T] × N - an FI-measurable function denoting the association scheme.
  • pi(·) : R+ → R+ - the non-decreasing reward function of technology i.

The performance of a policy π is then Rπ = E[p_{π(0)}(SINR^π_0)].

SLIDE 39

Optimal Association

For any association scheme π, the expected reward is Rπ = E[p_{π(0)}(SINR^π_0)].

SLIDE 40

Optimal Association

For any association scheme π, the expected reward is Rπ = E[p_{π(0)}(SINR^π_0)].

The optimal association is then the one that maximizes the expected reward, i.e.

π*_I = arg sup_π Rπ,

where the sup is over all FI-measurable functions π : Ω → [1, T] × N.

SLIDE 41

Optimal Association

For any association scheme π, the expected reward is Rπ = E[p_{π(0)}(SINR^π_0)]. The optimal association is π*_I = arg sup_π Rπ, where the sup is over all FI-measurable π : Ω → [1, T] × N.

Proposition: almost surely,

π*_I = arg sup_{i∈[1,T], j∈N} E[pi(SINR^{i,j}_0) | FI].
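A short justification sketch for the proposition (our gloss on the stated result, not the paper's proof): for any FI-measurable π, on the FI-measurable event {π = (i, j)} the conditional expected reward equals E[pi(SINR^{i,j}_0) | FI], so

```latex
% Sketch, in the notation of the slides:
\[
R_\pi
 = \mathbb{E}\!\left[\mathbb{E}\!\left[\,p_{\pi(0)}\big(\mathrm{SINR}^{\pi}_0\big)\,\middle|\,\mathcal{F}_I\right]\right]
 \;\le\;
 \mathbb{E}\!\left[\sup_{i\in[1,T],\,j\in\mathbb{N}}
   \mathbb{E}\!\left[\,p_i\big(\mathrm{SINR}^{i,j}_0\big)\,\middle|\,\mathcal{F}_I\right]\right],
\]
```

and the bound is attained by the selector that picks a maximizing pair (i, j), which is FI-measurable; this is exactly π*_I.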

SLIDE 42

Optimal Association - Comparison of Information

The optimal association under information FI is given by π*_I, where, almost surely,

π*_I = arg sup_{i∈[1,T], j∈N} E[pi(SINR^{i,j}_0) | FI].

SLIDE 43

Optimal Association - Comparison of Information

The optimal association under information FI is given by π*_I (as above). Denote the performance of the optimal association by

Rπ*_I = E[p_{π*(0)}(SINR^{π*}_0)].

SLIDE 44

Optimal Association - Comparison of Information

The optimal association under information FI is π*_I, with performance Rπ*_I = E[p_{π*(0)}(SINR^{π*}_0)].

Theorem - "More information gives better performance":

FI1 ⊆ FI2 ⇒ Rπ*_I1 ≤ Rπ*_I2.

SLIDE 45

Optimal Association - Comparison of Information

Theorem - "More information gives better performance":

FI1 ⊆ FI2 ⇒ Rπ*_I1 ≤ Rπ*_I2.

This allows comparison of schemes without messy computation!

SLIDE 46

Optimal Association - Comparison of Information

Theorem: FI1 ⊆ FI2 ⇒ Rπ*_I1 ≤ Rπ*_I2.

Proof - Start from the definition (using the Proposition):

Rπ*_I2 = E[ sup_{i∈[1,T], j≥1} E[pi(SINR^{i,j}_0) | FI2] ].

SLIDE 47

Optimal Association - Comparison of Information

Proof (continued) - By the tower property, since FI1 ⊆ FI2:

Rπ*_I2 = E[ E[ sup_{i∈[1,T], j≥1} E[pi(SINR^{i,j}_0) | FI2] | FI1 ] ].

SLIDE 48

Optimal Association - Comparison of Information

Proof (continued) - The conditional expectation of a sup dominates the sup of the conditional expectations (Jensen's inequality for the convex sup functional):

E[ E[ sup_{i∈[1,T], j≥1} E[pi(SINR^{i,j}_0) | FI2] | FI1 ] ] ≥ E[ sup_{i∈[1,T], j≥1} E[ E[pi(SINR^{i,j}_0) | FI2] | FI1 ] ].

SLIDE 49

Optimal Association - Comparison of Information

Proof (continued) - Applying the tower property again (FI1 ⊆ FI2) collapses the iterated conditional expectation:

E[ sup_{i∈[1,T], j≥1} E[ E[pi(SINR^{i,j}_0) | FI2] | FI1 ] ] = E[ sup_{i∈[1,T], j≥1} E[pi(SINR^{i,j}_0) | FI1] ].

SLIDE 50

Optimal Association - Comparison of Information

Proof (summary) -

Rπ*_I2 = E[ sup_{i,j} E[pi(SINR^{i,j}_0) | FI2] ]              (definition)
       = E[ E[ sup_{i,j} E[pi(SINR^{i,j}_0) | FI2] | FI1 ] ]   (tower property)
       ≥ E[ sup_{i,j} E[ E[pi(SINR^{i,j}_0) | FI2] | FI1 ] ]   (Jensen's inequality)
       = E[ sup_{i,j} E[pi(SINR^{i,j}_0) | FI1] ]              (tower property)
       = Rπ*_I1.  ∎

SLIDE 51

Optimal Association - Associate to the Nearest BS

Lemma: If the information FI is independent of σ({H^i_j}_{j≥1, i∈[1,T]}), then for all i,

arg sup_{j≥1} E[pi(SINR^{i,j}_0) | FI] = 1.

SLIDE 52

Optimal Association - Associate to the Nearest BS

Lemma: If the information FI is independent of σ({H^i_j}_{j≥1, i∈[1,T]}), then for all i, arg sup_{j≥1} E[pi(SINR^{i,j}_0) | FI] = 1.

In particular, in the absence of fading information, the optimal strategy is to associate to the nearest BS of the best technology. (Intuitively, with fading independent of the information, serving from a nearer BS of the same technology can only improve the conditional SINR distribution, and pi is non-decreasing.)

SLIDE 53

Examples of Association Policies and Computations

Some examples of association policies:

  • Optimal Association under no information.
  • Optimal Association under nearest-BS information.
  • Max-Ratio Association.
  • Optimal Association under complete network information.
SLIDE 54

Max-Ratio Association

The Optimal Association scheme is great in theory, but it is a parametric algorithm - it depends on the distributions of the point processes and of the fading powers!

SLIDE 55

Max-Ratio Association

The Optimal Association scheme is great in theory, but it is parametric - it depends on the distributions of the point processes and of the fading powers. We therefore propose a "data-dependent" association scheme called Max-Ratio association.

SLIDE 56

Max-Ratio Association

We propose a "data-dependent" association scheme called Max-Ratio association:

πm = ( arg max_{i∈[1,T]} r^i_2 / r^i_1 , 1 )

  • Associate to the nearest BS of the technology yielding the maximum ratio of second-nearest to nearest BS distance. (Oblivious to the statistical assumptions on the network.)
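A minimal sketch of the Max-Ratio rule as stated above (the helper name and input format are ours):

```python
# Max-Ratio association: given, per technology i, the sorted distances
# (r_1^i, r_2^i) to its two nearest BSs, pick the technology whose nearest
# BS is relatively farthest from its nearest same-band interferer.
from typing import List, Tuple

def max_ratio_association(nearest_two: List[Tuple[float, float]]) -> Tuple[int, int]:
    """nearest_two: one (r1, r2) pair per technology.
    Returns (technology index, BS index), both 1-based; BS index 1 = nearest."""
    ratios = [r2 / r1 for (r1, r2) in nearest_two]
    i_star = max(range(len(ratios)), key=lambda i: ratios[i])
    return i_star + 1, 1

# Example: technology 1 has BSs at 1.0 and 1.2; technology 2 at 2.0 and 10.0.
print(max_ratio_association([(1.0, 1.2), (2.0, 10.0)]))  # -> (2, 1)
```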

SLIDE 57

Max-Ratio Association

πm = ( arg max_{i∈[1,T]} r^i_2 / r^i_1 , 1 )

  • Associate to the nearest BS of the technology yielding the maximum ratio.
  • Information used: FI = σ( ∪_{i=1}^T { r^i_2 / r^i_1 } ).

SLIDE 58

Max-Ratio Association

πm = ( arg max_{i∈[1,T]} r^i_2 / r^i_1 , 1 )

  • Associate to the nearest BS of the technology yielding the maximum ratio.
  • Information used: FI = σ( ∪_{i=1}^T { r^i_2 / r^i_1 } ).
  • This scheme trades off high signal power against interference power.

SLIDE 59

Max-Ratio Association - Asymptotic Optimality

Theorem - Let the noise powers be 0 and let the reward functions of all technologies be the same, i.e. pi(·) = p(·) for all i. Consider the family of power-law path-loss functions {l(α)(·)}_{α>2}, where l(α)(x) = x^{-α}. Let k be any integer ≥ 2. If the information at the UE is the distances to the k nearest BSs of each technology, i.e. FI = σ(∪_{i=1}^T (r^i_1, ···, r^i_k)), then Max-Ratio is asymptotically almost surely optimal, i.e.

π*_α → ( arg max_{i∈[1,T]} r^i_2 / r^i_1 , 1 )  almost surely, as α → ∞.
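A brief intuition for the theorem (our sketch under its assumptions, not the paper's proof): with zero noise and path loss x^{-α}, the interference of technology i is dominated by its nearest interferer as α → ∞, so

```latex
% Heuristic, keeping only the dominant interference term:
\[
\mathrm{SINR}^{i,1}_0
 \;=\; \frac{H^i_1\,(r^i_1)^{-\alpha}}{\sum_{k\ge 2} H^i_k\,(r^i_k)^{-\alpha}}
 \;\sim\; \frac{H^i_1}{H^i_2}\left(\frac{r^i_2}{r^i_1}\right)^{\!\alpha},
\]
```

and for large α the factor (r^i_2 / r^i_1)^α dominates any fixed fading ratio, so the technology with the largest ratio r^i_2 / r^i_1 eventually wins; this is exactly the Max-Ratio choice.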
SLIDE 60

Max-Ratio Association - Finite Scale Behavior

[Figure: Average Achievable Rate vs. Path-loss Exponent α (from 2.5 to 7), comparing Opt-Association (1 BS), Opt-Association (2 BS), Random BS Association, Nearest BS Association, and Max-Ratio Association.]

For α ≥ 5, Max-Ratio is nearly optimal.

SLIDE 61

Formulas for Performance - Generalized Association

Denote by ri the L-dimensional vector corresponding to the information about technology i.

SLIDE 62

Formulas for Performance - Generalized Association

Denote by ri the L-dimensional vector corresponding to the information about technology i.

Generalized Association - i* = arg max_{i∈[1,T]} πi(ri, λi), and ji(ri, λi) is the index of the BS to associate to if i* = i.

Denote by fi(·) the PDF of the random variable πi(ri, λi) and by Fi(·) the associated CDF.

SLIDE 63

Formulas for Performance - Generalized Association

As on the previous slide: i* = arg max_{i∈[1,T]} πi(ri, λi), with ji(ri, λi) the index of the BS to associate to if i* = i, and fi(·), Fi(·) the PDF and CDF of the random variable πi(ri, λi) (the randomness coming from ri).

SLIDE 64

Formulas for Performance - Generalized Association

As before: i* = arg max_{i∈[1,T]} πi(ri, λi), with ji(ri, λi) the BS index if i* = i, and fi(·), Fi(·) the PDF and CDF of πi(ri, λi).

Example - Both Max-Ratio and the Optimal Association fit into this performance-evaluation framework.

SLIDE 65

Formulas for Performance - Generalized Association

  • ri - the L-dimensional vector corresponding to the information on technology i.
  • fi(·), Fi(·) - the PDF and CDF of πi(ri, λi).
  • Generalized Association - i* = arg max_{i∈[1,T]} πi(ri, λi), with ji(ri, λi) the index of the BS to associate to if i* = i.

Theorem -

I = Σ_{i=1}^T ∫_{r∈R^L} E[ pi(SINR^{i, ji(r, λi)}_0) | r ] fi(r) · Π_{j=1, j≠i}^T Fj(πj(r, λj)) dr.
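The closed-form I above can be sanity-checked by simulation; a minimal Monte Carlo sketch (the Rayleigh fading law, the two intensities, the score πi(ri, λi) = r^i_2 / r^i_1 and the unit-bandwidth rate reward are all assumed choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lambdas, radius = 4.0, [1.0, 0.5], 10.0

def sorted_distances(lam):
    """Distances from the origin to the points of a PPP(lam) on a disk."""
    n = max(rng.poisson(lam * np.pi * radius**2), 2)  # keep >= 2 points
    return np.sort(radius * np.sqrt(rng.uniform(size=n)))

def sinr(r, j=0):
    """Zero-noise SINR from the (j+1)th nearest BS, Rayleigh fading H ~ Exp(1)."""
    rx = rng.exponential(size=r.size) * r ** (-alpha)
    return rx[j] / (rx.sum() - rx[j])

rewards = []
for _ in range(2000):
    rs = [sorted_distances(lam) for lam in lambdas]
    i = int(np.argmax([r[1] / r[0] for r in rs]))  # i* = arg max_i pi_i(r_i, lam_i)
    rewards.append(np.log2(1.0 + sinr(rs[i])))     # j_i = 1: the nearest BS
print("Monte Carlo estimate of I:", np.mean(rewards))
```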

SLIDE 66

Formulas for Performance - Generalized Association

Theorem -

I = Σ_{i=1}^T ∫_{r∈R^L} E[ pi(SINR^{i, ji(r, λi)}_0) | r ] fi(r) · Π_{j=1, j≠i}^T Fj(πj(r, λj)) dr.

Corollary - Closed-form formulas for Max-Ratio and certain optimal association schemes.

SLIDE 67

Conclusions

We provide a systematic framework to design and evaluate association schemes that exploit the available information and technology diversity.

SLIDE 68

Conclusions

  • A systematic framework to design and evaluate association schemes that exploit the available information and technology diversity.
  • The Max-Ratio Algorithm - a simple and almost-optimal algorithm for association.

SLIDE 69

Thank you for your time.