Performance Oriented Association in Cellular Networks with Technology Diversity
Abishek Sankararaman, Jeong-woo Cho, François Baccelli

Outline
- Motivation and Background
- Main mathematical model
- Summary of Results
- Conclusions
Background
A new model of cellular service: MVNOs (Mobile Virtual Network Operators). These operators pool together various wireless technologies (e.g. 2G, 3G, 4G LTE) to create a service.
A UE can dynamically choose between the various technologies, depending on whichever yields the higher instantaneous benefit.
What the Paper is About: Technology Diversity
Leveraging the presence and control of multiple wireless technologies operating on non-overlapping bandwidth.
- Google Fi: the different cellular operators and WiFi operate on separate bands.
- MVNOs: multiple orthogonal technologies (e.g. 2G, 3G and 4G LTE).
Technology Diversity: a framework to leverage and evaluate the benefits from the diversity of wireless technologies.
The Problem we Study: Base Station Association
Which Base Station/Access Point must a UE associate with? We seek a principled way to exploit the diversity in the network.
The "Optimal" BS Association is not an Obvious Choice
First guess: connect to the nearest BS, irrespective of the type of BS it is.
- The nearest BS offers the highest SNR; this is the basis for nearest-BS association.
- But the bandwidths are non-overlapping, and thus interference comes only from Base Stations of one type:

  SINR = Signal / (Interference + Noise)

- Thus, in the slide's example, the nearest BS has the higher SNR yet the lower SINR, so it is not the best choice.
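The effect can be checked with a toy numeric example (all numbers are illustrative, not from the paper): on non-overlapping bands, a technology whose server is closer but whose interferers are densely packed can win on SNR yet lose on SINR.

```python
# Toy example: technology A's nearest BS is closer (higher SNR), but A's
# band is crowded with nearby interferers, so B wins on SINR.

def path_loss(r, alpha=4.0):
    """Power-law path loss l(r) = r^-alpha."""
    return r ** -alpha

NOISE = 1e-6  # thermal noise power (arbitrary units)

# Distances from the typical user to the BSs of each technology, nearest first.
dist_A = [1.0, 1.1, 1.2]  # dense: interferers almost as close as the server
dist_B = [1.5, 6.0, 8.0]  # sparse: interferers far away

def snr(dists):
    return path_loss(dists[0]) / NOISE

def sinr(dists):
    interference = sum(path_loss(r) for r in dists[1:])
    return path_loss(dists[0]) / (interference + NOISE)

snr_A, snr_B = snr(dist_A), snr(dist_B)
sinr_A, sinr_B = sinr(dist_A), sinr(dist_B)

print(snr_A > snr_B)    # True: nearest-BS logic says A looks better
print(sinr_A < sinr_B)  # True: but on SINR, B is better
```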
Mathematical Framework: Network Model
- Consider a network comprised of T different technologies.
- The BSs/APs of technology i are distributed as an independent Poisson Point Process φ_i ⊂ R² with intensity λ_i.
- There is a typical user at the origin of the Euclidean plane who wishes to associate to a BS.
- Denote by X^i_j ∈ φ_i the j-th closest point of φ_i to the origin, and by r^i_j = ||X^i_j|| its distance.
- Palm theory connects the viewpoint of this single user to the average performance experienced by the users in the network.
- Example: T = 2, with φ_1 of intensity λ_1 and φ_2 of intensity λ_2.
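A minimal simulation sketch of this model (parameter values are my own): sample T independent homogeneous Poisson Point Processes on a disc around the typical user at the origin, and extract the ordered distances r^i_1 ≤ r^i_2 ≤ ….

```python
import numpy as np

def sample_ppp_distances(intensity, radius, rng):
    """Ordered distances to the points of a homogeneous PPP of the given
    intensity, sampled inside a disc of the given radius centred at the
    origin (where the typical user sits)."""
    n = rng.poisson(intensity * np.pi * radius ** 2)  # Poisson point count
    # For points uniform in the disc, r = radius*sqrt(U) with U ~ Unif(0,1).
    r = radius * np.sqrt(rng.uniform(size=n))
    return np.sort(r)

rng = np.random.default_rng(0)
lambdas = [1.0, 0.25]  # intensities of the two technologies (illustrative)
dists = [sample_ppp_distances(lam, radius=10.0, rng=rng) for lam in lambdas]

for i, r in enumerate(dists, start=1):
    print(f"technology {i}: r_1 = {r[0]:.3f}, r_2 = {r[1]:.3f}")
```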
Mathematical Framework: Signal Model
- A BS of technology i transmits at power P_i.
- The signal from a BS of technology i is attenuated with distance as given by the path-loss function l_i(·) : R+ → R+.
- H^i_j is the independent fading from the j-th nearest BS of technology i to the typical user.
- Signal power received from the j-th nearest BS of technology i: P_i H^i_j l_i(r^i_j).
Performance Metrics
- Non-overlapping bandwidths ⇒ interference from only one technology.
- SINR^{i,j}_0: the SINR of the typical UE when it associates to the j-th nearest BS of technology i.
- For each technology i, denote by p_i(·) : R+ → R+ a bounded, non-decreasing reward function.
- If the typical UE connects to the j-th nearest BS of technology i, it receives a reward of p_i(SINR^{i,j}_0).
- Example (T = 2, with φ_1 of intensity λ_1 and φ_2 of intensity λ_2): if the UE associates to X^2_2, the reward it receives is p_2(SINR^{2,2}_0).
Examples of common reward functions:
- Coverage: p_i(x) = 1(x ≥ β_i)
- Average achievable rate: p_i(x) = B_i log2(1 + x)
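The two reward functions above are straightforward to code; the threshold β_i and bandwidth B_i values below are illustrative, not from the paper.

```python
import math

def coverage_reward(sinr, beta):
    """Coverage: p_i(x) = 1(x >= beta_i)."""
    return 1.0 if sinr >= beta else 0.0

def rate_reward(sinr, bandwidth):
    """Average achievable rate: p_i(x) = B_i * log2(1 + x)."""
    return bandwidth * math.log2(1.0 + sinr)

sinr = 3.0  # example SINR value
print(coverage_reward(sinr, beta=1.0))    # 1.0: covered
print(rate_reward(sinr, bandwidth=10.0))  # 10 * log2(4) = 20.0
```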
Information at the UE
Goal: design association schemes that exploit the network "information" available at the UE and maximize the expected reward of a typical UE.
Examples of information that a UE can know:
- The nearest BS of all technologies.
- The k nearest BSs of all technologies.
- The instant fading and the distances to the k nearest BSs.
- A noisy estimate of the instant fading from the k nearest BSs of each technology.
How Information affects Optimal Association
- When averaged across the fading, one of the two candidate BSs in the slide's example has the lower SINR.
- But that BS may experience a sudden very good signal, which the UE can sense.
- In this case, the UE should associate to that BS instead: the available information changes the optimal association.
Information at the UE: Formalization
The notion of information at the typical UE is formalized through sub-sigma-algebras (filtrations) of a sigma-algebra.
- (Ω, F, P): the probability space containing the independent random variables {φ_i}^T_{i=1} and {H^i_j}_{i∈[1,T], j∈N}.
- F_I ⊆ F: the information sigma-algebra.
- π : Ω → [1,T] × N: an F_I-measurable function denoting the association scheme.
Information at the UE: Examples
Examples of information that a UE can know, with the corresponding information sigma-algebras:
- The nearest BS of all technologies: F_I = σ({r^i_1}^T_{i=1}).
- The k nearest BSs of all technologies: F_I = σ({r^i_j}_{i∈[1,T], j∈[1,k]}).
- The instant fading and the distances to the k nearest BSs: F_I = σ({r^i_j}_{i∈[1,T], j∈[1,k]}, {H^i_j}_{i∈[1,T], j∈[1,k]}).
- A noisy estimate of the instant fading from the k nearest BSs of each technology: F_I = σ(σ({r^i_j}_{i∈[1,T], j∈[1,k]}) ∪ F'), where F' ⊆ σ({H^i_j}_{i∈[1,T], j∈[1,k]}).
Performance Metric for Association
Recall:
- (Ω, F, P): the probability space containing the independent random variables {φ_i}^T_{i=1} and {H^i_j}_{i∈[1,T], j∈N}.
- F_I ⊆ F: the information sigma-algebra.
- π : Ω → [1,T] × N: an F_I-measurable function denoting the association scheme.
- p_i(·) : R+ → R+: the reward function of technology i.
The performance of a policy π is then R_π = E[p_{π(0)}(SINR^π_0)].
Optimal Association
For any association scheme π, the expected reward is R_π = E[p_{π(0)}(SINR^π_0)].
The optimal association is then the one that maximizes the expected reward, i.e.
π*_I = arg sup_π R_π   (sup over all F_I-measurable functions π : Ω → [1,T] × N).
Proposition: π*_I = arg sup_{i∈[1,T], j∈N} E[p_i(SINR^{i,j}_0) | F_I]   a.s.
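The proposition suggests a direct computational sketch: estimate each conditional expected reward E[p_i(SINR^{i,j}_0) | F_I] by Monte Carlo and pick the maximizing pair (i, j). The assumptions below are mine, not the paper's: Rayleigh fading H ~ Exp(1), path loss r^{-α}, the coverage reward, and all numeric values.

```python
import numpy as np

def estimate_conditional_reward(dists, j, beta=1.0, alpha=4.0,
                                noise=1e-3, n_mc=2000, rng=None):
    """Monte Carlo estimate of E[1(SINR_j >= beta) | dists] for one
    technology, given the ordered distances of its k nearest BSs and
    averaging over i.i.d. Exp(1) fading."""
    rng = rng or np.random.default_rng()
    pl = np.asarray(dists, dtype=float) ** -alpha  # path-loss values
    H = rng.exponential(size=(n_mc, len(dists)))   # i.i.d. Rayleigh-fading powers
    power = H * pl                                 # received powers
    interference = power.sum(axis=1) - power[:, j] # everyone else on the band
    sinr = power[:, j] / (interference + noise)
    return float((sinr >= beta).mean())

rng = np.random.default_rng(1)
dists_per_tech = [[1.0, 1.1, 1.3], [1.6, 5.0, 7.0]]  # k = 3 distances each
best = max(
    ((i, j, estimate_conditional_reward(d, j, rng=rng))
     for i, d in enumerate(dists_per_tech) for j in range(len(d))),
    key=lambda t: t[2],
)
print("associate to technology", best[0], "BS index", best[1])
```

With these distances the sparse technology's nearest BS wins: its interferers are far, so its conditional coverage probability is near 1.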
Optimal Association: Comparison of Information
- The optimal association under information F_I is given (a.s.) by π*_I = arg sup_{i∈[1,T], j∈N} E[p_i(SINR^{i,j}_0) | F_I].
- Denote the performance of the optimal association by R_{π*_I} = E[p_{π*(0)}(SINR^{π*}_0)].
Theorem ("more information gives better performance"): F_{I1} ⊆ F_{I2} ⇒ R_{π*_{I1}} ≤ R_{π*_{I2}}.
This allows schemes to be compared without messy computation!
Proof -
Rπ∗
I2 = E
" sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
#
Optimal Association - Comparison of Information
Theorem - “More information gives better performance”
FI1 ⊆ FI2 = ⇒ Rπ∗
I1 ≤ Rπ∗ I2
Proof -
Rπ∗
I2 = E
" sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
#
= E " E " sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
- FI1
##
Tower Property
Optimal Association - Comparison of Information
Theorem - “More information gives better performance”
FI1 ⊆ FI2 = ⇒ Rπ∗
I1 ≤ Rπ∗ I2
Proof -
Rπ∗
I2 = E
" sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
#
= E " E " sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
- FI1
## ≥ E " sup
i∈[1,T ],j≥1
E E[pi(SINRi,j
0 )|FI2]
- FI1
#
Tower Property Jensen’s Inequality
Optimal Association - Comparison of Information
Theorem - “More information gives better performance”
FI1 ⊆ FI2 = ⇒ Rπ∗
I1 ≤ Rπ∗ I2
Proof -
Rπ∗
I2 = E
" sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
#
= E " E " sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
- FI1
## ≥ E " sup
i∈[1,T ],j≥1
E E[pi(SINRi,j
0 )|FI2]
- FI1
# = E " sup
i∈[1,T ],j≥1
E h pi(SINRi,j
0 )|FI1
i#
Tower Property Jensen’s Inequality Tower Property
Optimal Association - Comparison of Information
Theorem - “More information gives better performance”
FI1 ⊆ FI2 = ⇒ Rπ∗
I1 ≤ Rπ∗ I2
Proof -
Rπ∗
I2 = E
" sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
#
= E " E " sup
i∈[1,T ],j≥1
E[pi(SINRi,j
0 )|FI2]
- FI1
## ≥ E " sup
i∈[1,T ],j≥1
E E[pi(SINRi,j
0 )|FI2]
- FI1
# = E " sup
i∈[1,T ],j≥1
E h pi(SINRi,j
0 )|FI1
i#
= Rπ∗
I1
Tower Property Jensen’s Inequality Tower Property
Optimal Association - Comparison of Information
Optimal Association: Associate to the Nearest BS
Lemma: If the information F_I is independent of σ({H^i_j}_{j≥1, i∈[1,T]}), then
arg sup_{j≥1} E[p_i(SINR^{i,j}_0) | F_I] = 1   for all i.
In particular, in the absence of fading information, the optimal strategy is to associate to the nearest BS of the best technology.
Examples of Association Policies and Computations
Some examples of association policies:
- Optimal association under no information.
- Optimal association under nearest-BS information.
- Max-Ratio association.
- Optimal association under complete network information.
Max-Ratio Association
The optimal association scheme is great in theory, but it is a parametric algorithm: it depends on the distributions of the point processes and of the fading powers!
We propose a "data-dependent" association scheme called Max-Ratio association:
π_m = arg max_{i∈[1,T]} r^i_2 / r^i_1
- Associate to the nearest BS of the technology yielding the maximum ratio. (Oblivious to the statistical assumptions on the network.)
- Information: F_I = σ( {r^i_2 / r^i_1}^T_{i=1} ).
- This scheme trades off high signal power (a relatively small r^i_1) against interference power (a relatively large r^i_2).
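The rule is a one-liner; a sketch follows, using the ratio r^i_2 / r^i_1 as reconstructed above (the extraction is ambiguous about the fraction's orientation) and illustrative distances.

```python
def max_ratio_association(distances):
    """distances[i] = ordered distances (r_1, r_2, ...) for technology i.
    Returns the index of the technology whose nearest BS the UE joins:
    the one maximizing r_2 / r_1 (server relatively close, nearest
    interferer relatively far)."""
    return max(range(len(distances)),
               key=lambda i: distances[i][1] / distances[i][0])

# Technology 0 is dense (interferer almost as close as the server);
# technology 1 is sparse, so its ratio is larger and it is chosen.
dists = [(1.0, 1.1), (1.5, 6.0)]
print(max_ratio_association(dists))  # 1
```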
Max-Ratio Association: Asymptotic Optimality
Theorem: Let the noise powers be 0 and let the reward functions of all technologies be the same, i.e. p_i(·) = p(·) for all i. Consider the family of power-law path-loss functions {l^(α)(·)}_{α>2}, where l^(α)(x) = x^{-α}. Let k be any integer larger than or equal to 2. If the information at the UE is the distances to the k nearest BSs of each technology, i.e. F_I = σ(∪^T_{i=1} (r^i_1, · · · , r^i_k)), then Max-Ratio is asymptotically almost surely optimal, i.e.
π*_α → arg max_{i∈[1,T]} r^i_2 / r^i_1   a.s., as α → ∞.
Max-Ratio Association: Finite Scale Behavior
[Figure: average achievable rate vs. path-loss exponent α (from 2.5 to 7), comparing Opt-Association (1 BS), Opt-Association (2 BS), Random BS Association, Nearest BS Association and Max-Ratio Association.]
For α ≥ 5, Max-Ratio is nearly optimal.
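A comparison in this spirit can be sketched by Monte Carlo (this does not reproduce the paper's figure; the fading model, coverage reward, intensities and trial counts are all my own assumptions):

```python
import numpy as np

def run(alpha=5.0, beta=1.0, lambdas=(1.0, 1.0), radius=15.0,
        n_trials=500, seed=0):
    """Empirical coverage probability 1(SINR >= beta) under nearest-BS and
    Max-Ratio association, for T = 2 technologies, Rayleigh fading,
    path loss r^-alpha and zero noise."""
    rng = np.random.default_rng(seed)
    cov = {"nearest": 0.0, "max_ratio": 0.0}
    for _ in range(n_trials):
        # Ordered BS distances of each technology around the typical user.
        dists = []
        for lam in lambdas:
            n = max(rng.poisson(lam * np.pi * radius ** 2), 2)
            dists.append(np.sort(radius * np.sqrt(rng.uniform(size=n))))
        def sinr(i):  # SINR at the nearest BS of technology i
            pl = dists[i] ** -alpha
            p = rng.exponential(size=pl.size) * pl  # Exp(1) fading
            return p[0] / p[1:].sum()               # zero noise
        s = [sinr(i) for i in range(len(dists))]
        near = min(range(len(dists)), key=lambda i: dists[i][0])
        ratio = max(range(len(dists)), key=lambda i: dists[i][1] / dists[i][0])
        cov["nearest"] += s[near] >= beta
        cov["max_ratio"] += s[ratio] >= beta
    return {k: v / n_trials for k, v in cov.items()}

print(run())
```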
Formulas for Performance: Generalized Association
- r_i: the L-dimensional vector corresponding to the information about technology i.
- Generalized association: i* = arg max_{i∈[1,T]} π_i(r_i, λ_i), and j_i(r_i, λ_i) is the index of the BS to associate to if i* = i.
- f_i(·): the PDF of the random variable π_i(r_i, λ_i); F_i(·): the associated CDF.
- Example: both Max-Ratio and the optimal association fit into this performance-evaluation framework.
Theorem:
R^π_I = Σ^T_{i=1} ∫_{r∈R^L} E[p_i(SINR^{i,j_i(r,λ_i)}_0) | r] f_i(r) Π^T_{j=1, j≠i} F_j(π_j(r, λ_j)) dr
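The decision rule in this framework can be sketched generically: each technology i reports a score π_i(r_i) and a BS index j_i(r_i), and the UE picks the technology with the largest score. The concrete score and index functions below (a Max-Ratio-style instance) are illustrative assumptions, not the paper's.

```python
def generalized_association(info, scores, indices):
    """info[i]    : the L-dimensional information vector r_i of technology i
    scores[i]  : the function pi_i(r_i) used to rank technologies
    indices[i] : the function j_i(r_i) giving the BS index if i is chosen
    Returns (i*, j_{i*}(r_{i*}))."""
    i_star = max(range(len(info)), key=lambda i: scores[i](info[i]))
    return i_star, indices[i_star](info[i_star])

# Max-Ratio as an instance: score = r_2 / r_1, index = always the nearest BS.
score = lambda r: r[1] / r[0]
index = lambda r: 0
info = [(1.0, 1.1), (1.5, 6.0)]
print(generalized_association(info, [score, score], [index, index]))  # (1, 0)
```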