Competition and Collaboration in Wireless Networks - PowerPoint PPT Presentation



SLIDE 1

Competition and Collaboration in Wireless Networks

Vince Poor (poor@princeton.edu)

ISIT - Nice, June 27, 2007

Competition & Collaboration in Wireless Networks

SLIDE 2

Research Trends in Wireless Nets

  • The Past Two Decades: Key Developments at the Link Level

– MIMO
– MUD
– Turbo

  • Today: An Increased Focus on Interactions Among Nodes

– Competition

  • Cognitive radio
  • Information theoretic security
  • Game theoretic modeling, analysis & design ←

– Collaboration

  • Network coding
  • Cooperative transmission & relaying
  • Multi-hop transmission & coalition games
  • Collaborative beam-forming
  • Collaborative inference ←


SLIDE 3

Today's Talk - Two Parts

  • Energy Games: Competition in Multiple-Access Communication Networks

  • Distributed Inference: Collaboration in Wireless Sensor Networks (WSNs)


SLIDE 4

ENERGY GAMES: COMPETITION IN THE MAC


[Joint work with Farhad Meshkati, Stuart Schwartz, et al.]

SLIDE 5

Energy Games

  • Terminals transmit to an access point (AP) via a multiple-access channel.
  • Users are like players in a game, competing for resources to transmit their data to the AP.
  • The action of each user affects the others.
  • We can model this as a competitive game, with payoff measured in bits-per-joule.

[Figure: terminals T transmitting to an access point AP]

Competition in MA Communication Networks

SLIDE 6

Game Theoretic Framework

u_k = utility = throughput / transmit power = T_k / p_k   (bits per joule)

  • T_k = R_k f(γ_k), where f(γ_k) is the frame success rate, and γ_k is the received SINR of user k.

Game: G = [{1,…,K}, {A_k}, {u_k}]

– K: total number of terminals
– A_k: set of strategies for terminal k
– u_k: utility function for terminal k


[Goodman, Mandayam, Yates, et al.]
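To make the payoff concrete, here is a minimal sketch in Python. The slide leaves f unspecified; the sigmoidal choice f(γ) = (1 − e^{−γ})^M for an M-bit packet is a common assumption from the power-control-games literature, not something stated on the slide:

```python
import math

def frame_success_rate(gamma, M=80):
    """A common sigmoidal choice, f(gamma) = (1 - e^{-gamma})^M for an
    M-bit packet (assumed here; the slide leaves f unspecified)."""
    return (1.0 - math.exp(-gamma)) ** M

def utility(rate_bps, power_watts, gamma, M=80):
    """u_k = T_k / p_k = R_k f(gamma_k) / p_k, in bits per joule."""
    return rate_bps * frame_success_rate(gamma, M) / power_watts

# e.g., 100 kb/s at 10 mW with a received SINR of 10:
u = utility(1e5, 1e-2, 10.0)
```

Note how the units work out: bits/s divided by watts (joules/s) leaves bits per joule.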

SLIDE 7

An Uplink Game

  • For a fixed linear MUD at the uplink receiver, each user selects its transmit power to maximize its own utility.

  • Th'm [w/ Mandayam, T-COM 05]: f sigmoidal ⇒ Nash equilibrium (i.e., no user can unilaterally improve its utility) is reached when each user chooses a transmit power that achieves γ*:

f(γ*) = γ* f′(γ*)

  • I.e., Nash equilibrium (NE) requires SINR balancing.
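The fixed-point condition f(γ*) = γ* f′(γ*) is easy to solve numerically. A sketch, again assuming the sigmoidal f(γ) = (1 − e^{−γ})^M with M = 80 (illustrative; any sigmoidal f works the same way):

```python
import math

M = 80  # packet length in bits; f below is an assumed sigmoidal choice

def f(g):
    """Frame success rate f(gamma) = (1 - e^{-gamma})^M."""
    return (1.0 - math.exp(-g)) ** M

def fprime(g):
    return M * math.exp(-g) * (1.0 - math.exp(-g)) ** (M - 1)

def target_sinr(lo=1e-3, hi=50.0, tol=1e-10):
    """Bisect h(g) = f(g) - g f'(g); its root gamma* is the NE target SINR."""
    h = lambda g: f(g) - g * fprime(g)
    # h < 0 for small gamma, h > 0 for large gamma (f sigmoidal)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For this particular f, the target SINR comes out a little above 6 (roughly 8 dB).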


SLIDE 8

Remarks

  • The NE is unique, and can be reached iteratively as the unique fixed point of a nonlinear map.

  • The NE as an analytical tool:

– We can use the NE to examine the effects of various network design choices on energy efficiency.
– E.g., we can compare receiver choices: the matched filter, the (zero-forcing) decorrelator, and the MMSE detector.
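A toy illustration of the iterative fixed-point view: each user repeatedly best-responds by choosing the smallest power that hits γ* given the others' current powers. The 4-user matched-filter setup, the crosscorrelation ρ, and all numbers below are hypothetical, chosen only so that the iteration is feasible and converges:

```python
# Hypothetical 4-user uplink, matched-filter reception; numbers illustrative.
K = 4                 # users
sigma2 = 1e-2         # receiver noise power
rho = 0.04            # crosscorrelation between signatures (assumed)
gamma_star = 6.2      # target SINR, e.g. the root of f(g) = g f'(g)
h = [1.0, 0.5, 0.8, 0.3]   # channel gains (assumed)
p = [1.0] * K              # initial transmit powers

def sinr(k):
    interference = sigma2 + rho * sum(p[j] * h[j] for j in range(K) if j != k)
    return p[k] * h[k] / interference

for _ in range(200):       # best-response dynamics
    for k in range(K):
        interference = sigma2 + rho * sum(p[j] * h[j] for j in range(K) if j != k)
        p[k] = gamma_star * interference / h[k]   # hit gamma_star exactly

# at the unique fixed point, every user is SINR-balanced at gamma_star
```

The update is a standard interference-function iteration; it converges here because γ*ρ(K−1) < 1 keeps the power levels feasible.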


SLIDE 9

Flat SIMO Model

[Figure: K users (User 1: 010…, User 2: 110…, User K: 011…) transmit over channel gains {h_k,p} to P receive antennas with outputs r_1(t), …, r_P(t)]

SLIDE 10
Nash Equilibrium Utility vs. Load (Large-System Limit)

  • Random CDMA: K terminals; spreading gain N
  • Load: α = K/N (i.e., the number of users per dimension)
  • Large-system limit: K, N → ∞, with α fixed.

[Figure: utility at NE vs. load α; m = number of receive antennas]
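Large-system curves of this kind typically rest on the Tse–Hanly fixed point for the MMSE receiver with equal received powers, β = P / (σ² + αP/(1+β)), where β is the limiting SIR. A quick numerical check with illustrative parameters (these are not the values behind the slide's plot):

```python
# Tse-Hanly fixed point for the large-system MMSE SIR, equal received powers:
#   beta = P / (sigma2 + alpha * P / (1 + beta))
P = 1.0        # common received power
sigma2 = 0.1   # noise variance
alpha = 0.5    # load K/N
beta = 1.0     # initial guess
for _ in range(1000):   # simple fixed-point iteration (a contraction here)
    beta = P / (sigma2 + alpha * P / (1.0 + beta))
```

For these numbers the fixed-point equation reduces to β² − 4β − 10 = 0, so the iteration should settle at β = 2 + √14.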

SLIDE 11

Social Optimality

  • The Pareto (or socially) optimal solution chooses the transmit powers so that no user's utility can be improved without decreasing that of another.

  • The Pareto solution is generally hard to find.

  • The Nash equilibrium solution is not generally Pareto optimal.

  • But, it's close.


SLIDE 12

Example: Nash & Pareto Optima

[Figure: Utility vs. Load at the Nash and Pareto solutions]

SLIDE 13

Effects of Delay QoS

  • For some traffic, delay is a key element of service quality.
  • Delay model (ARQ):

– X represents the number of transmissions needed for a given packet to be received without error, so that:

P(X = m) = f(γ) [1 − f(γ)]^(m−1),  m = 1, 2, …

– We can represent a delay requirement as a pair (D, β):

P(X ≤ D) ≥ β ⇔ γ ≥ γ′

– Thus, we have a constrained game, with γ_k ≥ γ_k′.
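Since X is geometric with success probability f(γ), the delay requirement P(X ≤ D) ≥ β unwinds to 1 − (1 − f(γ))^D ≥ β, which can be inverted for the SINR floor γ′. A sketch, assuming (as before, not from the slide) f(γ) = (1 − e^{−γ})^M:

```python
import math

M = 80  # packet size in bits; f(g) = (1 - e^{-g})^M is an assumed choice

def gamma_prime(D, beta):
    """Smallest SINR such that P(X <= D) >= beta, where X is geometric
    with success probability f(gamma): 1 - (1 - f)^D >= beta."""
    f_min = 1.0 - (1.0 - beta) ** (1.0 / D)     # required frame success rate
    return -math.log(1.0 - f_min ** (1.0 / M))  # invert f

# The two classes of the next example:
gA = gamma_prime(1, 0.99)   # Class A: (D, beta) = (1, 0.99)
gB = gamma_prime(3, 0.90)   # Class B: (D, beta) = (3, 0.90)
```

As expected, the tighter class A requirement demands a substantially higher SINR floor than class B.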

[w/ FM & SC, submitted to Trans. IT]

SLIDE 14

NE for Multiple Delay Classes

  • Traffic is typically heterogeneous, with multiple delay classes.
  • A given delay class c will have its own SINR constraint: γ_c′.
  • At NE, all users in class c will SINR-balance to max{γ*, γ_c′}.
  • Tight delay constraints on one class can affect the energy efficiencies of all traffic, due to increased interference levels.

[Figure: utility u vs. SINR γ, marking γ* and γ′]
SLIDE 15

2-Class Example: Utility Loss

  • RCDMA in the large-system limit: K, N → ∞, with α = K/N fixed
  • Class A: (D_A, β_A) = (1, 0.99)
  • Class B: (D_B, β_B) = (3, 0.90)

[Figure: utility loss for loads α = 0.1 and α = 0.9]


SLIDE 16
Finite Backlog Case

  • Poisson packet arrivals
  • FIFO'ed packets transmitted via ARQ
  • QoS: (source rate, ave. delay)
  • Translates into a lower bound on SINR
  • Constrained Nash game (on transmit power & rate)
  • Leads to a "size" for each user, quantifying the resources required to deliver its QoS.
  • An NE exists only when the sum of the users' "sizes" is < 1.

[w/ R. Balan, T-COM, to appear.]

SLIDE 17

Utility vs. Delay

Utility is normalized by B × SNR, and the normalized delay is D × B. The combined "size" of the other users is 0.2.

[Figure: normalized utility vs. normalized delay]

SLIDE 18

Enhancements

  • Nonlinear MUD (ML, MAP, PIC, etc.): Results apply to nonlinear MUD for RCDMA in the large-system limit. [w/ D. Guo, FM & SC; T-WC, to appear]
  • Multicarrier CDMA: Actions include choice of a carrier. [w/ M. Chiang, FM & SC; JSAC'06]
  • UWB: Rich scatter. [w/ G. Bacci, M. Luise & A. Tulino; JSTSP, to appear]
  • Adaptive Modulation: Actions include choice of a modulation index [w/ A. Goldsmith, FM & SC; JSAC'07] or waveform [w/ S. Buzzi; EWC'07].


SLIDE 19

COLLABORATIVE INFERENCE IN WSNs


[Joint work with Sanj Kulkarni, Joel Predd, et al.]

SLIDE 20

Sensor Field

Collaborative Inference

SLIDE 21
Motivation

  • Salient features of WSNs:

– The primary application is inference
– Information at different terminals is often correlated
– Energy is often severely limited

  • Collaborative inference:

– Sensors work together to make inferences, while conserving resources (i.e., "bandwidth & batteries")
– Here, we'll examine collaborative learning

SLIDE 22
Classical (Supervised) Learning

  • Input space X = R^d; output space Y = R
  • (X, Y) is an X × Y-valued r.v. with (X, Y) ~ P_XY
  • Design f: X → Y to predict outputs from inputs and minimize expected loss; e.g., E{|f(X) − Y|²}
  • P_XY is unknown
  • So, construct f from examples: […]


SLIDE 23
A Model for Dist'd Learning in WSNs

  • Sensor i measures […].

[Figure: sensor field S1–S11 partitioned into neighborhoods]

  • This division defines a topology, which in turn shapes the nature of collaboration.

"A distributed sampling device with a wireless interface"

SLIDE 24
A Centralized Approach

  • Sensor i sends […] to a centralized processor.
  • "Learn" using (reproducing) kernel methods:

– For a positive semi-definite kernel K(·,·): […]

  • Assumption: energy and bandwidth constraints preclude the sensors from sending […] for centralized processing.
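A minimal sketch of the centralized kernel approach: kernel ridge regression with a Gaussian kernel. The slides do not fix the loss or regularizer; squared loss, an ℓ2 penalty, and all data and parameters below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy examples (x_i, y_i) from an "unknown" P_XY: y = sin(pi x) + noise
n = 50
X = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(n)

def K(a, b, h=0.3):
    """Gaussian (RBF) kernel matrix between point sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * h**2))

# kernel ridge regression: f(.) = sum_i c_i K(., x_i),
# with c = (K + lam * n * I)^{-1} y  (representer theorem)
lam = 1e-3
c = np.linalg.solve(K(X, X) + lam * n * np.eye(n), y)

def f(x):
    return K(np.atleast_1d(np.asarray(x, dtype=float)), X) @ c
```

By the representer theorem the minimizer lives in the span of the kernel sections at the training points, which is why a single n × n linear solve suffices.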


SLIDE 25

The Seed of a Model …

  • Sensor i measures […].
  • Informal justification: local communication is efficient.

[Figure: sensor field S1–S11 with neighborhoods]

Assumption: Sensor i can access all neighboring sensors' measured data.

SLIDE 26
A General Model

  • m learning agents (i.e., sensors)
  • n training examples

SLIDE 27

Example: Centralized Learning

SLIDE 28

Example: Spatio-Temporal Field Estimation

SLIDE 29

Example: A Public Database

SLIDE 30
The General Case

  • m learning agents (i.e., sensors)
  • n training examples

SLIDE 31

A Natural Approach: "Local" Learning

SLIDE 32

Local Learning is "Locally Incoherent"

Local incoherence: sensor 1 and sensor m both train with […] but […].

SLIDE 33
Collaboration?

  • "Local learning" requires only local communication.
  • However, it leads to local incoherence, which is (provably) "undesirable".
  • Can agents (i.e., sensors) collaborate to gain the "optimality" of coherence, while retaining the efficiency of locality?

SLIDE 34

A Collaborative Training Algorithm: Intuition

  • Use local learning as a building block.

Iterate over sensors s = 1, …, m:
  sensor s
    computes […] using local data
    updates labels of local data: […]
end

Collaborative Regression

SLIDE 35

A Collaborative Training Algorithm: Intuition (cont'd)

  • Need multiple passes + an inertia term

Initialize: […]
for t = 1, …, T:
  iterate over sensors s = 1, …, m:
    sensor s
      computes […] using local data
      updates labels of local data: […]

SLIDE 36
A Collaborative Algorithm

  • To initialize, the sensors:

– agree on a kernel K(·,·)
– localize (i.e., estimate x_i)
– share positions with neighbors
– measure the field locally (i.e., observe y_i)
– set z_i = y_i

  • To estimate the field:

for t = 1, …, T:
  for s = 1, …, N:
    Query: sensor s queries z_i from its neighbors
    Compute: […]
    Update: updates neighbors' z_i = f_s,t(x_i)

[w/ SK & JP, ITW06, Uruguay & submitted to IT]
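The loop above can be simulated end-to-end. Here is a sketch in Python of the plain scheme without the inertia term: a sinusoidal field, a Gaussian kernel, and neighbors within radius r, loosely mirroring the experiments later in the talk. The field, kernel bandwidth, noise level, and all other parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# setup loosely mirrors the experiments slide (Case 2): sinusoidal field,
# Gaussian kernel, neighbors within radius r; noise scaled down for clarity
n, r, lam, T = 50, 0.3, 1e-3, 2
x = np.sort(rng.uniform(-1.0, 1.0, n))
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)   # y_i = f(x_i) + n_i
z = y.copy()                                           # initialize z_i = y_i

def K(a, b, h=0.2):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * h**2))

# neighborhood graph: i and j are neighbors iff |x_i - x_j| < r
nbrs = [np.flatnonzero(np.abs(x - x[s]) < r) for s in range(n)]

for t in range(T):                  # multiple passes
    for s in range(n):              # sweep over sensors
        i = nbrs[s]                 # query neighbors' current labels z_i
        G = K(x[i], x[i])
        c = np.linalg.solve(G + lam * len(i) * np.eye(len(i)), z[i])
        z[i] = G @ c                # update neighbors with f_{s,t}(x_i)
```

Each sensor fits a local kernel ridge estimate to its neighbors' working labels and writes the fitted values back, so information diffuses across overlapping neighborhoods from sweep to sweep.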

SLIDE 37

A Collaborative Algorithm (Cont'd)

SLIDE 38

Properties of the Algorithm

1. […] converges (in norm) to a relaxation of the centralized estimate as T → ∞.
2. […] is locally coherent and satisfies […].
3. […] improves with every update.
4. Under a connectivity assumption, with i.i.d. exemplars and appropriate behavior of the λ's, the local estimates converge (in RKHS norm) to the conditional mean as n → ∞.

SLIDE 39
Experiments

  • n = 50 sensors uniformly distributed on [−1, 1]
  • Sensor i observes y_i = f(x_i) + n_i

– {n_i} is i.i.d. standard normal
– the regression function f is linear (Case 1) or sinusoidal (Case 2)

  • Sensors i and j are neighbors iff |x_i − x_j| < r
  • Sensors employ a linear (Case 1) or Gaussian (Case 2) kernel

[Figure: fitted estimates for Case 1 and Case 2]

SLIDE 40

How Does Collaboration Affect Globalization Error?

[Figure: MSE vs. connectivity, for Case 1 and Case 2]

SLIDE 41
Energy Efficiency

  • Overall error decreases with the size of the neighborhoods.
  • But, the energy consumed by message-passing increases with neighborhood size.
  • Question: What are the trade-offs?

[Figure: MSE vs. connectivity]

SLIDE 42

Mean-Square Error vs. n

[Figure: MSE vs. n (number of sensors), with r_N = n^α for α = {.30, .35, .40, .45}; curves for PKP, Centralized, and Local Averaging]

SLIDE 43

Energy-per-Sensor vs. n

[Figure: total energy/n vs. n (number of sensors), with r_N = n^α for α = {.30, .35, .40, .45}; curves for PKP, Centralized, and Local Averaging]

SLIDE 44

Related Results

  • Consistency w. Limited Capacity [w/ SK & JP, IT 06]

[Figure: sensors S observing {X_i, Y_i} and each sending one bit or abstaining ({0,1}/A) to an access point]

  • Collaborative Beamforming [w/ Mitran, Ochiai & Tarokh, SP 05]

[Figure: beampattern power (dB) vs. observation angle φ (deg) for N = 16, R/λ = 2, showing the average and one realization; the maximum sidelobe peak can be much higher than its average]

  • Judgment Aggregation [w/ Osherson, SK & JP, preprint]

SLIDE 45

Summary

  • Energy Games: Characterization of Energy Efficiency via Nash Equilibrium
  • Distributed Inference: Collaboration via Message Passing

SLIDE 46

Thank You!