Information Capacity of the BSC Permutation Channel (Anuran Makur)


SLIDE 1

Information Capacity of the BSC Permutation Channel

Anuran Makur

EECS Department, Massachusetts Institute of Technology

Allerton Conference 2018

A. Makur (MIT) | Capacity of BSC Permutation Channel | 5 October 2018 | 1 / 21

SLIDE 2

Outline

1. Introduction
   - Motivation: Coding for Communication Networks
   - The Permutation Channel Model
   - Capacity of the BSC Permutation Channel
2. Achievability
3. Converse
4. Conclusion

SLIDE 3

Motivation: Point-to-point Communication in Networks

[Figure: SENDER → NETWORK → RECEIVER]

Model the communication network as a channel:
  • Alphabet symbols = all possible L-bit packets ⇒ $2^L$ input symbols
  • Multipath routed network or evolving network topology ⇒ packets received with transpositions
  • Packets are impaired (e.g. deletions, substitutions) ⇒ model using channel probabilities

SLIDE 4

Example: Coding for Random Deletion Network

Consider a communication network where packets can be dropped:

[Figure: SENDER → RANDOM DELETION → RANDOM PERMUTATION → RECEIVER]

  • Abstraction: n-length codeword = sequence of n packets
  • Random deletion channel: delete each symbol/packet of the codeword independently with probability p ∈ (0, 1); equivalently, an erasure channel that erases each packet independently with probability p
  • Random permutation block: randomly permute the packets of the codeword
  • Coding: add sequence numbers (packet size = L + log(n) bits, alphabet size = $n \cdot 2^L$) and use standard coding techniques
  • More refined coding techniques simulate sequence numbers, e.g. [Mitzenmacher 2006], [Metzner 2009]
  • How do you code in such channels without increasing the alphabet size?
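The sequence-number trick on this slide can be sketched as a quick simulation (illustrative code, not from the talk; the function names are mine): tagging each packet with its index lets the receiver undo the random permutation, so the deletion network behaves like a plain erasure channel.

```python
import random

def send_with_seq_numbers(packets, p, rng):
    """Prefix each packet with its index, delete each packet i.i.d. with
    probability p, then apply a uniformly random permutation."""
    tagged = list(enumerate(packets))
    survivors = [tp for tp in tagged if rng.random() >= p]  # random deletions
    rng.shuffle(survivors)                                  # random permutation block
    return survivors

def receive(survivors, n):
    """Undo the permutation using the sequence numbers; a deleted packet
    shows up as an erasure (None) in a known position."""
    out = [None] * n
    for i, pkt in survivors:
        out[i] = pkt
    return out

rng = random.Random(0)
packets = ["pkt%d" % i for i in range(10)]
received = receive(send_with_seq_numbers(packets, p=0.3, rng=rng), len(packets))
# Every surviving packet lands back in its original position.
assert all(r is None or r == packets[i] for i, r in enumerate(received))
```

The cost is exactly the slide's point: the index adds log(n) bits per packet, inflating the alphabet from $2^L$ to $n \cdot 2^L$.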

SLIDE 5

The Permutation Channel Model

[Figure: ENCODER → CHANNEL → RANDOM PERMUTATION → DECODER]

  • Sender sends message $M \sim \mathrm{Uniform}(\mathcal{M})$
  • Possibly randomized encoder $f_n : \mathcal{M} \to \mathcal{X}^n$ produces codeword $X_1^n = (X_1, \dots, X_n) = f_n(M)$ (with block-length n)
  • Discrete memoryless channel $P_{Z|X}$ with input and output alphabets $\mathcal{X}$ and $\mathcal{Y}$ produces $Z_1^n$:
    $P_{Z_1^n|X_1^n}(z_1^n|x_1^n) = \prod_{i=1}^{n} P_{Z|X}(z_i|x_i)$
  • Random permutation generates $Y_1^n$ from $Z_1^n$
  • Possibly randomized decoder $g_n : \mathcal{Y}^n \to \mathcal{M}$ produces estimate $\hat{M} = g_n(Y_1^n)$ at the receiver
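The three-stage pipeline on this slide (memoryless channel followed by a uniform random permutation) can be sketched in a few lines (an illustrative sketch, not code from the talk; `permutation_channel` and `bsc` are my names):

```python
import random

def permutation_channel(x, channel_kernel, rng):
    """One use of the permutation channel: pass the codeword through a
    DMC symbol-by-symbol, then apply a uniformly random permutation."""
    z = [channel_kernel(xi, rng) for xi in x]  # memoryless channel P_{Z|X}
    rng.shuffle(z)                             # random permutation block
    return z

def bsc(p):
    """BSC(p) as a per-symbol transition kernel: flip the bit w.p. p."""
    return lambda x, rng: x ^ (rng.random() < p)

rng = random.Random(1)
x = [1, 1, 1, 0, 0, 0, 0, 0]
y = permutation_channel(x, bsc(0.1), rng)
# The permutation destroys symbol order: only the empirical distribution
# of y (here, the fraction of ones) carries information about x.
assert len(y) == len(x) and set(y) <= {0, 1}
```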

SLIDE 6

Coding for the Permutation Channel

  • General Principle: "Encode the information in an object that is invariant under the [permutation] transformation." [Kovačević-Vukobratović 2013]
  • Multiset codes are studied in [Kovačević-Vukobratović 2013], [Kovačević-Vukobratović 2015], and [Kovačević-Tan 2018]
  • What about the information-theoretic aspects of this model?
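The general principle above can be made concrete in one line: the multiset (equivalently, the empirical histogram) of a sequence is unchanged by any permutation, so a decoder loses nothing by conditioning on it (a minimal illustration; `type_of` is my name, not from the cited papers):

```python
from collections import Counter
import random

def type_of(seq):
    """The multiset / empirical histogram of a sequence: the
    permutation-invariant object that multiset codes encode into."""
    return Counter(seq)

z = [0, 1, 1, 0, 1]
perm = z[:]
random.Random(2).shuffle(perm)
assert type_of(z) == type_of(perm)  # invariant under any permutation
```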

SLIDE 7

Information Capacity of the Permutation Channel

  • Average probability of error $P^n_{\mathrm{error}} \triangleq P(M \neq \hat{M})$
  • "Rate" of encoder-decoder pair $(f_n, g_n)$: $R \triangleq \log(|\mathcal{M}|)/\log(n)$, i.e. $|\mathcal{M}| = n^R$, because the number of empirical distributions of $Y_1^n$ is poly(n)
  • Rate $R \geq 0$ is achievable ⇔ ∃ $\{(f_n, g_n)\}_{n \in \mathbb{N}}$ such that $\lim_{n \to \infty} P^n_{\mathrm{error}} = 0$

Definition (Permutation Channel Capacity)
$C_{\mathrm{perm}}(P_{Z|X}) \triangleq \sup\{R \geq 0 : R \text{ is achievable}\}$
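The poly(n) normalization can be checked directly: the number of empirical distributions (types) of a length-n sequence over an alphabet of size k is the stars-and-bars count C(n+k-1, k-1), which is polynomial in n, so only ~polynomially many messages survive the permutation and the rate is measured against log(n) rather than n (a quick sanity check, my own illustration):

```python
from math import comb, log

def num_types(n, k):
    """Number of empirical distributions (types) of a length-n sequence
    over an alphabet of size k: C(n+k-1, k-1), polynomial in n."""
    return comb(n + k - 1, k - 1)

assert num_types(10, 2) == 11  # binary case: n + 1 types
# Since the decoder can only rely on the type of Y_1^n, at most poly(n)
# messages are distinguishable; hence |M| = n^R and R = log|M| / log(n).
for n in (10, 100, 1000):
    assert log(num_types(n, 2)) / log(n) < 2  # log(n+1)/log(n) -> 1
```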

SLIDE 8

Capacity of the BSC Permutation Channel

[Figure: ENCODER → BSC(p) → RANDOM PERMUTATION → DECODER]

  • Channel is the binary symmetric channel, denoted BSC(p): ∀z, x ∈ {0, 1},
    $P_{Z|X}(z|x) = 1 - p$ for $z = x$, and $P_{Z|X}(z|x) = p$ for $z \neq x$
  • Alphabets are $\mathcal{X} = \mathcal{Y} = \{0, 1\}$
  • Assume crossover probability $p \in (0, 1)$ and $p \neq \tfrac{1}{2}$

Main Question: What is the permutation channel capacity of the BSC?

SLIDE 9

Outline

1. Introduction
2. Achievability
   - Encoder and Decoder
   - Testing between Converging Hypotheses
   - Intuition via Central Limit Theorem
   - Second Moment Method for TV Distance
3. Converse
4. Conclusion

SLIDE 10

Warm-up: Sending Two Messages

  • Fix a message m ∈ {0, 1}, and encode m as $f_n(m) = X_1^n \overset{\text{i.i.d.}}{\sim} \mathrm{Ber}(q_m)$
  • Memoryless BSC(p) outputs $Z_1^n \overset{\text{i.i.d.}}{\sim} \mathrm{Ber}(p * q_m)$, where $p * q_m \triangleq p(1 - q_m) + q_m(1 - p)$ is the convolution of p and $q_m$
  • Random permutation generates $Y_1^n \overset{\text{i.i.d.}}{\sim} \mathrm{Ber}(p * q_m)$
  • Maximum Likelihood (ML) decoder: $\hat{M} = \mathbb{1}\{\frac{1}{n}\sum_{i=1}^{n} Y_i \geq \frac{1}{2}\}$
  • $\frac{1}{n}\sum_{i=1}^{n} Y_i \to p * q_m$ in probability as n → ∞ [WLLN] ⇒ $\lim_{n \to \infty} P^n_{\mathrm{error}} = 0$, since $p * q_0 \neq p * q_1$
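The two-message warm-up is easy to simulate (an illustrative sketch, not the talk's code; the symmetric choice $q_0 = 1/3$, $q_1 = 2/3$ is my assumption, picked so that the threshold-at-1/2 ML rule is valid):

```python
import random

def warmup_error_rate(n, p, trials, rng):
    """Estimate P_error for the two-message scheme: encode m as n i.i.d.
    Ber(q_m) bits (assumed q_0 = 1/3, q_1 = 2/3), pass them through
    BSC(p), and threshold the empirical mean at 1/2.  The random
    permutation is omitted: it does not change the i.i.d. law of Y_1^n."""
    q = [1 / 3, 2 / 3]
    errors = 0
    for _ in range(trials):
        m = rng.randrange(2)
        ones = sum((rng.random() < q[m]) ^ (rng.random() < p) for _ in range(n))
        m_hat = int(ones / n >= 0.5)
        errors += (m_hat != m)
    return errors / trials

rng = random.Random(3)
# WLLN: the empirical mean concentrates at p * q_m, so errors vanish with n.
assert warmup_error_rate(400, 0.1, 200, rng) < 0.05
```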

SLIDE 11

Encoder and Decoder

  • Suppose $\mathcal{M} = \{1, \dots, n^R\}$ for some R > 0
  • Randomized encoder: given m ∈ $\mathcal{M}$, $f_n(m) = X_1^n \overset{\text{i.i.d.}}{\sim} \mathrm{Ber}(m/n^R)$
  • Given m ∈ $\mathcal{M}$, $Y_1^n \overset{\text{i.i.d.}}{\sim} \mathrm{Ber}(p * (m/n^R))$ (as before)
  • ML decoder: for $y_1^n \in \{0, 1\}^n$, $g_n(y_1^n) = \arg\max_{m \in \mathcal{M}} P_{Y_1^n|M}(y_1^n|m)$
  • Challenge: although $\frac{1}{n}\sum_{i=1}^{n} Y_i \to p * (m/n^R)$ in probability as n → ∞, consecutive messages become indistinguishable, i.e. $\left|\frac{m}{n^R} - \frac{m+1}{n^R}\right| \to 0$
  • Fact: consecutive messages distinguishable ⇒ $\lim_{n \to \infty} P^n_{\mathrm{error}} = 0$
  • What is the largest R such that two consecutive messages can be distinguished?
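The "consecutive messages" question can be probed empirically (an illustrative sketch under my own parameter choices, not the talk's code): test message m against m+1 by thresholding the empirical mean at the midpoint, and compare a rate below 1/2 with one above it.

```python
import random

def consec_error(n, R, p, trials, rng):
    """Estimate the error of ML-testing two consecutive messages m, m+1,
    i.e. Ber(p * (m/n^R)) vs Ber(p * ((m+1)/n^R)) from n samples,
    thresholding the empirical mean at the midpoint (illustrative m)."""
    conv = lambda a, b: a * (1 - b) + b * (1 - a)  # p * q convolution
    m = n ** R / 3                 # a message a third of the way into M
    q0 = conv(p, m / n ** R)
    q1 = conv(p, (m + 1) / n ** R)
    mid = (q0 + q1) / 2
    errors = 0
    for _ in range(trials):
        h = rng.randrange(2)
        mean = sum(rng.random() < (q1 if h else q0) for _ in range(n)) / n
        errors += (int(mean >= mid) != h)
    return errors / trials

rng = random.Random(4)
# R < 1/2: the 1/n^R mean gap dominates the Theta(1/sqrt(n)) noise.
assert consec_error(10000, 0.3, 0.1, 100, rng) < 0.05
# R > 1/2: the gap is buried in the noise, so the test is near-blind.
assert consec_error(10000, 0.8, 0.1, 100, rng) > 0.3
```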

SLIDE 12

Testing between Converging Hypotheses

  • Binary Hypothesis Testing: consider hypothesis $H \sim \mathrm{Ber}(\frac{1}{2})$ with uniform prior
  • For any n ∈ ℕ, q ∈ (0, 1), and R > 0, consider the likelihoods:
    Given H = 0: $X_1^n \overset{\text{i.i.d.}}{\sim} P_{X|H=0} = \mathrm{Ber}(q)$
    Given H = 1: $X_1^n \overset{\text{i.i.d.}}{\sim} P_{X|H=1} = \mathrm{Ber}\!\left(q + \frac{1}{n^R}\right)$
  • Define the zero-mean sufficient statistic of $X_1^n$ for H:
    $T_n \triangleq \frac{1}{n}\sum_{i=1}^{n} X_i - q - \frac{1}{2n^R}$
  • Let $\hat{H}^n_{\mathrm{ML}}(T_n)$ denote the ML decoder for H based on $T_n$, with minimum probability of error $P^n_{\mathrm{ML}} \triangleq P(\hat{H}^n_{\mathrm{ML}}(T_n) \neq H)$
  • Want: largest R > 0 such that $\lim_{n \to \infty} P^n_{\mathrm{ML}} = 0$

SLIDE 13

Intuition via Central Limit Theorem

  • For large n, $P_{T_n|H}(\cdot|0)$ and $P_{T_n|H}(\cdot|1)$ are approximately Gaussian [CLT]
  • $|\mathbb{E}[T_n|H=0] - \mathbb{E}[T_n|H=1]| = 1/n^R$
  • Standard deviations are $\Theta(1/\sqrt{n})$
  • Case R < 1/2: the $1/n^R$ separation dominates the $\Theta(1/\sqrt{n})$ spread, so decoding is possible
  • Case R > 1/2: the separation is negligible compared to the spread, so decoding is impossible
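The two cases come down to a single ratio, which can be computed directly (a trivial numeric check of the slide's intuition, constant factors dropped):

```python
from math import sqrt

def gap_to_noise(n, R):
    """Ratio of the 1/n^R mean separation between the two Gaussians to
    their Theta(1/sqrt(n)) standard deviation: equals n^(1/2 - R)."""
    return (1 / n ** R) / (1 / sqrt(n))

# R < 1/2: the Gaussians drift apart as n grows -> decodable.
assert gap_to_noise(10**6, 0.3) > 10
# R > 1/2: the separation vanishes relative to the spread -> undecidable.
assert gap_to_noise(10**6, 0.8) < 0.1
```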

SLIDE 14

Second Moment Method for TV Distance

Lemma (2nd Moment Method [Evans-Kenyon-Peres-Schulman 2000])
$\|P_{T_n|H=1} - P_{T_n|H=0}\|_{\mathrm{TV}} \geq \dfrac{(\mathbb{E}[T_n|H=1] - \mathbb{E}[T_n|H=0])^2}{4\,\mathrm{VAR}(T_n)}$
where $\|P - Q\|_{\mathrm{TV}} = \frac{1}{2}\|P - Q\|_{\ell_1}$ is the total variation (TV) distance between the distributions P and Q.

Proof: Let $T_n^+ \sim P_{T_n|H=1}$ and $T_n^- \sim P_{T_n|H=0}$. Then

$\big(\mathbb{E}[T_n^+] - \mathbb{E}[T_n^-]\big)^2 = \Big(\sum_t t\,\sqrt{P_{T_n}(t)}\,\dfrac{P_{T_n|H}(t|1) - P_{T_n|H}(t|0)}{\sqrt{P_{T_n}(t)}}\Big)^2$
$\leq \Big(\sum_t t^2\,P_{T_n}(t)\Big)\Big(\sum_t \dfrac{\big(P_{T_n|H}(t|1) - P_{T_n|H}(t|0)\big)^2}{P_{T_n}(t)}\Big)$  [Cauchy-Schwarz inequality]
$\leq \mathrm{VAR}(T_n)\,\sum_t \dfrac{\big(P_{T_n|H}(t|1) - P_{T_n|H}(t|0)\big)^2}{P_{T_n}(t)}$  [recall that $T_n$ is zero-mean]
$= 4\,\mathrm{VAR}(T_n)\cdot\underbrace{\frac{1}{4}\sum_t \dfrac{\big(P_{T_n|H}(t|1) - P_{T_n|H}(t|0)\big)^2}{P_{T_n}(t)}}_{\text{Vincze-Le Cam distance}}$  [cf. the Hammersley-Chapman-Robbins bound]
$\leq 4\,\mathrm{VAR}(T_n)\,\|P_{T_n|H=1} - P_{T_n|H=0}\|_{\mathrm{TV}}$ ∎
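The lemma is easy to verify numerically on a small discrete example (my own sanity check, not from the talk): take $T_n$ to be a sum of Bernoulli bits, recentred so it is zero-mean under the uniform mixture of the two hypotheses, and compare the exact TV distance with the second-moment bound.

```python
from math import comb

def binom_pmf(n, p):
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def check_lemma(n, p0, p1):
    """Check the EKPS second-moment bound for T = (sum of n Bernoulli
    bits), recentred to be zero-mean under the uniform mixture of
    H = 0 and H = 1 (as T_n is on the slide)."""
    P0, P1 = binom_pmf(n, p0), binom_pmf(n, p1)
    mix = [(a + b) / 2 for a, b in zip(P0, P1)]            # P_{T_n}, uniform prior
    shift = sum(k * m for k, m in enumerate(mix))          # mixture mean
    t = [k - shift for k in range(n + 1)]                  # zero-mean support
    tv = 0.5 * sum(abs(a - b) for a, b in zip(P0, P1))
    mean0 = sum(ti * a for ti, a in zip(t, P0))
    mean1 = sum(ti * b for ti, b in zip(t, P1))
    var = sum(ti**2 * m for ti, m in zip(t, mix))          # VAR under the mixture
    return tv, (mean1 - mean0) ** 2 / (4 * var)

tv, bound = check_lemma(50, 0.40, 0.45)
assert tv >= bound > 0   # the TV distance dominates the second-moment bound
```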

slide-72
SLIDES 72–77

Achievability Proof

Theorem (Achievability)

For any $0 < R < 1/2$, consider the binary hypothesis testing problem with $H \sim \mathrm{Ber}\!\left(\tfrac{1}{2}\right)$, and $X_1^n \overset{\text{i.i.d.}}{\sim} \mathrm{Ber}\!\left(q + \tfrac{h}{n^R}\right)$ given $H = h \in \{0, 1\}$. Then, $\lim_{n \to \infty} P^n_{\mathrm{ML}} = 0$. This implies that $C_{\mathrm{perm}}(\mathrm{BSC}(p)) \geq \tfrac{1}{2}$.

Proof: Start with Le Cam's relation, apply the second moment method lemma, and simplify via explicit computation. For any $0 < R < \tfrac{1}{2}$,
$$
P^n_{\mathrm{ML}}
= \frac{1}{2}\left(1 - \left\lVert P_{T_n \mid H=1} - P_{T_n \mid H=0} \right\rVert_{\mathrm{TV}}\right)
\leq \frac{1}{2}\left(1 - \frac{\left(\mathbb{E}[T_n \mid H=1] - \mathbb{E}[T_n \mid H=0]\right)^2}{4 \operatorname{Var}(T_n)}\right)
\leq \frac{3}{2\, n^{1-2R}} \to 0 \quad \text{as } n \to \infty.
$$
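The decay of the ML error can be checked numerically (a sketch, not part of the talk; the choices $q = 0.3$ and $R = 0.4$ are arbitrary): under each hypothesis the sufficient statistic $T_n = \sum_i X_i$ is binomial, so Le Cam's relation gives the exact ML error from the two binomial pmfs.

```python
from math import comb

def binom_pmf(m, p, k):
    """P(Bin(m, p) = k)."""
    return comb(m, k) * p ** k * (1 - p) ** (m - k)

def p_ml_error(n, q, R):
    """Exact ML error for testing Ber(q) vs Ber(q + n^(-R)) from n i.i.d.
    samples: the sufficient statistic T_n = sum of samples is binomial under
    each hypothesis, and Le Cam's relation gives P_ML = (1 - TV)/2 for
    equiprobable hypotheses."""
    q0, q1 = q, q + n ** (-R)
    tv = 0.5 * sum(abs(binom_pmf(n, q0, k) - binom_pmf(n, q1, k))
                   for k in range(n + 1))
    return 0.5 * (1.0 - tv)

# For R < 1/2 the error decays with n, as the theorem predicts.
for n in (50, 200, 800):
    print(n, p_ml_error(n, q=0.3, R=0.4))
```

The mean separation grows like $n^{1-R}$ while the standard deviation of $T_n$ grows like $\sqrt{n}$, so for $R < 1/2$ the two binomial laws separate and the error vanishes.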

slide-78
SLIDE 78

Outline

1. Introduction
2. Achievability
3. Converse: Fano's Inequality Argument; CLT Approximation
4. Conclusion


slide-79
SLIDES 79–85

Converse: Fano's Inequality Argument

Consider the Markov chain $M \to X_1^n \to Z_1^n \to Y_1^n \to S_n = \sum_{i=1}^n Y_i$, and a sequence of encoder-decoder pairs $\{(f_n, g_n)\}_{n \in \mathbb{N}}$ such that $|M| = n^R$ and $\lim_{n \to \infty} P^n_{\mathrm{error}} = 0$.

Standard argument, cf. [Cover-Thomas 2006], with $M$ uniform:
\begin{align*}
R \log(n) = H(M) &= H(M \mid \hat{M}) + I(M; \hat{M}) \\
&\leq 1 + P^n_{\mathrm{error}} R \log(n) + I(M; Y_1^n) && \text{(Fano's inequality, DPI)} \\
&= 1 + P^n_{\mathrm{error}} R \log(n) + I(M; S_n) && \text{(sufficiency of } S_n\text{)} \\
&\leq 1 + P^n_{\mathrm{error}} R \log(n) + I(X_1^n; S_n) && \text{(DPI)}
\end{align*}
Divide by $\log(n)$ and let $n \to \infty$:
$$
R \leq \lim_{n \to \infty} \frac{I(X_1^n; S_n)}{\log(n)}.
$$
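Fano's inequality, the first step in the chain above, can be sanity-checked on a toy example (a hypothetical 4-message setup, not from the talk): with $M$ uniform and $\hat{M}$ correct with probability $1 - P_e$, otherwise uniform over the wrong messages, $H(M \mid \hat{M})$ must stay below $1 + P_e \log|M|$.

```python
from math import log2

# Hypothetical symmetric example: M uniform over 4 messages; the estimate
# M_hat equals M with probability 1 - pe and is uniform over the other 3
# messages otherwise. Fano: H(M | M_hat) <= 1 + pe * log2(|M|).
num_msgs = 4
for pe in (0.01, 0.1, 0.3):
    # Given M_hat, the posterior on M puts mass 1 - pe on one message and
    # pe / 3 on each of the remaining three, so the conditional entropy is:
    h_m_given_mhat = -((1 - pe) * log2(1 - pe) + pe * log2(pe / 3))
    fano_bound = 1 + pe * log2(num_msgs)
    print(pe, h_m_given_mhat, fano_bound)
    assert h_m_given_mhat <= fano_bound
```

As the slide uses it, $|M| = n^R$ and $\log|M| = R\log(n)$, which yields the $1 + P^n_{\mathrm{error}} R \log(n)$ term.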

slide-86
SLIDES 86–93

Converse: CLT Approximation

Upper bound on $I(X_1^n; S_n)$: since $S_n \in \{0, \ldots, n\}$, and given $X_1^n = x_1^n$ with $\sum_{i=1}^n x_i = k$ we have $S_n = \mathrm{bin}(k, 1-p) + \mathrm{bin}(n-k, p)$, using Problem 2.14 in [Cover-Thomas 2006] and approximating the binomial entropy via the CLT, cf. [Adell-Lekuona-Yu 2010]:
\begin{align*}
I(X_1^n; S_n) &= H(S_n) - H(S_n \mid X_1^n) \\
&\leq \log(n+1) - \sum_{x_1^n \in \{0,1\}^n} P_{X_1^n}(x_1^n)\, H(S_n \mid X_1^n = x_1^n) \\
&= \log(n+1) - \sum_{x_1^n \in \{0,1\}^n} P_{X_1^n}(x_1^n)\, H\big(\mathrm{bin}(k, 1-p) + \mathrm{bin}(n-k, p)\big) \\
&\leq \log(n+1) - \sum_{x_1^n \in \{0,1\}^n} P_{X_1^n}(x_1^n)\, H\big(\mathrm{bin}\big(\tfrac{n}{2}, p\big)\big) \\
&= \log(n+1) - \tfrac{1}{2}\log\big(\pi e\, p(1-p)\, n\big) + O\big(\tfrac{1}{n}\big)
\end{align*}
Hence, we have:
$$
R \leq \lim_{n \to \infty} \frac{I(X_1^n; S_n)}{\log(n)} \leq \frac{1}{2}.
$$

Theorem (Converse)

$C_{\mathrm{perm}}(\mathrm{BSC}(p)) \leq \tfrac{1}{2}$.
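The $\tfrac{1}{2}\log(n)$ scaling can be illustrated numerically (a sketch, not from the talk; the parameter $p = 0.11$ and the uniform-weight input are illustrative choices): drawing the Hamming weight $k$ of $X_1^n$ uniformly from $\{0, \ldots, n\}$, the law of $S_n$ given $k$ is $\mathrm{bin}(k, 1-p) + \mathrm{bin}(n-k, p)$, and $I(X_1^n; S_n)/\log(n)$ creeps toward $1/2$ as $n$ grows.

```python
from math import comb, log2

def binom_pmf(m, p):
    """Full pmf of Bin(m, p) as a list of length m + 1."""
    return [comb(m, k) * p ** k * (1 - p) ** (m - k) for k in range(m + 1)]

def convolve(a, b):
    """Pmf of the sum of two independent integer-valued variables."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def entropy(pmf):
    """Shannon entropy in bits."""
    return -sum(q * log2(q) for q in pmf if q > 0)

def mi_weight_s(n, p):
    """I(k; S_n) in bits for k ~ Uniform{0..n} and
    S_n | k = bin(k, 1-p) + bin(n-k, p). Since S_n depends on X_1^n only
    through its weight k, this equals I(X_1^n; S_n) for such inputs."""
    p_s = [0.0] * (n + 1)
    h_cond = 0.0
    for k in range(n + 1):
        cond = convolve(binom_pmf(k, 1 - p), binom_pmf(n - k, p))
        h_cond += entropy(cond) / (n + 1)
        for s, w in enumerate(cond):
            p_s[s] += w / (n + 1)
    return entropy(p_s) - h_cond

# The normalized mutual information increases toward the 1/2 limit.
for n in (16, 64, 256):
    print(n, mi_weight_s(n, p=0.11) / log2(n))
```

Here $H(S_n)$ grows like $\log(n)$ (the conditional means sweep a range of width $\approx (1-2p)n$) while $H(S_n \mid k) \approx \tfrac{1}{2}\log(2\pi e\, n p(1-p))$, matching the bound on the slide.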

slide-94
SLIDE 94

Outline

1. Introduction
2. Achievability
3. Converse
4. Conclusion


slide-95
SLIDES 95–99

Conclusion

Theorem (Permutation Channel Capacity of BSC)

$$
C_{\mathrm{perm}}(\mathrm{BSC}(p)) =
\begin{cases}
1, & \text{for } p = 0, 1 \\
\tfrac{1}{2}, & \text{for } p \in \big(0, \tfrac{1}{2}\big) \cup \big(\tfrac{1}{2}, 1\big) \\
0, & \text{for } p = \tfrac{1}{2}
\end{cases}
$$

[Figure: plot of $C_{\mathrm{perm}}(\mathrm{BSC}(q))$ against $q \in [0,1]$, with jumps at $q = 0$, $\tfrac{1}{2}$, and $1$]

Remarks:
  • $C_{\mathrm{perm}}(\cdot)$ is discontinuous and non-convex
  • $C_{\mathrm{perm}}(\cdot)$ is generally agnostic to parameters of the channel
  • Computationally tractable coding scheme in proof
  • Proof technique yields more general results
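The capacity formula above is simple enough to encode directly; a minimal sketch (the exact float comparisons are only meaningful at the exact parameter values shown):

```python
def cperm_bsc(p):
    """Permutation channel capacity of BSC(p), normalized by log(n),
    per the theorem above."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability")
    if p == 0.0 or p == 1.0:
        return 1.0   # noiseless (or deterministically flipped) bits
    if p == 0.5:
        return 0.0   # output is independent of the input
    return 0.5       # every other crossover probability

# Discontinuous and non-convex, as remarked above:
print(cperm_bsc(0.0), cperm_bsc(0.49), cperm_bsc(0.5), cperm_bsc(0.51))
```

The jump from $\tfrac{1}{2}$ down to $0$ at $p = \tfrac{1}{2}$ (and up to $1$ at the endpoints) is exactly the discontinuity noted in the remarks.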

slide-100
SLIDE 100

Thank You!
