

SLIDE 1

Trading Information Complexity for Error

Yaqiao Li

joint work with Yuval Dagan, Yuval Filmus, Hamed Hatami

School of Computer Science McGill University

July 8, 2017

Yaqiao Li (McGill University) July 8, 2017 1 / 22

SLIDES 2–4

Main results

How much information can one save by allowing an error ε?
We show a separation between two notions in information complexity.
We determine the communication complexity of computing the disjointness function with error ε.

SLIDE 5

Information complexity

An extension of Shannon's information theory to the study of communication complexity. [Pictured: Claude Shannon, 1916–2001]

SLIDES 6–9

Communication complexity

Alice receives an input X ∈ {0, 1}^n; Bob receives Y ∈ {0, 1}^n.
They want to compute f(X, Y), for f : {0, 1}^n × {0, 1}^n → {0, 1}, collaboratively using a protocol.
A protocol π is an algorithm that specifies what Alice and Bob do in order to compute f(X, Y).
CC(π) := the number of bits of communication sent in π.
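As a toy model (my illustration, not part of the talk), a protocol can be viewed as a function that produces an output together with the bits the parties exchange; CC(π) then counts those bits. A minimal sketch for the naive "exchange inputs" protocol on the AND function:

```python
# Toy model of a two-party protocol: each party appends the bits it sends to a
# shared transcript, and CC is the transcript length. Illustrative only.
def naive_protocol(x, y):
    transcript = []
    transcript.append(x)   # Alice sends her input bit to Bob
    transcript.append(y)   # Bob sends his input bit to Alice
    output = x & y         # both parties can now compute AND(x, y)
    return output, len(transcript)

out, cc = naive_protocol(1, 1)
print(out, cc)  # 1 2
```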

SLIDES 10–13

Information complexity

Same setting, but now assume the inputs are drawn from a distribution, (X, Y) ∼ µ, so that information (entropy) can be measured.
How much information must the players reveal about their inputs?
Information cost of a protocol π: ICµ(π) = (information about Y that Alice learns) + (information about X that Bob learns).

SLIDES 14–18

Information cost: Example 1

AND : {0, 1} × {0, 1} → {0, 1}
Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.

Naive protocol π
Alice sends her input X to Bob; Bob sends his input Y to Alice.

Alice learns Bob's input: learned information = H(Y) = 1.
Bob learns Alice's input: learned information = H(X) = 1.
Information cost of π: ICµ(π) = 1 + 1 = 2 = CC(π).
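The accounting above can be checked numerically. A small sketch (mine, not from the talk) using the binary entropy function H: each party sees the other's full uniform input bit, so each learns exactly 1 bit.

```python
from math import log2

def binary_entropy(p):
    """H(p) in bits; H(1/2) = 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Naive protocol for AND under the uniform product distribution:
# Alice sees Y exactly, Bob sees X exactly, and both inputs are uniform bits.
alice_learns = binary_entropy(0.5)   # information about Y that Alice learns
bob_learns = binary_entropy(0.5)     # information about X that Bob learns
ic_naive = alice_learns + bob_learns
print(ic_naive)  # 2.0
```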

SLIDES 19–22

Information cost: Example 2

AND : {0, 1} × {0, 1} → {0, 1}
Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.

Better protocol τ
Alice sends her input X to Bob; Bob computes and outputs AND(X, Y).

Note: CC(τ) = 2 = CC(π). Why is the protocol τ better than π?

SLIDES 23–28

Information cost: Example 2, continued

Better protocol τ
Alice sends her input X to Bob; Bob computes and outputs AND(X, Y).

Information cost ICµ(τ) = ?

Bob always learns X: learned information = H(X) = 1.
When X = 1, Alice learns Y; when X = 0, Alice learns nothing.
So Alice learns (1/2)·H(Y) = 0.5 bits on average.
Information cost of the protocol τ: ICµ(τ) = 1 + 0.5 = 1.5 < 2 = ICµ(π) ⇒ τ is better!
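The value ICµ(τ) = 1.5 can also be verified by computing mutual information directly from the joint distributions (a sketch of mine, not from the talk). Here Bob's view of Alice's input is X itself, and Alice's view is the pair (X, Z) with Z = AND(X, Y); because µ is a product distribution, I(Y; X, Z) equals the conditional information I(Y; Z | X) that Alice gains.

```python
from itertools import product
from math import log2

def mutual_information(joint):
    """I(A; B) in bits from a dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Protocol tau: Alice sends X; Bob announces Z = AND(X, Y).
joint_bob = {}      # (X, Bob's view X): Bob sees X exactly
joint_alice = {}    # (Y, Alice's view (X, Z))
for x, y in product((0, 1), repeat=2):
    p = 0.25
    z = x & y
    joint_bob[(x, x)] = joint_bob.get((x, x), 0) + p
    joint_alice[(y, (x, z))] = joint_alice.get((y, (x, z)), 0) + p

bob_learns = mutual_information(joint_bob)       # I(X; X) = H(X) = 1
alice_learns = mutual_information(joint_alice)   # = 0.5
print(bob_learns + alice_learns)  # 1.5
```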

SLIDES 29–31

Information complexity of a function f

Definition
ICµ(f, 0) := inf_π ICµ(π), where the infimum is over all protocols π that compute f with no error.

Example
We saw that for µ the uniform distribution, ICµ(AND, 0) ≤ ICµ(τ) = 1.5. Optimal?

SLIDES 32–33

Information complexity of AND

Theorem (BGPW'13)
Let µ be the uniform distribution; then ICµ(AND, 0) ≈ 1.36...

Theorem (BGPW'13)
max_µ ICµ(AND, 0) ≈ 1.49...

SLIDES 34–40

Information complexity: Example 3

XOR : {0, 1} × {0, 1} → {0, 1}
Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.

Naive protocol π
Alice sends her input X to Bob; Bob sends his input Y to Alice.

Information complexity ICµ(XOR, 0)
ICµ(π) = 1 + 1 = 2. This is optimal! ⇒ ICµ(XOR, 0) = 2.

Q: What if we want to use less information?
Great idea: ALLOW ERROR!

SLIDES 41–45

Trading information complexity for error!

Recall
ICµ(f, 0) := inf_π ICµ(π), where π computes f with no error.

Definition
ICµ(f, ε) := inf_π ICµ(π), where π computes f with error at most ε for every input (X, Y).

Q: How much information can one save? ICµ(XOR, 0) − ICµ(XOR, ε) = ?

SLIDES 46–53

Trading information complexity for error: example

XOR : {0, 1} × {0, 1} → {0, 1}
Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.

Protocol πε
Alice flips her input X to X′ with probability ε: she sets X′ = 1 − X with probability ε, and X′ = X otherwise.
Alice sends X′ to Bob; Bob sends his input Y to Alice.
Alice and Bob compute XOR(X′, Y), which has error at most ε on every input!
Information cost ICµ(πε) = ?
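A quick sanity check (illustrative sketch, not from the talk) that πε has worst-case error exactly ε: for every fixed input pair, the output XOR(X′, Y) is wrong precisely when Alice's bit flips.

```python
# For each fixed input (x, y), the output XOR(x', y) differs from XOR(x, y)
# exactly when x' != x, which happens with probability eps.
# Hence the error is eps on every input: worst-case error eps.
eps = 0.2
for x in (0, 1):
    for y in (0, 1):
        correct = x ^ y
        p_err = eps * ((1 - x) ^ y != correct) + (1 - eps) * (x ^ y != correct)
        assert abs(p_err - eps) < 1e-12
print("worst-case error:", eps)
```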

SLIDES 54–61

Analysis of πε

Product distribution µ: Pr[X = 1] = 1/2, Pr[Y = 1] = 1/2.

After flipping, X′ is still uniform: Pr[X′ = 1] = 1/2.
On receiving X′, Bob is not sure what Alice's true input is. By Bayes' rule,
Pr[X = 1 | X′ = 1] = Pr[X′ = 1 | X = 1] Pr[X = 1] / Pr[X′ = 1] = 1 − ε.
So Bob learns less information about Alice's input. Indeed, ICµ(πε) = 1 + (1 − H(1 − ε)) = 2 − H(ε).
⇒ ICµ(XOR, ε) ≤ 2 − H(ε).
Hence the information saved by allowing error ε is ICµ(XOR, 0) − ICµ(XOR, ε) ≥ H(ε).
Is this true for all functions f?
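The identity ICµ(πε) = 2 − H(ε) can be verified numerically (my sketch, not from the talk): Bob's information about X is I(X; X′), the mutual information across a binary symmetric channel with flip probability ε, which equals 1 − H(ε) for a uniform input bit; Alice still learns Y fully, contributing 1 bit.

```python
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def info_bob_learns(eps):
    """I(X; X') for uniform X sent through a BSC with flip probability eps."""
    joint = {(0, 0): (1 - eps) / 2, (0, 1): eps / 2,
             (1, 0): eps / 2, (1, 1): (1 - eps) / 2}
    # Both marginals are uniform, so each product of marginals is 0.25.
    return sum(p * log2(p / 0.25) for p in joint.values() if p > 0)

eps = 0.11
ic = 1 + info_bob_learns(eps)   # Alice's 1 bit about Y, plus Bob's I(X; X')
assert abs(ic - (2 - binary_entropy(eps))) < 1e-9
print(round(ic, 6))
```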

SLIDE 62

Main results

Theorem (Dagan-Filmus-Hatami-L '16)
For all functions f and all distributions µ such that ICµ(f, 0) > 0,
Ω(H(ε)) ≤ ICµ(f, 0) − ICµ(f, ε) ≤ O(H(√ε)).

Theorem (Dagan-Filmus-Hatami-L '16)
For all µ such that ICµ(AND, 0) > 0,
ICµ(AND, 0) − ICµ(AND, ε) = Θ(H(ε)).
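To get a feel for the gap between the two sides of the first theorem (an illustrative computation of mine, not from the talk): for small ε, H(√ε) is much larger than H(ε), so the general lower and upper bounds on the saving are far apart.

```python
from math import log2, sqrt

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

eps = 1e-6
lower = binary_entropy(eps)         # order of the guaranteed saving
upper = binary_entropy(sqrt(eps))   # order of the largest saving not ruled out
assert upper > 100 * lower          # the two bounds diverge for small eps
print(lower, upper)
```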

SLIDES 63–67

Proof idea of the lower bound

Theorem (Dagan-Filmus-Hatami-L '16)
ICµ(f, 0) − ICµ(f, ε) ≥ Ω(H(ε)).

Proof idea: converting a zero-error protocol π to an ε-error protocol πε
Alice smartly flips her input X with probability ε to X′;
they compute f using π with (X′, Y);
show ICµ(π) − ICµ(πε) ≥ Ω(H(ε)).

SLIDES 68–69

Tight bound for AND

Theorem (Dagan-Filmus-Hatami-L '16)
ICµ(AND, 0) − ICµ(AND, ε) ≤ O(H(ε)).

Some new ideas
Study the stability of protocols; this may be useful for studying other functions.
Parametrize all distributions by product distributions; this is applicable to all functions!

SLIDES 70–72

Worst-case error vs. distributional error

Recall
ICµ(f, ε) := inf_π ICµ(π), where π computes f with error at most ε for all inputs (X, Y): worst-case error.

Definition
IC^D_µ(f, ε) := inf_π ICµ(π), where π computes f with distributional error ε:
Pr_{(X,Y)∼µ}[π(X, Y) ≠ f(X, Y)] ≤ ε.

Obviously, IC^D_µ(f, ε) ≤ ICµ(f, ε), and the inequality can be strict!
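A toy illustration of why distributional error is the weaker requirement (my example, not from the talk): under the uniform distribution, the zero-communication rule "always output 0" computes AND with distributional error 1/4, yet its worst-case error is 1, since it is always wrong on input (1, 1).

```python
from itertools import product

# Uniform distribution on {0,1} x {0,1}; the trivial rule always outputs 0.
mu = {(x, y): 0.25 for x, y in product((0, 1), repeat=2)}

# Distributional error: probability mass of inputs where the rule is wrong.
dist_err = sum(p for (x, y), p in mu.items() if 0 != (x & y))
# Worst-case error: maximum error probability over fixed inputs.
worst_err = max(1.0 if 0 != (x & y) else 0.0 for (x, y) in mu)

print(dist_err, worst_err)  # 0.25 1.0
```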

SLIDES 73–76

Prior-free information complexity

Definition [Braverman'12]
IC(f, ε) := max_µ ICµ(f, ε),  IC^D(f, ε) := max_µ IC^D_µ(f, ε).

Obviously, IC^D(f, ε) ≤ IC(f, ε).

Theorem (Braverman'12)
If ε = 0, i.e., no error, then IC(f, 0) = IC^D(f, 0).
If ε > 0, then (1/2)·IC(f, 2ε) ≤ IC^D(f, ε) ≤ IC(f, ε).

Q: Is IC^D(f, ε) = IC(f, ε)?

SLIDE 77

Main results, continued

Theorem (Dagan-Filmus-Hatami-L '16)
For n sufficiently large, IC^D(DISJn, ε) < IC(DISJn, ε).

SLIDES 78–79

Implication for communication complexity

Theorem (BGPW'13)
lim_{ε→0} CC(DISJn, ε) ≈ 0.4827n.

Theorem (Dagan-Filmus-Hatami-L '16)
CC(DISJn, ε) ≈ (0.4827 − Θ(H(ε)))n.
Proof: follows from results on IC(AND, ε). Conjectured by [BGPW13].

SLIDES 80–81

Summary and Open Problems

Theorem (Dagan-Filmus-Hatami-L '16)
Ω(H(ε)) ≤ ICµ(f, 0) − ICµ(f, ε) ≤ O(H(√ε));
IC^D(f, ε) ≠ IC(f, ε) in general, example: DISJn;
CC(DISJn, ε) ≈ (0.4827 − Θ(H(ε)))n.

Open Problems
Is ICµ(f, 0) − ICµ(f, ε) = Θ(H(ε))?
Is lim_{n→∞} IC(DISJn, ε)/n = max_{µ : µ(1,1)=0} ICµ(AND, ε, 1 → 0)?