Noise Sensitivity of Boolean Functions and Critical Percolation - - PowerPoint PPT Presentation



SLIDE 1

Noise Sensitivity of Boolean Functions and Critical Percolation

Jeffrey Steif

Pictures “borrowed” from C. Garban and others

Jeff Steif October 21, 2019 1 / 35

SLIDE 2-6

Outline of talk

  • 1. Critical Percolation and Critical Exponents (with a fast detour through the Bieberbach conjecture).
  • 2. The concept of Noise Sensitivity and Influences in general and then with respect to critical percolation.
  • 3. Fourier analysis on the hypercube (with a mention of the three methods for proving noise sensitivity for percolation).

Everything is contained in: Noise Sensitivity of Boolean Functions and Percolation by Christophe Garban and J.S. (Cambridge University Press). Available free on my homepage.

Jeff Steif October 21, 2019 2 / 35

SLIDE 7-9

Part 1: Percolation on the hexagonal lattice

Let each hexagon be black independently with probability p. Does there exist an infinite black component?

Jeff Steif October 21, 2019 3 / 35

SLIDE 10-13

Theorem (Harris, 1960): If p = 1/2, then there is no infinite black component (with probability 1). Ted Harris has an honorary Ph.D. from Chalmers!

Theorem (Kesten, 1980): If p > 1/2, then there is an infinite black component (with probability 1).

We have a phase transition and the critical value is 1/2.

Jeff Steif October 21, 2019 4 / 35

SLIDE 14-15

Critical Exponents: The one-arm event (p = 1/2)

[Figure: the one-arm event, a black path from the center of the box to distance R]

Theorem (Lawler, Schramm & Werner 2002)

The probability of this event, denoted α1(R), is R^(−5/48+o(1)) as R → ∞.

Jeff Steif October 21, 2019 5 / 35

SLIDE 16

Critical Exponents: Greg Lawler, Oded Schramm and Wendelin Werner

Jeff Steif October 21, 2019 6 / 35

SLIDE 17-19

Critical Exponents: The four-arm event (p = 1/2)

[Figure: the four-arm event, four arms of alternating colors from a hexagon to distance R]

Theorem (Smirnov & Werner, 2001)

The probability of this event, denoted α4(R), is R^(−5/4+o(1)) as R → ∞.

Critical exponents are believed to be universal while the critical value is not.

Jeff Steif October 21, 2019 7 / 35

SLIDE 20

Critical Exponents : Smirnov and Werner

Jeff Steif October 21, 2019 8 / 35

SLIDE 21-23

How does one calculate critical exponents?

The following 4 slides are an aside! They will be extremely sketchy and fast, and you are not at all expected to follow them. So, either put on your seat belt for the 4 slides or close your eyes and rejoin us in 4 slides.

Jeff Steif October 21, 2019 9 / 35

SLIDE 24-27

Schramm–Löwner evolution (SLE)

Step 1: The Bieberbach conjecture. If f is an injective holomorphic function from the unit disk into the complex plane with

f(z) = z + Σ_{n≥2} a_n z^n,

then |a_n| ≤ n for all n. The Koebe function z ↦ z/(1 − z)^2 has a_n = n for every n.

Jeff Steif October 21, 2019 10 / 35

SLIDE 28

The Bieberbach conjecture/theorem

Ludwig Bieberbach (1916), Charles Löwner (1923), John Littlewood (1925), Paul Garabedian and Menahem Schiffer (1955) and Louis de Branges (1984).

Jeff Steif October 21, 2019 11 / 35

SLIDE 29-31

Löwner differential equation

Step 2: The Löwner differential equation (used both by Löwner himself for n = 3 and by de Branges). Given a driving function f(t) from R to R with f(0) = 0, one obtains a family {g_t}_{t≥0} of conformal mappings from subsets of the upper half plane to the upper half plane which satisfies a certain time-dependent differential equation. The curve γ(t) := g_t^(−1)(f(t)) is called the Löwner curve associated to f.

Take-home message: We have a way to go from real-valued functions on R to curves in C, which was relevant to the solution of the Bieberbach conjecture.

Jeff Steif October 21, 2019 12 / 35

SLIDE 32-33

Stochastic/Schramm–Löwner evolution (SLE)

Step 3: Oded Schramm had the insight to realize that if you take your driving function f to be a Brownian motion, then the resulting random curve γ(t) will be the key to understanding many 2-dimensional critical systems in statistical mechanics, including percolation theory. This allowed the reduction of the computation of critical exponents to studying questions involving 1-dimensional diffusion processes.

Jeff Steif October 21, 2019 13 / 35
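The talk leaves the differential equation unspecified; in its standard chordal form it reads ∂_t g_t(z) = 2/(g_t(z) − W_t) with g_0(z) = z and real driving function W_t. Below is a minimal Euler-scheme sketch (all names are ours, not from the talk): for the zero driving function the solution is known in closed form, g_t(z) = sqrt(z^2 + 4t), which provides a sanity check, and taking W_t = sqrt(κ)·B_t as in Step 3 gives SLE_κ.

```python
import cmath
import random

def loewner_map(z, driving, dt, steps):
    """Euler scheme for the chordal Loewner ODE dg_t(z)/dt = 2/(g_t(z) - W_t),
    g_0(z) = z, where driving(t) returns the real driving function W_t."""
    g = complex(z)
    for k in range(steps):
        g += dt * 2.0 / (g - driving(k * dt))
    return g

# Sanity check: W == 0 has the exact solution g_t(z) = sqrt(z^2 + 4t).
g_num = loewner_map(2.0, lambda t: 0.0, dt=1e-4, steps=10_000)   # approximates g_1(2)
g_exact = cmath.sqrt(2.0 ** 2 + 4 * 1.0)

def brownian_driving(kappa, dt, steps, seed=0):
    """W_t = sqrt(kappa) * B_t sampled on a time grid (Schramm's choice in Step 3)."""
    rng = random.Random(seed)
    path, w = [], 0.0
    for _ in range(steps):
        path.append(w)
        w += (kappa * dt) ** 0.5 * rng.gauss(0.0, 1.0)
    return lambda t: path[min(round(t / dt), steps - 1)]

# One point evolved under an SLE_6 driving function (kappa = 6 is the percolation case).
g_sle = loewner_map(1 + 2j, brownian_driving(kappa=6.0, dt=1e-3, steps=500), dt=1e-3, steps=500)
```

This is a toy discretization (error of order dt), not a production SLE simulator.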

SLIDE 34-40

Part 2: Boolean functions and Noise sensitivity

Basic set-up for Noise Sensitivity:

  • x := (x_1, . . . , x_n) i.i.d. ±1 (1/2, 1/2)
  • f : {−1, 1}^n → {±1} (f is called a Boolean function)
  • x^ε := (x^ε_1, . . . , x^ε_n), a small perturbation of x: each x_i is independently resampled with probability ε (so a bit actually changes value with probability ε/2).

Question: Are f(x) and f(x^ε) very likely to be the same (high correlation) or almost independent (low correlation)? Of course, if n and f are fixed and ε is very small, then f(x) and f(x^ε) are very likely to be the same. So we think of ε as fixed (and small) and then n is taken to be very large.

Jeff Steif October 21, 2019 14 / 35
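As a concrete illustration of this set-up (a minimal sketch; the helper names are ours), one can sample x and its perturbation x^ε, with each bit independently resampled with probability ε:

```python
import random

def sample_x(n, rng):
    """A uniform point of {-1,1}^n."""
    return [rng.choice([-1, 1]) for _ in range(n)]

def noise(x, eps, rng):
    """x^eps: each bit is independently resampled with probability eps."""
    return [rng.choice([-1, 1]) if rng.random() < eps else xi for xi in x]

rng = random.Random(0)
x = sample_x(10, rng)
x_eps = noise(x, 0.1, rng)
agree = sum(xi == yi for xi, yi in zip(x, x_eps))
print(agree, "of", len(x), "bits agree")
```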

SLIDE 41

Noise Sensitivity

Definition (Benjamini, Kalai, Schramm)

A sequence of Boolean functions f_n : {−1, 1}^n → {±1} is called noise sensitive (NS) if for any fixed ε > 0,

lim_{n→∞} E[f_n(x) f_n(x^ε)] − E[f_n(x)]^2 = 0.

Jeff Steif October 21, 2019 15 / 35

SLIDE 42-48

3 examples

  • f_n(x_1, . . . , x_n) := x_1 (Dictator). Not noise sensitive. (In fact noise stable.)
  • f_n(x_1, . . . , x_n) := ∏_{i=1}^n x_i (Parity). Noise sensitive.
  • f_n(x_1, . . . , x_n) := sign(Σ_i x_i) (Majority Function, n odd). Not noise sensitive. (In fact noise stable.)

Jeff Steif October 21, 2019 16 / 35
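The contrast between these three examples can be seen by direct simulation. This is an illustrative Monte Carlo sketch (function names are ours) estimating E[f(x)f(x^ε)] − E[f(x)]^2 with resampling noise; Parity decorrelates already at moderate n while Dictator and Majority do not:

```python
import random

def correlation(f, n, eps, trials, rng):
    """Monte Carlo estimate of E[f(x) f(x^eps)] - E[f(x)]^2,
    where x^eps resamples each bit independently with probability eps."""
    prod = mean = 0.0
    for _ in range(trials):
        x = [rng.choice([-1, 1]) for _ in range(n)]
        x_eps = [rng.choice([-1, 1]) if rng.random() < eps else xi for xi in x]
        prod += f(x) * f(x_eps)
        mean += f(x)
    return prod / trials - (mean / trials) ** 2

def dictator(x):
    return x[0]

def parity(x):
    p = 1
    for xi in x:
        p *= xi
    return p

def majority(x):  # n odd, so there is never a tie
    return 1 if sum(x) > 0 else -1

rng = random.Random(0)
n, eps, T = 101, 0.1, 20000
print("dictator:", correlation(dictator, n, eps, T, rng))  # about 1 - eps
print("parity:  ", correlation(parity, n, eps, T, rng))    # about (1 - eps)^n, i.e. near 0
print("majority:", correlation(majority, n, eps, T, rng))  # stays bounded away from 0
```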

SLIDE 49

Percolation on the hexagonal lattice

Consider an R × R box and let f_R : {−1, 1}^(R^2) → {−1, 1} be the Boolean function given by:

  • +1 if there is a left-right black crossing
  • −1 otherwise

Jeff Steif October 21, 2019 17 / 35
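A crossing function of this kind is easy to code. As a simplification (our choice, not the talk's), the sketch below uses site percolation on an R × R square grid with 4-neighbour connectivity instead of the hexagonal lattice; the Boolean-function structure is the same:

```python
import random
from collections import deque

def has_lr_crossing(grid):
    """f_R: +1 if the black cells (True) contain a left-right crossing of the
    R x R grid under 4-neighbour connectivity, else -1. Uses BFS from the left column."""
    R = len(grid)
    seen = [[False] * R for _ in range(R)]
    q = deque((r, 0) for r in range(R) if grid[r][0])
    for r, c in q:
        seen[r][c] = True
    while q:
        r, c = q.popleft()
        if c == R - 1:          # reached the right side: crossing exists
            return 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < R and 0 <= nc < R and grid[nr][nc] and not seen[nr][nc]:
                seen[nr][nc] = True
                q.append((nr, nc))
    return -1

rng = random.Random(0)
R = 20
grid = [[rng.random() < 0.5 for _ in range(R)] for _ in range(R)]
print(has_lr_crossing(grid))
```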

SLIDE 50

Are percolation crossing events noise sensitive?

[Figure: a configuration ω and its ε-noised version ω^ε]

SLIDE 51-55

Noise sensitivity of percolation

Theorem (Benjamini, Kalai & Schramm 1999)

Percolation crossings are noise sensitive.

Question: What happens if the amount of noise ε_R decreases to 0 with R? (If it decreases too quickly to 0, one trivially has high correlation.)

Benjamini, Kalai & Schramm showed decorrelation (asymptotic independence) still occurs if ε_R ≥ C/log(R) for a sufficiently large C.

Question: What happens if the amount of noise ε = ε_R decreases to 0 as 1/R^α for a small α > 0?

Jeff Steif October 21, 2019 19 / 35

SLIDE 56-58

Pivotality and Influences (A key player)

Definition

For a Boolean function f on n variables and for i ∈ {1, 2, . . . , n}, the event that i is pivotal is the event that changing the ith bit changes the output of the function.

The influence of the ith bit, I_i(f), is the probability that i is pivotal. (Also called the Banzhaf–Penrose index.)

John Banzhaf and Lionel Penrose

Jeff Steif October 21, 2019 20 / 35

SLIDE 59-66

Pivotality and Influences: Examples

  • f_n(x_1, . . . , x_n) := x_1 (Dictator). The first bit is always pivotal and so I_1(f) = 1. The other bits are never pivotal and so have influence 0.
  • f_n(x_1, . . . , x_n) := ∏_{i=1}^n x_i (Parity). All the bits are always pivotal and so all have influence 1.
  • f_n(x_1, . . . , x_n) := sign(Σ_i x_i) (Majority Function, n odd). A bit is pivotal iff there is a tie among the other bits. Hence all influences are about c/n^(1/2).

Jeff Steif October 21, 2019 21 / 35
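For small n these influences can be checked by brute-force enumeration of the hypercube (an illustrative sketch; names are ours). For Majority on n = 5 bits, the tie probability among the other 4 bits is C(4,2)/2^4 = 0.375:

```python
from itertools import product
from math import comb

def influence(f, n, i):
    """I_i(f): fraction of inputs on which flipping bit i changes f."""
    count = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        if f(list(x)) != f(y):
            count += 1
    return count / 2 ** n

dictator = lambda x: x[0]

def parity(x):
    p = 1
    for xi in x:
        p *= xi
    return p

majority = lambda x: 1 if sum(x) > 0 else -1  # n odd

n = 5
print([influence(dictator, n, i) for i in range(n)])
print([influence(parity, n, i) for i in range(n)])
print(influence(majority, n, 0), "=", comb(4, 2) / 2 ** 4)
```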

SLIDE 67-73

Influences are relevant for noise sensitivity

Theorem (Benjamini, Kalai & Schramm 1999)

If

lim_{n→∞} Σ_i I_i(f_n)^2 = 0,

then {f_n} is noise sensitive.

Remarks:

  • The Parity function shows that this condition is not necessary.
  • However, this condition is necessary for monotone (increasing) functions. (All examples other than Parity you have seen are monotone.)
  • The majority functions just miss satisfying this condition and are an extremal sequence in many respects.

How do we get noise sensitivity of percolation crossings from this?

Jeff Steif October 21, 2019 22 / 35
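The remark that Majority "just misses" can be made quantitative: each influence is exactly the tie probability C(n−1, (n−1)/2)/2^(n−1), which is about sqrt(2/(πn)), so the sum of squared influences tends to 2/π > 0 rather than to 0. A short check of this (assuming only the exact tie-probability formula; the function name is ours):

```python
from math import comb, pi

def sum_sq_influences_majority(n):
    """For Majority on n (odd) bits, each influence is the exact tie
    probability C(n-1, (n-1)/2) / 2^(n-1); return the sum n * I^2."""
    I = comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
    return n * I * I

for n in (11, 101, 1001):
    print(n, sum_sq_influences_majority(n))
print("limit 2/pi =", 2 / pi)  # the sum tends to 2/pi > 0: the BKS criterion just fails
```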

SLIDE 74

What is the probability of a hexagon being pivotal for percolation?

The probability of being pivotal is (away from the boundary) about 1/R^(5/4) (the 4-arm exponent!), yielding a sum of squared influences of about R^2 × 1/R^(5/2) = R^(−1/2), which goes to 0.

Jeff Steif October 21, 2019 23 / 35

SLIDE 75-78

Quantitative noise sensitivity of percolation

Recall BKS showed percolation crossings decorrelate under noise even if ε_R → 0, provided ε_R ≥ C/log(R) for a sufficiently large C.

Might we believe that percolation crossings can decorrelate even if ε_R is 1/R^α for sufficiently small α? If so, what is the largest α we could use?

Heuristic: Yes, and the largest α is 3/4.

Jeff Steif October 21, 2019 24 / 35

SLIDE 79-87

A heuristic for the noise sensitivity exponent for percolation

Might we believe that percolation crossings can decorrelate even if ε_R is 1/R^α? Heuristic: Yes, and the largest α is 3/4.

  • We have seen that the probability that a hexagon is pivotal is about R^(−5/4).
  • Hence the expected number of pivotal hexagons is about R^(3/4).
  • Therefore (with ε_R = 1/R^α) the expected number of pivotal hexagons that we flip is R^(3/4−α).
  • If α > 3/4, we don't flip a pivotal and things don't change. (This can be made rigorous relatively easily.)
  • If α < 3/4, we are likely to hit a pivotal and things should get "mixed up" and decorrelate. (This is much harder to make rigorous.)

Jeff Steif October 21, 2019 25 / 35
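The exponent bookkeeping in this heuristic can be tracked exactly with rational arithmetic (a trivial but error-proof sketch; names are ours):

```python
from fractions import Fraction

four_arm = Fraction(5, 4)        # P(hexagon pivotal) is about R^(-5/4)
hexagons = Fraction(2)           # about R^2 hexagons in an R x R box
pivotals = hexagons - four_arm   # expected number of pivotals: R^(3/4)
print("pivotal exponent:", pivotals)

def flipped_exponent(alpha):
    """Exponent of the expected number of flipped pivotals, R^(3/4 - alpha)."""
    return pivotals - Fraction(alpha)

print(flipped_exponent(Fraction(1, 2)))  # positive: many pivotals flipped, decorrelation
print(flipped_exponent(Fraction(1)))     # negative: no pivotal flipped, high correlation
```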

SLIDE 88

Randomized algorithms approach

Definition

Let f be a Boolean function. A randomized algorithm A for f examines the input bits one by one (where the choice of the next bit examined can be random and may depend on the values of the bits examined so far). Let J ⊆ {1, 2, . . . , n} be the (random) set of bits examined by the algorithm. Define the revealment of A to be

δ_A := sup{P(i ∈ J) : i ∈ {1, 2, . . . , n}}.

Jeff Steif October 21, 2019 26 / 35
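A standard example with small revealment (not from these slides) is recursive 3-wise majority of depth d: at each node evaluate two randomly chosen children and the third only if the first two disagree. Since the two children values are independent fair bits, each child is examined with probability 2/3 + (1/3)(1/2) = 5/6, so each leaf has revealment (5/6)^d → 0. An illustrative implementation (names are ours):

```python
import random

def eval_rec_majority(bits, lo, hi, examined, rng):
    """Evaluate recursive 3-majority on bits[lo:hi] (hi - lo a power of 3),
    querying two random children first and the third only on a disagreement.
    Records the queried leaf indices in the set `examined`."""
    if hi - lo == 1:
        examined.add(lo)
        return bits[lo]
    third = (hi - lo) // 3
    kids = [(lo + j * third, lo + (j + 1) * third) for j in range(3)]
    rng.shuffle(kids)
    a = eval_rec_majority(bits, *kids[0], examined, rng)
    b = eval_rec_majority(bits, *kids[1], examined, rng)
    if a == b:
        return a
    return eval_rec_majority(bits, *kids[2], examined, rng)

rng = random.Random(0)
d, trials = 3, 20000
n = 3 ** d
hits = [0] * n
for _ in range(trials):
    bits = [rng.choice([-1, 1]) for _ in range(n)]
    examined = set()
    eval_rec_majority(bits, 0, n, examined, rng)
    for i in examined:
        hits[i] += 1
revealment = max(h / trials for h in hits)
print(revealment, "vs (5/6)^3 =", (5 / 6) ** 3)
```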

SLIDE 89-91

Noise sensitivity and Randomized algorithms

Theorem (Schramm & S. 2010)

Let {f_n} be a sequence of Boolean functions with A_n being a randomized algorithm for f_n.

  • 1. If lim_{n→∞} δ_{A_n} = 0, then {f_n} is noise sensitive.
  • 2. If δ_{A_n} ≤ C/n^α, then for all β < α/2,

lim_{n→∞} E[f_n(x) f_n(x^(1/n^β))] − E[f_n(x)]^2 = 0.

Jeff Steif October 21, 2019 27 / 35

SLIDE 92

Oded Schramm and J.S.

Jeff Steif October 21, 2019 28 / 35

SLIDE 93-96

What does this give for percolation?

Follow the interface from the bottom right corner to the top left corner. A hexagon H near the center of the picture is examined only if the interface reaches H, which requires a black arm and a white arm from H to the boundary (the two-arm event), which has probability about (1/R)^(1/4).

Hence we obtain decorrelation if ε_R is larger than 1/R^(1/8). This is a factor of 6 off from the 3/4 conjecture.

Jeff Steif October 21, 2019 29 / 35

SLIDE 97-100

Part 3: The Fourier set-up (used in all! three methods)

The set of all functions f : {−1, 1}^n → R is a 2^n-dimensional vector space with orthogonal basis {χ_S}_{S⊆{1,...,n}}, where

χ_S(x_1, . . . , x_n) := ∏_{i∈S} x_i.

We then can write

f = Σ_{S⊆{1,...,n}} f̂(S) χ_S.

It is elementary to check that

E[f(x) f(x^ε)] − E[f(x)]^2 = Σ_{k=1}^n (1 − ε)^k Σ_{|S|=k} f̂(S)^2.

Noise sensitivity corresponds to the "Fourier weights" being concentrated on large S's.

Jeff Steif October 21, 2019 30 / 35
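The displayed identity can be verified by brute force for small n (an illustrative sketch; all names are ours): compute every f̂(S) directly, compute E[f(x)f(x^ε)] − E[f(x)]^2 by exhaustive enumeration of inputs and resampled bits, and compare the two sides:

```python
from itertools import product

def prod(it):
    p = 1
    for v in it:
        p *= v
    return p

def fourier_coefficients(f, n):
    """Brute-force Fourier-Walsh transform: f_hat(S) = E[f(x) chi_S(x)].
    S is encoded as a 0/1 indicator vector of length n."""
    cube = list(product([-1, 1], repeat=n))
    return {
        S: sum(f(x) * prod(x[i] for i in range(n) if S[i]) for x in cube) / 2 ** n
        for S in product([0, 1], repeat=n)
    }

def noise_covariance_exact(f, n, eps):
    """E[f(x) f(x^eps)] - E[f(x)]^2 by exhaustive enumeration, where x^eps
    resamples each bit independently with probability eps."""
    cube = list(product([-1, 1], repeat=n))
    total = 0.0
    for x in cube:
        for mask in product([0, 1], repeat=n):        # which bits get resampled
            p_mask = prod(eps if m else 1.0 - eps for m in mask)
            res = [i for i in range(n) if mask[i]]
            acc = 0.0                                 # average f over fresh values
            for vals in product([-1, 1], repeat=len(res)):
                y = list(x)
                for i, v in zip(res, vals):
                    y[i] = v
                acc += f(y)
            total += f(x) * p_mask * acc / 2 ** len(res)
    mean = sum(f(x) for x in cube) / 2 ** n
    return total / 2 ** n - mean ** 2

def noise_covariance_fourier(f, n, eps):
    """The right-hand side: sum over nonempty S of (1 - eps)^|S| f_hat(S)^2."""
    return sum((1 - eps) ** sum(S) * c ** 2
               for S, c in fourier_coefficients(f, n).items() if any(S))

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(noise_covariance_exact(maj3, 3, 0.2))    # the two sides agree
print(noise_covariance_fourier(maj3, 3, 0.2))
```

For Majority on 3 bits the nonzero coefficients are f̂({i}) = 1/2 and f̂({1,2,3}) = −1/2, so both sides equal 3(0.8)(1/4) + (0.8)^3(1/4) = 0.728 at ε = 0.2.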

SLIDE 101-102

Method 1 (yielding the C/log n result) by Benjamini, Kalai and Schramm

Hypercontractivity of the (Markov) operator

T_ρ f := Σ_{S⊆{1,...,n}} ρ^|S| f̂(S) χ_S,  for ρ ∈ (0, 1):

‖T_ρ f‖_2 ≤ ‖f‖_{1+ρ^2}.

Key to proving hypercontractivity for the heat kernel: things diffuse quickly.

Jeff Steif October 21, 2019 31 / 35

SLIDE 103-104

Method 2 (randomized algorithms yielding the 1/n^(1/8) result)

Theorem (Schramm & S. 2010)

If A is a randomized algorithm for a Boolean function f with revealment δ_A, then for all k,

Σ_{S:|S|=k} f̂(S)^2 ≤ k δ_A.

We don't know how to prove the earlier result on randomized algorithms without going through this Fourier analysis result.

Jeff Steif October 21, 2019 32 / 35

SLIDE 105

Oded Schramm and J.S.

Jeff Steif October 21, 2019 33 / 35

SLIDE 106

Method 3 (viewing the spectrum as a random 2-dimensional Cantor set, yielding the optimal 1/n^(3/4) result)

Christophe Garban, Gabor Pete and Oded Schramm

Jeff Steif October 21, 2019 34 / 35

SLIDE 107

Thank you very much for your attention!

Jeff Steif October 21, 2019 35 / 35