Configurations in Lattices & Multiple Mixing
(PowerPoint presentation)

Alex Gorodnik (University of Bristol)
joint work with Michael Björklund and Manfred Einsiedler
Configurations in R^d

Ω = a "large" subset of R^d.

Question. Does Ω contain an isometric copy of a given configuration ∆ = {v_1, ..., v_{k+1}} ⊂ R^d?

Assume that Ω has positive upper density, i.e.

  limsup_{R→∞} |Ω ∩ B_R| / |B_R| > 0.

Furstenberg–Katznelson–Weiss, Bourgain, Quas: If k < d and ∆ is a simplex, then Ω contains an isometric copy of the dilation t∆ for all sufficiently large t.

Bourgain, Graham: some counterexamples when k ≥ d, but the case when ∆ is a triangle in R^2 is still open!

Furstenberg–Katznelson–Weiss, Ziegler: In general, every ε-neighbourhood of Ω contains an isometric copy of the dilation t∆ for all sufficiently large t.

Configurations in other groups

G = a group (with a right-invariant metric), Ω = a "large" subset of G.

Question. Does Ω contain an isometric copy of a given configuration ∆ = {g_1, ..., g_k} ⊂ G?

Bergelson–McCutcheon–Zhang: Every Ω ⊂ G × G of positive upper density (here G is a countable amenable group) contains many configurations of the form {(1,1), (g,1), (g,g)} · h with g ∈ G, h ∈ G × G.

Furstenberg–Glasner: Given Ω ⊂ SL_2(R) of positive measure (w.r.t. a suitable mean on SL_2(R)), every ε-neighbourhood of Ω contains many configurations of the form {g, g^2, ..., g^k} · h with g, h ∈ SL_2(R).

Configurations in lattice subgroups

G = a connected Lie group, Γ = a discrete subgroup of G with finite covolume.

Question. Does an ε-neighbourhood of Γ contain an isometric copy of a given configuration ∆ = {g_1, ..., g_k} ⊂ G? In particular, which R > 0 can be approximated by d(γ, e), γ ∈ Γ?

Example: SL_2(Z)

Consider the orbit Γ·i of Γ = SL_2(Z) in the hyperbolic plane H^2. For γ = (a b; c d) ∈ Γ,

  d(γi, i) = cosh^{−1}((a^2 + b^2 + c^2 + d^2)/2).

One can show that if R ≥ const · log(ε^{−1}), then there exists γ ∈ Γ such that |R − d(γi, i)| < ε. However, this fails for R = o(log(ε^{−1}))!
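As a quick sanity check of the distance formula (a toy script, not part of the talk; the function name is illustrative):

```python
import math

def hyp_dist_to_i(a, b, c, d):
    """Hyperbolic distance d(gamma.i, i) for gamma = (a b; c d) in SL_2(Z)."""
    assert a * d - b * c == 1, "gamma must have determinant 1"
    return math.acosh((a * a + b * b + c * c + d * d) / 2)

print(hyp_dist_to_i(1, 0, 0, 1))   # the identity fixes i: 0.0
print(hyp_dist_to_i(2, 1, 1, 1))   # acosh(3.5) ≈ 1.925
```

Enumerating matrices with bounded entries shows how the values d(γi, i) spread out, consistent with the ε-density statement above for large R.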

Configurations in lattice subgroups

G = a simple connected noncompact Lie group (e.g., G = SL_n(R)), Γ = a discrete subgroup with finite covolume. We fix a right-invariant Riemannian metric d(·,·) on G.

Theorem (Björklund, Einsiedler, G.). Let ∆ = {g_1, ..., g_k} be a configuration in G such that d(g_i, g_j) ≥ const · log(ε^{−1}) for i ≠ j. Then the ε-neighbourhood of Γ contains the configuration ∆·h for some h ∈ G.

Main ingredient of the proof: analysis of higher-order correlations.

Exponential multiple mixing

Notation: X = G/Γ with the normalised volume m, and

  ‖φ‖_ℓ := ( Σ_{|α|≤ℓ} ∫_X |D^α φ|^2 dm )^{1/2}

— the Sobolev norm.

Theorem (Björklund, Einsiedler, G.). There exists δ > 0 such that for any functions φ_1, ..., φ_k : X → R in a suitable Sobolev space and any g_1, ..., g_k ∈ G,

  ∫_X φ_1(g_1 x) ··· φ_k(g_k x) dx = ( ∫_X φ_1 dm ) ··· ( ∫_X φ_k dm ) + O( e^{−δ N(g_1,...,g_k)} ‖φ_1‖_ℓ ··· ‖φ_k‖_ℓ ),

where N(g_1, ..., g_k) = min_{i≠j} d(g_i, g_j).

Borel–Wallach, Cowling, Howe–Moore: exponential 2-mixing.
Mozes: multiple mixing without a quantitative estimate.
Konstantoulas: an independent, different proof.
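A toy illustration of multiple decorrelation, far from the Lie-group setting of the theorem: for characters cos(2πfx) on the circle, a triple correlation is an integral of a product of cosines, which vanishes unless some signed combination of the frequencies is zero (the resonance responsible for a nonzero main term). On a uniform grid of N points these integrals are evaluated exactly for frequencies well below N:

```python
import numpy as np

N = 4096
x = np.arange(N) / N  # uniform grid on the circle

def triple_corr(f1, f2, f3):
    """Exact value of the integral of cos(2pi f1 x) cos(2pi f2 x) cos(2pi f3 x)."""
    return float(np.mean(np.cos(2*np.pi*f1*x)
                         * np.cos(2*np.pi*f2*x)
                         * np.cos(2*np.pi*f3*x)))

print(triple_corr(1, 2, 4))  # no signed combination of 1,2,4 vanishes -> 0
print(triple_corr(1, 2, 3))  # 1 + 2 - 3 = 0, a resonance -> 1/4
```

For frequencies 2^{n_1}, 2^{n_2}, 2^{n_3} (distinct powers of two, as produced by the doubling map) no resonance ever occurs, which is the elementary analogue of higher-order mixing.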

Ideas of the proof: Invariance

For (g_1, ..., g_k), consider the probability measure η = η_{g_1,...,g_k} on X^k:

  η(φ) = ∫_X φ(g_1 x, ..., g_k x) dx.

This measure is invariant under the subgroup

  D = {(g_1 h g_1^{−1}, ..., g_k h g_k^{−1}) : h ∈ G}.

Take v = (v_1, ..., v_k) ∈ Lie(D) with nilpotent v_i's such that ‖v_1‖ ≥ ... ≥ ‖v_k‖ (after changing indices). For suitable v_i's,

  ‖v_1‖ / ‖v_k‖ ≈ max_{i,i'} e^{c · d(g_i, g_{i'})}

with c > 0.
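The invariance of η rests on the identity (g_i h g_i^{−1}) · (g_i x) = g_i · (h x) together with the G-invariance of the measure on X; the algebraic half can be checked directly with matrices (a throwaway check with arbitrary sample matrices, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    # Built from elementary matrices, so the determinant is exactly 1.
    a, b, c = rng.normal(size=3)
    return (np.array([[1.0, a], [0.0, 1.0]])
            @ np.array([[1.0, 0.0], [b, 1.0]])
            @ np.array([[np.exp(c), 0.0], [0.0, np.exp(-c)]]))

g = [random_sl2() for _ in range(3)]   # g_1, ..., g_k
h = random_sl2()
x = random_sl2()  # a matrix standing in for a point of G/Gamma

# The diagonal conjugate of h acts on (g_1 x, ..., g_k x) as h acts on x.
lhs = [gi @ h @ np.linalg.inv(gi) @ (gi @ x) for gi in g]
rhs = [gi @ (h @ x) for gi in g]
print(all(np.allclose(l, r) for l, r in zip(lhs, rhs)))  # True
```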

Ideas of the proof: 2-mixing ⇒ k-mixing

Let v_+ = (v_1, ..., v_j, 0, ..., 0) and v_− = (0, ..., 0, −v_{j+1}, ..., −v_k), and consider the averaging operators K_+ and K_−:

  K_± φ(x̄) = ∫_0^1 φ(exp(t v_±) x̄) dt.

Let η_+ and η_− denote the projections of η to X^j and X^{k−j}. By induction, η_+ ≈ m^j and η_− ≈ m^{k−j}. The argument proceeds as follows:

1. η(φ) ≈ η(K_−φ), since v_{j+1} is "small";
2. η(K_−φ) = η(K_+φ), by invariance;
3. η(K_+φ) ≈ (η_+ ⊗ η_−)(φ) ≈ m^k(φ), by (k − 1)-mixing.
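A minimal numerical sketch of such an averaging operator, using a single nilpotent generator (so that exp(tv) = I + tv exactly) acting on vectors; the names and the observable are illustrative, not from the talk:

```python
import numpy as np

# Nilpotent generator: v @ v = 0, so exp(t*v) = I + t*v exactly.
v = np.array([[0.0, 1.0], [0.0, 0.0]])
I = np.eye(2)

def K(phi, x, n=1001):
    """Average of phi along the flow t -> exp(t*v) x over t in [0, 1]."""
    ts = np.linspace(0.0, 1.0, n)
    return float(np.mean([phi((I + t * v) @ x) for t in ts]))

phi = lambda x: x[0]          # a linear test observable
x = np.array([2.0, 3.0])
# exp(t*v) x = (2 + 3t, 3), so the exact average is 2 + 3/2 = 3.5
print(K(phi, x))              # ≈ 3.5
```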

Discrepancy

For a function φ : X^j → R and a probability measure ν, define the discrepancy of K_+:

  D(φ, ν) := ∫_{X^j} |K_+φ − ν(φ)|^2 dν.

To prove that η(K_+φ) ≈ (η_+ ⊗ η_−)(φ), we show that:

  • D(φ, m^j) ≈ 0: uses just 2-mixing.
  • D(φ, η_+) ≈ D(φ, m^j): uses the inductive assumption η_+ ≈ m^j.
  • |η(K_+φ) − (η_+ ⊗ η_−)(φ)| is controlled by D(φ, η_+): uses "interpolation" and Chebyshev-type arguments.

Estimating K_+: D(φ, m^j) ≈ 0

Proposition. D(φ, m^j) ≪ ‖v_j‖^{−α} ‖φ‖_ℓ^2 with α > 0.

Proof: Suppose that φ = φ_1 ⊗ ··· ⊗ φ_j with ∫_X φ_i dm = 0. Then

  D(φ, m^j) = ⟨K_+φ, K_+φ⟩
            = ∫_{[0,1]^2} ( ∫_{X^j} φ(exp(s v_+) x̄) φ(exp(t v_+) x̄) dx̄ ) ds dt
            = ∫_{[0,1]^2} ∏_{i=1}^{j} ( ∫_X φ_i(exp(s v_i) x_i) φ_i(exp(t v_i) x_i) dx_i ) ds dt.

Using exponential 2-mixing,

  ∫_X φ_i(exp(s v_i) x_i) φ_i(exp(t v_i) x_i) dx_i ≪ e^{−α_1 d(exp((t−s)v_i), e)} ‖φ_i‖_ℓ^2 ≪ ‖(t−s) v_i‖^{−α_2} ‖φ_i‖_ℓ^2.

Hence, averaging over (s, t), we deduce that

  D(φ, m^j) ≪ ‖v_j‖^{−α_3} ‖φ_1‖_ℓ^2 ··· ‖φ_j‖_ℓ^2 = ‖v_j‖^{−α_3} ‖φ‖_ℓ^2.

This implies that D(φ, m^j) ≪ ‖v_j‖^{−α_3} ‖φ‖_{ℓ'}^2 for some ℓ' > ℓ for general functions φ.
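The mechanism "fast flow ⇒ small discrepancy" is visible in a one-dimensional toy model: averaging cos(2πx) along a translation flow of speed ‖v‖ on the circle drives K_+φ to the mean at rate ‖v‖^{−1}, so D(φ, Leb) decays like ‖v‖^{−2}. (A toy computation under these assumptions, not the group-theoretic argument.)

```python
import numpy as np

def discrepancy(speed, n_t=4001, n_x=400):
    """D(phi, Leb) for phi = cos(2 pi x) averaged along x -> x + t*speed."""
    ts = np.linspace(0.0, 1.0, n_t)
    w = np.ones(n_t)                 # trapezoid weights for the t-average
    w[0] = w[-1] = 0.5
    w /= n_t - 1
    xs = np.arange(n_x) / n_x
    K = np.array([np.sum(w * np.cos(2*np.pi*(x + ts*speed))) for x in xs])
    return float(np.mean(K**2))      # phi has mean zero, so D = ||K_+ phi||^2

# Non-integer speeds avoid accidental exact cancellation of the average.
print(discrepancy(10.3), discrepancy(103.3))  # decays roughly like speed**-2
```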

Estimating K_+: D(φ, η_+) ≈ D(φ, m^j)

Proposition. Assuming (k − 1)-exponential mixing, for φ : X^j → R,

  |D(φ, m^j) − D(φ, η_+)| ≪ e^{−δ N(g_1,...,g_k)} ‖v_1‖^c ‖φ‖_ℓ^2

for some c > 0.

Proof:

  |D(φ, m^j) − D(φ, η_+)| = | ∫_{X^j} |K_+φ − m^j(φ)|^2 dm^j − ∫_{X^j} |K_+φ − η_+(φ)|^2 dη_+ |
    ≤ | ∫_{X^j} (K_+φ)^2 d(m^j − η_+) | + ···
    ≪ e^{−δ N(g_1,...,g_k)} ( ‖(K_+φ)^2‖_ℓ + ··· )   (by j-mixing)
    ≪ e^{−δ N(g_1,...,g_k)} ‖v_1‖^c ‖φ‖_{ℓ'}^2

with some ℓ' > ℓ.

Estimating K_+: η(K_+φ) ≈ (η_+ ⊗ η_−)(φ)

Proposition. For a function φ : X^k → R,

  |η(K_+φ) − (η_+ ⊗ η_−)(φ)| ≪ D(φ, η_+)^ρ ‖φ‖_{ℓ'}^{1−2ρ}

with some ρ ∈ (0, 1).

Proof: For φ_y = φ(·, y), uniformly over y ∈ X^{k−j},

  ∫_{X^j} |K_+φ_y − η_+(φ_y)|^2 dη_+ ≤ D(φ, η_+) ‖φ_y‖^2 ≤ D(φ, η_+) ‖φ‖_{ℓ'}^2

for some ℓ' > ℓ. Then by Chebyshev's inequality, for y in an ε-net E in X^{k−j} and for x in a set of "large" measure in X^j,

  |K_+φ_y(x) − η_+(φ_y)| ≪ |E| · D(φ, η_+)^ρ ‖φ‖_{ℓ'}^{1−2ρ}.

Hence, for all y ∈ X^{k−j},

  |K_+φ_y(x) − η_+(φ_y)| ≪ ε^{−s} · D(φ, η_+)^ρ ‖φ‖_{ℓ'}^{1−2ρ} + ε ‖φ‖_{ℓ'}.

Finally, integrate over η ...

Estimating K_−

Proposition. For a function φ : X^k → R, sup |K_−φ − φ| ≪ ‖v_{j+1}‖ ‖φ‖_ℓ.

Proof:

  K_−φ(x̄) = ∫_0^1 φ(exp(t v_−) x̄) dt
           = φ(x̄) + O( max_{t∈[0,1]} d(exp(t v_−) x̄, x̄) · ‖φ‖_ℓ )
           = φ(x̄) + O( ‖v_{j+1}‖ ‖φ‖_ℓ ).
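This first-order estimate just says that averaging along a short flow moves a Lipschitz function by at most the flow's displacement; a one-line numerical check on the line (toy setup: translation flow of speed ‖v‖, 1-Lipschitz observable):

```python
import numpy as np

def K_minus(phi, x, v, n=2001):
    """Average of phi along the translation flow t -> x + t*v, t in [0, 1]."""
    ts = np.linspace(0.0, 1.0, n)
    return float(np.mean(phi(x + ts * v)))

phi = np.sin             # 1-Lipschitz
v = 0.01                 # a "small" generator, playing the role of v_{j+1}
xs = np.linspace(0.0, 2*np.pi, 100)
err = max(abs(K_minus(phi, x, v) - phi(x)) for x in xs)
print(err <= v)          # displacement bound: |K_- phi - phi| <= Lip(phi)*|v|
```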

Finishing the estimate

Combining the previous estimates,

  |η(φ) − (η_+ ⊗ η_−)(φ)|
    ≤ |η(φ) − η(K_−φ)| + |η(K_+φ) − (η_+ ⊗ η_−)(φ)|
    ≪ ‖v_{j+1}‖ ‖φ‖_ℓ + D(φ, η_+)^ρ ‖φ‖_{ℓ'}^{1−2ρ}
    ≪ ‖v_{j+1}‖ ‖φ‖_ℓ + ( D(φ, m^j) + e^{−δ N(g_1,...,g_k)} ‖v_1‖^c )^ρ ‖φ‖_{ℓ'}
    ≪ ‖v_{j+1}‖ ‖φ‖_ℓ + ( ‖v_j‖^{−α} + e^{−δ N(g_1,...,g_k)} ‖v_1‖^c )^ρ ‖φ‖_{ℓ'}.

Finally, we optimise over v_1, ..., v_k.