SLIDE 1 Configurations in Lattices & Multiple Mixing
Alex Gorodnik (University of Bristol), joint work with Michael Björklund and Manfred Einsiedler
SLIDES 2–8 Configurations in R^d

Ω = a “large” subset of R^d.

Question: Does Ω contain an isometric copy of a given configuration ∆ = {v1, . . . , vk+1} ⊂ R^d?

Assume that Ω has positive upper density, i.e.
\[
\limsup_{R \to \infty} \frac{|\Omega \cap B_R|}{|B_R|} > 0.
\]
Furstenberg-Katznelson-Weiss, Bourgain, Quas: If k < d and ∆ is a simplex, then Ω contains an isometric copy of the dilation t∆ for all sufficiently large t.

Bourgain, Graham: there are counterexamples when k ≥ d, but the case where ∆ is a triangle in R^2 is still open!

Furstenberg-Katznelson-Weiss, Ziegler: In general, every ε-neighbourhood of Ω contains an isometric copy of the dilation t∆ for all sufficiently large t.
SLIDES 9–12 Configurations in other groups

G = a group (with a right-invariant metric), Ω = a “large” subset of G.

Question: Does Ω contain an isometric copy of a given configuration ∆ = {g1, . . . , gk} ⊂ G?

Bergelson-McCutcheon-Zhang: Every Ω ⊂ G × G of positive upper density (here G is a countable amenable group) contains many configurations of the form {(1, 1), (g, 1), (g, g)} · h with g ∈ G, h ∈ G × G.

Furstenberg-Glasner: Given Ω ⊂ SL2(R) of positive measure (w.r.t. a suitable mean on SL2(R)), every ε-neighbourhood of Ω contains many configurations of the form {g, g^2, . . . , g^k} · h with g, h ∈ SL2(R).
SLIDES 13–14 Configurations in lattice subgroups

G = a connected Lie group, Γ = a discrete subgroup of G with finite covolume.

Question: Does an ε-neighbourhood of Γ contain an isometric copy of a given configuration ∆ = {g1, . . . , gk} ⊂ G? In particular, which R > 0 can be approximated by d(γ, e), γ ∈ Γ?
SLIDES 15–18 Example: SL2(Z)

Consider the orbit Γ · i of Γ = SL2(Z) in the hyperbolic plane H^2. For
\[
\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
\qquad
d(\gamma i, i) = \cosh^{-1}\!\Big(\frac{a^2 + b^2 + c^2 + d^2}{2}\Big).
\]
One can show that if R ≥ const · log(ε^{-1}), then there exists γ ∈ Γ such that |R − d(γi, i)| < ε. However, this fails for R = o(log(ε^{-1}))!
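Since a^2 + b^2 + c^2 + d^2 is an integer, every value d(γi, i) lies in the discrete set {cosh^{-1}(n/2) : n = 2, 3, . . .}, which is why small R cannot always be approximated. A quick numerical illustration of the distance formula above (the enumeration over small matrix entries is just for demonstration):

```python
import math

def dist_to_i(a, b, c, d):
    """Hyperbolic distance d(gamma.i, i) for gamma = [[a, b], [c, d]] in SL2(Z),
    via the formula cosh d(gamma.i, i) = (a^2 + b^2 + c^2 + d^2) / 2."""
    assert a * d - b * c == 1, "gamma must have determinant 1"
    return math.acosh((a * a + b * b + c * c + d * d) / 2)

print(dist_to_i(1, 0, 0, 1))   # the identity fixes i: prints 0.0

# Distances realised by matrices with entries in {-3, ..., 3}:
# a discrete subset of {acosh(n/2) : n = 2, 3, ...}, with gaps near small R.
vals = sorted({dist_to_i(a, b, c, d)
               for a in range(-3, 4) for b in range(-3, 4)
               for c in range(-3, 4) for d in range(-3, 4)
               if a * d - b * c == 1})
print(vals[:4])
```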
SLIDES 19–22 Configurations in lattice subgroups

G = a simple connected noncompact Lie group (e.g., G = SLn(R)), Γ = a discrete subgroup with finite covolume. We fix a right-invariant Riemannian metric d(·, ·) on G.

Theorem (Björklund-Einsiedler-Gorodnik)
Let ∆ = {g1, . . . , gk} be a configuration in G such that d(gi, gj) ≥ const · log(ε^{-1}) for i ≠ j. Then the ε-neighbourhood of Γ contains the configuration ∆ · h for some h ∈ G.

Main ingredient of the proof: analysis of higher-order correlations.
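How higher-order correlations produce configurations can be sketched as follows (a heuristic outline, not the precise argument of the talk):

```latex
% Let X = G/\Gamma and let \phi_\varepsilon \ge 0 be a smooth bump function
% supported in the \varepsilon-ball around the base point e\Gamma, so that
% \phi_\varepsilon(g_i\, h\Gamma) > 0 forces g_i h to lie in the
% \varepsilon-neighbourhood of \Gamma.  Multiple mixing gives
\[
\int_X \prod_{i=1}^{k} \phi_\varepsilon(g_i x)\, dm(x)
  \;=\; \prod_{i=1}^{k} \int_X \phi_\varepsilon\, dm
  \;+\; O\!\big(e^{-\delta N(g_1,\dots,g_k)}\,\|\phi_\varepsilon\|_\ell^{k}\big).
\]
% Since \int_X \phi_\varepsilon\, dm \gg \varepsilon^{\dim G} while
% \|\phi_\varepsilon\|_\ell \ll \varepsilon^{-C}, the main term dominates once
% N(g_1,\dots,g_k) = \min_{i\ne j} d(g_i,g_j) \ge \mathrm{const}\cdot\log(\varepsilon^{-1}).
% Any x = h\Gamma at which the integrand is positive then yields the
% configuration \Delta\cdot h inside the \varepsilon-neighbourhood of \Gamma.
```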
SLIDES 23–27 Exponential multiple mixing

Notation: X = G/Γ with the normalised volume m, and
\[
\|\phi\|_\ell := \Big(\sum_{\mathrm{ord}(D) \le \ell} \|D\phi\|_{L^2}^2\Big)^{1/2}
\]
– the Sobolev norm of order ℓ (D running over differential monomials in a fixed basis of Lie(G)).

Theorem (Björklund-Einsiedler-Gorodnik)
There exists δ > 0 such that for any functions φ1, . . . , φk : X → R in a suitable Sobolev space and any g1, . . . , gk ∈ G,
\[
\int_X \phi_1(g_1 x)\cdots\phi_k(g_k x)\, dx
= \prod_{i=1}^{k} \int_X \phi_i\, dm
+ O\big(e^{-\delta N(g_1,\dots,g_k)}\, \|\phi_1\|_\ell \cdots \|\phi_k\|_\ell\big),
\]
where N(g1, . . . , gk) = min_{i≠j} d(gi, gj).

Borel-Wallach, Cowling, Howe-Moore: exponential 2-mixing.
Mozes: multiple mixing without a quantitative estimate.
Konstantoulas: an independent, different proof.
SLIDES 28–31 Ideas of the proof: Invariance

For (g1, . . . , gk), consider the probability measure η = η_{g1,...,gk} on X^k:
\[
\eta(\phi) = \int_X \phi(g_1 x, \dots, g_k x)\, dx.
\]
This measure is invariant under the subgroup
\[
D = \{(g_1 h g_1^{-1}, \dots, g_k h g_k^{-1}) : h \in G\}.
\]
Take v = (v1, . . . , vk) ∈ Lie(D) with nilpotent vi's such that ‖v1‖ ≥ . . . ≥ ‖vk‖ (after changing indices). For suitable vi's,
\[
\frac{\|v_1\|}{\|v_k\|} \approx e^{\,c \max_{i,i'} d(g_i, g_{i'})}
\]
with c > 0.
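The D-invariance of η is a direct computation from the G-invariance of m on X = G/Γ: for h ∈ G,

```latex
\[
\eta\big(\phi\circ(g_1 h g_1^{-1},\dots,g_k h g_k^{-1})\big)
= \int_X \phi(g_1 h x,\dots,g_k h x)\, dm(x)
= \int_X \phi(g_1 x,\dots,g_k x)\, dm(x)
= \eta(\phi),
\]
% using (g_i h g_i^{-1})\cdot g_i x = g_i h x and the substitution
% x \mapsto h x, which preserves m.
```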
SLIDES 32–38 Ideas of the proof: 2-mixing ⇒ k-mixing

Let v+ = (v1, . . . , vj, 0, . . . , 0) and v− = (0, . . . , 0, −v_{j+1}, . . . , −vk), and consider the averaging operators K+ and K−:
\[
K_\pm\phi(\bar x) = \int_0^1 \phi(\exp(t v_\pm)\,\bar x)\, dt.
\]
Let η+ and η− denote the projections of η to X^j and X^{k−j}. By induction, η+ ≈ m^j and η− ≈ m^{k−j}. The argument proceeds as follows:

1. η(φ) ≈ η(K−φ), since v_{j+1} is “small”;
2. η(K−φ) = η(K+φ), by invariance;
3. η(K+φ) ≈ (η+ ⊗ η−)(φ) ≈ m^k(φ), by (k − 1)-mixing.
SLIDES 39–42 Discrepancy

For a function φ : X^j → R and a probability measure ν, define the discrepancy of K+:
\[
D(\phi, \nu) := \int_{X^j} |K_+\phi - \nu(\phi)|^2 \, d\nu.
\]
To prove that η(K+φ) ≈ (η+ ⊗ η−)(φ), we show that:
- D(φ, m^j) ≈ 0: uses just 2-mixing.
- D(φ, η+) ≈ D(φ, m^j): uses the inductive assumption η+ ≈ m^j.
- |η(K+φ) − (η+ ⊗ η−)(φ)| is controlled by D(φ, η+): uses “interpolation” and Chebyshev-type arguments.
SLIDES 43–52 Estimating K+: D(φ, m^j) ≈ 0

Proposition. D(φ, m^j) ≪ ‖v_j‖^{−α} ‖φ‖²_ℓ with α > 0.

Proof: Suppose first that φ = φ1 ⊗ · · · ⊗ φj with ∫_X φi dm = 0. Then
\[
D(\phi, m^j) = \langle K_+\phi, K_+\phi\rangle
= \int_{[0,1]^2} \int_{X^j} \phi(\exp(s v_+)\bar x)\, \phi(\exp(t v_+)\bar x)\, d\bar x \, ds\, dt
= \int_{[0,1]^2} \prod_{i=1}^{j} \Big( \int_X \phi_i(\exp(s v_i) x_i)\, \phi_i(\exp(t v_i) x_i)\, dx_i \Big)\, ds\, dt.
\]
Using exponential 2-mixing,
\[
\int_X \phi_i(\exp(s v_i)x_i)\,\phi_i(\exp(t v_i)x_i)\, dx_i
\ll e^{-\alpha_1 d(\exp((t-s)v_i),\, e)}\, \|\phi_i\|_\ell^2
\ll \big((t-s)\|v_i\|\big)^{-\alpha_2}\, \|\phi_i\|_\ell^2.
\]
Hence, averaging over (s, t), we deduce that
\[
D(\phi, m^j) \ll \|v_j\|^{-\alpha_3}\, \|\phi_1\|_\ell^2 \cdots \|\phi_j\|_\ell^2
= \|v_j\|^{-\alpha_3}\, \|\phi\|_\ell^2.
\]
This implies that D(φ, m^j) ≪ ‖v_j‖^{−α3} ‖φ‖²_{ℓ′} for some ℓ′ > ℓ for general functions φ.
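The averaging over (s, t) in the final step can be made explicit; here is one way it might go, assuming (as we may) α2 < 1 and ‖v_j‖ ≥ 1, with exponents not meant to be sharp:

```latex
% Each correlation factor is also trivially \ll \|\phi_i\|_\ell^2, so, using
% \|v_1\| \ge \dots \ge \|v_j\| to keep only the weakest factor,
\[
D(\phi, m^j) \;\ll\; \Big(\prod_{i=1}^{j}\|\phi_i\|_\ell^2\Big)
\int_0^1\!\!\int_0^1 \min\big(1,\, ((t-s)\|v_j\|)^{-\alpha_2}\big)\, ds\, dt .
\]
% Splitting the square at |t - s| = \|v_j\|^{-1}, the inner integral is
% \ll \|v_j\|^{-1} + \|v_j\|^{-\alpha_2} \ll \|v_j\|^{-\alpha_2}
% (for \alpha_2 < 1 and \|v_j\| \ge 1), giving the stated bound with
% \alpha_3 = \alpha_2.
```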
SLIDES 53–57 Estimating K+: D(φ, η+) ≈ D(φ, m^j)

Proposition. Assuming (k − 1)-exponential mixing, for φ : X^j → R,
\[
|D(\phi, m^j) - D(\phi, \eta_+)| \ll e^{-\delta N(g_1,\dots,g_k)}\, \|v_1\|^{c}\, \|\phi\|_\ell^2
\]
for some c > 0.

Proof:
\[
|D(\phi, m^j) - D(\phi, \eta_+)|
= \Big| \int_{X^j} |K_+\phi - m^j(\phi)|^2\, dm^j - \int_{X^j} |K_+\phi - \eta_+(\phi)|^2\, d\eta_+ \Big|
\le \Big| \int_{X^j} (K_+\phi)^2\, d(m^j - \eta_+) \Big| + \cdots
\]
\[
\ll e^{-\delta N(g_1,\dots,g_k)} \big( \|(K_+\phi)^2\|_\ell + \cdots \big) \quad \text{by } j\text{-mixing}
\]
\[
\ll e^{-\delta N(g_1,\dots,g_k)}\, \|v_1\|^{c}\, \|\phi\|_{\ell'}^2
\]
with some ℓ′ > ℓ.
SLIDES 58–60 Estimating K+: η(K+φ) ≈ (η+ ⊗ η−)(φ)

Proposition. For a function φ : X^k → R,
\[
|\eta(K_+\phi) - (\eta_+ \otimes \eta_-)(\phi)| \ll D(\phi, \eta_+)^{\rho}\, \|\phi\|_\ell^{1-2\rho}
\]
with some ρ ∈ (0, 1).

Proof: For φ_y = φ(·, y), uniformly over y ∈ X^{k−j},
\[
\int_{X^j} |K_+\phi_y - \eta_+(\phi_y)|^2\, d\eta_+
\le D(\phi, \eta_+)\, \|\phi_y\|_\ell^2
\le D(\phi, \eta_+)\, \|\phi\|_{\ell'}^2
\]
for some ℓ′ > ℓ. Then, by the Chebyshev inequality, for y in an ε-net E in X^{k−j} and for x in a set of “large” measure in X^j,
\[
|K_+\phi_y(x) - \eta_+(\phi_y)| \ll |E| \cdot D(\phi, \eta_+)^{\rho}\, \|\phi\|_{\ell'}^{1-2\rho}.
\]
SLIDES 61–62 Estimating K+: η(K+φ) ≈ (η+ ⊗ η−)(φ)

Hence, for all y ∈ X^{k−j},
\[
|K_+\phi_y(x) - \eta_+(\phi_y)| \ll \varepsilon^{-s} \cdot D(\phi, \eta_+)^{\rho}\, \|\phi\|_{\ell'}^{1-2\rho} + \varepsilon\, \|\phi\|_{\ell'}.
\]
Finally, integrate over η . . .
SLIDES 63–64 Estimating K−

Proposition. For a function φ : X^k → R, sup |K−φ − φ| ≪ ‖v_{j+1}‖ ‖φ‖_ℓ.

Proof:
\[
K_-\phi(\bar x) = \int_0^1 \phi(\exp(t v_-)\,\bar x)\, dt
= \phi(\bar x) + O\Big( \sup_{t\in[0,1]} d(\exp(t v_-)\bar x,\, \bar x)\, \|\phi\|_\ell \Big)
= \phi(\bar x) + O\big( \|v_{j+1}\|\, \|\phi\|_\ell \big),
\]
where the last step uses ‖v_{j+1}‖ ≥ · · · ≥ ‖v_k‖.
SLIDES 65–69 Finishing the estimate

Combining the previous estimates,
\[
|\eta(\phi) - (\eta_+ \otimes \eta_-)(\phi)|
\le |\eta(\phi) - \eta(K_-\phi)| + |\eta(K_+\phi) - (\eta_+ \otimes \eta_-)(\phi)|
\]
\[
\ll \|v_{j+1}\|\, \|\phi\|_\ell + D(\phi, \eta_+)^{\rho}\, \|\phi\|_{\ell'}^{1-2\rho}
\]
\[
\ll \|v_{j+1}\|\, \|\phi\|_\ell + \big( D(\phi, m^j) + e^{-\delta N(g_1,\dots,g_k)}\|v_1\|^{c} \big)^{\rho}\, \|\phi\|_{\ell'}
\]
\[
\ll \|v_{j+1}\|\, \|\phi\|_\ell + \big( \|v_j\|^{-\alpha} + e^{-\delta N(g_1,\dots,g_k)}\|v_1\|^{c} \big)^{\rho}\, \|\phi\|_{\ell'}.
\]
Finally, we optimise in v1, . . . , vk.
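To indicate why the optimisation produces an exponential rate, here is a purely illustrative choice of parameters (the actual optimisation in the proof may differ): write N = N(g1, . . . , gk) and take ‖v_i‖ = e^{(a−bi)N} with b > 0 small and bj < a < b(j + 1). Then

```latex
\[
\|v_{j+1}\| = e^{(a-b(j+1))N},\qquad
\|v_j\|^{-\alpha} = e^{-\alpha(a-bj)N},\qquad
e^{-\delta N}\|v_1\|^{c} = e^{(c(a-b)-\delta)N},
\]
% and all three exponents are negative for b small enough: a < b(j+1) and
% a > bj handle the first two, while a - b < bj gives c(a-b) < \delta once
% b < \delta/(c\,j).  This yields a bound of the form e^{-\delta' N}\|\phi\|_{\ell'}
% for some \delta' > 0.
```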