

SLIDE 1

Nonconcentration, Lp-Improving Estimates, and Multilinear Kakeya

Philip T. Gressman

Department of Mathematics, University of Pennsylvania

13 May 2019, Madison Lectures in Fourier Analysis

Philip T. Gressman N-C, Lp , and M-K 0 / 23

SLIDE 2
  • 0. The Problem of Geometry in Fourier Analysis
  • There are a number of deeply geometric operators, integrals, etc., in harmonic analysis. The structure is usually defined by smooth spaces with measures and maps between those spaces.
  • With minimal structure, it’s often not even clear what the key quantities are or how to proceed.
  • Introducing artificial structures (e.g., coordinates) reduces to more familiar settings, but doing so breaks fundamental invariances of the problem and risks missing important features.
  • There is a third way: introduce artificial auxiliary structures and study the group action induced by transformations of these structures.


SLIDE 3

An Example: An n-dimensional vector space V is called a Peano space when it is equipped with a nontrivial alternating n-linear form [v1, . . . , vn]. It is natural to use bases with [v1, . . . , vn] = 1. Suppose U : V × V → R is a symmetric bilinear functional on a real Peano space V such that U(v, v) ≥ 0 for all v ∈ V. Is there an easy way to detect degeneracy or nondegeneracy of U?

  • Option 1: Do calculations invariant under natural choices:

det U := det[U(vi, vj)]_{i,j=1,...,n}.

  • Option 2: Do anything, then optimize over all choices. E.g.:

inf_{[v1,...,vn]=1} ( Σ_{j=1}^n |U(vj, vj)|^p )^{1/p}, p ∈ (0, ∞].

Theorem:

det U = inf_{[v1,...,vn]=1} ( (1/n) Σ_{j=1}^n |U(vj, vj)|^p )^{n/p}.

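The theorem can be sanity-checked numerically in its smallest case. A sketch (my own illustration, not from the talk): n = 2, p = 1, U = diag(2, 3), with a coarse grid over SL(2, R) standing in for the infimum over unimodular bases.

```python
# Sanity check of the theorem for n = 2, p = 1: for U positive definite,
# det U should equal the infimum of ((1/2)(U(v1,v1) + U(v2,v2)))^2 over
# bases v1, v2 with [v1, v2] = det[v1 v2] = 1, i.e. over SL(2, R).
import math

U = [[2.0, 0.0], [0.0, 3.0]]                      # det U = 6

def quad(v):                                       # U(v, v)
    return sum(U[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

best = float("inf")
# coarse grid over SL(2): B = rotation(phi) * diag(s, 1/s) * shear(r);
# the columns of B are the basis vectors v1, v2, and det B = 1
for a in range(40):
    phi = math.pi * a / 40
    co, si = math.cos(phi), math.sin(phi)
    for b in range(1, 100):
        s = 0.25 + 0.025 * b
        for c in range(-10, 11):
            r = 0.1 * c
            v1 = (co * s, si * s)
            v2 = (co * s * r - si / s, si * s * r + co / s)
            best = min(best, (0.5 * (quad(v1) + quad(v2))) ** 2)

det_U = U[0][0] * U[1][1] - U[0][1] * U[1][0]
print(best, det_U)   # the grid minimum approaches det U from above
```

The grid never dips below det U, matching the theorem's claim that the optimized quantity recovers the invariant determinant exactly.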

SLIDE 4
  • 1. Brascamp-Lieb Constant

H, Hj: Hilbert(?) spaces of dimension d, dj, j = 1, . . . , m; πj: surjective linear maps H → Hj; θj: constants in [0, 1].

Brascamp-Lieb Inequality:

RBL(π, θ) ∫_H ∏_{j=1}^m (fj ◦ πj)^{θj} ≤ ∏_{j=1}^m ( ∫_{Hj} fj )^{θj}

Bennett, Carbery, Christ, and Tao (2005): RBL(π, θ) > 0 if and only if dim V ≤ Σ_{j=1}^m θj dim πj(V) for all V ⊂ H, with equality when V = H.

Bennett, Bez, Cowling, Flock (2016): Fixing dimensions and θ, RBL(π, θ) is continuous in π.


SLIDE 5

RBL(π, θ) ∫_H ∏_{j=1}^m (fj ◦ πj)^{θj} ≤ ∏_{j=1}^m ( ∫_{Hj} fj )^{θj}

Lieb (1990): Gaussians extremize the inequality:

RBL(π, θ) = inf_{Aj∈GL(Hj), j=1,...,m} [ det( Σ_{j=1}^m θj πj* Aj* Aj πj ) ]^{1/2} / ∏_{j=1}^m |det_{Hj} Aj|^{θj}

Change the determinant to an infimum of traces:

[RBL(π, θ)]^{2/d} = inf_{Aj∈GL(Hj), A∈SL(H)} d^{−1} tr( Σ_{j=1}^m θj A* πj* Aj* Aj πj A ) / ∏_{j=1}^m |det_{Hj} Aj|^{2θj/d}

= inf_{Aj∈SL(Hj), A∈SL(H), tj∈(0,∞)} ( ∏_{j=1}^m tj^{−2θj/d} ) d^{−1} Σ_{j=1}^m θj tj^{2/dj} |||Aj πj A|||²,

where ||| · ||| is the Hilbert-Schmidt (sum of squares) matrix norm.

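Lieb's formula can be checked by hand in the simplest example. A sketch (my own choice of setting, not from the talk): H = R², H1 = H2 = R, π1, π2 the coordinate projections, θ1 = θ2 = 1, where the inequality is just Fubini and the optimization ratio is identically 1.

```python
# Lieb's formula in the simplest (Fubini / Loomis-Whitney) case in R^2:
# H = R^2, H_j = R, pi_j = j-th coordinate projection, theta_j = 1.  With
# A_j = a_j in GL(R), the sum theta_j pi_j* a_j^2 pi_j is diag(a1^2, a2^2),
# so the Lieb ratio det(...)^(1/2) / (|a1| |a2|) is identically 1 and RBL = 1.
import math

def ratio(a1, a2):
    det = (a1 * a1) * (a2 * a2)          # det diag(a1^2, a2^2)
    return math.sqrt(det) / (abs(a1) * abs(a2))

vals = [ratio(0.1 * i, 0.1 * j) for i in range(1, 50) for j in range(1, 50)]
R_BL = min(vals)
print(R_BL)   # ~ 1.0
```

In this degenerate-simple case every Gaussian is an extremizer; nondiagonal examples are where the infimum genuinely selects a preferred Gaussian.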

SLIDE 6

Keep Going: Use the AM-GM Inequality again to eliminate the tj:

[RBL(π, θ)]^{2/d} = inf_{Aj∈SL(Hj), A∈SL(H), tj∈(0,∞)} ( ∏_{j=1}^m tj^{−2θj/d} ) Σ_{j=1}^m (θj dj/d) · tj^{2/dj} (1/dj) |||Aj πj A|||²

= ( ∏_{j=1}^m dj^{−θj dj/d} ) inf_{Aj∈SL(Hj), A∈SL(H)} ∏_{j=1}^m |||Aj πj A|||^{2θj dj/d}

(the weights θj dj/d sum to 1 by the scaling condition, and AM-GM is sharp for an appropriate choice of the tj, which also cancels the tj-prefactor). Assuming rational θj, there exist integers N, Nj such that θj dj/d = Nj/N, j = 1, . . . , m, and

[RBL(π, θ)]^{N/d} = ( ∏_{j=1}^m dj^{−Nj/2} ) inf_{Aj∈SL(Hj), A∈SL(H)} ∏_{j=1}^m |||Aj πj A|||^{Nj}.

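The AM-GM elimination of the tj can be tested numerically. A sketch (the specific values of θj, dj, and the norms Xj below are arbitrary test data of my own, standing in for |||Aj πj A|||):

```python
# Numerical check of the AM-GM step:
#   inf_{t_j > 0} (prod_j t_j^(-2 th_j/d)) * (1/d) * sum_j th_j t_j^(2/d_j) X_j^2
#     = prod_j (X_j^2 / d_j)^(th_j d_j / d),
# under the scaling condition sum_j th_j d_j = d.
import math

theta = [1.0, 1.0]
dims  = [1, 2]                          # d_j
X     = [1.5, 0.7]                      # stand-ins for |||A_j pi_j A|||
d     = sum(th * dj for th, dj in zip(theta, dims))   # = 3

def F(ts):
    pref = math.prod(t ** (-2 * th / d) for t, th in zip(ts, theta))
    s = sum(th * t ** (2 / dj) * x * x
            for th, t, dj, x in zip(theta, ts, dims, X))
    return pref * s / d

best = min(F((0.02 * i, 0.02 * j))
           for i in range(1, 250) for j in range(1, 250))
closed = math.prod((x * x / dj) ** (th * dj / d)
                   for x, dj, th in zip(X, dims, theta))
print(best, closed)   # the grid minimum should match the closed form
```

The grid minimum sits just above the closed form, as AM-GM guarantees, and approaches it along the ray of tj making all the weighted terms equal.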

SLIDE 7

For integers N = N1 + · · · + Nm,

[RBL(π, N)]^{N/d} ∥ ∏_{j=1}^m fj ◦ πj ∥_{L^{d/N}(H)} ≤ ∏_{j=1}^m ||fj||_{L^{dj/Nj}(Hj)}.

Define ΠN : H^N × H1^{N1} × · · · × Hm^{Nm} → R by the formula

ΠN(x^{(1)}, . . . , x^{(N)}, x1^{(1)}, . . . , x1^{(N1)}, . . . , xm^{(Nm)}) := ⟨π1 x^{(1)}, x1^{(1)}⟩_{H1} · · · ⟨π1 x^{(N1)}, x1^{(N1)}⟩_{H1} · · · ⟨πm x^{(N)}, xm^{(Nm)}⟩_{Hm},

and let G := SL(H) × SL(H1) × · · · × SL(Hm). Then

[RBL(π, N)]^{N/d} = ( ∏_{j=1}^m dj^{−Nj/2} ) inf_{G∈G} |||ρG ΠN|||,

where ρG is the action of G on H^N × H1^{N1} × · · · × Hm^{Nm} and ||| · ||| is Hilbert-Schmidt.

A Good Question: Why did we do this lovely calculation?


SLIDE 8
  • 2. Geometric Nonconcentration Inequalities

Suppose Φ is some polynomial function from (R^n)^k into R^m; |Φ(x1, . . . , xk)| measures nondegeneracy of k-point configurations.

Example: if ϕ(x) := (x^α)_{|α|≤d}, then Φ(x1, . . . , xN) := det(ϕ(x1), . . . , ϕ(xN)) = 0 iff x1, . . . , xN lie on some real algebraic variety of degree ≤ d.

Nonconcentration Inequalities: For a given Φ and s, find the “best possible” measure µ such that

S(E) := ess sup_{(x1,...,xk)∈E^k} |Φ(x1, . . . , xk)| ≳ (µ(E))^{1/s},

I(E) := ∫_{E^k} |Φ(x1, . . . , xk)| dµ(x1) · · · dµ(xk) ≳ (µ(E))^{k + 1/s}.

We call these inequalities “Nonconcentration Inequalities” because they dictate that product sets E^k cannot be degenerate (as measured by Φ) when µ(E) > 0.

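The determinant example is easy to see in its smallest instance. A sketch (my own choice of parameters): in R² with ϕ(x, y) = (1, x, y), the determinant Φ vanishes exactly when three points lie on a line, i.e. on a degree-1 variety.

```python
# Smallest instance of Phi(x_1,...,x_N) = det(phi(x_1),...,phi(x_N)):
# in R^2 with phi(x, y) = (1, x, y)  (monomials of degree <= 1),
# Phi vanishes exactly when the three points are collinear.

def Phi(p1, p2, p3):
    rows = [(1.0, p[0], p[1]) for p in (p1, p2, p3)]
    (a, b, c), (d, e, f), (g, h, i) = rows
    # 3x3 determinant by cofactor expansion along the first row
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

collinear = Phi((0, 0), (1, 1), (2, 2))        # on the line y = x
generic   = Phi((0, 0), (1, 0), (0, 1))        # a genuine triangle
print(collinear, generic)   # 0.0 1.0
```

Here Φ is twice the signed area of the triangle, so |Φ| genuinely quantifies how far the configuration is from the degenerate (collinear) locus.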

SLIDE 9

Simple Observations

  • The inequality

S(E) := ess sup_{x1,...,xk∈E} |Φ(x1, . . . , xk)| ≳ (µ(E))^{1/s}

is strictly easier to prove than

I(E) := ∫_{E^k} |Φ(x1, . . . , xk)| dµ(x1) · · · dµ(xk) ≳ [µ(E)]^{k + 1/s}.

  • Looking at small sets suggests that the diagonal x1 = · · · = xk = x is the important part; presumably Φ and its derivatives through order Q − 1 vanish there for some Q ≥ 1.
  • By a simple scaling argument, there is a particularly important exponent s, namely 1/s = Q/n, i.e., s = n/Q. One doesn’t expect either inequality for S(E) or I(E) to hold for “nice” µ with larger values of s.


SLIDE 10

Basic Properties of Nonconcentration Functionals

Theorem 1 (G. 2018): ∀F S(F) ≥ c[µ(F)]^{1/s} ⇔ ∀F I(F) ≥ c′[µ(F)]^{k + 1/s}

Theorem 2:

  • For s > n/Q, only the zero measure satisfies the inequality.
  • For s = n/Q, there is a “best possible” choice of µ which comes from a generalization of Hausdorff measure. It is possible to estimate the density. (Think of relating Hausdorff and Lebesgue measures on a curve.)

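For intuition about the S ⇔ I equivalence, the one-dimensional toy case Φ(x1, x2) = x2 − x1 can be tested numerically; the sample sets and the interval constant 1/3 below are my own illustration, not from the talk.

```python
# Toy case: Phi(x1, x2) = x2 - x1 on R, k = 2, Q = 1, n = 1, s = n/Q = 1,
# mu = Lebesgue measure.  Then S(E) >= mu(E), and I(E) >= c' mu(E)^3
# (an interval of length L gives I = L^3/3, suggesting c' = 1/3).
def check(intervals, steps=800):
    # E is a finite union of disjoint intervals [a, b]
    mu = sum(b - a for a, b in intervals)
    S = max(b for _, b in intervals) - min(a for a, _ in intervals)
    pts = []                                  # midpoint quadrature nodes on E
    for a, b in intervals:
        n = max(1, round((b - a) / mu * steps))
        pts += [a + (b - a) * (i + 0.5) / n for i in range(n)]
    w = mu / len(pts)
    I = sum(abs(x - y) for x in pts for y in pts) * w * w
    return S, I, mu

for E in ([(0.0, 1.0)], [(0.0, 0.3), (2.0, 2.7)], [(0.0, 0.1), (5.0, 5.1)]):
    S, I, mu = check(E)
    print(S >= mu, I >= mu ** 3 / 3 * 0.99)   # True True for each test set
```

Spreading a set of fixed measure only increases both S(E) and I(E), which is exactly the nonconcentration heuristic: mass cannot hide near the diagonal.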

SLIDE 11

Basic Properties of Nonconcentration Functionals

Theorem 3: Frostman’s Lemma. Let the weighted Φ-Hausdorff measure of dimension s, H^s_Φ(E), be given by

lim_{δ→0+} inf { Σ_i ci [S(Ei)]^s : χ_E ≤ Σ_i ci χ_{Ei}, ci ≥ 0, diam Ei ≤ δ }.

Suppose E is compact. Then H^s_Φ(E) > 0 if and only if there exists a nonzero, nonnegative Borel measure µ supported on E such that

I(F) ≳ [µ(F)]^{k + 1/s} and S(F) ≳ [µ(F)]^{1/s}

for all Borel sets F.

  • A Good Question: What does this measure “measure”?
  • Note: For fixed n there are many possible interesting values of s because one can restrict to polynomial graphs in R^n.


SLIDE 12

Quick Proof of Theorem 3

  • The proof of the Frostman Lemma (in terms of weighted Φ-Hausdorff measure) follows essentially identically the proof (due to Howroyd) found in Mattila’s book, which uses Hahn-Banach.
  • Start with a homogeneous subadditive functional

pδ(f) := inf { Σ_i ci (S(Ei))^s : f ≤ Σ_i ci χ_{Ei}, ci ≥ 0, diam(Ei) ≤ δ }.

  • Extend the functional which equals pδ(χ_E) > 0 on χ_E; it has to be a positive linear functional on continuous functions.
  • Riesz Representation gives a measure which you (fix and) check works out.
  • What about the non-weighted generalization of Hausdorff measure? In the classical case the two are comparable in every dimension (see Federer’s book), but those arguments break down here.
  • In this specific case, comparability for dimension n/Q follows manually.


SLIDE 13

Upper Bounds on µ

S(E) := ess sup_{x1,...,xk∈E} |Φ(x1, . . . , xk)| ≳ (µ(E))^{Q/n}

puts obvious constraints on the size of µ. Assume µ is absolutely continuous with respect to Lebesgue measure. Pick a point x ∈ R^n, and let E = Br(x) as r → 0+. Recall we assume derivatives of order < Q of Φ vanish on the diagonal. Let P be the degree-Q Taylor polynomial of Φ at (x, . . . , x). Then

(dµ/dx)^{Q/n} |B1(0)|^{Q/n} ≲ sup_{||x1||,...,||xk||≤1} |P(x1, . . . , xk)|.

We could use any coordinates with the same volume element:

(dµ/dx)^{Q/n} |B1(0)|^{Q/n} ≲ inf_{G∈SL(n,R)} |||ρG P|||,

where ρG is the natural action of SL(n, R) on polynomials and ||| · ||| is the sup norm on (B1(0))^k.


SLIDE 14
  • 3. Geometric Invariant Theory

General Algebraic Setup

  • V: finite-dimensional real vector space
  • G: real reductive algebraic group
  • ρ: polynomial G-representation on V
  • ||| · |||: norm on V, ρ-invariant under a maximal compact K < G.

Main Question: Given v ∈ V, how does one understand and compute

inf_{G∈G} |||ρG v||| ?

Key Idea: Study ρ-invariant polynomials on V.


SLIDE 15

SL(d, R) Invariant Polynomials

Suppose M ∈ R^{n×n}. The Cayley Ω operator is defined as

Ω_M := det [ ∂/∂M_{ij} ]_{i,j=1,...,n}.

Basic Features:

  • Ω_M [f(AM)] = (det A) [(Ωf)(AM)].
  • Ω_M (det M)^k = c_{n,k} (det M)^{k−1}, with c_{n,k} > 0 when k > 0.

Reynolds Operator: When ρ is a polynomial representation, we can explicitly write a projection operator from polynomials on V to SL(n, R)-invariant polynomials. For homogeneous f of fixed degree:

f ↦ c Ω^k_M [f ◦ ρM].

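The identity Ω_M(det M)^k = c_{n,k}(det M)^{k−1} can be verified exactly in the first nontrivial case (a sketch with n = 2, k = 2, where the constant works out to 6; the tiny exact polynomial arithmetic is my own scaffolding):

```python
# Check of Omega_M (det M)^k = c_{n,k} (det M)^(k-1) for n = 2, k = 2,
# with M = [[a, b], [c, d]]: Omega = d/da d/dd - d/db d/dc, and one finds
# Omega (ad - bc)^2 = 6 (ad - bc).  Polynomials are dicts {exponents: coeff}.
from collections import defaultdict

def mul(p, q):
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(x + y for x, y in zip(e1, e2))] += c1 * c2
    return dict(r)

def diff(p, i):                        # exact partial derivative in variable i
    r = {}
    for e, c in p.items():
        if e[i] > 0:
            e2 = list(e); e2[i] -= 1
            r[tuple(e2)] = r.get(tuple(e2), 0) + c * e[i]
    return r

A, B, C, D = range(4)                  # exponent slots for (a, b, c, d)
detM = {(1, 0, 0, 1): 1, (0, 1, 1, 0): -1}        # ad - bc
p = mul(detM, detM)                                # (det M)^2
dAD = diff(diff(p, A), D)
dBC = diff(diff(p, B), C)
omega_p = {e: dAD.get(e, 0) - dBC.get(e, 0) for e in set(dAD) | set(dBC)}
six_det = {e: 6 * c for e, c in detM.items()}
print(omega_p == six_det)    # True
```

The same dictionary representation extends to any n, though the number of monomials grows quickly; the point here is only that Ω lowers the power of det M by exactly one, which is what drives the Reynolds projection.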

SLIDE 16

Thm (Hilbert): The algebra of G-invariant polynomials is finitely generated.

Pf: Let I be the ideal generated by nonconstant G-invariant homogeneous polynomials; there must be homogeneous G-invariant f1, . . . , fN generating I (unbounded N would violate Noetherianity). If f is invariant, f = ϕ1 f1 + · · · + ϕN fN. Apply the Reynolds operator: f = (Rϕ1) f1 + · · · + (RϕN) fN, where the Rϕj are invariant and have lower degrees than f.

Computing the Infimum: If f1, . . . , fN are homogeneous and generate the algebra, then

inf_{G∈G} |||ρG v||| ≈ Σ_{i=1}^N |fi(v)|^{1/deg fi}.

Proof: ≳ is trivial; ≲ is a compactness argument. Key point: fi(v) = 0 for all i iff the infimum is 0 (aka the Hilbert-Mumford Criterion).

A Good Question: Why do I care so much about the algebra?

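The Hilbert-Mumford criterion can be watched in action in the classical example (my own illustration, not from the talk): binary quadratic forms under SL(2, R), where the invariant algebra is generated by the discriminant b² − 4ac and the null cone is exactly the set of forms with zero discriminant.

```python
# Hilbert-Mumford illustration: binary quadratics f = a x^2 + b x y + c y^2
# under g in SL(2, R) acting by substitution.  The invariant algebra is
# generated by disc = b^2 - 4ac, so inf_g ||g . f|| = 0 iff disc(f) = 0.
import math

def act(g, form):
    # coefficients of f(p x + q y, r x + s y) for f = a x^2 + b x y + c y^2
    (p, q), (r, s) = g
    a, b, c = form
    return (a * p * p + b * p * r + c * r * r,
            2 * a * p * q + b * (p * s + q * r) + 2 * c * r * s,
            a * q * q + b * q * s + c * s * s)

def norm(form):
    return math.sqrt(sum(x * x for x in form))

def inf_over_grid(form):
    best = float("inf")
    for i in range(40):                      # rotation angle
        for j in range(1, 80):               # diagonal stretch
            phi, t = math.pi * i / 40, 0.05 * j
            co, si = math.cos(phi), math.sin(phi)
            g = ((co * t, -si / t), (si * t, co / t))   # rotation * diag(t, 1/t)
            best = min(best, norm(act(g, form)))
    return best

null_cone = inf_over_grid((1.0, 0.0, 0.0))   # x^2: disc = 0, orbit closure hits 0
stable    = inf_over_grid((0.0, 1.0, 0.0))   # x y: disc = 1, infimum stays positive
print(null_cone, stable)
```

For x² the stretch diag(t, 1/t) drives the norm to zero like t², while for xy the invariant discriminant pins the orbit away from the origin.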

SLIDE 17

The following are equivalent:

  • RBL(π, N) > 0
  • For all V ⊂ H, dim V ≤ Σ_{j=1}^m (Nj d)/(N dj) · dim πj(V)
  • There exists an SL(H) × SL(H1) × · · · × SL(Hm)-invariant polynomial f with f(0) = 0 and f(ΠN) ≠ 0.

  • It is easy to prove that all SL(d, R)-invariant polynomials of M-linear forms must be expressible as d-linear contractions:

A_{i1···id} ↦ Σ_{σ∈Sd} (−1)^σ A_{σ(1)···σ(d)}.

In our case, these are known as “dotted bracket polynomials.”

  • It is harder to know when two such polynomials are independent and when you can stop looking.
  • To do analysis, it is sometimes hard to find easily-computable polynomials, and sometimes easier to work with the infimum.

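The d = 2 case of the contraction claim is easy to verify (a sketch, my own illustration): for a bilinear form A on R², the alternating contraction is A₁₂ − A₂₁, and it survives pullback by any g ∈ SL(2, R) because gᵀJg = (det g)J.

```python
# For a bilinear form A on R^2 (acted on by A -> g^T A g), the alternating
# contraction  sum_{sigma in S_2} sgn(sigma) A_{sigma(1) sigma(2)} = A_12 - A_21
# is SL(2, R)-invariant: the antisymmetric part of A transforms by det g = 1.
import random

def pullback(g, A):                  # entries of g^T A g
    return [[sum(g[k][i] * A[k][l] * g[l][j]
                 for k in range(2) for l in range(2))
             for j in range(2)] for i in range(2)]

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
# random g in SL(2, R): fix three entries, solve for the fourth
a, b, c = random.uniform(0.5, 2), random.uniform(-1, 1), random.uniform(-1, 1)
g = [[a, b], [c, (1 + b * c) / a]]

before = A[0][1] - A[1][0]
after_ = pullback(g, A)
print(abs((after_[0][1] - after_[1][0]) - before) < 1e-12)   # True
```

For higher d the same mechanism produces the bracket polynomials: full antisymmetrization contracts against the volume form, which SL leaves fixed.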

SLIDE 18
  • 4. Multilinear Kakeya and Lp-Improving Estimates

Consider a geometric averaging operator which integrates functions on R^n over k-dimensional algebraic submanifolds:

Tf(x) := ∫_{xΣ} f dσ.

Let ρ(x, y) = 0 be the incidence relation.

Theorem: For any nonnegative continuous functions f1, . . . , fm on R^n,

∫_{R^n} ( ∫ · · · ∫ [RBL(Dxρ)]^{m(n−k)/n} ∏_{j=1}^m fj(yj) dσ(y1) · · · dσ(ym) )^{n/(m(n−k))} dx ≲ ∏_{j=1}^m ||fj||_{L1(R^n)}^{n/(m(n−k))}.

This is simply a continuous version of Zorin-Kranich’s Kakeya-(Rogers)-Brascamp-Lieb inequality.


SLIDE 19

A machine to prove Lp-improving estimates I

1 Weighted Kakeya-Brascamp-Lieb is an inequality which relies on transversality of cotangent spaces. The Brascamp-Lieb weight compensates for the lack of transversality on the diagonal.

2 The quantity inside weighted Kakeya-Brascamp-Lieb is a nonconcentration quantity. Precisely, pick m points y1, . . . , ym in xΣ (the submanifold associated to x) and set

Φ(y1, . . . , ym) := (RBL(Dxρ(x, y1), . . . , Dxρ(x, ym)))^{m(n−k)/n}.

3 Curvature = Infinitesimal Transversality of Cotangent Spaces

4 Extracting curvature effects reduces to proving a nonconcentration inequality.

5 Exploit that the Brascamp-Lieb weight is effectively a polynomial in Dxρ(x, yi).


SLIDE 20

A machine to prove Lp-improving estimates II

Benefits:

  • “Good Transversality” and “Good Curvature” mean some polynomial is nonzero. Consequently, valid proofs of this sort for example operators will automatically remain valid for the right kind of small algebraic perturbations.
  • This way of packaging things avoids some seemingly very difficult challenges posed by the method of inflation. For example, there are no longer any arithmetic constraints on dimension and codimension.

Challenges:

  • There is potential to prove a very general weighted Lp-improving inequality with this machinery, but there are a number of additional obstacles to overcome.
  • Comparing to Tao-Wright, it seems that the story is not yet finished for multilinear Kakeya?

A Good Question: Can this be worked out in any concrete cases?


SLIDE 21

A machine to prove Lp-improving estimates III

  • Example: convolution with measures on the surface

( t1, . . . , tk, ( Σ_{j=1}^k λij tj² )_{i=1}^{n−k} ),

where k ≥ n/2 and, in the matrix λij, all (n − k) × (n − k) minors of cyclically adjacent columns are nondegenerate.

  • Identify a workable degree of multilinearity and a workable transversality polynomial. In this case, m = n copies of the submanifold works.
  • Replace the Brascamp-Lieb weight with this polynomial.
  • To prove the nonconcentration inequality, you must bound transversality below in the small-ball limit in an effectively arbitrary coordinate system.
  • The calculation still has high algebraic complexity despite the initial reduction.


SLIDE 22

A nice invariant polynomial

det [ B1 B2 B2 B3 B3 B4 B5 B5 . . . Bk Bk+1 Bk+2 Bk+3 . . . Bn
      A1 A2 A2 A3 A3 A4 A5 A5 . . . Ak Ak+1 Ak+2 Ak+3 . . . An ]


SLIDE 23

Φ := det of the array on the previous slide.

  • Each of B1, . . . , Bk needs n − k derivatives (let the derivatives act on columns; undifferentiated columns will be zero after column operations).
  • In our example, the various t-coordinate functions all reside in their own rows. Differentiate with respect to the variables that cross the diagonal and argue that lower-priority derivatives must always be zero (so the result is coordinate independent).
  • Conclusions (N.B. N = m = n; Φ is degree n − k in ΠN):

[RBL(Dxρ)]^{n−k} ≈ inf_G |||ρG ΠN|||^{n−k} ≳ |Φ(t^{(1)}, . . . , t^{(n)})|,

sup_{t^{(1)},...,t^{(n)} ∈ xΣ∩E} |Φ(t^{(1)}, . . . , t^{(n)})| ≳ |xΣ ∩ E|^{n−k}.

  • The convolution satisfies ||TχE||_{(2n−k)/(n−k)} ≲ |E|^{n/(2n−k)}.


SLIDE 24

Where do things stand?

  • Arbitrary dimension and codimension: Yes?
  • Weighted inequalities: Some, but likely not all.
  • But even for curves, do we fully understand the implication relationships between weighted estimates?
  • Kakeya-fication of Brascamp-Lieb is only the first step?
  • Better understanding of the algebra and geometry is needed.
  • There exists a contraction-type formula using Ω to compute all invariants of fixed degree.
  • GIT people never tried to actually compute the infimum.
  • Is there a nice explicit formula for the Brascamp-Lieb constant?
