A CLT for Wishart Tensors

Dan Mikulincer

Weizmann Institute of Science

Wishart Tensors

Let $\{X_i\}_{i=1}^d$ be i.i.d. copies of an isotropic random vector $X \sim \mu$ in $\mathbb{R}^n$. Denote by $W^p_{n,d}(\mu)$ the law of

$$\frac{1}{\sqrt{d}} \sum_{i=1}^d \left( X_i^{\otimes p} - \mathbb{E}\left[ X_i^{\otimes p} \right] \right).$$

We are interested in the behavior as $d \to \infty$. Specifically, when is it true that $W^p_{n,d}(\mu)$ is approximately Gaussian?
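As a quick numerical illustration (not from the slides): a minimal sketch that samples once from $W^p_{n,d}(\mu)$ for the illustrative choice $\mu = \mathcal{N}(0, \mathrm{Id})$. The function names are mine, and $\mathbb{E}\left[X^{\otimes p}\right]$ is estimated by Monte Carlo for simplicity (for $p = 2$ and this $\mu$ it is exactly the flattened identity).

```python
import numpy as np

def tensor_powers(X, p):
    """Row-wise p-fold tensor powers, flattened: (d, n) -> (d, n**p)."""
    T = X
    for _ in range(p - 1):
        T = np.einsum('di,dj->dij', T, X).reshape(X.shape[0], -1)
    return T

def sample_wishart_tensor(n, d, p, rng, n_mean=100_000):
    """One draw from W^p_{n,d}(mu), with mu = N(0, Id) as an illustrative
    isotropic measure; E[X^{(x)p}] is approximated by an independent sample."""
    X = rng.standard_normal((d, n))                # rows are i.i.d. copies of X
    mean_T = tensor_powers(rng.standard_normal((n_mean, n)), p).mean(axis=0)
    return (tensor_powers(X, p) - mean_T).sum(axis=0) / np.sqrt(d)

rng = np.random.default_rng(0)
W = sample_wishart_tensor(n=5, d=20_000, p=2, rng=rng)  # should look Gaussian: n^3 << d
```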


Technicalities

$W^p_{n,d}(\mu)$ is a measure on the tensor space $(\mathbb{R}^n)^{\otimes p}$, which we identify with $\mathbb{R}^{n^p}$ through the basis $\{e_{i_1} \otimes \cdots \otimes e_{i_p} \mid 1 \le i_1, \ldots, i_p \le n\}$. For simplicity we will focus on the subspace of 'principal' tensors, with basis $\{e_{i_1} \otimes \cdots \otimes e_{i_p} \mid 1 \le i_1 < \cdots < i_p \le n\}$. The projection of $W^p_{n,d}(\mu)$ onto this subspace will be denoted by $\widetilde{W}^p_{n,d}(\mu)$.
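A small sketch of this identification and projection (my own illustration, not from the slides): in 0-based indexing, the flattened coordinate of $e_{i_1} \otimes \cdots \otimes e_{i_p}$ is $\sum_k i_k\, n^{p-1-k}$, and the principal coordinates are those with strictly increasing indices.

```python
import numpy as np
from itertools import combinations

def principal_projection(W_full, n, p):
    """Project a flattened tensor in (R^n)^{(x)p} ~ R^{n^p} onto the 'principal'
    coordinates e_{i1} (x) ... (x) e_{ip} with i1 < ... < ip (0-based indices)."""
    idx = [sum(i * n ** (p - 1 - k) for k, i in enumerate(c))
           for c in combinations(range(n), p)]
    return np.asarray(W_full)[idx]
```

For $p = 2$ this picks out exactly the strictly upper triangular entries of an $n \times n$ matrix stored row by row.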


Wishart Matrices

When $p = 2$ and $X \sim \mu$ is isotropic, $W^2_{n,d}(\mu)$ can be realized as the law of

$$\frac{\mathbb{X}\mathbb{X}^T - d \cdot \mathrm{Id}}{\sqrt{d}}.$$

Here, $\mathbb{X}$ is an $n \times d$ matrix whose columns are i.i.d. copies of $X$. In this case, $\widetilde{W}^2_{n,d}(\mu)$ is the law of the upper triangular part.
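A minimal sketch of this realization for $\mu = \gamma$ (illustrative; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 100_000
X = rng.standard_normal((n, d))              # columns are i.i.d. copies of X ~ N(0, Id)
W2 = (X @ X.T - d * np.eye(n)) / np.sqrt(d)  # a draw from W^2_{n,d}(gamma)
W2_upper = W2[np.triu_indices(n, k=1)]       # strictly upper triangular part
```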

Some Observations

Let us restrict our attention to the case $p = 2$.

• For fixed $n$, by the central limit theorem, $W^2_{n,d}(\mu) \to \mathcal{N}(0, \Sigma)$ as $d \to \infty$.
• If $n = d$, then the spectral measure of $\mathbb{X}\mathbb{X}^T$ converges to the Marchenko-Pastur distribution. In particular, $W^2_{n,d}(\mu)$ is not Gaussian.

Question: How should $n$ depend on $d$ so that $W^p_{n,d}(\mu)$ is approximately Gaussian?

Random Geometric Graphs

From now on, let $\gamma$ stand for the standard Gaussian, in different dimensions. In (Bubeck, Ding, Eldan, Rácz '15) and independently in (Jiang, Li '15) it was shown:

• If $n^3/d \to 0$, then $\mathrm{TV}\left( W^2_{n,d}(\gamma), \gamma \right) \to 0$.

This is tight, in the sense:

• If $n^3/d \to \infty$, then $\mathrm{TV}\left( W^2_{n,d}(\gamma), \gamma \right) \to 1$.

(Rácz, Richey '16) shows that the phase transition is smooth.

Extensions

(Bubeck, Ganguly '15) extended the result to any log-concave product measure. That is, the $X_{i,j}$ are i.i.d. with law $e^{-\varphi(x)}dx$ for some convex $\varphi$.

• The original motivation came from random geometric graphs.
• (Fang, Koike '20) removed the log-concavity assumption.

Extensions

(Nourdin, Zheng '18) gave the following results, answering questions raised in (Bubeck, Ganguly '15):

• If the rows of $\mathbb{X}$ are i.i.d. $\mathcal{N}(0, \Sigma)$ for some positive definite $\Sigma$, then $W_1\left( \widetilde{W}^2_{n,d}, \gamma \right) \lesssim \sqrt{\frac{n^3}{d}}$. (See also (Eldan, M. '16).)

• $W_1\left( \widetilde{W}^p_{n,d}(\gamma), \gamma \right) \lesssim \sqrt{\frac{n^{2p-1}}{d}}$.

Main Result

Today:

Theorem. If $\mu$ is a measure on $\mathbb{R}^n$ which is uniformly log-concave and unconditional, then

$$\mathrm{dist}\left( \widetilde{W}^p_{n,d}(\mu), \gamma \right) \lesssim \sqrt{\frac{n^{2p-1}}{d}}.$$

• Here dist stands for some notion of distance, to be introduced soon, but it could be replaced with $W_2$.
• The assumptions of uniform log-concavity and unconditionality may be relaxed.
• The result also holds for a large class of product measures.

The Challenge

By considering $\frac{1}{\sqrt{d}} \sum_{i=1}^d \left( X_i^{\otimes p} - \mathbb{E}\left[ X_i^{\otimes p} \right] \right)$, one may hope to be able to apply an estimate of the high-dimensional central limit theorem. Optimistically, such estimates give:

$$\mathrm{dist}\left( \widetilde{W}^p_{n,d}(\mu), \gamma \right) \le \frac{\mathbb{E}\left[ \left\| X^{\otimes p} \right\|^3 \right]}{\sqrt{d}}.$$

Thus, to obtain optimal convergence rates, we need to exploit the low-dimensional structure of $\widetilde{W}^p_{n,d}(\mu)$.

Stein’s Method

Basic observation: if $G \sim \gamma$ on $\mathbb{R}^n$, then for any smooth test function $f : \mathbb{R}^n \to \mathbb{R}^n$,

$$\mathbb{E}\left[ \langle G, f(G) \rangle \right] = \mathbb{E}\left[ \mathrm{div} f(G) \right].$$

Moreover, the Gaussian is the only measure which satisfies this relation.

Stein's idea: $\mathbb{E}\left[ \langle X, f(X) \rangle \right] \simeq \mathbb{E}\left[ \mathrm{div} f(X) \right] \implies X \simeq G$.
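A quick Monte Carlo sanity check of the integration-by-parts identity above (my own illustration; the test function is an arbitrary smooth choice, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((1_000_000, 2))
f = np.stack([np.sin(G[:, 0]), G[:, 0] * G[:, 1]], axis=1)  # f(x) = (sin x1, x1 * x2)
div_f = np.cos(G[:, 0]) + G[:, 0]                           # df1/dx1 + df2/dx2
print((G * f).sum(axis=1).mean(), div_f.mean())             # both ~ e^{-1/2} ~ 0.6065
```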

Stein Kernels

A Stein kernel of $X \sim \mu$ is a matrix-valued map $\tau : \mathbb{R}^n \to M_n(\mathbb{R})$ such that

$$\mathbb{E}\left[ \langle X, f(X) \rangle \right] = \mathbb{E}\left[ \langle \tau(X), Df(X) \rangle_{HS} \right].$$

We have that $\tau \equiv \mathrm{Id}$ iff $\mu = \gamma$. The discrepancy is then defined as

$$S^2(\mu) = \mathbb{E}_\mu\left[ \left\| \tau - \mathrm{Id} \right\|_{HS}^2 \right].$$
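In dimension one there is an explicit formula worth keeping in mind (a standard fact, not stated on the slide): if $\mu$ has density $\rho$ and mean zero, then

$$\tau(x) = \frac{1}{\rho(x)} \int_x^\infty y\, \rho(y)\, dy$$

defines a Stein kernel, as integration by parts shows. For instance, for the isotropic uniform measure on $[-\sqrt{3}, \sqrt{3}]$ this gives $\tau(x) = \frac{3 - x^2}{2}$, and indeed $\mathbb{E}[\tau(X)] = \frac{3 - \mathbb{E}[X^2]}{2} = 1$.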

Stein Kernels - Properties

Stein kernels are well behaved under linear transformations. If $\tau_X$ is a Stein kernel for $X$ and $A$ is a linear transformation, then

$$\tau_{AX}(x) := A\, \mathbb{E}\left[ \tau_X(X) \mid AX = x \right] A^T$$

is a Stein kernel for $AX$. In particular, if $S_d := \frac{1}{\sqrt{d}} \sum_{i=1}^d X_i$, then

$$\tau_{S_d}(x) = \frac{1}{d} \sum_{i=1}^d \mathbb{E}\left[ \tau_X(X_i) \mid S_d = x \right]$$

is a Stein kernel for $S_d$.
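To see the last display (a one-line expansion, using that for independent coordinates the Stein kernel of $(X_1, \ldots, X_d)$ may be taken block-diagonal, $\mathrm{diag}(\tau_X(X_1), \ldots, \tau_X(X_d))$): apply the previous formula with $A = \frac{1}{\sqrt{d}}(\mathrm{Id}, \ldots, \mathrm{Id})$, so that $A\, \mathrm{diag}(\tau_X(X_1), \ldots, \tau_X(X_d))\, A^T = \frac{1}{d} \sum_{i=1}^d \tau_X(X_i)$.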

Stein’s Discrepancy Along the CLT

If $X$ is isotropic and $f(x) := x_i e_j$, we get

$$\delta_{i,j} = \mathbb{E}\left[ \langle X, f(X) \rangle \right] = \mathbb{E}\left[ \langle \tau_X(X), Df(X) \rangle \right] = \mathbb{E}\left[ \tau_X(X)_{i,j} \right].$$

So $\mathbb{E}\left[ \tau_X(X) \right] = \mathrm{Id}$. Thus,

$$S^2(S_d) = \mathbb{E}\left[ \left\| \tau_{S_d}(S_d) - \mathrm{Id} \right\|_{HS}^2 \right] = \mathbb{E}\left[ \left\| \frac{1}{d} \sum_{i=1}^d \mathbb{E}\left[ \tau_X(X_i) - \mathrm{Id} \mid S_d \right] \right\|_{HS}^2 \right]$$
$$\le \frac{1}{d^2}\, \mathbb{E}\left[ \left\| \sum_{i=1}^d \left( \tau_X(X_i) - \mathrm{Id} \right) \right\|_{HS}^2 \right] = \frac{1}{d}\, \mathbb{E}\left[ \left\| \tau_X(X) - \mathrm{Id} \right\|_{HS}^2 \right] = \frac{S^2(X)}{d}.$$

Stein’s Discrepancy Compared to Other Distances

It's a nice exercise to show $W_1(\mu, \gamma) \le S(\mu)$. What is more impressive is that $W_2(\mu, \gamma) \le S(\mu)$ as well, as shown in (Ledoux, Nourdin, Peccati '14). In fact,

$$\mathrm{Ent}(\mu \,\|\, \gamma) \le \frac{1}{2} S^2(\mu) \ln\left( 1 + \frac{I(\mu \,\|\, \gamma)}{S^2(\mu)} \right).$$

Proof of Main Theorem

The main theorem is implied by

Lemma (Rank 1 Lemma). Let $X \sim \mu$ be an isotropic random vector in $\mathbb{R}^n$. Then, for any transport map $\varphi$ such that $\varphi_* \gamma = \mu$, there exists a Stein kernel $\tau$ such that

$$\mathbb{E}\left[ \left\| \tau\left( X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right] \right) \right\|_{HS}^2 \right] \le p^4 n \sqrt{ \mathbb{E}\left[ \| X \|^{8(p-1)} \right] \mathbb{E}\left[ \| D\varphi(G) \|_{op}^{8} \right] }.$$

Proof of Main Theorem

Proof of Main Theorem. Let $A$ be the linear projection such that $A_* W^p_{n,d}(\mu) = \widetilde{W}^p_{n,d}(\mu)$. Take $\varphi$ with $\| D\varphi \| < L$ almost surely. Then

$$S^2\left( \widetilde{W}^p_{n,d}(\mu) \right) \le \frac{S^2\left( A\left( X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right] \right) \right)}{d} \le \frac{C}{d} \left( \mathbb{E}\left[ \left\| \tau\left( X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right] \right) \right\|_{HS}^2 \right] + \mathbb{E}\left[ \| \mathrm{Id} \|_{HS}^2 \right] \right)$$
$$\le \frac{C}{d} \left( n \sqrt{ \mathbb{E}\left[ \| X \|^{8(p-1)} \right] \mathbb{E}\left[ \| D\varphi(G) \|_{op}^{8} \right] } + n^p \right) \le C\, \frac{n^{2p-1}}{d}.$$

Plan

The plan for the rest of the talk is to prove the rank 1 lemma. We need the following ingredients:

• Given a transport map $\psi$ such that $\psi_* \gamma = \nu$, construct a Stein kernel for $\nu$ with small norm.
• Show that if $\varphi$ is such that $\varphi_* \gamma = \mu$ has tame tails, then the same is true for the map $x \mapsto \varphi(x)^{\otimes p}$.
• Use the fact that $x \mapsto \varphi(x)^{\otimes p}$ is a map from a low-dimensional space.

Analysis in Finite Dimensional Gauss Space

Let us first show how to construct a Stein kernel from a given transport map:

• We work in the space $L^2(\gamma)$. Introduce $D$ as the total (weak) derivative operator and $\delta$ as its adjoint.
• The Ornstein-Uhlenbeck operator is defined as $L := -\delta \circ D$.
• Fact: there exists an operator $L^{-1}$ such that for any $f$ with $\mathbb{E}_\gamma[f] = 0$, $LL^{-1}f = f$.
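As a one-dimensional sanity check of these operators (standard Hermite calculus, not on the slide): the Hermite polynomials $H_k$ satisfy $L H_k = -k H_k$, so on centered functions $L^{-1}$ acts by $L^{-1} H_k = -\frac{1}{k} H_k$ for $k \ge 1$; since $D H_k = k H_{k-1}$, this gives $(-DL^{-1}) H_k = H_{k-1}$.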

Constructing a Kernel

Lemma. Let $\gamma_m$ be the standard Gaussian measure on $\mathbb{R}^m$ and let $\varphi : \mathbb{R}^m \to \mathbb{R}^N$. Set $\nu = \varphi_* \gamma_m$ and suppose that $\int_{\mathbb{R}^N} x \, d\nu = 0$. Then

$$\tau_\varphi(x) := \mathbb{E}_{G \sim \gamma_m}\left[ (-DL^{-1})\varphi(G)\, (D\varphi(G))^T \mid \varphi(G) = x \right]$$

is a Stein kernel of $\nu$.

Proof of Construction

Proof. Let $Y \sim \nu$. Then

$$\mathbb{E}\left[ \langle Df(Y), \tau_\varphi(Y) \rangle_{HS} \right] = \mathbb{E}\left[ \left\langle Df(Y), \mathbb{E}\left[ (-DL^{-1})\varphi(G)(D\varphi(G))^T \mid \varphi(G) = Y \right] \right\rangle_{HS} \right]$$
$$= \mathbb{E}\left[ \langle Df(\varphi(G))\, D\varphi(G), (-DL^{-1})\varphi(G) \rangle_{HS} \right]$$
$$= \mathbb{E}\left[ \langle D(f \circ \varphi)(G), (-DL^{-1})\varphi(G) \rangle_{HS} \right] \qquad \text{(chain rule)}$$
$$= \mathbb{E}\left[ \langle f \circ \varphi(G), (-\delta D L^{-1})\varphi(G) \rangle \right] \qquad \text{(adjoint operator)}$$
$$= \mathbb{E}\left[ \langle f \circ \varphi(G), LL^{-1}\varphi(G) \rangle \right] \qquad (L = -\delta D)$$
$$= \mathbb{E}\left[ \langle f \circ \varphi(G), \varphi(G) \rangle \right] \qquad (\mathbb{E}[\varphi(G)] = 0)$$
$$= \mathbb{E}\left[ \langle f(Y), Y \rangle \right]. \qquad (\varphi_* \gamma_m = \nu)$$

Contraction

We have, for any matrix norm, the following contraction property:

$$\left\| \tau_\varphi(x) \right\|^2 \le \mathbb{E}_{G \sim \gamma_m}\left[ \left\| (-DL^{-1})\varphi(G)(D\varphi(G))^T \right\|^2 \mid \varphi(G) = x \right] \le \mathbb{E}_{G \sim \gamma_m}\left[ \| D\varphi(G) \|^4 \mid \varphi(G) = x \right].$$

Thus, for example, if $\varphi$ is 1-Lipschitz, $\| \tau_\varphi \|_{op} \le 1$.

Contraction

The contraction property can be obtained from the commutation relation

$$-DL^{-1}\varphi = \int_0^\infty e^{-t} P_t D\varphi \, dt,$$

where $P_t$ is the Ornstein-Uhlenbeck semigroup. For then

$$\tau_\varphi(x) = \int_0^\infty e^{-t}\, \mathbb{E}_{G \sim \gamma_m}\left[ D\varphi(G)\, P_t\left( D\varphi(G) \right) \mid \varphi(G) = x \right] dt.$$

Back to Rank 1 Tensors

Suppose we have a transport map such that $\varphi_* \gamma = \mu$ and $X \sim \mu$. We now consider the map $u \mapsto \varphi(u)^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right]$. Define

$$\tau\left( v^{\otimes p} \right) := \mathbb{E}\left[ (-DL^{-1})\varphi(G)^{\otimes p} \left( D\varphi(G)^{\otimes p} \right)^T \mid \varphi(G)^{\otimes p} = v^{\otimes p} \right] = \mathbb{E}\left[ (-DL^{-1})\varphi(G)^{\otimes p} \left( D\varphi(G)^{\otimes p} \right)^T \mid \varphi(G) = (\pm 1)^p v \right],$$

which is a Stein kernel for $X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right]$.

Back to Rank 1 Tensors

Recall, we wish to bound $\mathbb{E}\left[ \left\| \tau\left( X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right] \right) \right\|_{HS}^2 \right]$. For any two matrices $A, B$, we have $\| AB \|_{HS} \le \sqrt{\mathrm{rank}(A)}\, \| A \|_{op} \| B \|_{op}$. So, since $\mathrm{rank}\left( D\varphi(v)^{\otimes p} \right) \le n$, contraction gives

$$\mathbb{E}\left[ \left\| \tau\left( X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right] \right) \right\|_{HS}^2 \right] \le n\, \mathbb{E}\left[ \left\| D\varphi(G)^{\otimes p} \right\|_{op}^4 \right].$$

A Little Algebra

Write, for the Kronecker product,

$$D\varphi(v)^{\otimes p} = \sum_{i=1}^p \varphi(v)^{\otimes (i-1)} \otimes D\varphi(v) \otimes \varphi(v)^{\otimes (p-i)}.$$

This gives

$$\mathbb{E}\left[ \left\| \tau\left( X^{\otimes p} - \mathbb{E}\left[ X^{\otimes p} \right] \right) \right\|_{HS}^2 \right] \le n p^4\, \mathbb{E}\left[ \| D\varphi(G) \|_{op}^4 \, \| \varphi(G) \|^{4(p-1)} \right] \le n p^4 \sqrt{ \mathbb{E}\left[ \| D\varphi(G) \|_{op}^8 \right] \mathbb{E}\left[ \| X \|^{8(p-1)} \right] }.$$
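A finite-difference sanity check of this product rule for $p = 2$ (my own illustration, using an arbitrary smooth map rather than the talk's transport map):

```python
import numpy as np

n, eps = 3, 1e-6
phi = lambda v: np.array([np.sin(v[0]), v[1] ** 2, v[0] * v[2]])
Dphi = lambda v: np.array([[np.cos(v[0]), 0.0, 0.0],
                           [0.0, 2 * v[1], 0.0],
                           [v[2], 0.0, v[0]]])
v = np.array([0.3, -1.2, 0.7])
tensor2 = lambda u: np.kron(phi(u), phi(u))               # phi(u)^{(x)2} in R^{n^2}
fd = np.stack([(tensor2(v + eps * e) - tensor2(v - eps * e)) / (2 * eps)
               for e in np.eye(n)], axis=1)               # numerical Jacobian, n^2 x n
exact = np.kron(Dphi(v), phi(v)[:, None]) + np.kron(phi(v)[:, None], Dphi(v))
print(np.max(np.abs(fd - exact)))                         # ~ 1e-9
```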

Future Directions

• What about the full tensor $W^p_{n,d}(\mu)$? (Related to anti-concentration of polynomials.)
• What about general log-concave measures? (Related to the KLS and thin-shell conjectures.)
• What about other dependence structures?
• What about lower bounds when $p > 2$?


Thank you!