SLIDE 1

Non-asymptotic study of the singular values of some random covariance matrices.

Olivier Guédon

Université Paris-Est Marne-la-Vallée

High-dimensional problems and quantum physics. June 2015

SLIDE 2

The setting.

Let $A$ be a random matrix defined as $A = (X_1 \ldots X_N)$, where $X_1, \ldots, X_N$ are independent random vectors in $\mathbb{R}^n$. What can we say about the singular values of $A$?

Study of
$$M = AA^T = (X_1 \ldots X_N)\begin{pmatrix} X_1^T \\ \vdots \\ X_N^T \end{pmatrix} = \sum_{i=1}^N X_i X_i^T.$$

SLIDE 3

The setting.

Let $A$ be a random matrix defined as $A = (X_1 \ldots X_N)$, where $X_1, \ldots, X_N$ are independent random vectors in $\mathbb{R}^n$. What can we say about the singular values of $A$?

Study of
$$M = AA^T = (X_1 \ldots X_N)\begin{pmatrix} X_1^T \\ \vdots \\ X_N^T \end{pmatrix} = \sum_{i=1}^N X_i X_i^T.$$

$$\lambda_{\max}(AA^T) = \sup_{a \in S^{n-1}} \sum_{i=1}^N \langle X_i, a\rangle^2, \qquad \lambda_{\min}(AA^T) = \inf_{a \in S^{n-1}} \sum_{i=1}^N \langle X_i, a\rangle^2.$$
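Since $a^T M a = \sum_{i=1}^N \langle X_i, a\rangle^2$, the sup and inf above are exactly the extreme eigenvalues of $M$, attained at eigenvectors. A minimal NumPy sketch of this identity (the Gaussian entries and the sizes $n = 5$, $N = 12$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 12
A = rng.standard_normal((n, N))   # columns are the vectors X_1, ..., X_N
M = A @ A.T                       # M = A A^T = sum_i X_i X_i^T

eigvals, eigvecs = np.linalg.eigh(M)
lam_max, lam_min = eigvals[-1], eigvals[0]

# The quadratic form a -> sum_i <X_i, a>^2 equals a^T M a; its sup/inf over
# the unit sphere are the extreme eigenvalues, attained at eigenvectors.
a_top = eigvecs[:, -1]
form_top = sum((A[:, i] @ a_top) ** 2 for i in range(N))
assert abs(form_top - lam_max) < 1e-9

# A crude random search over the sphere never exceeds lam_max (up to fp error).
for _ in range(1000):
    a = rng.standard_normal(n)
    a /= np.linalg.norm(a)
    assert a @ M @ a <= lam_max + 1e-9
```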

SLIDE 4

Random Matrix Theory.

All the $X_i$'s have independent, identically distributed random entries: $A = (a_{ij})_{1 \le i \le n,\, 1 \le j \le N}$, $M_n = AA^T$. Let $n$ and $N$ go to infinity with $N/n \to c > 1$.

SLIDE 5

Random Matrix Theory.

All the $X_i$'s have independent, identically distributed random entries: $A = (a_{ij})_{1 \le i \le n,\, 1 \le j \le N}$, $M_n = AA^T$. Let $n$ and $N$ go to infinity with $N/n \to c > 1$.

Counting probability measure:
$$\nu_{n,N} = \frac{1}{n}\sum_{k=1}^{n} \delta_{\lambda_k(M_n/N)}.$$
SLIDE 6

Random Matrix Theory.

Marchenko–Pastur '67 (bulk of the spectrum). If $\mathbb{E}\, a_{ij}^2 = 1$, then with probability one, for any continuous bounded function $f : [0, +\infty) \to \mathbb{R}$,
$$\lim_{n \to +\infty} \int f \, d\nu_{n,N} = \int f \, d\nu,$$
where $\nu$ is the Marchenko–Pastur law.

SLIDE 7

Random Matrix Theory.

Marchenko–Pastur '67 (bulk of the spectrum). If $\mathbb{E}\, a_{ij}^2 = 1$, then with probability one, for any continuous bounded function $f : [0, +\infty) \to \mathbb{R}$,
$$\lim_{n \to +\infty} \int f \, d\nu_{n,N} = \int f \, d\nu,$$
where $\nu$ is the Marchenko–Pastur law.

Bai–Yin '93 (edge of the spectrum). If $\mathbb{E}\, a_{ij}^2 = 1$, $\mathbb{E}\, a_{ij} = 0$ and $\mathbb{E}\, a_{ij}^4 < +\infty$, then with probability one,
$$\lim_{n \to \infty} \lambda_{\max}\!\left(\tfrac{1}{N} AA^T\right) = \left(1 + \sqrt{\tfrac{n}{N}}\,\right)^{2}, \qquad \lim_{n \to \infty} \lambda_{\min}\!\left(\tfrac{1}{N} AA^T\right) = \left(1 - \sqrt{\tfrac{n}{N}}\,\right)^{2}.$$
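The edge limits are easy to observe numerically. A small simulation sketch, assuming i.i.d. standard Gaussian entries and an arbitrary aspect ratio $n/N = 1/4$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 200, 800                   # aspect ratio n/N = 1/4
A = rng.standard_normal((n, N))   # i.i.d. entries with mean 0, variance 1
evals = np.linalg.eigvalsh(A @ A.T / N)

edge_hi = (1 + np.sqrt(n / N)) ** 2   # Bai-Yin prediction for lambda_max
edge_lo = (1 - np.sqrt(n / N)) ** 2   # Bai-Yin prediction for lambda_min

# At these sizes the extreme eigenvalues already sit close to the edges.
assert abs(evals[-1] - edge_hi) < 0.2
assert abs(evals[0] - edge_lo) < 0.2
```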

SLIDE 8

Frame in Harmonic Analysis

Take an orthonormal basis of $\mathbb{R}^M$ and project it onto $\mathbb{R}^n$ (or onto an $n$-dimensional subspace of $\mathbb{R}^M$). You get $v_1, \ldots, v_M$ with
$$\forall x \in \mathbb{R}^n, \quad |x|_2^2 = \sum_{j=1}^M c_j \langle x, u_j\rangle^2, \quad \text{where } u_j = \frac{v_j}{|v_j|_2} \text{ and } c_j = |v_j|_2^2.$$

Define a random vector $X$ in $\mathbb{R}^n$ as $X = \sqrt{n}\, u_j$ with probability $c_j/n$.

Then for every vector $\theta \in \mathbb{R}^n$,
$$\mathbb{E}\langle X, \theta\rangle^2 = \sum_{j=1}^M c_j \langle u_j, \theta\rangle^2 = |\theta|_2^2.$$

SLIDE 9

Frame in Harmonic Analysis

Take an orthonormal basis of $\mathbb{R}^M$ and project it onto $\mathbb{R}^n$ (or onto an $n$-dimensional subspace of $\mathbb{R}^M$). You get $v_1, \ldots, v_M$ with
$$\forall x \in \mathbb{R}^n, \quad |x|_2^2 = \sum_{j=1}^M c_j \langle x, u_j\rangle^2, \quad \text{where } u_j = \frac{v_j}{|v_j|_2} \text{ and } c_j = |v_j|_2^2.$$

Define a random vector $X$ in $\mathbb{R}^n$ as $X = \sqrt{n}\, u_j$ with probability $c_j/n$.

Hence $\Sigma = \mathbb{E}\, X \otimes X = \mathrm{Id}$.

SLIDE 10

Frame in Harmonic Analysis

Take an orthonormal basis of $\mathbb{R}^M$ and project it onto $\mathbb{R}^n$ (or onto an $n$-dimensional subspace of $\mathbb{R}^M$). You get $v_1, \ldots, v_M$ with
$$\forall x \in \mathbb{R}^n, \quad |x|_2^2 = \sum_{j=1}^M c_j \langle x, u_j\rangle^2, \quad \text{where } u_j = \frac{v_j}{|v_j|_2} \text{ and } c_j = |v_j|_2^2.$$

Define a random vector $X$ in $\mathbb{R}^n$ as $X = \sqrt{n}\, u_j$ with probability $c_j/n$.

Hence $\Sigma = \mathbb{E}\, X \otimes X = \mathrm{Id}$.

Question: find the size $N(\varepsilon)$ of a sample such that
$$\forall \theta \in \mathbb{R}^n, \quad (1-\varepsilon)\,|\theta|_2^2 \le \frac{1}{N}\sum_{j=1}^N \langle X_j, \theta\rangle^2 \le (1+\varepsilon)\,|\theta|_2^2.$$

SLIDE 11

Frame in Harmonic Analysis

Take an orthonormal basis of $\mathbb{R}^M$ and project it onto $\mathbb{R}^n$ (or onto an $n$-dimensional subspace of $\mathbb{R}^M$). You get $v_1, \ldots, v_M$ with
$$\forall x \in \mathbb{R}^n, \quad |x|_2^2 = \sum_{j=1}^M c_j \langle x, u_j\rangle^2, \quad \text{where } u_j = \frac{v_j}{|v_j|_2} \text{ and } c_j = |v_j|_2^2.$$

Define a random vector $X$ in $\mathbb{R}^n$ as $X = \sqrt{n}\, u_j$ with probability $c_j/n$.

Hence $\Sigma = \mathbb{E}\, X \otimes X = \mathrm{Id}$.

Question: find the size $N(\varepsilon)$ of a sample such that
$$\forall \theta \in \mathbb{R}^n, \quad (1-\varepsilon)\,|\theta|_2^2 \le \frac{1}{N}\sum_{j=1}^N \langle X_j, \theta\rangle^2 \le (1+\varepsilon)\,|\theta|_2^2.$$

This gives a subset with a very particular structure.
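This construction can be sanity-checked numerically. The sketch below builds an illustrative random orthonormal basis of $\mathbb{R}^M$ via QR, projects it onto the first $n$ coordinates, verifies the exact frame identity $\sum_j c_j u_j u_j^T = \mathrm{Id}$, and checks that the empirical covariance of the sampled $X$ approaches $\mathrm{Id}$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, n = 40, 8

# Random orthonormal basis of R^M, projected onto the first n coordinates.
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))
V = Q[:n, :]                  # v_j = projection of the j-th basis vector
c = np.sum(V ** 2, axis=0)    # c_j = |v_j|_2^2, and sum(c) = n
U = V / np.sqrt(c)            # u_j = v_j / |v_j|_2
assert abs(c.sum() - n) < 1e-9

# Tight-frame identity: sum_j c_j u_j u_j^T = sum_j v_j v_j^T = Id.
frame_op = (U * c) @ U.T
assert np.allclose(frame_op, np.eye(n))

# Sample X = sqrt(n) u_j with probability c_j / n; then E X X^T = Id,
# and the empirical covariance converges to Id as N grows.
N = 200000
idx = rng.choice(M, size=N, p=c / n)
X = np.sqrt(n) * U[:, idx]    # shape (n, N)
emp_cov = X @ X.T / N
assert np.linalg.norm(emp_cov - np.eye(n), 2) < 0.1
```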

SLIDE 12

Frame in Harmonic Analysis

Theorem (Rudelson '97). If $X$ is a random vector in $\mathbb{R}^n$ such that $|X|_2 \le K\sqrt{n}$ a.s. and
$$\forall \theta \in \mathbb{R}^n, \quad \mathbb{E}\langle X, \theta\rangle^2 = |\theta|_2^2 \qquad \text{(isotropy)},$$
then for $N \approx C_K(\varepsilon)\, n \log n$,
$$\mathbb{E}\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^T - \mathrm{Id} \right\| \le \varepsilon.$$
SLIDE 13

Frame in Harmonic Analysis

Theorem (Rudelson '97). If $X$ is a random vector in $\mathbb{R}^n$ such that $|X|_2 \le K\sqrt{n}$ a.s. and
$$\forall \theta \in \mathbb{R}^n, \quad \mathbb{E}\langle X, \theta\rangle^2 = |\theta|_2^2 \qquad \text{(isotropy)},$$
then for $N \approx C_K(\varepsilon)\, n \log n$,
$$\mathbb{E}\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^T - \mathrm{Id} \right\| \le \varepsilon.$$

The main assumption is $|X|_2 \le K\sqrt{n}$; beyond that, there is no moment assumption on the entries.

SLIDE 14

Frame in Harmonic Analysis

Theorem (Rudelson '97). If $X$ is a random vector in $\mathbb{R}^n$ such that $|X|_2 \le K\sqrt{n}$ a.s. and
$$\forall \theta \in \mathbb{R}^n, \quad \mathbb{E}\langle X, \theta\rangle^2 = |\theta|_2^2 \qquad \text{(isotropy)},$$
then for $N \approx C_K(\varepsilon)\, n \log n$,
$$\mathbb{E}\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^T - \mathrm{Id} \right\| \le \varepsilon.$$

The main assumption is $|X|_2 \le K\sqrt{n}$; beyond that, there is no moment assumption on the entries. You cannot do better! Coupon collector.
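The coupon-collector obstruction can be made concrete: take $X = \sqrt{n}\, e_j$ with $j$ uniform in $\{1, \ldots, n\}$, which is isotropic; the empirical covariance $\frac{1}{N}\sum X_j X_j^T$ is diagonal and stays singular until every coordinate has been drawn, which takes about $n \log n$ draws on average. A quick simulation (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# X = sqrt(n) * e_j with j uniform in {1,...,n} is isotropic: E X X^T = Id.
# The empirical covariance (1/N) sum X_j X_j^T is diagonal, with k-th entry
# (n/N) * #{draws hitting k}; it is singular until every coordinate is seen.
def draws_until_full_support(n, rng):
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(int(rng.integers(n)))
        draws += 1
    return draws

times = [draws_until_full_support(n, rng) for _ in range(50)]
avg = np.mean(times)

# Coupon collector: the expected number of draws is n * H_n ~ n log n.
expected = n * np.sum(1.0 / np.arange(1, n + 1))
assert 0.6 * expected < avg < 1.6 * expected
assert avg > 3 * n   # markedly more than n draws: N ~ n is not enough
```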

SLIDE 15

Computing the volume of a convex body

$K \subset \mathbb{R}^n$ is given by a separation oracle.

SLIDE 16

Computing the volume of a convex body

$K \subset \mathbb{R}^n$ is given by a separation oracle.

Elekes ('86), Bárány–Füredi ('86): it is not possible to compute, with a deterministic algorithm in polynomial time, the volume of a convex body (even approximately).

SLIDE 17

Computing the volume of a convex body

$K \subset \mathbb{R}^n$ is given by a separation oracle.

Elekes ('86), Bárány–Füredi ('86): it is not possible to compute, with a deterministic algorithm in polynomial time, the volume of a convex body (even approximately).

Randomization: given $\varepsilon$ and $\eta$, Dyer–Frieze–Kannan ('89) established randomized algorithms returning a non-negative number $\zeta$ such that $(1-\varepsilon)\zeta < \mathrm{Vol}\, K < (1+\varepsilon)\zeta$ with probability at least $1-\eta$. The running time of the algorithm is polynomial in $n$, $1/\varepsilon$ and $\log(1/\eta)$.

SLIDE 18

Computing the volume of a convex body

$K \subset \mathbb{R}^n$ is given by a separation oracle.

Elekes ('86), Bárány–Füredi ('86): it is not possible to compute, with a deterministic algorithm in polynomial time, the volume of a convex body (even approximately).

Randomization: given $\varepsilon$ and $\eta$, Dyer–Frieze–Kannan ('89) established randomized algorithms returning a non-negative number $\zeta$ such that $(1-\varepsilon)\zeta < \mathrm{Vol}\, K < (1+\varepsilon)\zeta$ with probability at least $1-\eta$. The running time of the algorithm is polynomial in $n$, $1/\varepsilon$ and $\log(1/\eta)$.

The number of oracle calls is a random variable, and the bound is, for example, on its expected value.

SLIDE 19

Computing the volume of a convex body

The randomized algorithm proposed by Kannan, Lovász and Simonovits ('97) significantly improves the polynomial dependence.

SLIDE 20

Computing the volume of a convex body

The randomized algorithm proposed by Kannan, Lovász and Simonovits ('97) significantly improves the polynomial dependence.

Rounding: put the convex body in a position where
$$B_2^n \subset K \subset d\, B_2^n, \quad \text{where } d \le n^{\mathrm{const}}.$$

SLIDE 21

Computing the volume of a convex body

The randomized algorithm proposed by Kannan, Lovász and Simonovits ('97) significantly improves the polynomial dependence.

Rounding: put the convex body in a position where
$$B_2^n \subset K \subset d\, B_2^n, \quad \text{where } d \le n^{\mathrm{const}}.$$

- John ('48): $d \le n$ (or $d \le \sqrt{n}$ in the symmetric case).

How to find an algorithm to do so?

SLIDE 22

Computing the volume of a convex body

The randomized algorithm proposed by Kannan, Lovász and Simonovits ('97) significantly improves the polynomial dependence.

Rounding: put the convex body in a position where
$$B_2^n \subset K \subset d\, B_2^n, \quad \text{where } d \le n^{\mathrm{const}}.$$

- Idea: find an algorithm which produces, in polynomial time, a matrix $A$ such that $AK$ is in an approximate isotropic position. Conjecture 2 of KLS ('97): solved in 2010 by Adamczak, Litvak, Pajor, Tomczak-Jaegermann.

SLIDE 23

Computing the volume of a convex body

The randomized algorithm proposed by Kannan, Lovász and Simonovits ('97) significantly improves the polynomial dependence.

Rounding: put the convex body in a position where
$$B_2^n \subset K \subset d\, B_2^n, \quad \text{where } d \le n^{\mathrm{const}}.$$

- Idea: find an algorithm which produces, in polynomial time, a matrix $A$ such that $AK$ is in an approximate isotropic position. Conjecture 2 of KLS ('97): solved in 2010 by Adamczak, Litvak, Pajor, Tomczak-Jaegermann.

Computing the volume: Monte Carlo algorithm, estimates of the local conductance. Conjecture 1 of KLS ('95): isoperimetric inequality, still open!
SLIDE 24

Approximation of the covariance matrix.

Question of KLS ('97): let $X$ be a vector uniformly distributed on a convex body $K$, and let $X_1, \ldots, X_N$ be independent copies of $X$. What is the smallest $N$ such that
$$\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^\top - \mathbb{E}\, X X^\top \right\| \le \varepsilon \left\| \mathbb{E}\, X X^\top \right\| ?$$
Here $\|\cdot\|$ is the operator norm.
SLIDE 25

Approximation of the covariance matrix.

Question of KLS ('97): let $X$ be a vector uniformly distributed on a convex body $K$, and let $X_1, \ldots, X_N$ be independent copies of $X$. What is the smallest $N$ such that
$$\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^\top - \mathrm{Id} \right\| \le \varepsilon ?$$

Assume $\mathbb{E}\, X X^\top = \mathrm{Id}$.

SLIDE 26

Approximation of the covariance matrix.

Question of KLS ('97): let $X$ be a vector uniformly distributed on a convex body $K$, and let $X_1, \ldots, X_N$ be independent copies of $X$. What is the smallest $N$ such that
$$\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^\top - \mathrm{Id} \right\| \le \varepsilon ?$$

Assume $\mathbb{E}\, X X^\top = \mathrm{Id}$; you want to control the smallest and the largest singular values:
$$1 - \varepsilon \le \lambda_{\min}\left(\frac{1}{N}\sum_{j=1}^N X_j X_j^\top\right) \le \lambda_{\max}\left(\frac{1}{N}\sum_{j=1}^N X_j X_j^\top\right) \le 1 + \varepsilon.$$

KLS: $n^2/\varepsilon^2$; Bourgain: $n \log^3 n/\varepsilon^2$; ... Rudelson, Guédon, Paouris, Aubrun, Giannopoulos. ALPT ('10): $n/\varepsilon^2$, for general log-concave vectors.
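To see the $N \approx n/\varepsilon^2$ rate on a concrete example, the sketch below samples uniformly from the cube $[-\sqrt{3}, \sqrt{3}]^n$, a simple isotropic log-concave vector; the cube and the constant in front of $n/\varepsilon^2$ are illustrative choices, not part of the theorems above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
eps = 0.25
N = int(n / eps**2) * 10   # N ~ C n / eps^2; C = 10 chosen for the demo

# Uniform on the cube [-sqrt(3), sqrt(3)]^n: an isotropic log-concave vector.
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(N, n))
emp = X.T @ X / N
evals = np.linalg.eigvalsh(emp)

# With N ~ n / eps^2 samples, all eigenvalues of the empirical covariance
# (squared singular values of the normalized sample matrix) lie in
# [1 - eps, 1 + eps].
assert evals[0] > 1 - eps
assert evals[-1] < 1 + eps
```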

SLIDE 27

Log-concave random vectors

$X$ is a log-concave random vector $\iff$ $X$ has a log-concave density with respect to the Lebesgue measure on $\mathbb{R}^n$.

SLIDE 28

Log-concave random vectors

$X$ is a log-concave random vector $\iff$ $X$ has a log-concave density with respect to the Lebesgue measure on $\mathbb{R}^n$.

Main property of linear functionals: there exists $C > 1$ such that for every log-concave $X$ and every $p \ge 2$,
$$\forall \theta \in \mathbb{R}^n, \quad \left(\mathbb{E}|\langle X, \theta\rangle|^p\right)^{1/p} \le C\, p \left(\mathbb{E}|\langle X, \theta\rangle|^2\right)^{1/2}.$$
SLIDE 29

Log-concave random vectors

$X$ is a log-concave random vector $\iff$ $X$ has a log-concave density with respect to the Lebesgue measure on $\mathbb{R}^n$.

Main property of linear functionals: there exists $C > 1$ such that for every log-concave $X$ and every $p \ge 2$,
$$\forall \theta \in \mathbb{R}^n, \quad \left(\mathbb{E}|\langle X, \theta\rangle|^p\right)^{1/p} \le C\, p \left(\mathbb{E}|\langle X, \theta\rangle|^2\right)^{1/2}.$$

Or, equivalently, in isotropic position:
$$\forall \theta \in \mathbb{R}^n, \ \forall t \ge 1, \quad \mathbb{P}\left( |\langle X, \theta\rangle| \ge t \right) \le \exp(-c\, t).$$

SLIDE 30

Log-concave random vectors

Theorem (Paouris '06). If $X$ is an isotropic log-concave random vector in $\mathbb{R}^n$, then
$$\forall t \ge 1, \quad \mathbb{P}\left( |X|_2 \ge c\, t \sqrt{n} \right) \le \exp(-C\, t \sqrt{n}).$$
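Paouris-type concentration of the Euclidean norm is visible already on a simple isotropic log-concave example (uniform on a cube, an illustrative choice): the normalized norms $|X|_2/\sqrt{n}$ cluster tightly around 1:

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 400, 2000

# Isotropic log-concave example: uniform on the cube [-sqrt(3), sqrt(3)]^n.
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(N, n))
norms = np.linalg.norm(X, axis=1) / np.sqrt(n)

# |X|_2 concentrates sharply around sqrt(n): the normalized norms sit near 1,
# and deviations of constant size are exponentially rare in sqrt(n).
assert abs(norms.mean() - 1.0) < 0.05
assert norms.max() < 1.5
```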
SLIDE 31

Log-concave random vectors

Theorem (Paouris '06). If $X$ is an isotropic log-concave random vector in $\mathbb{R}^n$, then
$$\forall t \ge 1, \quad \mathbb{P}\left( |X|_2 \ge c\, t \sqrt{n} \right) \le \exp(-C\, t \sqrt{n}).$$

Theorem (ALPT '10). If $X$ is isotropic log-concave, then with probability greater than $1 - 2\exp(-c\sqrt{n})$,
$$\sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le C \sqrt{\frac{n}{N}}.$$

SLIDE 32

Log-concave random vectors

Theorem (Paouris '06). If $X$ is an isotropic log-concave random vector in $\mathbb{R}^n$, then
$$\forall t \ge 1, \quad \mathbb{P}\left( |X|_2 \ge c\, t \sqrt{n} \right) \le \exp(-C\, t \sqrt{n}).$$

Theorem (ALPT '10). If $X$ is isotropic log-concave, then with probability greater than $1 - 2\exp(-c\sqrt{n})$,
$$\sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le C \sqrt{\frac{n}{N}}.$$

Hence
$$\left\| \frac{1}{N}\sum_{j=1}^N X_j X_j^T - \mathrm{Id} \right\| \le \varepsilon : \quad \text{take } N \approx \frac{n}{\varepsilon^2}.$$

SLIDE 33

Relaxation of the assumptions.

Several recent works in this direction: Srivastava–Vershynin, Vershynin, Mendelson–Paouris.

SLIDE 34

Relaxation of the assumptions.

Several recent works in this direction: Srivastava–Vershynin, Vershynin, Mendelson–Paouris.

Theorem (Vershynin '11). If $X$ is an isotropic random vector in $\mathbb{R}^n$ such that $|X| \le K\sqrt{n}$ a.s. and $\forall \theta \in \mathbb{R}^n$, $\mathbb{E}|\langle X, \theta\rangle|^q \le L^q$ for some $q > 4$, then with probability greater than $1 - \delta$,
$$\sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \langle X_j, a\rangle^2 - 1 \right| \le C_{q,L,\delta}\, (\log\log n)^2 \left(\frac{n}{N}\right)^{\frac{1}{2} - \frac{1}{q}}.$$

SLIDE 35

Relaxation of the assumptions.

Several recent works in this direction: Srivastava–Vershynin, Vershynin, Mendelson–Paouris.

Theorem (Mendelson–Paouris '12). If $X$ is an isotropic random vector in $\mathbb{R}^n$ such that $|X|_2 \le K (Nn)^{1/4}$ a.s. and $\forall \theta \in \mathbb{R}^n$, $\mathbb{E}|\langle X, \theta\rangle|^q \le L^q$ for some $q > 8$, then with probability greater than $1 - \left(\frac{1}{N^\beta} + \exp(-cn)\right)$,
$$\sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \langle X_j, a\rangle^2 - 1 \right| \le C_{q,L,K} \sqrt{\frac{n}{N}}.$$

SLIDE 36

Result

Theorem (GLPT '14). Let $X_1, \ldots, X_N$ be random vectors in $\mathbb{R}^n$ such that
$$\forall j, \ \forall \theta \in \mathbb{R}^n, \ \forall t > 0, \quad \mathbb{P}\left( |\langle X_j, \theta\rangle| > t \right) \le \frac{1}{t^p} \quad \text{for some } p \in (4, 8].$$
Let $\varepsilon \le \min\left(1, \frac{p-4}{4}\right)$ and $\gamma = p - 4 - 2\varepsilon$. Then with probability greater than
$$1 - 8\exp(-n) - 2\varepsilon^{-p/2} \max\left(N^{-3/2}, n^{-\frac{p-4}{4}}\right),$$
$$\sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le C\, \frac{1}{N} \max_j |X_j|_2^2 + C_{p,\varepsilon} \left(\frac{n}{N}\right)^{\gamma/p}.$$

SLIDE 37

Restricted Isometry Property

Let $1 \le m \le N$. A vector $z \in \mathbb{R}^N$ is said to be $m$-sparse if $|\{j : z_j \neq 0\}| \le m$, and $U_m = S^{N-1} \cap \{m\text{-sparse vectors}\}$. Let $T$ be an $n \times N$ matrix; $\delta_m(T)$ is the smallest number such that for every $m$-sparse vector $z$,
$$(1 - \delta_m(T))\, |z|_2^2 \le |Tz|_2^2 \le (1 + \delta_m(T))\, |z|_2^2.$$

SLIDE 38

Restricted Isometry Property

Let $1 \le m \le N$. A vector $z \in \mathbb{R}^N$ is said to be $m$-sparse if $|\{j : z_j \neq 0\}| \le m$, and $U_m = S^{N-1} \cap \{m\text{-sparse vectors}\}$. Let $T$ be an $n \times N$ matrix; $\delta_m(T)$ is the smallest number such that for every $m$-sparse vector $z$,
$$(1 - \delta_m(T))\, |z|_2^2 \le |Tz|_2^2 \le (1 + \delta_m(T))\, |z|_2^2.$$

Equivalently,
$$\delta_m(T) = \sup_{z \in U_m} \left| |Tz|_2^2 - 1 \right|.$$
SLIDE 39

Restricted Isometry Property

Let $1 \le m \le N$. A vector $z \in \mathbb{R}^N$ is said to be $m$-sparse if $|\{j : z_j \neq 0\}| \le m$, and $U_m = S^{N-1} \cap \{m\text{-sparse vectors}\}$. Let $T$ be an $n \times N$ matrix; $\delta_m(T)$ is the smallest number such that for every $m$-sparse vector $z$,
$$(1 - \delta_m(T))\, |z|_2^2 \le |Tz|_2^2 \le (1 + \delta_m(T))\, |z|_2^2.$$

Equivalently,
$$\delta_m(T) = \sup_{z \in U_m} \left| |Tz|_2^2 - 1 \right|.$$

Take $T = \frac{1}{\sqrt{n}} A$. Then
$$\delta_m\!\left(\frac{A}{\sqrt{n}}\right) = \sup_{z \in U_m} \left| \frac{1}{n} \Big| \sum_{j=1}^N z_j X_j \Big|_2^2 - 1 \right|.$$
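For small sizes, $\delta_m(T)$ can be computed by brute force from this formulation: on each support $S$ with $|S| = m$, the supremum of $\big||Tz|_2^2 - 1\big|$ over unit vectors $z$ supported on $S$ is the largest deviation from 1 of an eigenvalue of the Gram matrix $T_S^T T_S$. A sketch (the Gaussian matrix and the sizes are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n, N = 30, 12
A = rng.standard_normal((n, N))
T = A / np.sqrt(n)

def delta_m(T, m):
    """RIP constant by enumeration: sup over m-sparse unit z of | |Tz|_2^2 - 1 |."""
    worst = 0.0
    for S in combinations(range(T.shape[1]), m):
        idx = list(S)
        G = T[:, idx].T @ T[:, idx]     # Gram matrix on the support S
        ev = np.linalg.eigvalsh(G)
        worst = max(worst, abs(ev[0] - 1), abs(ev[-1] - 1))
    return worst

d1, d2 = delta_m(T, 1), delta_m(T, 2)
assert d1 <= d2   # delta_m is increasing in m
# delta_1(A/sqrt(n)) = max_j | |X_j|_2^2 / n - 1 |, as noted later in the deck.
assert abs(d1 - np.max(np.abs(np.sum(A**2, axis=0) / n - 1))) < 1e-12
```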

SLIDE 40

Restricted Isometry Property

Take $T = \frac{1}{\sqrt{n}} A$. Observe that $\delta_m$ is increasing in $m$, and
$$\delta_1\!\left(\frac{A}{\sqrt{n}}\right) \le \delta_m\!\left(\frac{A}{\sqrt{n}}\right) \le \delta_1\!\left(\frac{A}{\sqrt{n}}\right) + \sup_{z \in U_m} \frac{1}{n} \left| \Big|\sum_{j=1}^N z_j X_j\Big|_2^2 - \sum_{j=1}^N z_j^2\, |X_j|_2^2 \right|.$$
slide-41
SLIDE 41

Restricted Isometry Property

Take T =

1 √nA. Observe that δm is increasing in m and

δ1 A √n

  • ≤ δm

A √n

  • ≤ δ1

A √n

  • + sup

z∈Um

 1 n

  • N
  • j=1

zjXj

  • 2

2

N

  • j=1

z2

j |Xj|2 2

  But δ1 A √n

  • = max

1≤j≤N

  • |Xj|2

2

n − 1

  • .
SLIDE 42

Restricted Isometry Property

Take $T = \frac{1}{\sqrt{n}} A$. Observe that $\delta_m$ is increasing in $m$, and
$$\delta_1\!\left(\frac{A}{\sqrt{n}}\right) \le \delta_m\!\left(\frac{A}{\sqrt{n}}\right) \le \delta_1\!\left(\frac{A}{\sqrt{n}}\right) + \sup_{z \in U_m} \frac{1}{n} \left| \Big|\sum_{j=1}^N z_j X_j\Big|_2^2 - \sum_{j=1}^N z_j^2\, |X_j|_2^2 \right|.$$

But
$$\delta_1\!\left(\frac{A}{\sqrt{n}}\right) = \max_{1 \le j \le N} \left| \frac{|X_j|_2^2}{n} - 1 \right|.$$

Define
$$P(\delta) = \mathbb{P}\left( \max_{1 \le j \le N} \left| \frac{|X_j|_2^2}{n} - 1 \right| \ge \delta \right).$$
SLIDE 43

Result

Theorem (GLPT '14). Let $X_1, \ldots, X_N$ be random vectors in $\mathbb{R}^n$ such that
$$\forall j, \ \forall \theta \in \mathbb{R}^n, \ \forall t > 0, \quad \mathbb{P}\left( |\langle X_j, \theta\rangle| > t \right) \le \frac{1}{t^p} \quad \text{for some } p > 4.$$
Let $\varepsilon \le \min\left(1, \frac{p-4}{4}\right)$ and $\gamma = p - 4 - 2\varepsilon$. Let $\delta \in (0, 1)$ and
$$m = C_{p,\varepsilon,\delta}\; n \left(\frac{n}{N}\right)^{\frac{2(2+\varepsilon)}{p-4-2\varepsilon}}.$$
Then with probability greater than
$$1 - C_{\varepsilon,p}\left( \frac{1}{N} + \frac{N}{n^{p/4}} \right) - P\!\left(\frac{\delta}{2}\right),$$
we have
$$\delta_m\!\left(\frac{A}{\sqrt{n}}\right) \le \delta.$$
SLIDE 44

The problem.

On average:
$$\mathbb{E} \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right|$$
SLIDE 45

The problem.

On average:
$$\mathbb{E} \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}'\langle X'_j, a\rangle^2 \right) \right|$$
where $X'_1, \ldots, X'_N$ is an independent copy of the sample and $\mathbb{E}'$ denotes expectation with respect to it.

SLIDE 46

The problem.

On average:
$$\mathbb{E}\mathbb{E}' \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \langle X'_j, a\rangle^2 \right) \right|$$

SLIDE 47

The problem.

On average:
$$\mathbb{E}\mathbb{E}' \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \langle X'_j, a\rangle^2 \right) \right|$$

By symmetry,
$$= \mathbb{E}\mathbb{E}'\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \varepsilon_j \left( \langle X_j, a\rangle^2 - \langle X'_j, a\rangle^2 \right) \right|$$

SLIDE 48

The problem.

On average:
$$\mathbb{E}\mathbb{E}' \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \langle X'_j, a\rangle^2 \right) \right|$$

By symmetry,
$$= \mathbb{E}\mathbb{E}'\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \varepsilon_j \left( \langle X_j, a\rangle^2 - \langle X'_j, a\rangle^2 \right) \right| \le 2\, \mathbb{E}\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2 \right|$$
SLIDE 49

The very very first step.

Symmetrization:
$$\mathbb{E} \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le 2\, \mathbb{E}\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2 \right|$$

How does $\sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2$ concentrate?
SLIDE 50

The very very first step.

Symmetrization:
$$\mathbb{E} \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le 2\, \mathbb{E}\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \left| \frac{1}{N}\sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2 \right|$$

How does $\sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2$ concentrate?

Subgaussian or non-commutative Khintchine inequalities.

SLIDE 51

The very very first step.

Observation: for any $p \ge 2$,
$$\left( \mathbb{E}\,\Big| \sum_{j=1}^N \varepsilon_j t_j \Big|^p \right)^{1/p} \le C\sqrt{p} \left( \sum_{j=1}^N t_j^2 \right)^{1/2} \quad \text{(subgaussian)}$$
and
$$\left( \mathbb{E}\,\Big| \sum_{j=1}^N \varepsilon_j t_j \Big|^p \right)^{1/p} \le \sum_{j=1}^N |t_j| \quad \text{(trivial)}.$$

SLIDE 52

The very very first step.

Observation: for any $p \ge 2$,
$$\left( \mathbb{E}\,\Big| \sum_{j=1}^N \varepsilon_j t_j \Big|^p \right)^{1/p} \le C\sqrt{p} \left( \sum_{j=1}^N t_j^2 \right)^{1/2} \quad \text{(subgaussian)}$$
and
$$\left( \mathbb{E}\,\Big| \sum_{j=1}^N \varepsilon_j t_j \Big|^p \right)^{1/p} \le \sum_{j=1}^N |t_j| \quad \text{(trivial)}.$$

Combining the two, with $(t_j^*)$ the non-increasing rearrangement of $(|t_j|)$:
$$\left( \mathbb{E}\,\Big| \sum_{j=1}^N \varepsilon_j t_j \Big|^p \right)^{1/p} \le \sum_{j=1}^k t_j^* + C\sqrt{p} \left( \sum_{j=k}^N (t_j^*)^2 \right)^{1/2}.$$
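The three bounds can be compared on simulated data. The sketch below uses arbitrary nonnegative weights $t_j$ and the explicit constant $C = 1$, which is admissible here since Khintchine's inequality gives $(\mathbb{E}|\sum_j \varepsilon_j t_j|^p)^{1/p} \le \sqrt{p}\,(\sum_j t_j^2)^{1/2}$, and the combined bound then follows from the triangle inequality:

```python
import numpy as np

rng = np.random.default_rng(7)
N, p, k = 50, 6, 5
t = rng.standard_normal(N) ** 2          # arbitrary nonnegative weights

# Monte Carlo estimate of the p-th moment of the Rademacher sum.
eps = rng.choice([-1.0, 1.0], size=(20000, N))
lhs = (np.mean(np.abs(eps @ t) ** p)) ** (1 / p)

t_star = np.sort(np.abs(t))[::-1]        # non-increasing rearrangement

subgaussian = np.sqrt(p) * np.sqrt(np.sum(t ** 2))
trivial = np.sum(np.abs(t))
combined = np.sum(t_star[:k]) + np.sqrt(p) * np.sqrt(np.sum(t_star[k:] ** 2))

# The empirical moment sits below all three bounds.
assert lhs <= trivial
assert lhs <= subgaussian
assert lhs <= combined
```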

SLIDE 53

The very very first step.

$$\mathbb{E}\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \Big| \sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2 \Big|\;? \quad \longrightarrow \quad t_j = \langle X_j, a\rangle^2$$

What is
$$\sup_{a \in S^{n-1}} \sum_{j=1}^k t_j^* = \sup_{a \in S^{n-1}} \sum_{j=1}^k \big(\langle X_j, a\rangle^*\big)^2 \;?$$

SLIDE 54

The very very first step.

$$\mathbb{E}\mathbb{E}_\varepsilon \sup_{a \in S^{n-1}} \Big| \sum_{j=1}^N \varepsilon_j \langle X_j, a\rangle^2 \Big|\;? \quad \longrightarrow \quad t_j = \langle X_j, a\rangle^2$$

Duality and sparsity:
$$\left(\sup_{a \in S^{n-1}} \sum_{j=1}^k t_j^*\right)^{1/2} = \left(\sup_{a \in S^{n-1}} \sum_{j=1}^k \big(\langle X_j, a\rangle^*\big)^2\right)^{1/2} = \sup_{a \in S^{n-1}} \sup_{\alpha \in S^{k-1}} \sum_{j=1}^k \alpha_j \langle X_j, a\rangle^* = \sup_{a \in S^{n-1}} \sup_{\alpha \in U_k} \sum_{j=1}^k \alpha_j \langle X_j, a\rangle = \sup_{\alpha \in U_k} \Big| \sum_{j=1}^k \alpha_j X_j \Big|_2.$$
SLIDE 55

The key Lemma. Correct symmetrization

Define $A_k$ by
$$A_k = \sup_{\alpha \in U_k} \Big| \sum_{j=1}^k \alpha_j X_j \Big|_2.$$

Lemma. For every $A, Z > 0$,
$$\sup_{a \in S^{n-1}} \left| \sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le 2A^2 + 6\sqrt{n}\, Z + 8N^{\frac{2}{\min(p,4)}}$$
with probability larger than
$$1 - 4\exp(-n) - 4\,\mathbb{P}(A_k > A) - 4 \cdot 9^n \sup_{a \in S^{n-1}} \mathbb{P}\left( \sum_{i > k} \big(\langle X_i, a\rangle^*\big)^4 > Z^2 \right).$$
SLIDE 56

The key Lemma. Correct symmetrization

Define $A_k$ by
$$A_k = \sup_{\alpha \in U_k} \Big| \sum_{j=1}^k \alpha_j X_j \Big|_2.$$

Lemma. For every $A, Z > 0$,
$$\sup_{a \in S^{n-1}} \left| \sum_{j=1}^N \left( \langle X_j, a\rangle^2 - \mathbb{E}\langle X_j, a\rangle^2 \right) \right| \le 2A^2 + 6\sqrt{n}\, Z + 8N^{\frac{2}{\min(p,4)}}$$
with probability larger than
$$1 - 4\exp(-n) - 4\,\mathbb{P}(A_k > A) - 4 \cdot 9^n \sup_{a \in S^{n-1}} \mathbb{P}\left( \sum_{i > k} \big(\langle X_i, a\rangle^*\big)^4 > Z^2 \right).$$

Mixture of the trivial bound for the Rademacher averages (combinatorics on the rearrangement) and of the subgaussian bound.

SLIDE 57

Second step.

Choose $k = n$. If $p > 4$ and for all $t \ge 1$ and all $a \in S^{n-1}$, $\mathbb{P}\left( |\langle X_i, a\rangle| > t \right) \le \frac{1}{t^p}$, then with probability greater than $1 - 10^{-n}$,
$$\sum_{i > n} \big(\langle X_i, a\rangle^*\big)^4 \le C_p\, N \quad \longrightarrow \quad Z = C_p \sqrt{N}.$$

SLIDE 58

Third and main step. Evaluate $A_k$

$$A_k^2 = \sup_{\alpha \in U_k} \Big| \sum_{j=1}^k \alpha_j X_j \Big|_2^2 = \sup_{\alpha \in U_k} \left( \sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle + \sum_{j=1}^k \alpha_j^2\, |X_j|_2^2 \right) \le \sup_{\alpha \in U_k} \sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle + \max_{j \le N} |X_j|_2^2.$$

SLIDE 59

Third and main step. Evaluate $A_k$

Call
$$B_k^2 = \sup_{\alpha \in U_k} \sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle.$$

SLIDE 60

Third and main step. Evaluate $A_k$

Call
$$B_k^2 = \sup_{\alpha \in U_k} \sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle.$$

Combinatorial argument:
$$\sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle = \frac{4}{2^N} \sum_{I \subset \{1,\ldots,N\}} \Big\langle \sum_{i \in I} \alpha_i X_i, \sum_{j \notin I} \alpha_j X_j \Big\rangle.$$

SLIDE 61

Third and main step. Evaluate $A_k$

Call
$$B_k^2 = \sup_{\alpha \in U_k} \sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle.$$

Combinatorial argument:
$$\sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle = \frac{4}{2^N} \sum_{I \subset \{1,\ldots,N\}} \Big\langle \sum_{i \in I} \alpha_i X_i, \sum_{j \notin I} \alpha_j X_j \Big\rangle.$$

Call
$$Q_k(I) = \sup_{\alpha \in U_k} \Big\langle \sum_{i \in I} \alpha_i X_i, \sum_{j \notin I} \alpha_j X_j \Big\rangle.$$

Therefore
$$B_k^2 \le \frac{4}{2^N} \sum_{I \subset \{1,\ldots,N\}} Q_k(I).$$
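The decoupling identity can be checked exactly for small $N$ by enumerating all $2^N$ subsets (random data, arbitrary sizes):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
n, N = 4, 6
X = rng.standard_normal((N, n))
alpha = rng.standard_normal(N)

# Left-hand side: the off-diagonal bilinear form sum_{i != j} a_i a_j <X_i, X_j>.
G = X @ X.T
lhs = alpha @ G @ alpha - np.sum(alpha**2 * np.diag(G))

# Right-hand side: (4 / 2^N) * sum over all subsets I of the decoupled form
# <sum_{i in I} a_i X_i, sum_{j not in I} a_j X_j>.  Each ordered pair (i, j)
# with i != j is split by exactly 2^(N-2) subsets, which yields the identity.
total = 0.0
for r in range(N + 1):
    for I in combinations(range(N), r):
        mask = np.zeros(N, dtype=bool)
        mask[list(I)] = True
        u = (alpha[mask, None] * X[mask]).sum(axis=0)
        v = (alpha[~mask, None] * X[~mask]).sum(axis=0)
        total += u @ v
rhs = 4.0 * total / 2**N

assert abs(lhs - rhs) < 1e-9
```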

SLIDE 62

Third and main step. Evaluate $A_k$

Call
$$B_k^2 = \sup_{\alpha \in U_k} \sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle.$$

Combinatorial argument:
$$\sum_{i \neq j} \alpha_i \alpha_j \langle X_i, X_j\rangle = \frac{4}{2^N} \sum_{I \subset \{1,\ldots,N\}} \Big\langle \sum_{i \in I} \alpha_i X_i, \sum_{j \notin I} \alpha_j X_j \Big\rangle.$$

Call
$$Q_k(I) = \sup_{\alpha \in U_k} \Big\langle \sum_{i \in I} \alpha_i X_i, \sum_{j \notin I} \alpha_j X_j \Big\rangle.$$

Therefore
$$B_k^2 \le \frac{4}{2^N} \sum_{I \subset \{1,\ldots,N\}} Q_k(I).$$

Study of quadratic forms.

SLIDE 63

Remarks.

Recent work on the smallest singular value: Srivastava–Vershynin, Oliveira (Roberto Imbuzeiro), Koltchinskii–Mendelson.
→ It is "easier" in the sense that you need weaker assumptions: a fourth-moment assumption on the linear functionals already gives the good rate, as in the Bai–Yin result.
→ Reconstruction.

SLIDE 64

THANK YOU