Near Optimal Dimensionality Reductions that Preserve Volumes


SLIDE 1

Near Optimal Dimensionality Reductions that Preserve Volumes

RANDOM/APPROX 2008
Avner Magen and Anastasios Zouzias

University of Toronto

August, 2008

A. Zouzias (University of Toronto), Dimensionality Reductions for Volumes, RANDOM/APPROX 2008

SLIDE 2

Dimension Reduction

P ⊆ R^t: a set of n points.

Goal: Find f : P → R^d (d ≪ n, t) such that some property is preserved.

Measure of quality (Distance): f has distortion 1+ε if for all p, q ∈ P,
‖p−q‖ ≤ ‖f(p)−f(q)‖ ≤ (1+ε)‖p−q‖.

Measure of quality (Volume, this talk): f has volume distortion 1+ε if for all S ⊂ P with |S| ≤ k,
1 ≤ (vol(f(S)) / vol(S))^(1/(|S|−1)) ≤ 1+ε.

SLIDE 6

Johnson–Lindenstrauss Lemma

Lemma (Distances): Let P be an n-point subset of Euclidean space. There exists a mapping f from P into R^d, d = O(ε⁻² log n), such that for all x, y ∈ P,
(1−ε)‖x−y‖ ≤ ‖f(x)−f(y)‖ ≤ (1+ε)‖x−y‖.

This is almost tight: there is a lower bound of Ω(ε⁻² log n / log(1/ε)) [Alon, 2003].
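The JL guarantee above is easy to check numerically. Below is a minimal sketch, not the construction from the talk: the scaling 1/√d and the constant 8 in d = O(ε⁻² log n) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, t, eps = 50, 1000, 0.5
d = int(np.ceil(8 * np.log(n) / eps**2))    # d = O(eps^-2 log n); the constant 8 is illustrative

P = rng.normal(size=(n, t))                 # n arbitrary points in R^t
X = rng.normal(size=(t, d)) / np.sqrt(d)    # Gaussian projection, scaled so squared norms are preserved in expectation

Q = P @ X                                   # projected points in R^d

# Multiplicative distortion over all pairs of distinct points.
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
Dp = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
mask = ~np.eye(n, dtype=bool)
ratios = Dp[mask] / D[mask]
print(ratios.min(), ratios.max())           # every pairwise distance is stretched by only a small factor
```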

SLIDE 8

Random Projections

Many ways to generate such a (linear) mapping (encoded by X ∈ R^{n×d}):

X_ij ∼ N(0,1) (this talk)
X_ij ∼ ±1, each with probability 1/2
Sparse Gaussian matrix (with preprocessing)
Entries with subgaussian tails
ECC and Rademacher random variables
Lean Walsh Transform (next talk, [Liberty et al., 2008])
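Two of the generators listed above are easy to sketch. Assuming the usual 1/√d scaling (not stated on the slide), both approximately preserve the norm of any fixed vector:

```python
import numpy as np

rng = np.random.default_rng(4)
t, d = 500, 200

v = rng.normal(size=t)                                       # an arbitrary fixed vector in R^t

# Two of the generators above, both scaled by 1/sqrt(d):
X_gauss = rng.normal(size=(t, d)) / np.sqrt(d)               # X_ij ~ N(0,1)
X_sign = rng.choice([-1.0, 1.0], size=(t, d)) / np.sqrt(d)   # X_ij ~ +-1 w.p. 1/2

for X in (X_gauss, X_sign):
    print(np.linalg.norm(v @ X) / np.linalg.norm(v))         # both ratios are close to 1
```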

SLIDE 10

Related Work: Extensions of JL to Other Cases

[Magen, 2002] Preserves volumes of subsets of size up to k, and affine distances, using O(kε⁻² log n) dimensions.
[Sarlos, 2006] Preserves distances of all points lying in any k-dimensional linear subspace by projecting into O(kε⁻² log(k/ε)) dimensions.
[Wakin and Baraniuk, 2006; Agarwal et al., 2007; Clarkson, 2008] Moving points, curves, surfaces, manifolds, etc.

Our Contribution

We improve Magen's result for volumes by showing that O(max{k/ε, ε⁻² log n}) dimensions are enough.

The JL Lemma preserves more than distances: it preserves volumes of subsets of size up to log n / ε.

SLIDE 12

Our Result

Theorem: Let P ⊂ R^n. There exists f : P → R^d, d = O(max{k/ε, ε⁻² log n}), such that for every subset S of P with 1 < |S| ≤ k,
1−ε ≤ (vol(f(S)) / vol(S))^(1/(|S|−1)) ≤ 1+ε.

Overview of proof:
There are roughly O(n^s) sets of size s.
It suffices to show that the failure probability for a subset of size s is roughly e^(−Ω(sdε²)). (Core of the talk.)
A union bound then implies that a volume-preserving mapping exists.

SLIDE 14

Proof

Two steps:
1. Prove it for the regular n-simplex.
2. Reduce the general case to the above case.

SLIDE 15

The n-simplex

Assume the input points are {e₁, ..., e_n}.
Form a matrix by mapping e_i to the i-th row; this gives the identity matrix.
Random projection (without normalization): multiply by an n×d matrix X with entries X_ij ∼ N(0,1).
The projected points are then random Gaussian vectors in R^d.
Pick any subset S, |S| = s, of rows of X; call the resulting s×d matrix X_S.
Then vol(S ∪ {0}) = √(det(X_S X_S^⊤)) / s!.
What is the distribution of √(det(X_S X_S^⊤))?
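The volume formula on this slide, vol(S ∪ {0}) = √(det(X_S X_S^⊤)) / s!, can be sketched directly. The helper below is for illustration and is not from the talk:

```python
import numpy as np
from math import factorial

def simplex_volume(V):
    """s-dimensional volume of the simplex with vertices {0} and the rows of V (V is s-by-d, s <= d)."""
    s = V.shape[0]
    gram = V @ V.T                               # the s x s Gram matrix V V^T
    return np.sqrt(np.linalg.det(gram)) / factorial(s)

# Sanity check: the triangle with vertices 0, e1, e2 in R^3 has area 1/2.
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(simplex_volume(V))   # 0.5
```

Scaling behaves as expected too: doubling the vertices multiplies the area by 4, so `simplex_volume(2 * V)` is 2.0.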

SLIDE 22

Distribution of √(det(X_S X_S^⊤)) for s = 2

1) Pick a random vector (u₁, u₂, ..., u_d) with u_i ∼ N(0,1).
2) Rotate it onto the x-axis (rotational invariance): it becomes (χ_d, 0, ..., 0), where χ_d = √(Σ_{i=1..d} u_i²).
3) Pick one more random vector (u₁, u₂, ..., u_d); its component orthogonal to the x-axis has length χ_{d−1}.

Hence the volume of the spanned parallelogram satisfies vol ∼ χ_d · χ_{d−1}.

SLIDE 27

Random Determinants

For s = 2, √(det(X_S X_S^⊤)) = χ_d · χ_{d−1}.

Using induction, we can show: if X_S is an s×d (s ≤ d) Gaussian random matrix, then

det(X_S X_S^⊤) ∼ Π_{i=1}^s χ²_{d−i+1},

where the chi-square random variables are independent.

Facts about the χ² distribution:
χ²_t = Σ_{i=1}^t u_i², with u_i ∼ N(0,1).
χ²_t is sharply concentrated around its expected value:
Pr[ |χ²_t − E[χ²_t]| > εE[χ²_t] ] < e^(−Ω(tε²)).

What about the concentration of the (normalized) product of χ² variables?
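The product-of-chi-squares law above implies E[det(X_S X_S^⊤)] = d(d−1)···(d−s+1), since E[χ²_t] = t and the factors are independent. A quick Monte Carlo check (the dimensions, trial count, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
s, d, trials = 3, 10, 200_000

# det(X_S X_S^T) for an s-by-d Gaussian matrix has the law of a product of
# independent chi-squares with d, d-1, ..., d-s+1 degrees of freedom, so its
# mean should be d * (d-1) * ... * (d-s+1).
X = rng.normal(size=(trials, s, d))
dets = np.linalg.det(X @ X.transpose(0, 2, 1))
print(dets.mean())          # close to 10 * 9 * 8 = 720
```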

SLIDE 29

Normalized Product of Independent χ²

Theorem ([Gordon, 1989]):
(Π_{i=1}^s χ²_{d−i+1})^(1/s) ∼ (1/s) χ²_{sd±O(s²)}.

Reminder: χ²_t is concentrated with e^(−Ω(tε²)).

Implication: Take any subset S of the vertices of the n-simplex; call it I_S. Then

(vol(f(I_S)) / vol(I_S))^(1/s) = ( (√(det(X_S X_S^⊤))/s!) / (√(det(I_S I_S^⊤))/s!) )^(1/s) = (det(X_S X_S^⊤))^(1/(2s)),

and by the theorem above, (det(X_S X_S^⊤))^(1/s) ≈ (1/s) χ²_{sd}.

The normalized volume of any subset I_S of the n-simplex is therefore concentrated with exp(−Ω(sdε²)).

SLIDE 34

Points in General Position

Let P be an n-point set in general position, arranged as an n×n matrix (point i in row i, entries P_ij), and let X be an n×d matrix with X_ij ∼ N(0,1).

Pick any subset P_S of P (an s×n submatrix of rows) and let Y_S = P_S X.

Observation 1: Y_S is a correlated Gaussian matrix. Using the stability property of the Gaussian distribution, we can show that

det(Y_S Y_S^⊤) / det(P_S P_S^⊤) ∼ det(X_S X_S^⊤).

Hence

(vol(Y_S) / vol(P_S))^(1/s) = ( (√(det(Y_S Y_S^⊤))/s!) / (√(det(P_S P_S^⊤))/s!) )^(1/s) ∼ (det(X_S X_S^⊤))^(1/(2s)),

the same distribution as in the n-simplex case.

Observation 2: The (normalized) volume distribution is independent of P_S.

Conclusion: For points in general position we have the desired concentration. Therefore, by the union bound, there exists a volume-preserving mapping into R^d.
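The reduction above can be checked numerically: for Y_S = P_S X, the ratio det(Y_S Y_S^⊤)/det(P_S P_S^⊤) should match the determinant of an s×d Gaussian Gram matrix in distribution. The sketch below only compares means (which should both equal d(d−1)···(d−s+1)); the dimensions and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
s, n, d, trials = 2, 6, 8, 100_000

P_S = rng.normal(size=(s, n))        # a fixed s-subset of points in general position

# Y_S = P_S X is a correlated Gaussian matrix; by Gaussian stability,
# det(Y_S Y_S^T) / det(P_S P_S^T) ~ det(X_S X_S^T) for a plain s-by-d Gaussian
# matrix, whose mean is d * (d-1) = 56 here.
X = rng.normal(size=(trials, n, d))
Y = P_S @ X                          # broadcasts to trials x s x d
ratios = np.linalg.det(Y @ Y.transpose(0, 2, 1)) / np.linalg.det(P_S @ P_S.T)
print(ratios.mean())                 # close to 8 * 7 = 56
```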

SLIDE 38

Why k/ε in d = O(max{k/ε, ε⁻² log n})?

Let µ_s be the expected normalized volume of subsets of size s. µ_s is decreasing in s, with µ_s ≤ √(d − s/2). Also, µ₂ ≥ √(d − 1).

We must satisfy

√(d − k/2) / √(d − 1) ≥ 1 − O(ε).

Therefore, d = Ω(k/ε).
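The last implication can be expanded in one line; this is a sketch with constants suppressed:

```latex
\sqrt{\frac{d-k/2}{d-1}}
= \sqrt{1-\frac{k/2-1}{d-1}}
\approx 1-\frac{k-2}{4(d-1)}
\;\ge\; 1-O(\varepsilon)
\quad\Longrightarrow\quad
d = \Omega(k/\varepsilon).
```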

SLIDE 39

Digestion slide

                                   Distance (JL)   |  Volume (this work)
Parameter (|S| = s):               s = 1           |  s ≤ k
Quantity (squared, normalized):    ‖SX‖² / ‖S‖²    |  (det(SXX^⊤S^⊤) / det(SS^⊤))^(1/s)
Random variable:                   χ²_d            |  (Π_{i=1}^s χ²_{d−i+1})^(1/s)
Concentration:                     e^(−Ω(dε²))     |  e^(−Ω(sdε²))
Lower bound:                       [Alon, 2003]    |  ?

SLIDE 40

Open Question – Lower Bound

Open Question: Let A be an n×n positive semidefinite matrix such that the determinant of every s×s principal minor (s ≤ k) is between (1−ε)^(s−1) and 1. Is the rank of A at least min{Ω(k/ε), n}?

SLIDE 41

Summary

We discussed:
An O(max{ε⁻² log n, k/ε}) upper bound for volume-preserving (for subsets of size up to k) dimension reduction.
The JL Lemma preserves more than distances: it preserves volumes of subsets of size up to log n / ε.
An open question about the lower bound.

Future work: Extension to R^d, d = o(log n).

SLIDE 42

Thank You

SLIDE 43

References

Agarwal, P. K., Har-Peled, S., and Yu, H. (2007). Embeddings of surfaces, curves, and moving points in Euclidean space. In SCG '07: Proceedings of the Twenty-Third Annual Symposium on Computational Geometry, pages 381–389. ACM.
Alon, N. (2003). Problems and results in extremal combinatorics, I. Discrete Mathematics, 273:31–53.
Clarkson, K. L. (2008). Tighter bounds for random projections of manifolds. In SCG '08: Proceedings of the Twenty-Fourth Annual Symposium on Computational Geometry, pages 39–48. ACM.
Gordon, L. (1989). Bounds for the distribution of the generalized variance. The Annals of Statistics, 17(4):1684–1692.
Liberty, E., Ailon, N., and Singer, A. (2008). Fast random projections using lean Walsh transforms. To appear in RANDOM.
Magen, A. (2002). Dimensionality reductions that preserve volumes and distance to affine spaces, and their algorithmic applications. In RANDOM '02, pages 239–253. Springer-Verlag.
Sarlos, T. (2006). Improved approximation algorithms for large matrices via random projections. In FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 143–152. IEEE Computer Society.
Wakin, M. and Baraniuk, R. (2006). Random projections of signal manifolds. In ICASSP 2006: IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 5.

SLIDE 45

Extra Slides: Concentration Bounds for χ²

Lemma: Let χ²_t = Σ_{i=1}^t X_i², where X_i ∼ N(0,1). Then for every ε with 0 < ε ≤ 1/2, we have

Pr[ χ²_t ≤ (1−ε)E[χ²_t] ] ≤ exp(−tε²/6)

and

Pr[ χ²_t ≥ (1+ε)E[χ²_t] ] ≤ exp(−tε²/6).
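The lemma's tail bound can be checked by simulation; the values of t, ε, the trial count, and the seed below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
t, eps, trials = 200, 0.5, 200_000

samples = rng.chisquare(t, size=trials)        # chi-square with t degrees of freedom; E = t
emp = np.mean(np.abs(samples - t) > eps * t)   # empirical two-sided tail probability
bound = 2 * np.exp(-t * eps**2 / 6)            # lemma's bound, summed over both tails
print(emp, bound)                              # the empirical tail should not exceed the bound
```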

SLIDE 46

Extra Slides: Gordon's Theorem (precise form)

Theorem ([Gordon, 1989]): Let u_i := χ²_{d−i+1}, for i = 1, 2, ..., s, be independent chi-square random variables. Then for every s ≥ 1,

χ²_{s(d−s+1)+(s−1)(s−2)/2} ⪰ s · (Π_{i=1}^s u_i)^(1/s) ⪰ χ²_{s(d−s+1)}.   (1)

By X ⪰ Y we denote that the random variable X is stochastically greater than Y.