COL866: Foundations of Data Science


  1. COL866: Foundations of Data Science
     Ragesh Jaiswal, IITD

  2. Gaussians in High Dimension

  3. High Dimension Space: Gaussian annulus theorem
     A one-dimensional Gaussian has much of its probability mass close to the origin. Does this generalise to higher dimensions?
     A d-dimensional spherical Gaussian with mean 0 and variance σ² in each coordinate has density p(x) = (1 / ((2π)^(d/2) σ^d)) e^(−||x||² / (2σ²)).
     Let σ² = 1. Even though the probability density is high within the unit ball, the volume of the unit ball is negligible, and hence the probability mass within the unit ball is negligible.
     When the radius is √d, the volume becomes large enough to make the probability mass around radius √d significant.
     Even though the volume keeps increasing beyond radius √d, the probability density keeps diminishing, so the probability mass much beyond radius √d is again negligible.
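     A quick numerical illustration of this intuition (a minimal sketch, not from the slides; it assumes NumPy is available): sample points from a spherical Gaussian and observe that their norms concentrate around √d.

```python
# Minimal sketch: norms of d-dimensional spherical Gaussian samples
# concentrate around sqrt(d) with O(1) spread.
import numpy as np

rng = np.random.default_rng(0)
for d in (10, 100, 1000):
    x = rng.standard_normal((10000, d))     # 10,000 samples, unit variance per coordinate
    norms = np.linalg.norm(x, axis=1)
    print(f"d={d:5d}  sqrt(d)={np.sqrt(d):7.2f}  "
          f"mean ||x||={norms.mean():7.2f}  std ||x||={norms.std():.3f}")
```

     The mean norm tracks √d while the spread stays roughly constant, which is exactly the annulus picture described above.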

  4. High Dimension Space: Gaussian annulus theorem
     Even though the probability density is high within the unit ball, the volume of the unit ball is negligible, and hence the probability mass within the unit ball is negligible.
     When the radius is √d, the volume becomes large enough to make the probability mass around radius √d significant.
     Even though the volume keeps increasing beyond radius √d, the probability density keeps diminishing, so the probability mass much beyond radius √d is again negligible.
     This intuition is formalised in the next theorem.
     Theorem (Gaussian Annulus Theorem): For a d-dimensional spherical Gaussian with unit variance in each direction, for any β ≤ √d, all but at most 3e^(−cβ²) of the probability mass lies within the annulus √d − β ≤ ||x|| ≤ √d + β, where c is a fixed positive constant.

  5. High Dimension Space: Gaussian annulus theorem
     Theorem (Gaussian Annulus Theorem): For a d-dimensional spherical Gaussian with unit variance in each direction, for any β ≤ √d, all but at most 3e^(−cβ²) of the probability mass lies within the annulus √d − β ≤ ||x|| ≤ √d + β, where c is a fixed positive constant.
     E[||x||²] = Σ_{i=1}^d E[x_i²] = d · E[x_1²] = d. So, the average squared distance of a point from the center is d.
     The Gaussian annulus theorem essentially says that the distance of points is tightly concentrated around √d (called the radius of the Gaussian).

  6. Random Projection and Johnson-Lindenstrauss (JL)

  7. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Typical data analysis tasks require one to process a d-dimensional point set of cardinality n, where n and d are very large.
     Many data processing tasks depend only on the pairwise distances between the points (e.g., nearest neighbour search).
     Each such distance query has a significant computational cost due to the large value of the dimension d.
     Question: Can we perform dimensionality reduction on the dataset? That is, can we find a mapping f: R^d → R^k with k << d such that the pairwise distances between the mapped points are preserved (in a relative sense)?

  8. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Claim: There exists a mapping f: R^d → R^k with k << d such that the pairwise distances between the mapped points are preserved (in a relative sense).
     Consider the following mapping: f(v) = (u_1 · v, ..., u_k · v), where u_1, ..., u_k ∈ R^d are Gaussian vectors with zero mean and unit variance in each coordinate.

  9. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Claim: There exists a mapping f: R^d → R^k with k << d such that the pairwise distances between the mapped points are preserved (in a relative sense).
     Consider the following mapping: f(v) = (u_1 · v, ..., u_k · v), where u_1, ..., u_k ∈ R^d are Gaussian vectors with zero mean and unit variance in each coordinate.
     We will show that ||f(v)|| ≈ √k ||v||. Since the mapping is linear, for any two vectors v_1, v_2 ∈ R^d we then have ||f(v_1) − f(v_2)|| ≈ √k · ||v_1 − v_2||.
     So, the distance between v_1 and v_2 can be estimated by computing the distance between the mapped points and then dividing the result by √k.
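     The mapping above is just multiplication by a k × d matrix with independent standard Gaussian entries. A minimal sketch of estimating a distance this way (assuming NumPy; the values of d and k are illustrative):

```python
# Minimal sketch: f(v) = (u_1·v, ..., u_k·v) as multiplication by a k x d
# Gaussian matrix U; distances are recovered by dividing by sqrt(k).
import numpy as np

rng = np.random.default_rng(1)
d, k = 10000, 200                               # illustrative dimensions
U = rng.standard_normal((k, d))                 # rows play the role of u_1, ..., u_k

def f(v):
    return U @ v

v1, v2 = rng.standard_normal(d), rng.standard_normal(d)
print(np.linalg.norm(v1 - v2))                          # true distance
print(np.linalg.norm(f(v1) - f(v2)) / np.sqrt(k))       # estimate from the projection
```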

  10. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Claim: For any v ∈ R^d, ||f(v)|| ≈ √k ||v||.
     Theorem (Random Projection Theorem): There exists a constant c > 0 such that for any ε ∈ (0, 1) and v ∈ R^d,
     Pr[ | ||f(v)|| − √k ||v|| | ≥ ε √k ||v|| ] ≤ 3e^(−ckε²).
     The probability is over the randomness involved in sampling the vectors u_i.

  11. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Claim: For any v ∈ R^d, ||f(v)|| ≈ √k ||v||.
     Theorem (Random Projection Theorem): There exists a constant c > 0 such that for any ε ∈ (0, 1) and v ∈ R^d,
     Pr[ | ||f(v)|| − √k ||v|| | ≥ ε √k ||v|| ] ≤ 3e^(−ckε²).
     The probability is over the randomness involved in sampling the vectors u_i.
     Proof. Claim 1: It is sufficient to prove the statement for unit vectors v.
     For each u_i, we have: Var(u_i · v) = Σ_{j=1}^d Var(u_ij v_j) = Σ_{j=1}^d v_j² Var(u_ij) = Σ_{j=1}^d v_j² = 1 (since v is a unit vector).
     So, f(v) = (u_1 · v, ..., u_k · v) is a k-dimensional Gaussian with unit variance in each coordinate. The result now follows from a simple application of the Gaussian Annulus Theorem.
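     A quick sanity check of the variance computation above (a sketch assuming NumPy; the dimension is illustrative): for a unit vector v and a standard Gaussian vector u, the dot product u · v has variance ||v||² = 1.

```python
# Minimal sketch: Var(u · v) = ||v||^2 = 1 when v is a unit vector and the
# coordinates of u are independent N(0, 1).
import numpy as np

rng = np.random.default_rng(2)
d = 500
v = rng.standard_normal(d)
v /= np.linalg.norm(v)                          # make v a unit vector
dots = rng.standard_normal((200000, d)) @ v     # 200,000 samples of u · v
print(dots.var())                               # close to 1
```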

  12. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Claim: For any two vectors v_1, v_2 ∈ R^d, ||f(v_1) − f(v_2)|| ≈ √k · ||v_1 − v_2||.
     Theorem (Johnson-Lindenstrauss (JL) Theorem): For any 0 < ε < 1 and any integer n, let k ≥ (3 / (cε²)) ln n with c as in the Random Projection Theorem. For any set of n points in R^d, the random projection f: R^d → R^k defined as before has the property that for all pairs of points v_i and v_j, with probability at least (1 − 3/(2n)),
     (1 − ε) √k ||v_i − v_j|| ≤ ||f(v_i) − f(v_j)|| ≤ (1 + ε) √k ||v_i − v_j||.
     Proof. We obtain the result from the Random Projection Theorem by applying the union bound over the at most C(n, 2) < n²/2 pairs of points.
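     A small empirical check of the JL guarantee (a sketch assuming NumPy; n, d, k, and ε below are illustrative values rather than the exact bound from the theorem):

```python
# Minimal sketch: project n points and check that scaled pairwise distances
# stay within a (1 ± eps) factor of the original ones.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, d, k, eps = 50, 5000, 400, 0.2               # illustrative parameters
X = rng.standard_normal((n, d))                 # n points in R^d
U = rng.standard_normal((k, d))                 # random projection matrix
Y = X @ U.T                                     # f(v_i) for every point

ratios = [np.linalg.norm(Y[i] - Y[j]) / (np.sqrt(k) * np.linalg.norm(X[i] - X[j]))
          for i, j in combinations(range(n), 2)]
print(min(ratios), max(ratios))                 # typically within [1 - eps, 1 + eps]
```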

  13. High Dimension Space: Random Projection and Johnson-Lindenstrauss (JL)
     Theorem (Johnson-Lindenstrauss (JL) Theorem): For any 0 < ε < 1 and any integer n, let k ≥ (3 / (cε²)) ln n with c as in the Random Projection Theorem. For any set of n points in R^d, the random projection f: R^d → R^k defined as before has the property that for all pairs of points v_i and v_j, with probability at least (1 − 3/(2n)),
     (1 − ε) √k ||v_i − v_j|| ≤ ||f(v_i) − f(v_j)|| ≤ (1 + ε) √k ||v_i − v_j||.
     Here is an application of the JL Theorem to the Nearest Neighbour (NN) problem: Suppose we need to pre-process n data points X ⊆ R^d so that we can answer at most n′ queries of the form "find the point in X that is nearest to a given point p ∈ R^d".
     If we use a JL mapping with k ≥ (3 / (cε²)) ln(n + n′), then we can store f(x) for all x ∈ X. For a query point p, we simply return the point x ∈ X whose image f(x) is nearest to f(p).
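     A sketch of this pre-processing and query scheme (assuming NumPy; all sizes are illustrative, and the exact nearest neighbour is computed here only to compare against the projected answer):

```python
# Minimal sketch: store f(x) for every data point, answer a query p by the
# point whose projection is nearest to f(p), and compare with the exact NN.
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 1000, 2000, 200                       # illustrative sizes
X = rng.standard_normal((n, d))                 # data points
U = rng.standard_normal((k, d))                 # random projection matrix
FX = X @ U.T                                    # pre-processed: f(x) for all x in X

p = rng.standard_normal(d)                      # query point
approx = np.argmin(np.linalg.norm(FX - U @ p, axis=1))  # NN among the projections
exact = np.argmin(np.linalg.norm(X - p, axis=1))        # exact NN in R^d
print(approx, exact)                            # the two indices usually agree
```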

  14. Separating Gaussians

  15. Separating Gaussians: Mixture of Gaussians
     Mixtures of Gaussians are used to model heterogeneous data coming from multiple sources.
     Consider the example of the heights of people in a city: Let p_M(x) denote the Gaussian density of the heights of men in the city and p_F(x) that of women. Let w_M and w_F denote the proportions of men and women in the city, respectively.
     So, the mixture model p(x) = w_M · p_M(x) + w_F · p_F(x) is a natural way to model the density of the heights of people in the city.

  16. Separating Gaussians: Mixture of Gaussians
     Mixtures of Gaussians are used to model heterogeneous data coming from multiple sources.
     Consider the example of the heights of people in a city: Let p_M(x) denote the Gaussian density of the heights of men in the city and p_F(x) that of women. Let w_M and w_F denote the proportions of men and women in the city, respectively.
     So, the mixture model p(x) = w_M · p_M(x) + w_F · p_F(x) is a natural way to model the density of the heights of people in the city.
     The parameter estimation problem is to guess the parameters of the mixture given samples from the mixture. In the above example, this means that we are given the heights of a number of people in the city and the task is to infer w_M, w_F, and the mean and variance of p_M(x) and p_F(x).
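     To make the setup concrete, here is a sketch of sampling from such a two-component mixture (assuming NumPy; the weights, means, and variances below are made-up illustrative values, and recovering them from the samples alone is exactly the parameter estimation problem):

```python
# Minimal sketch: draw heights from p(x) = w_M * p_M(x) + w_F * p_F(x) with
# made-up parameters; the estimation task is to recover them from the samples.
import numpy as np

rng = np.random.default_rng(5)
w_M, w_F = 0.5, 0.5                             # assumed mixing weights
mu_M, sigma_M = 175.0, 7.0                      # assumed mean/std of men's heights (cm)
mu_F, sigma_F = 162.0, 6.0                      # assumed mean/std of women's heights (cm)

n = 100000
is_male = rng.random(n) < w_M                   # choose a component for each sample
heights = np.where(is_male,
                   rng.normal(mu_M, sigma_M, n),
                   rng.normal(mu_F, sigma_F, n))
print(heights.mean(), heights.std())            # statistics of the mixture as a whole
```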
