compsci 514: algorithms for data science



  1. compsci 514: algorithms for data science. Cameron Musco, University of Massachusetts Amherst. Fall 2019. Lecture 24 (Final Lecture!)

  2. logistics
  • Problem Set 4 due Sunday 12/15 at 8pm.
  • Exam prep materials (including practice problems) posted under the ‘Schedule’ tab of the course page.
  • I will hold office hours on both Tuesday and Wednesday next week from 10am to 12pm to prep for the final.
  • SRTI survey is open until 12/22. Your feedback this semester has been very helpful to me, so please fill out the survey! https://owl.umass.edu/partners/courseEvalSurvey/uma/

  3. summary
  Last Class:
  • Compressed sensing and sparse recovery.
  • Applications to sparse regression, the frequent elements problem, sparse Fourier transform.
  This Class:
  • Finish up sparse recovery.
  • Solution via basis pursuit. Idea of convex relaxation.
  • Wrap up.

  5. sparse recovery
  Problem Set Up: Given data matrix A ∈ ℝ^{n×d} with n < d and measurements b = Ax. Recover x under the assumption that it is k-sparse, i.e., has at most k ≪ d nonzero entries.

  x = arg min_{z ∈ ℝ^d : Az = b} ∥z∥₀

  Last Time: Proved this is possible (i.e., the solution x is unique) when A has Kruskal rank ≥ 2k. The Kruskal rank condition can be satisfied with n as small as 2k.
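To make the Kruskal rank condition concrete, here is a minimal brute-force check (an illustrative sketch, not from the lecture; `kruskal_rank` is a hypothetical helper). It verifies that every set of r columns is linearly independent, which is only feasible for tiny matrices:

```python
# Brute-force Kruskal rank: the largest r such that EVERY set of r columns
# of A is linearly independent. Exponential in d -- illustration only.
import itertools
import numpy as np

def kruskal_rank(A):
    n, d = A.shape
    r = 0
    for size in range(1, min(n, d) + 1):
        if all(np.linalg.matrix_rank(A[:, list(cols)]) == size
               for cols in itertools.combinations(range(d), size)):
            r = size
        else:
            break  # a dependent set of this size rules out all larger sizes
    return r

# A random Gaussian matrix with n = 2k rows has Kruskal rank 2k with
# probability 1, matching the "n as small as 2k" claim on the slide.
k, d = 2, 8
A = np.random.randn(2 * k, d)
print(kruskal_rank(A))  # 4 = 2k (almost surely)
```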

  8. frequent items counting
  • A frequency vector with k out of n very frequent items is approximately k-sparse.
  • It can be approximately recovered from its multiplication with a random matrix A with just m = Õ(k) rows.
  • b = Ax can be maintained in a stream using just O(m) space.
  • Exactly the set up of Count-Min sketch in linear algebraic notation.
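A minimal sketch of this linear-algebraic view of Count-Min (parameters and names are illustrative assumptions, not the course's code): each stream update (i, c) adds c times column i of an implicit hash-based matrix A to the sketch b = Ax, so b is maintained in O(m) space without ever storing x.

```python
# Count-Min as a linear sketch: b = Ax where A is defined implicitly by hashes.
import numpy as np

rng = np.random.default_rng(0)
n, rows, width = 10_000, 5, 200          # sketch size m = rows * width counters

seeds = rng.integers(0, 2**31, size=rows)

def bucket(r, i):
    # Entry (r, bucket(r, i)) of the implicit matrix A is 1; all others in row r are 0.
    return hash((int(seeds[r]), int(i))) % width

b = np.zeros((rows, width))              # the sketch b = Ax

def update(i, c=1.0):
    # Stream update x[i] += c translates to b += c * A[:, i].
    for r in range(rows):
        b[r, bucket(r, i)] += c

def estimate(i):
    # Count-Min point query: min over rows; only overestimates, never under.
    return min(b[r, bucket(r, i)] for r in range(rows))

for _ in range(500):
    update(42)                           # one very frequent item
for item in rng.integers(0, n, size=1000):
    update(item)                         # background stream
print(estimate(42))                      # ~500, inflated only by hash collisions
```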

  9. sparse fourier transform
  Discrete Fourier Transform: For a discrete signal (aka a vector) x ∈ ℝⁿ, its discrete Fourier transform is denoted x̂ ∈ ℂⁿ and given by x̂ = Fx, where F ∈ ℂ^{n×n} is the discrete Fourier transform matrix.
  For many natural signals x̂ is approximately sparse: a few dominant frequencies in a recording, superposition of a few radio transmitters sending at different frequencies, etc.
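A quick illustration of this sparsity (an assumed toy example, using NumPy's FFT as the matrix F): a signal built from two sinusoids has a Fourier transform with only four significant entries.

```python
# A two-tone signal is dense in time but 4-sparse in frequency
# (bins 10, 37 and their conjugate mirrors n-10, n-37).
import numpy as np

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 10 * t / n) + 0.5 * np.sin(2 * np.pi * 37 * t / n)

x_hat = np.fft.fft(x)                    # x_hat = F x
print(np.sum(np.abs(x_hat) > 1e-6))     # 4 significant frequencies out of 256
```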

  11. sparse fourier transform
  When the Fourier transform x̂ is sparse, we can recover it from few measurements of x using sparse recovery.
  • x̂ = Fx and so x = F⁻¹x̂ = F*x̂ (x = signal, x̂ = Fourier transform).
  • Translates to big savings in acquisition costs and number of sensors.
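In matrix terms (a sketch under assumed sizes, not from the slides): sampling x at a few time points gives linear measurements of the sparse vector x̂, since Sx = SF⁻¹x̂, so SF⁻¹ plays the role of the measurement matrix.

```python
# A few time-domain samples of x are linear measurements of sparse x_hat:
# S x = (S F_inv) x_hat, with F_inv the inverse DFT matrix.
import numpy as np

n, m = 64, 16
F = np.fft.fft(np.eye(n))                # (unnormalized) DFT matrix
F_inv = np.linalg.inv(F)                 # inverse DFT matrix

x_hat = np.zeros(n, dtype=complex)
x_hat[[3, 17, 40]] = [2.0, -1.0, 0.5]    # 3-sparse Fourier transform
x = F_inv @ x_hat                        # the time-domain signal

S = np.random.default_rng(1).choice(n, size=m, replace=False)
measurements = x[S]                      # m samples of x
assert np.allclose(measurements, F_inv[S] @ x_hat)
```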

  16. sparse fourier transform
  Other Direction: When x itself is sparse, we can recover it from few measurements of the Fourier transform x̂ using sparse recovery. How do we access/measure entries of Sx̂?
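Numerically, these measurements are just a few entries of the DFT of x: Sx̂ = SFx. A toy sketch of the setup (assumed sizes, using NumPy's FFT):

```python
# Other direction: x itself is k-sparse; we observe m of its Fourier
# coefficients, b = S x_hat = S F x.
import numpy as np

n, m, k = 64, 16, 3
rng = np.random.default_rng(2)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse x

x_hat = np.fft.fft(x)                    # x_hat = F x
S = rng.choice(n, size=m, replace=False) # which frequencies we measure
b = x_hat[S]                             # the m measurements S F x
# Recovering x from b is exactly the compressed sensing problem;
# basis pursuit (coming up) solves it.
```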

  20. geosensing
  • In seismology, x is an image of the earth’s crust, and often sparse (e.g., a few locations of oil deposits).
  • Want to recover x from a few measurements of the Fourier transform Sx̂ = SFx.
  • To measure entries of x̂ we need to measure the content of different frequencies in the signal x.
  • Achieved by inducing vibrations of different frequencies with a vibroseis truck, air guns, explosions, etc., and recording the response (more complicated in reality...).

  23. Back to Algorithms

  24. convex relaxation
  We would like to recover k-sparse x from measurements b = Ax by solving the non-convex optimization problem:

  x = arg min_{z ∈ ℝ^d : Az = b} ∥z∥₀

  Works if A has Kruskal rank ≥ 2k, but very hard computationally.
  Convex Relaxation: A very common technique. Just ‘relax’ the problem to be convex.

  Basis Pursuit: x = arg min_{z ∈ ℝ^d : Az = b} ∥z∥₁, where ∥z∥₁ = ∑_{i=1}^d |z(i)|.

  What is one algorithm we have learned for solving this problem?
  • Projected (sub)gradient descent – convex objective function and convex constraint set.
  • An instance of linear programming, so typically faster to solve with a linear programming algorithm (e.g., simplex, interior point); see the sketch below.
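Basis pursuit becomes a linear program by splitting z = u − v with u, v ≥ 0, so that ∥z∥₁ = ∑(uᵢ + vᵢ) at the optimum. A minimal sketch using scipy.optimize.linprog (an illustrative implementation, not the course's code; sizes and seed are assumptions):

```python
# Basis pursuit as an LP: min sum(u + v) s.t. A(u - v) = b, u >= 0, v >= 0.
# At the optimum z = u - v and sum(u + v) = ||z||_1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, k = 40, 100, 5

A = rng.standard_normal((n, d))          # random Gaussian measurement matrix
x = np.zeros(d)
x[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse x
b = A @ x

c = np.ones(2 * d)                       # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])                # encodes A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))

z = res.x[:d] - res.x[d:]
print(np.allclose(z, x, atol=1e-6))      # True: exact recovery (w.h.p.)
```

With n = 40 Gaussian measurements of a 5-sparse vector in 100 dimensions, the relaxation typically recovers x exactly, matching the claim that the convex ℓ₁ problem can substitute for the intractable ℓ₀ one.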
