Tensor Methods: A New Paradigm for Probabilistic Models and Feature Learning


  1. Tensor Methods: A New Paradigm for Probabilistic Models and Feature Learning. Anima Anandkumar, U.C. Irvine.

  2. Learning with Big Data

  3. Data vs. Information

  6. Data vs. Information. Missing observations, gross corruptions, outliers. Learning useful information is like finding a needle in a haystack!

  7. Matrices and Tensors as Data Structures. Multi-modal and multi-relational data. Matrices encode pairwise relations; tensors encode higher-order relations. Multi-modal data figure from Lise Getoor's slides.
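As a minimal sketch of the matrix-vs-tensor distinction above: pairwise co-occurrences fit in a matrix, while triple-wise relations need a 3-way tensor. The triples and dimensions below are made-up example data.

```python
import numpy as np

# Hypothetical multi-relational data: (user, item, context) triples.
triples = [(0, 1, 2), (0, 1, 0), (1, 0, 2), (2, 2, 1)]

pairwise = np.zeros((3, 3))   # matrix: user-item co-occurrence counts
higher = np.zeros((3, 3, 3))  # tensor: user-item-context counts

for u, i, c in triples:
    pairwise[u, i] += 1
    higher[u, i, c] += 1

# The pairwise matrix is the higher-order tensor with the third mode summed out.
assert np.allclose(pairwise, higher.sum(axis=2))
```

The tensor retains information the matrix discards: which context each user-item interaction occurred in.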

  9. Spectral Decomposition of Tensors. Matrix: $M_2 = \sum_i \lambda_i\, u_i \otimes v_i = \lambda_1 u_1 \otimes v_1 + \lambda_2 u_2 \otimes v_2 + \cdots$. Tensor: $M_3 = \sum_i \lambda_i\, u_i \otimes v_i \otimes w_i = \lambda_1 u_1 \otimes v_1 \otimes w_1 + \lambda_2 u_2 \otimes v_2 \otimes w_2 + \cdots$. We have developed efficient methods to solve tensor decomposition.
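One standard way to compute such a decomposition for a symmetric, orthogonally decomposable 3-way tensor is the tensor power method. The sketch below builds a small rank-2 example (dimensions and weights are made up) and recovers one component; it is an illustration, not the talk's production algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic T = sum_i lambda_i * v_i (x) v_i (x) v_i with orthonormal v_i.
d, k = 5, 2
lams = np.array([3.0, 1.5])                       # weights lambda_i
V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal columns v_i
T = sum(l * np.einsum('i,j,k->ijk', v, v, v) for l, v in zip(lams, V.T))

def tensor_power_iteration(T, n_iter=200, seed=1):
    """Return one (eigenvalue, eigenvector) pair of a symmetric 3-way tensor."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        u = np.einsum('jkl,k,l->j', T, u, u)   # multilinear map T(I, u, u)
        u /= np.linalg.norm(u)
    lam = np.einsum('jkl,j,k,l->', T, u, u, u)  # eigenvalue T(u, u, u)
    return lam, u

lam, u = tensor_power_iteration(T)
# With random initialization the iteration converges to one of the true
# components (lambda_i, v_i); deflating T by lam * u (x) u (x) u and
# repeating recovers the rest.
```

The per-iteration cost is one `einsum` contraction, which is why these methods map well onto optimized linear algebra libraries and GPUs.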

  10. Strengths of Tensor Methods. Fast and accurate: orders of magnitude faster than previous methods. Embarrassingly parallel and suited for cloud systems, e.g., Spark. Exploit optimized linear algebra libraries and the parallelism of GPU systems.

  11. Strengths of Tensor Methods (cont.) [Figure: log-log plot of running time (secs) vs. number of communities k, comparing MATLAB Tensor Toolbox (CPU), CULA Standard Interface (GPU), CULA Device Interface (GPU), and Eigen Sparse (CPU).]

  12. Outline: 1. Introduction; 2. Learning Probabilistic Models; 3. Experiments; 4. Feature Learning with Tensor Methods; 5. Conclusion.

  15. Latent variable models. Incorporate hidden or latent variables. Information structures: relationships between latent variables and observed data. Basic approach: mixtures/clusters, where the hidden variable is categorical. Advanced: probabilistic models in which hidden variables have more general distributions and can model mixed-membership/hierarchical groups. [Graphical model: hidden variables h1, h2, h3 over observations x1, ..., x5.]

  16. Challenges in Learning LVMs. Computational challenges: maximum-likelihood estimation is a non-convex optimization problem and NP-hard in general. In practice, local search approaches such as gradient descent, EM, and variational Bayes have no consistency guarantees: they can get stuck in bad local optima, have poor convergence rates, and are hard to parallelize. Tensor methods yield guaranteed learning for LVMs.

  17. Unsupervised Learning of LVMs. Examples: GMM, HMM, ICA, multiview and topic models. [Graphical models: hidden variables h1, ..., hk over observations x1, ..., xd.]

  18. Overall Framework for Unsupervised Learning. Unlabeled data → probabilistic admixture models → tensor method → inference.

  19. Outline: 1. Introduction; 2. Learning Probabilistic Models; 3. Experiments; 4. Feature Learning with Tensor Methods; 5. Conclusion.

  20. Demo for Learning Gaussian Mixtures
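The moment identity behind learning spherical Gaussian mixtures with tensor methods (in the style of Hsu and Kakade) can be checked numerically: after a sigma^2 correction, the third-order moment collapses to a weighted sum of mu_i (x) mu_i (x) mu_i. All parameters below are made-up examples.

```python
import numpy as np

# Hypothetical spherical GMM: mixing weights w, means mu (columns),
# shared variance sigma^2.
w = np.array([0.4, 0.6])
mu = np.array([[1.0, -2.0], [0.5, 1.5], [-1.0, 0.0]])
sigma2 = 0.25
mean = mu @ w  # E[x]

def three_ways(v):
    """sum_j of v(x)e_j(x)e_j + e_j(x)v(x)e_j + e_j(x)e_j(x)v."""
    I = np.eye(len(v))
    return (np.einsum('i,jk->ijk', v, I)
            + np.einsum('j,ik->ijk', v, I)
            + np.einsum('k,ij->ijk', v, I))

# Population third moment E[x (x) x (x) x] of the mixture
# (odd moments of the Gaussian noise vanish).
Ex3 = sum(wi * (np.einsum('i,j,k->ijk', m, m, m) + sigma2 * three_ways(m))
          for wi, m in zip(w, mu.T))

# Corrected moment: M3 = E[x(x)x(x)x] - sigma^2 * three_ways(E[x]).
M3 = Ex3 - sigma2 * three_ways(mean)

# M3 equals sum_i w_i * mu_i (x) mu_i (x) mu_i: the tensor to decompose.
target = sum(wi * np.einsum('i,j,k->ijk', m, m, m) for wi, m in zip(w, mu.T))
assert np.allclose(M3, target)
```

Decomposing M3 (e.g., with the power method after whitening) then recovers the weights and means directly from low-order moments, with no EM-style local search.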

  21. NYTimes Demo

  24. Experimental Results on Yelp and DBLP.

  Lowest-error business categories & largest-weight businesses:
  Rank | Category       | Business                  | Stars | Review Counts
  1    | Latin American | Salvadoreno Restaurant    | 4.0   | 36
  2    | Gluten Free    | P.F. Chang's China Bistro | 3.5   | 55
  3    | Hobby Shops    | Make Meaning              | 4.5   | 14
  4    | Mass Media     | KJZZ 91.5 FM              | 4.0   | 13
  5    | Yoga           | Sutra Midtown             | 4.5   | 31

  Top-5 bridging nodes (businesses):
  Business             | Categories
  Four Peaks Brewing   | Restaurants, Bars, American, Nightlife, Food, Pubs, Tempe
  Pizzeria Bianco      | Restaurants, Pizza, Phoenix
  FEZ                  | Restaurants, Bars, American, Nightlife, Mediterranean, Lounges, Phoenix
  Matt's Big Breakfast | Restaurants, Phoenix, Breakfast & Brunch
  Cornish Pasty Co     | Restaurants, Bars, Nightlife, Pubs, Tempe

  Error (Ê) and recovery ratio (R):
  Dataset            | k   | Method      | Running Time | Ê     | R
  DBLP sub (n = 1e5) | 500 | ours        | 10,157       | 0.139 | 89%
  DBLP sub (n = 1e5) | 500 | variational | 558,723      | 16.38 | 99%
  DBLP (n = 1e6)     | 100 | ours        | 5,407        | 0.105 | 95%

  25. Discovering Gene Profiles of Neuronal Cell Types Learning mixture of point processes of cells through tensor methods. Components of mixture are candidates for neuronal cell types.

  28. Hierarchical Tensors for Healthcare Analytics. [Diagram: hierarchical tensor decomposition.] CMS dataset: 1.6 million patients, 15.8 million events. Mining disease inferences from patient records.

  29. Outline: 1. Introduction; 2. Learning Probabilistic Models; 3. Experiments; 4. Feature Learning with Tensor Methods; 5. Conclusion.

  30. Feature Learning for Efficient Classification. Find good transformations of the input for improved classification. Figures attributed to Fei-Fei Li, Rob Fergus, Antonio Torralba, et al.

  31. Tensor Methods for Training Neural Networks. m-th order score function (the slides build through the first-, second-, and third-order cases): with input pdf $p(x)$, $\mathcal{S}_m(x) := (-1)^m \, \dfrac{\nabla^{(m)} p(x)}{p(x)}$. Score functions capture local variations in the data. Algorithm for training neural networks: estimate score functions using an autoencoder; decompose the tensor $E[y \otimes \mathcal{S}_m(x)]$ to obtain the weights, for $m \geq 3$; recursively estimate the score function of the autoencoder and repeat.
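A quick numerical sanity check of the first-order score function S_1(x) = -grad p(x) / p(x): for a 1-D Gaussian input density it has the closed form (x - mu) / sigma^2. The mu, sigma, and test point below are made-up example values.

```python
import numpy as np

# Made-up example parameters for a 1-D Gaussian input density.
mu, sigma = 0.7, 1.3

def p(x):
    """Gaussian pdf N(mu, sigma^2)."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def score1(x, eps=1e-5):
    """S_1(x) = -p'(x)/p(x), via a central finite difference on p."""
    dp = (p(x + eps) - p(x - eps)) / (2 * eps)
    return -dp / p(x)

# Closed form for a Gaussian: S_1(x) = (x - mu) / sigma^2; higher-order
# scores correspondingly reduce to Hermite-polynomial expressions.
x = 2.0
closed_form = (x - mu) / sigma ** 2
assert abs(score1(x) - closed_form) < 1e-6
```

In the algorithm above, the score functions of real data are unknown and must themselves be estimated (here, via the autoencoder); this check only illustrates what S_1 computes.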

  34. Demo: Training Neural Networks

  35. Combining Probabilistic Models with Deep Learning. Multi-object detection in computer vision: deep learning extracts good features but not context, while probabilistic models capture contextual information. Hierarchical models combined with pre-trained deep-learning features give state-of-the-art results on Microsoft COCO.

  36. Outline: 1. Introduction; 2. Learning Probabilistic Models; 3. Experiments; 4. Feature Learning with Tensor Methods; 5. Conclusion.

  37. Conclusion: Tensor Methods for Learning. Tensor decomposition has efficient sample and computational complexities, and better performance than EM, variational Bayes, etc. In practice: scalable and embarrassingly parallel, so it handles large datasets, with efficient performance under perplexity or ground-truth validation.

  38. My Research Group and Resources. Furong H., Majid J., Hanie S., Niranjan U.N., Forough A., Tejaswi N., Hao L., Yang S. ML summer school lectures available at http://newport.eecs.uci.edu/anandkumar/MLSS.html
