Collaborative Deep Learning for Recommender Systems
Hao Wang, Naiyan Wang, Dit-Yan Yeung


  1. Collaborative Deep Learning for Recommender Systems. Hao Wang, Naiyan Wang, Dit-Yan Yeung.

  2. • Motivation • Stacked Denoising Autoencoders • Probabilistic Matrix Factorization • Collaborative Deep Learning • Experiments • Summary

  3. Recommender Systems. Given a rating matrix with a set of observed preferences, the task is matrix completion: predict the missing ratings.

  4. Recommender Systems with Content. Content information: plots, directors, actors, etc.

  5. Modeling the Content Information. Two options: handcrafted features, or automatically learned features that adapt to the ratings.

  6. Modeling the Content Information. (1) Powerful features for content information: deep learning. (2) Feedback from rating information: non-i.i.d. data. Combining both: collaborative deep learning.

  7. Deep Learning. Examples: stacked denoising autoencoders, convolutional neural networks, recurrent neural networks. "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction." (Bengio et al. 2015)

  8. Deep Learning. Stacked denoising autoencoders, convolutional neural networks, and recurrent neural networks are typically designed for i.i.d. data.

  9. Modeling the Content Information. (1) Powerful features for content information: deep learning. (2) Feedback from rating information: non-i.i.d. data. Combining both: collaborative deep learning (CDL).

  10. Contribution  Collaborative deep learning: * deep learning for non-i.i.d. data * joint representation learning and collaborative filtering

  11. Contribution  Collaborative deep learning  Complex target: * beyond targets like classification and regression * to complete a low-rank matrix

  12. Contribution  Collaborative deep learning  Complex target  First hierarchical Bayesian model for hybrid deep recommender systems

  13. Contribution  Collaborative deep learning  Complex target  First hierarchical Bayesian model for hybrid deep recommender systems  Significantly advances the state of the art

  14. • Motivation • Stacked Denoising Autoencoders • Probabilistic Matrix Factorization • Collaborative Deep Learning • Experiments • Summary

  15. Stacked Denoising Autoencoders (SDAE). The network is trained to reconstruct the clean input from a corrupted version of it (Vincent et al. 2010).
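To make the mechanism concrete, here is a minimal one-hidden-layer denoising autoencoder in NumPy. This is an illustrative sketch, not the paper's architecture: the layer sizes, masking-noise level, and learning rate are arbitrary choices, and a stacked version would repeat this layer-wise scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy bag-of-words-like data: 100 binary vectors of dimension 20.
X = (rng.random((100, 20)) < 0.2).astype(float)

d, h, lr, noise = 20, 8, 0.5, 0.3
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)

losses = []
for epoch in range(200):
    # Corrupt the clean input with masking noise (randomly zero out entries).
    Xc = X * (rng.random(X.shape) >= noise)
    # Encode the corrupted input, decode, and compare with the CLEAN input.
    H = sigmoid(Xc @ W1 + b1)
    Xr = sigmoid(H @ W2 + b2)
    err = Xr - X                      # reconstruction error against clean X
    losses.append((err ** 2).mean())
    # Plain backpropagation through the two sigmoid layers.
    dXr = err * Xr * (1 - Xr)
    dH = (dXr @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dXr) / len(X); b2 -= lr * dXr.mean(0)
    W1 -= lr * (Xc.T @ dH) / len(X); b1 -= lr * dH.mean(0)
```

After training, the hidden activations `H` serve as the learned content representation; in CDL it is this middle-layer code that gets coupled to the item latent vectors.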

  16. • Motivation • Stacked Denoising Autoencoders • Probabilistic Matrix Factorization • Collaborative Deep Learning • Experiments • Summary

  17. Probabilistic Matrix Factorization (PMF) (Salakhutdinov et al. 2008). Notation: v_j is the latent vector of item j, u_i the latent vector of user i, and R_ij the rating of item j from user i. The slide shows the graphical model, the generative process, and the objective function under MAP estimation.
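The formulas on this slide did not survive the transcript; in standard PMF notation the generative process and the MAP objective read:

```latex
u_i \sim \mathcal{N}(0, \lambda_u^{-1} I), \qquad
v_j \sim \mathcal{N}(0, \lambda_v^{-1} I), \qquad
R_{ij} \sim \mathcal{N}(u_i^\top v_j, \sigma^2)

% MAP estimation minimizes the regularized squared error
\mathcal{L} = \frac{1}{2}\sum_{(i,j)\,\text{observed}} \left(R_{ij} - u_i^\top v_j\right)^2
  + \frac{\lambda_u}{2}\sum_i \|u_i\|^2
  + \frac{\lambda_v}{2}\sum_j \|v_j\|^2
```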

  18. • Motivation • Stacked Denoising Autoencoders • Probabilistic Matrix Factorization • Collaborative Deep Learning • Experiments • Summary

  19. Probabilistic SDAE (generalized SDAE). The slide shows the graphical model and the generative process. Notation: the corrupted input, the clean input, and the per-layer weights and biases.
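A sketch of the generative process, following the paper's notation (X_0 is the corrupted input, X_c the clean input, X_{l,j*} row j of the layer-l output, and L the number of layers):

```latex
W_l \sim \mathcal{N}(0, \lambda_w^{-1} I), \qquad
b_l \sim \mathcal{N}(0, \lambda_w^{-1} I)

X_{l,j*} \sim \mathcal{N}\!\big(\sigma(X_{l-1,j*} W_l + b_l),\; \lambda_s^{-1} I\big)

X_{c,j*} \sim \mathcal{N}\!\big(X_{L,j*},\; \lambda_n^{-1} I\big)
```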

  20. Collaborative Deep Learning. The graphical model couples the probabilistic SDAE with PMF through a two-way interaction. Notation: R_ij is the rating of item j from user i, v_j the latent vector of item j, u_i the latent vector of user i, together with the SDAE's corrupted input, clean input, weights and biases, and content representation. Benefits: • more powerful representation • infer missing ratings from content • infer missing content from ratings
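The two-way interaction comes from generating each item latent vector as the middle-layer content representation plus a Gaussian offset, and generating ratings from the user and item vectors (a sketch in the paper's notation; C_ij is a confidence parameter for rating R_ij):

```latex
v_j = X_{L/2,\,j*}^\top + \epsilon_j, \qquad
\epsilon_j \sim \mathcal{N}(0, \lambda_v^{-1} I)

u_i \sim \mathcal{N}(0, \lambda_u^{-1} I), \qquad
R_{ij} \sim \mathcal{N}\!\big(u_i^\top v_j,\; C_{ij}^{-1}\big)
```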

  21. Collaborative Deep Learning. Neural network representation of the degenerated CDL.

  22. Collaborative Deep Learning. Information flows from ratings to content.

  23. Collaborative Deep Learning. Information flows from content to ratings.

  24. Collaborative Deep Learning. Reciprocal: representation and recommendation.

  25. Learning. Maximizing the posterior probability is equivalent to maximizing the joint log-likelihood.
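Written out (a reconstruction following the paper's notation, with λ_s taken to infinity; f_e denotes the encoder output at the middle layer and f_r the full reconstruction, both functions of the corrupted input X_0 and all weights and biases W+), the joint log-likelihood is:

```latex
\mathcal{L} = -\frac{\lambda_u}{2}\sum_i \|u_i\|_2^2
 -\frac{\lambda_w}{2}\sum_l \big(\|W_l\|_F^2 + \|b_l\|_2^2\big)
 -\frac{\lambda_v}{2}\sum_j \big\|v_j - f_e(X_{0,j*}, W^+)^\top\big\|_2^2
 -\frac{\lambda_n}{2}\sum_j \big\|f_r(X_{0,j*}, W^+) - X_{c,j*}\big\|_2^2
 -\sum_{i,j}\frac{C_{ij}}{2}\big(R_{ij} - u_i^\top v_j\big)^2
```

The next slides walk through these terms one by one: the priors, the two Gaussian offsets, and the rating-error term.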

  26. Learning. Prior (regularization) for user latent vectors, weights, and biases.

  27. Learning. Generating item latent vectors from the content representation with a Gaussian offset.

  28. Learning. 'Generating' the clean input from the output of the probabilistic SDAE with a Gaussian offset.

  29. Learning. Generating the input of layer l from the output of layer l-1 with a Gaussian offset.

  30. Learning. The remaining term measures the error of the predicted ratings.

  31. Learning. If λ_s goes to infinity, the per-layer Gaussians become point masses and the likelihood reduces to that of the deterministic generalized SDAE.

  32. Update Rules. For U and V, use block coordinate descent; for the weights W and biases b, use a modified version of backpropagation.
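The closed-form coordinate updates for the latent vectors, sketched in the paper's notation (C_i is the diagonal matrix of confidence values for user i, R_i the corresponding rating vector, and likewise C_j and R_j for item j; K is the latent dimensionality):

```latex
u_i \leftarrow \big(V C_i V^\top + \lambda_u I_K\big)^{-1} V C_i R_i

v_j \leftarrow \big(U C_j U^\top + \lambda_v I_K\big)^{-1}
      \big(U C_j R_j + \lambda_v\, f_e(X_{0,j*}, W^+)^\top\big)
```

Note how the item update pulls v_j toward the encoder output f_e, which is how rating information flows back into representation learning when the weights are subsequently updated by backpropagation.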

  33. • Motivation • Stacked Denoising Autoencoders • Probabilistic Matrix Factorization • Collaborative Deep Learning • Experiments • Summary

  34. Datasets. Content information: titles and abstracts for the two CiteULike datasets (Wang et al. 2011; Wang et al. 2013), and movie plots for Netflix.

  35. Evaluation Metrics. Recall and mean average precision (mAP). Higher recall and mAP indicate better recommendation performance.
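The two metrics can be sketched in a few lines of Python. The function names and the normalization of average precision by min(#liked, cutoff) are illustrative choices (one common convention), not necessarily the exact variant used in the paper:

```python
def recall_at_m(ranked_items, liked_items, m):
    """Recall@M: fraction of the user's liked items that appear in the top-M list."""
    top_m = set(ranked_items[:m])
    liked = set(liked_items)
    return len(top_m & liked) / len(liked)

def average_precision(ranked_items, liked_items, cutoff=500):
    """Average precision with a rank cutoff (500 per user, as in the talk)."""
    liked = set(liked_items)
    hits, ap = 0, 0.0
    for rank, item in enumerate(ranked_items[:cutoff], start=1):
        if item in liked:
            hits += 1
            ap += hits / rank        # precision at this recall point
    # Normalize by the number of relevant items retrievable within the cutoff.
    return ap / min(len(liked), cutoff) if liked else 0.0
```

mAP is then the mean of `average_precision` over all users.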

  36. Comparing Methods. Baselines: hybrid methods using bag-of-words content and ratings, such as PMF+LDA. These are loosely coupled, so the interaction is not two-way.

  37. Recall@M. Plots compare recall@M when the ratings are very sparse (citeulike-t and Netflix, sparse setting) and when the ratings are dense (citeulike-t and Netflix, dense setting).

  38. Mean Average Precision (mAP). Following Oord et al. 2013 exactly, we set the cutoff point at 500 for each user. The result is a relative performance boost of about 50%.

  39. Number of Layers. In both the sparse and dense settings, the best performance is achieved when the number of layers is 2 or 3 (4 or 6 layers of the generalized neural network).

  40. Example User. Romance movies (e.g., Moonstruck, True Romance). Precision: 30% vs. 20%.

  41. Example User. Action and drama movies (e.g., Johnny English, American Beauty). Precision: 50% vs. 20%.

  42. Example User. Precision: 90% vs. 50%.

  43. • Motivation • Stacked Denoising Autoencoders • Probabilistic Matrix Factorization • Collaborative Deep Learning • Experiments • Summary

  44. Summary  Non-i.i.d. (collaborative) deep learning  With a complex target  First hierarchical Bayesian model for hybrid deep recommender systems  Significantly advances the state of the art

  45. Summary  Word2vec, tf-idf  Sampling-based, variational inference  Tagging information, networks

  46. Thank you! Hao Wang hwangaz@cse.ust.hk More results, code, and datasets: http://www.wanghao.in
