
Collaborative Deep Learning and Its Variants for Recommender Systems



  1. Collaborative Deep Learning and Its Variants for Recommender Systems. Hao Wang, joint work with Naiyan Wang, Xingjian Shi, and Dit-Yan Yeung.

  2. Recommender Systems. Given a rating matrix with a subset of observed preferences, the task is matrix completion: predict the unobserved entries.
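The rating-matrix notation on this slide was rendered as images; a minimal LaTeX sketch of the setup, with the symbols ($R$, $\Omega$, $I$, $J$) assumed here rather than taken from the slide:

```latex
% Matrix completion: observe a subset of entries of the I x J rating
% matrix R and predict the rest.
R \in \mathbb{R}^{I \times J}, \qquad
\Omega = \{(i,j) : R_{ij} \text{ is observed}\}, \qquad
\text{predict } R_{ij} \text{ for } (i,j) \notin \Omega
```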

  3. Recommender Systems with Content. Content information: plots, directors, actors, etc.

  4. Modeling the Content Information. Handcrafted features vs. automatically learned features that adapt to the ratings.

  5. Modeling the Content Information. (1) Powerful features for content information → deep learning. (2) Feedback from rating information → non-i.i.d. data. Together: collaborative deep learning.

  6. Deep Learning. Stacked denoising autoencoders, convolutional neural networks, recurrent neural networks. Typically for i.i.d. data.

  7. Modeling the Content Information. (1) Powerful features for content information → deep learning. (2) Feedback from rating information → non-i.i.d. data. Together: collaborative deep learning (CDL).

  8. Contribution — Collaborative deep learning: deep learning for non-i.i.d. data; joint representation learning and collaborative filtering.

  9. Contribution — Complex target: beyond targets like classification and regression; the goal is to complete a low-rank matrix.

  10. Contribution — First hierarchical Bayesian models for a deep hybrid recommender system.

  11. Related Work. Not hybrid methods (ratings only): RBM (single layer; Salakhutdinov et al., 2007), I-RBM/U-RBM (Georgiev et al., 2013). Not using Bayesian modeling for joint learning: DeepMusic (van den Oord et al., 2013), HLDBN (Wang et al., 2014).

  12. Stacked Denoising Autoencoders (SDAE): reconstruct the clean input from the corrupted input. [Vincent et al., 2010]
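The autoencoder diagram did not survive extraction. Below is a minimal sketch of the corrupt-then-reconstruct training from Vincent et al. (2010); PyTorch, the layer sizes, and the noise rate are all illustrative assumptions, not taken from the slides.

```python
# Minimal SDAE sketch (assumptions: PyTorch, bag-of-words inputs,
# illustrative layer sizes and masking-noise rate).
import torch
import torch.nn as nn

class SDAE(nn.Module):
    def __init__(self, dims=(8000, 200, 50), noise=0.3):
        super().__init__()
        self.noise = noise
        enc, dec = [], []
        for d_in, d_out in zip(dims, dims[1:]):          # encoder stack
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        rdims = dims[::-1]
        for d_in, d_out in zip(rdims, rdims[1:]):        # mirrored decoder
            dec += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x_clean):
        # Masking noise: zero out a random fraction of the clean input.
        mask = (torch.rand_like(x_clean) > self.noise).float()
        z = self.encoder(x_clean * mask)   # middle-layer representation
        return self.decoder(z), z

model = SDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 8000)                   # stand-in for item content
recon, _ = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruct the CLEAN input
opt.zero_grad(); loss.backward(); opt.step()
```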

  13. Probabilistic Matrix Factorization (PMF). Graphical model, generative process, and MAP objective. Notation: u_i latent vector of user i; v_j latent vector of item j; R_ij rating of item j from user i. [Salakhutdinov et al., 2008]
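The generative process and objective on this slide were images; reconstructed in LaTeX from Salakhutdinov and Mnih (2008), with $c_{ij}$ the per-rating precision (confidence) and $K$ the latent dimensionality:

```latex
% PMF generative process:
u_i \sim \mathcal{N}(0,\ \lambda_u^{-1} I_K), \qquad
v_j \sim \mathcal{N}(0,\ \lambda_v^{-1} I_K), \qquad
R_{ij} \sim \mathcal{N}(u_i^{\top} v_j,\ c_{ij}^{-1})

% Equivalent MAP objective:
\min_{U,V}\ \sum_{i,j} \frac{c_{ij}}{2} \big(R_{ij} - u_i^{\top} v_j\big)^2
+ \frac{\lambda_u}{2} \sum_i \lVert u_i \rVert^2
+ \frac{\lambda_v}{2} \sum_j \lVert v_j \rVert^2
```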

  14. Probabilistic SDAE (generalized SDAE). Graphical model and generative process. Notation: X_0 corrupted input; X_c clean input; W_l, b_l weights and biases.
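The generative process itself was an image; a reconstruction in LaTeX following the CDL paper (Wang et al., KDD 2015), where $X_{l,j*}$ denotes row $j$ of the layer-$l$ matrix and $\sigma(\cdot)$ is the sigmoid:

```latex
% Generalized (probabilistic) SDAE, for layers l = 1, ..., L:
W_l \sim \mathcal{N}(0,\ \lambda_w^{-1} I), \qquad
b_l \sim \mathcal{N}(0,\ \lambda_w^{-1} I) \\
X_{l,j*} \sim \mathcal{N}\big(\sigma(X_{l-1,j*} W_l + b_l),\ \lambda_s^{-1} I\big) \\
X_{c,j*} \sim \mathcal{N}(X_{L,j*},\ \lambda_n^{-1} I)
```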

  15. Collaborative Deep Learning (CDL). Graphical model: probabilistic SDAE coupled with matrix factorization through a two-way interaction. Notation: R_ij rating of item j from user i; X_0 corrupted input; X_c clean input; u_i latent vector of user i; v_j latent vector of item j; W, b weights and biases; X_{L/2} content representation. Benefits: a more powerful representation; missing ratings can be inferred from content; missing content can be inferred from ratings.
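Written out (again following the CDL paper), the two-way interaction ties the middle layer of the probabilistic SDAE to the item latent vector:

```latex
% CDL: the content representation X_{L/2} generates v_j with a
% Gaussian offset; ratings then follow the PMF likelihood.
v_j \sim \mathcal{N}\big(X_{L/2,j*}^{\top},\ \lambda_v^{-1} I_K\big), \qquad
u_i \sim \mathcal{N}(0,\ \lambda_u^{-1} I_K), \qquad
R_{ij} \sim \mathcal{N}(u_i^{\top} v_j,\ c_{ij}^{-1})
```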

  16. A Principled Probabilistic Framework. The model factors into a perception component (over perception variables) and a task-specific component (over task variables), connected through hinge variables. [Wang et al., TKDE 2016]

  17. CDL with Two Components. The same graphical model as slide 15, re-drawn under the framework above: the probabilistic SDAE serves as the perception component and matrix factorization as the task-specific component, connected through the two-way interaction.

  18. Collaborative Deep Learning. Neural-network representation of the degenerate CDL.

  19. Collaborative Deep Learning. Information flows from the ratings to the content.

  20. Collaborative Deep Learning. Information flows from the content to the ratings.

  21. Collaborative Deep Learning. Representation learning ↔ recommendation: each side improves the other.

  22. Learning. Maximizing the posterior probability is equivalent to maximizing the joint log-likelihood of the user/item latent vectors, weights, biases, clean content, and ratings.

  23. Learning. Prior (regularization) terms for the user latent vectors, weights, and biases.

  24. Learning. Item latent vectors are generated from the content representation with a Gaussian offset.

  25. Learning. The clean input is 'generated' from the output of the probabilistic SDAE with a Gaussian offset.

  26. Learning. The input of layer l is generated from the output of layer l-1 with a Gaussian offset.

  27. Learning. The remaining term measures the error of the predicted ratings.

  28. Learning. If λ_s goes to infinity, the Gaussian layers become deterministic and the likelihood simplifies to the objective below.
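The simplified objective was an image; reconstructed in LaTeX from the CDL paper, with $f_e(\cdot, W^+)$ the encoder output (first $L/2$ layers) and $f_r(\cdot, W^+)$ the full reconstruction network:

```latex
% MAP objective of CDL as \lambda_s -> infinity:
\mathcal{L} =
- \frac{\lambda_u}{2} \sum_i \lVert u_i \rVert^2
- \frac{\lambda_w}{2} \sum_l \big( \lVert W_l \rVert_F^2 + \lVert b_l \rVert^2 \big)
- \frac{\lambda_v}{2} \sum_j \lVert v_j - f_e(X_{0,j*}, W^+)^{\top} \rVert^2 \\
\qquad
- \frac{\lambda_n}{2} \sum_j \lVert f_r(X_{0,j*}, W^+) - X_{c,j*} \rVert^2
- \sum_{i,j} \frac{c_{ij}}{2} \big( R_{ij} - u_i^{\top} v_j \big)^2
```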

  29. Update Rules. For U and V, use block coordinate descent; for the weights W and biases b, use a modified version of backpropagation. The coordinate-descent steps are sketched below.
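The updates themselves were images; reconstructed from the CDL paper, with $C_i = \mathrm{diag}(c_{i1}, \dots, c_{iJ})$ and $R_i = (R_{i1}, \dots, R_{iJ})^{\top}$:

```latex
% Block coordinate descent for the latent vectors:
u_i \leftarrow \big( V C_i V^{\top} + \lambda_u I_K \big)^{-1} V C_i R_i \\
v_j \leftarrow \big( U C_j U^{\top} + \lambda_v I_K \big)^{-1}
      \big( U C_j R_j + \lambda_v\, f_e(X_{0,j*}, W^+)^{\top} \big)
```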

  30. Datasets. Content information: titles and abstracts for citeulike-a [Wang et al., KDD 2011], titles and abstracts for citeulike-t [Wang et al., IJCAI 2013], and movie plots for Netflix.

  31. Evaluation Metrics. Recall@M and mean average precision (mAP); higher recall and mAP indicate better recommendation performance. Sketches of both metrics follow.
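The metric formulas were images; a hedged Python sketch of the standard definitions (the function names and the per-user `ranked` list / `relevant` set are assumptions):

```python
# Recall@M and average precision for one user; mAP is the mean of
# average_precision over all users.
def recall_at_m(ranked, relevant, m):
    """Fraction of the user's relevant items that appear in the top M."""
    if not relevant:
        return 0.0
    hits = sum(1 for item in ranked[:m] if item in relevant)
    return hits / len(relevant)

def average_precision(ranked, relevant, cutoff=500):
    """AP with a cutoff of 500, matching the protocol on slide 33."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked[:cutoff], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank        # precision at this hit position
    return score / min(len(relevant), cutoff)
```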

  32. Recall@M (result plots): citeulike-t and Netflix when the ratings are very sparse (sparse setting); citeulike-t and Netflix when the ratings are dense (dense setting).

  33. Mean Average Precision (mAP). Using exactly the same protocol as van den Oord et al. (2013), the cutoff point is set at 500 for each user. CDL achieves a relative performance boost of about 50%.

  34. Example user who likes romance movies (e.g. Moonstruck, True Romance). Precision: 20% vs. 30%.

  35. Example user who likes action & drama movies (e.g. Johnny English, American Beauty). Precision: 20% vs. 50%.

  36. Example User. Precision: 50% vs. 90%.

  37. Summary: Collaborative Deep Learning. Non-i.i.d. (collaborative) deep learning; a complex target; the first hierarchical Bayesian models for a hybrid deep recommender system; significantly advances the state of the art.

  38. Marginalized CDL. CDL pairs a transformation to latent factors with a reconstruction error; marginalized CDL keeps the same two terms but marginalizes out the corruption instead of sampling it. [Li et al., CIKM 2015]

  39. Collaborative Deep Ranking. [Ying et al., PAKDD 2016]

  40. Generative Process: Collaborative Deep Ranking (CDL's pointwise rating likelihood replaced with a pairwise ranking likelihood).

  41. CDL Variants. More details at http://wanghao.in/CDL.htm

  42. Beyond Bag-of-Words: Documents as Sequences. Motivation: a more natural approach takes in one word at a time and models documents as sequences; preferences and sequence generation are modeled jointly under the BDL framework. "Collaborative Recurrent Autoencoder: Recommend While Learning to Fill in the Blanks" [Wang et al., NIPS 2016a]

  43. Beyond Bag-of-Words: Documents as Sequences. Main idea: joint learning in the BDL framework; wildcard denoising for robust representation. [Wang et al., NIPS 2016a]

  44. Wildcard Denoising. Take the sentence "this is a great idea". Plain denoising removes a word, so the encoder RNN reads "this a great idea" and must model a direct, wrong transition from "this" to "a" while the decoder RNN reconstructs "this is a great idea". Wildcard denoising instead substitutes a <wc> token, so the encoder RNN reads "this <wc> a great idea" and no incorrect transition is introduced. A corruption sketch follows.
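A minimal sketch of the corruption step; the helper name and rate are hypothetical, and only the `<wc>` token comes from the slide:

```python
import random

def wildcard_corrupt(tokens, rate=0.2, wildcard="<wc>"):
    """Replace tokens with a wildcard instead of deleting them, so the
    encoder RNN never sees a spurious word-to-word transition."""
    return [wildcard if random.random() < rate else t for t in tokens]

sentence = ["this", "is", "a", "great", "idea"]
print(wildcard_corrupt(sentence))
# e.g. ['this', '<wc>', 'a', 'great', 'idea']; the decoder RNN is
# trained to reconstruct the original sentence from this input.
```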

  45. Documents as Sequences. Main idea: joint learning in the BDL framework; wildcard denoising for robust representation; beta-pooling for variable-length sequences. [Wang et al., NIPS 2016a]

  46. Is a Variable-Length Weight Vector Possible? Sequences of length 8, 6, and 4 each need a pooling weight vector of the matching length. [Wang et al., NIPS 2016a]

  47. Variable-Length Weight Vector with Beta Distributions. Eight length-3 vectors are combined into one single vector using a length-8 weight vector (0.22, 0.21, 0.18, 0.16, 0.08, 0.10, 0.04, 0.01): use the area under the beta distribution to define the weights. [Wang et al., NIPS 2016a]

  48. Variable-Length Weight Vector with Beta Distributions. Six length-3 vectors are combined into one single vector using a length-6 weight vector (0.27, 0.28, 0.20, 0.13, 0.10, 0.02) derived from the same beta distribution. A weight-computation sketch follows. [Wang et al., NIPS 2016a]
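A sketch of how the beta areas produce a weight vector of any length; SciPy and the shape parameters $a = b = 2$ are assumptions here (in the model the shape parameters are learned, which is where the slides' specific weights come from):

```python
from scipy.stats import beta

def beta_pooling_weights(T, a=2.0, b=2.0):
    """Weight of position t = area of Beta(a, b) over [(t-1)/T, t/T];
    the T weights always sum to 1, whatever the sequence length."""
    return [beta.cdf(t / T, a, b) - beta.cdf((t - 1) / T, a, b)
            for t in range(1, T + 1)]

w8 = beta_pooling_weights(8)   # length-8 weight vector (cf. slide 47)
w6 = beta_pooling_weights(6)   # length-6 weight vector (cf. slide 48)
# Pooled representation: sum of w[t] * h_t over the RNN hidden states.
```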

  49. Graphical Model: Collaborative Recurrent Autoencoder. Perception component and task-specific component; joint learning in the BDL framework; wildcard denoising for robust representation; beta-pooling for variable-length sequences. [Wang et al., NIPS 2016a]

  50. Incorporating Relational Information. [Wang et al., AAAI 2015] [Wang et al., AAAI 2017]

  51. Probabilistic SDAE (recap). Graphical model and generative process of the generalized SDAE. Notation: X_0 corrupted input; X_c clean input; W, b weights and biases.

  52. Relational SDAE: Graphical Model. Notation: X_0 corrupted input; X_c clean input; A adjacency matrix.

  53. Relational SDAE: Two Components. Perception component and task-specific component.

  54. Relational SDAE: Generative Process.

  55. Relational SDAE: Generative Process (continued).

  56. Multi-Relational SDAE: Graphical Model. The prior on the middle-layer representation becomes a product of Q+1 Gaussians, one per network. Multiple networks: citation networks, co-author networks. Notation: X_0 corrupted input; X_c clean input; A adjacency matrix.

  57. Relational SDAE: Objective Function. Network A → relational matrix S; relational matrix S → middle-layer representations. A sketch of the resulting regularizer follows.
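The objective itself was an image; a hedged LaTeX sketch of the relational term, assuming the graph-Laplacian form used in relational SDAE (Wang et al., AAAI 2015), where the Laplacian $\mathcal{L}_a = D - A$ plays the role of the relational matrix $S$:

```latex
% Relational prior on the middle-layer representation X_{L/2}:
% a matrix-variate Gaussian whose precision involves the graph
% Laplacian, contributing a trace regularizer to the objective.
\log p(X_{L/2} \mid A) \;=\;
  -\frac{\lambda_l}{2}\, \mathrm{tr}\!\big( X_{L/2}^{\top} \mathcal{L}_a X_{L/2} \big)
  + \text{const}, \qquad \mathcal{L}_a = D - A
```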

  58. Update Rules.

  59. From Representation to Tag Recommendation.

  60. Algorithm.

  61. Datasets.

  62. Sparse Setting, citeulike-a.

  63. Case Study 1: Tagging Scientific Articles. Precision: 10% vs. 60%.

  64. Case Study 2: Tagging Movies (SDAE). Precision: 30% vs. 60%.

  65. Case Study 2: Tagging Movies (RSDAE). The tag does not appear in the tag lists of movies linked to 'E.T. the Extra-Terrestrial', so it is very difficult to discover without the relational information.

