Explainable Recommendation Through Attentive Multi-View Learning



SLIDE 1

Explainable Recommendation Through Attentive Multi-View Learning

Advisor: Jia-Ling Koh
Presenter: You-Xiang Chen
Source: AAAI ’19
Date: 2020/03/02

SLIDE 2

Content

01 Introduction
02 Method
03 Experiment
04 Conclusion

SLIDE 3

Introduction

SLIDE 4

Introduction

Recommendation System

SLIDE 5

Introduction

Matrix Factorization

  • user feature × user latent factors
  • item latent factors × item feature
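As a rough illustration of the matrix-factorization view above, a minimal numpy sketch (toy ratings and hypothetical hyperparameters, not the paper's setup) that factorizes an observed rating matrix into user and item latent factors:

```python
import numpy as np

# Approximate the rating matrix R as U @ V.T, where U holds user
# latent factors and V holds item latent factors. Zeros in R mark
# unobserved ratings and are excluded from the fit.
rng = np.random.default_rng(0)
R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5]], dtype=float)
mask = R > 0                       # observed entries only
k, lr, reg = 2, 0.01, 0.1          # rank, learning rate, L2 weight
U = rng.normal(scale=0.1, size=(3, k))
V = rng.normal(scale=0.1, size=(3, k))

for _ in range(2000):              # plain gradient descent on squared error
    E = mask * (R - U @ V.T)
    U += lr * (E @ V - reg * U)
    V += lr * (E.T @ U - reg * V)

pred = U @ V.T                     # predictions, including unobserved cells
```

The unobserved cells of `pred` are the recommendations the factor model infers from the observed ones.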

SLIDE 6

Introduction

Deep but unexplainable

Neural Collaborative Filtering
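The NCF idea can be sketched in a few lines (random toy weights and hypothetical layer sizes, not the paper's trained model): user and item embeddings are concatenated and passed through an MLP, so the interaction function is learned rather than a fixed dot product, which is what makes the model deep but hard to explain.

```python
import numpy as np

# One forward pass of an NCF-style scorer with a single hidden layer.
rng = np.random.default_rng(0)
d, h = 8, 16                               # embedding and hidden sizes (assumed)
user_emb = rng.normal(size=d)              # user embedding
item_emb = rng.normal(size=d)              # item embedding
W1, b1 = rng.normal(size=(h, 2 * d)), np.zeros(h)
w2, b2 = rng.normal(size=h), 0.0

x = np.concatenate([user_emb, item_emb])   # learned interaction input
hidden = np.maximum(0.0, W1 @ x + b1)      # ReLU hidden layer
score = float(w2 @ hidden + b2)            # predicted preference score
```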

SLIDE 7

Introduction

We propose a Deep Explicit Attentive Multi-View Learning Model (DEAML) for explainable recommendation:

  • 1. improves accuracy from noisy and sparse data
  • 2. formulates personalized explanation generation as a constrained tree node selection problem

SLIDE 8

Problem Definition

  • User set 𝒰
  • Item set ℐ
  • Explicit feature hierarchy 𝒯; its nodes form the set ℱ = {ℱ1, … , ℱ𝑀}
  • Input
  • Output
    • Predicted rating 𝑟̂𝑖𝑗
    • Feature-level explanation 𝐹 (subset of ℱ)

Microsoft Concept Graph

e.g. Pork

SLIDE 9

Microsoft Concept Graph

https://concept.research.microsoft.com/

  • New York (is-a) state
  • Name (is-a) information
  • Facebook (is-a) social medium

  • 5 million concepts
  • 85 million “IsA” relations
SLIDE 10

Related work

  • Explicit Factor Models

Enriches the user & item representations by adding a set of latent factors learned from explicit features; captures both explicit & implicit factors.

Explicit Factor Models for Explainable Recommendation based on Phrase-level Sentiment Analysis

SLIDE 11

Related work

  • User–feature attention matrix 𝑿, item–feature quality matrix 𝒀
  • ℱ = {ℱ1, … , ℱ𝑝}: set of explicit features extracted from reviews
  • Integrating explicit and implicit features (𝑊𝑈: projection matrix)
  • Factorization model over matrices 𝑿, 𝒀; factorization model over rating matrix 𝑨
  • Entries of 𝑿, 𝒀 lie in the range [1, 𝑁]

Explicit Factor Models for Explainable Recommendation based on Phrase-level Sentiment Analysis

SLIDE 12

Method

SLIDE 13

Framework

Deep Explicit Attentive Multi-View Learning Model

SLIDE 14

Hierarchical propagation

  • Personalized User Attention
  • Attention score 𝒙𝒊𝒎 measures how much user 𝒊 cares about feature ℱ𝒎
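A small numpy sketch of the personalized attention step, under assumed shapes (the paper's exact parameterization differs): scores over a node's child features are normalized with a softmax, then used to propagate the child representations up to the parent.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a score vector
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 8                               # latent dimensionality (assumed)
user = rng.normal(size=d)           # user embedding
children = rng.normal(size=(3, d))  # embeddings of a node's child features

scores = softmax(children @ user)   # attention score per child feature
parent = scores @ children          # attention-weighted propagation to parent
```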

SLIDE 15

Attentive Multi-View Learning

  • Latent factors learned from explicit features (EFM model)
  • Latent factors learned from implicit features (EFM model)
  • Concatenation of the two gives the user representation and item representation at view h
  • Rating prediction at view h

SLIDE 16

Attentive Multi-View Learning

  • Loss of each view: rating prediction for each view, with projection matrices estimating the hidden representation of the user/item
  • Co-regularization loss: enforces agreement across views
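The two terms can be illustrated on toy arrays (the function names and the adjacent-view agreement penalty below are assumptions for illustration, not the paper's exact formulas):

```python
import numpy as np

def view_loss(r_true, r_pred):
    # squared-error rating loss of a single view
    return float(np.mean((r_true - r_pred) ** 2))

def co_regularization(reps, weight=0.1):
    # reps: list of per-view representation arrays of equal shape;
    # penalize disagreement between adjacent views
    return weight * float(sum(
        np.sum((reps[h] - reps[h + 1]) ** 2) for h in range(len(reps) - 1)
    ))

r = np.array([4.0, 5.0, 3.0])                       # toy observed ratings
per_view_preds = [np.array([3.5, 4.5, 3.0]),        # view 1 predictions
                  np.array([4.0, 5.0, 2.0])]        # view 2 predictions
total = sum(view_loss(r, p) for p in per_view_preds) \
    + co_regularization([np.ones(4), np.ones(4) * 1.2])
```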

SLIDE 17

Attentive Multi-View Learning

  • Weighted sum of the predictions in each view
  • Calculate the attention weight of each view
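A minimal sketch of the weighted-sum fusion (toy numbers; in the model the per-view scores would come from the learned attention network):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a score vector
    e = np.exp(z - np.max(z))
    return e / e.sum()

view_scores = np.array([0.2, 1.5, 0.7])   # one relevance score per view (toy)
view_preds = np.array([3.8, 4.6, 4.1])    # rating predicted by each view (toy)

weights = softmax(view_scores)            # attention weight of each view
r_hat = float(weights @ view_preds)       # final fused rating prediction
```

Because the weights are a convex combination, the fused rating always lies between the smallest and largest per-view predictions.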

SLIDE 18

Objective function

  • Jointly learning: loss of each view + co-regularization loss + weighted-sum prediction loss

SLIDE 19

Personalized Explanation Generation

  • Utility function: combines user interest at level h, item interest at level h, and the weight of view h

SLIDE 20

Personalized Explanation Generation

  • Constrained tree node selection
  • max. utility of the s-th child
  • max. utility of the (s−1)-th node to t′
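One way to make the selection concrete is a small tree-knapsack dynamic program, sketched below under assumptions (toy tree and illustrative utilities, not the paper's exact algorithm): select up to K feature nodes of maximum total utility such that every selected node's parent is also selected, so the explanation forms a connected sub-hierarchy.

```python
NEG = float("-inf")

def best_selection(children, utility, root, K):
    """Max total utility of selecting up to K nodes forming a
    connected subtree that contains the root."""
    def dp(t):
        # f[s] = best utility selecting s nodes in t's subtree, t included
        f = [NEG] * (K + 1)
        if K >= 1:
            f[1] = utility[t]
        for c in children.get(t, []):
            g = dp(c)
            merged = f[:]                   # option: give child c no budget
            for s in range(K, 0, -1):       # total nodes used so far
                for u in range(1, s):       # nodes allotted to child c
                    if f[s - u] > NEG and g[u] > NEG:
                        merged[s] = max(merged[s], f[s - u] + g[u])
            f = merged
        return f
    return max(dp(root)[1:K + 1])

# Hypothetical feature hierarchy and utilities for illustration.
children = {"root": ["a", "b"], "a": ["a1", "a2"]}
utility = {"root": 0.0, "a": 2.0, "b": 1.0, "a1": 3.0, "a2": 0.5}
best = best_selection(children, utility, "root", 3)
```

With K = 3, the optimum keeps root → a → a1 (total utility 5.0); the high-utility node a1 can only be chosen together with its ancestor a, which is exactly the tree constraint.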
SLIDE 21

Experiment

SLIDE 22

Dataset

Statistics of the evaluation datasets:

Dataset        Users#  Items#  Reviews#  Filtering
Toys & Games   19,412  11,924   167,597  5-core
Digital Music   5,541   3,568    64,706  5-core
Yelp            8,744  14,082   212,922  10-core

SLIDE 23

Baselines

  • Observed rating matrix: NMF, PMF, SVD++
  • Reviews-based methods: HFT, EFM (single-layer structure); DeepCoNN, NARRE (deep-learning-based)
  • Knowledge-based method: CKE

SLIDE 24

RMSE comparison

same weight to all views

SLIDE 25

Effect of number of latent factors

SLIDE 26

Case Study

SLIDE 27

Conclusion

1. We build an initial network based on an explainable deep hierarchy (Microsoft Concept Graph) and improve the model accuracy by optimizing key variables in the hierarchy.
2. We propose a Deep Explicit Attentive Multi-View Learning Model (DEAML) for explainable recommendation, which combines the advantages of deep learning-based methods and existing explainable methods.
3. Experimental results show that our model performs better than state-of-the-art methods in both accuracy and explainability.