User Recommendation in Content Curation Platforms - Jianling Wang - PowerPoint PPT Presentation



SLIDE 1

User Recommendation in Content Curation Platforms

Jianling Wang, Ziwei Zhu and James Caverlee

SLIDE 2

Content creators generate new digital artifacts such as tweets, blog posts, or photos.

Content Creation vs Curation

SLIDE 3

Music streaming platforms allow users to create and share playlists.

[Figure: example playlists - Mega Hit Mix, Jogging!, Mood Booster]

Content Creation vs Curation

SLIDE 4

[Figure: Goodreads screenshots - John Smith (Following) rated a book, added books to "Bio" and "want to read"; each book card shows Like, Comment, and Preview actions]

Content Creation vs Curation

Goodreads provides a platform for users to curate interesting books via tagging, ratings, and reviews.

SLIDE 5

In Content Curation Platforms, users, acting as curators, collect and organize existing content via reviews, pins, boards, ratings, and other actions.

Our Goal: Recommend Curators

SLIDE 6

Our Goal: Recommend Curators

Compared with:

  • Item-level recommendation (e.g., recommend music tracks): there are many new items or items with little feedback.
  • Curation-level recommendation (e.g., recommend playlists): curations (e.g., pin boards, playlists) are frequently updated.

SLIDE 7
  • Curators can provide a human-powered overlay that can link seemingly unrelated items (e.g., a collection of songs that are thematically related though from different genres).

Why Recommend Curators?

[Figure: example playlists - For Study, Jogging!, Mood Booster]

SLIDE 8
  • By receiving updates from whom they follow, users can be exposed to interesting items and curation decisions.

Why Recommend Curators?

[Figure: users follow curators and get updates; curators rate, tag, and listen to items]

SLIDE 9

Our Setting

We can collect:

  • User-curator following relationships
  • Implicit feedback on items

[Figure: a user's binary feedback vector on n items (tag, read, highlight) and feedback vector on n curators (follow)]
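The setting above can be made concrete with a tiny sketch: turning implicit interaction logs into the binary feedback vectors the slides describe. This is an illustrative reconstruction, not code from the paper; the function name and toy ids are hypothetical.

```python
import numpy as np

def build_feedback_vector(interacted_ids, n_cols):
    """Binarize implicit feedback: any interaction (tag, read, highlight,
    or follow) becomes a 1 at that item's or curator's index."""
    v = np.zeros(n_cols, dtype=np.float32)
    v[list(interacted_ids)] = 1.0
    return v

# Hypothetical toy user: tagged/read items 0 and 3, follows curators 1 and 2.
item_vec = build_feedback_vector([0, 3], n_cols=5)
curator_vec = build_feedback_vector([1, 2], n_cols=4)
```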
SLIDE 10

Challenge

How to model these two aspects - curator preferences and item preferences - in a unified model?

[Figure: a user's binary feedback vector on n items (tag, read, highlight) and feedback vector on n curators (follow)]
SLIDE 11

The Goal

We are motivated to develop a new model for Curator Recommendation that leverages the linkage between user-curator following relationships and the items they are interested in.

SLIDE 12

The Joint Tasks

Ultimately, the model aims to provide users with recommendations on:

  • who to follow (the primary task)
  • interesting items (the supplementary task)
SLIDE 13

CuRe - Curator Recommendation

Three components:

  • Learning Curator & Item Preferences
  • Fusing Latent Representations
  • Personalized Fusing via Attention
SLIDE 14

Uncover the Preferences

Use a Denoising Autoencoder (DAE) to uncover the latent representation of user preference on curators.

[Figure: during training, the feedback vector on curators (follow) is encoded through weights W into the latent representation h_C, decoded through V, and the reconstruction loss is calculated]
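A minimal sketch of a DAE forward pass over a follow vector, assuming sigmoid activations and multiplicative dropout corruption (the slides do not specify either, so these are illustrative choices, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dae_forward(r, W, V, drop_p=0.3, training=True):
    """Denoising autoencoder pass over a feedback vector r: corrupt the
    input during training, encode to the latent representation (h_C in
    the slides), and decode to a reconstructed vector."""
    if training:
        r = r * (rng.random(r.shape) >= drop_p)  # randomly drop observed feedback
    h = sigmoid(r @ W)       # latent representation h_C
    r_hat = sigmoid(h @ V)   # reconstruction = preference scores on curators
    return h, r_hat

n_curators, k = 6, 3
W = rng.normal(scale=0.1, size=(n_curators, k))
V = rng.normal(scale=0.1, size=(k, n_curators))
follow = np.array([1, 0, 1, 0, 0, 1], dtype=float)

h_c, r_hat = dae_forward(follow, W, V)                # training pass
recon_loss = np.mean((follow - r_hat) ** 2)           # reconstruction loss
_, scores = dae_forward(follow, W, V, training=False) # prediction: no corruption
```

During prediction the uncorrupted reconstruction serves directly as preference scores, matching the next slide.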

SLIDE 15

Uncover the Preferences

Use Denoising Autoencoder (DAE) to uncover the latent representation of user preference on curators.

[Figure: during prediction, the same DAE reconstructs the feedback vector on curators; the reconstructed vector gives the preference scores on curators]

SLIDE 16

Uncover the Preferences

We can enrich the preference on curators with preference on items.

[Figure: the feedback vector on n items (tag, read, highlight) shown alongside the curator DAE over the feedback vector on curators (follow)]

SLIDE 17

Uncover the Preferences

We can enrich the preference on curators with preference on items.

[Figure: a second encoder-decoder, with latent representation h_I and decoder V_I, added over the feedback vector on n items]

SLIDE 18

Uncover the Preferences

A Joint Curator-Item DAE model

[Figure: joint curator-item DAE - encoders W and W_I map the feedback vectors on n curators and n items to h_C and h_I, which are combined into shared latent factors h and decoded by V and V_I]
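A sketch of the joint forward pass described above. The slides do not state how h_C and h_I are combined at this stage, so simple averaging is used here as a placeholder (CuRe later replaces it with attention-based personalized fusing); activations and shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_dae_forward(r_c, r_i, W, W_I, V, V_I):
    """Joint curator-item DAE: two encoders, shared latent factors.
    Averaging is a placeholder for the fusing step."""
    h_c = sigmoid(r_c @ W)     # from the feedback vector on curators
    h_i = sigmoid(r_i @ W_I)   # from the feedback vector on items
    h = 0.5 * (h_c + h_i)      # shared latent factors
    return sigmoid(h @ V), sigmoid(h @ V_I)  # reconstruct both vectors

rng = np.random.default_rng(1)
n_c, n_i, k = 4, 6, 3
W, W_I = rng.normal(size=(n_c, k)), rng.normal(size=(n_i, k))
V, V_I = rng.normal(size=(k, n_c)), rng.normal(size=(k, n_i))
r_c = np.array([1, 0, 0, 1], dtype=float)   # follows
r_i = np.array([0, 1, 1, 0, 0, 1], dtype=float)  # item feedback
rec_c, rec_i = joint_dae_forward(r_c, r_i, W, W_I, V, V_I)
```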
SLIDE 19

What’s Next?

The element at the same dimension in h_C and h_I may not correspond to the same latent factor.

[Figure: the joint curator-item DAE, highlighting h_C and h_I]

SLIDE 20

What’s Next?

How to assign personalized weights on h_C and h_I?

[Figure: the joint curator-item DAE, highlighting h_C and h_I]
SLIDE 21

Fusing Latent Representations

Use a discriminator to force h_C and h_I to live in a shared space.

[Figure: a discriminator built from fully-connected layers takes h_C and h_I as input; an adversarial loss is used for distinguishing h_C from h_I]
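A sketch of the discriminator idea: a small fully-connected network tries to tell h_C from h_I, and the encoders are trained against that signal. The layer sizes, tanh hidden activation, and cross-entropy form are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(h, W1, w2):
    """Fully-connected discriminator: probability that a latent vector
    came from the curator encoder (h_C) rather than the item encoder (h_I)."""
    return sigmoid(np.tanh(h @ W1) @ w2)

def adversarial_loss(p_c, p_i):
    """Binary cross-entropy for labeling h_C as 1 and h_I as 0. The
    discriminator minimizes it; the encoders are trained against it,
    pushing h_C and h_I toward an indistinguishable shared space."""
    return -(np.log(p_c) + np.log(1.0 - p_i))

rng = np.random.default_rng(2)
k, d = 3, 4
W1, w2 = rng.normal(size=(k, d)), rng.normal(size=d)
h_c, h_i = rng.normal(size=k), rng.normal(size=k)
p_c, p_i = discriminator(h_c, W1, w2), discriminator(h_i, W1, w2)
loss = adversarial_loss(p_c, p_i)
```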
SLIDE 22

Generate the user-dependent weights α_C and α_I for h_C and h_I via an attention layer.

Personalized Fusing

[Figure: an attention layer over the user embedding E and the isolated latent factors h_C and h_I produces weights α_C and α_I, fusing them into the shared latent factors h; the arrow marks the input]
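The personalized fusing step can be sketched as follows. The bilinear scoring form (user embedding times a matrix times each latent vector) is an assumed attention variant for illustration; only the softmax weights α_C, α_I and the weighted fusion come from the slides.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def personalized_fuse(h_c, h_i, e_u, A):
    """Attention layer producing user-dependent weights alpha_C, alpha_I:
    each latent vector is scored against the user embedding e_u (E in the
    slides), and the softmax weights fuse h_C and h_I into the shared
    latent factors h."""
    scores = np.array([e_u @ A @ h_c, e_u @ A @ h_i])
    alpha_c, alpha_i = softmax(scores)
    return alpha_c * h_c + alpha_i * h_i, (alpha_c, alpha_i)

rng = np.random.default_rng(3)
k = 4
h_c, h_i, e_u = rng.normal(size=k), rng.normal(size=k), rng.normal(size=k)
A = rng.normal(size=(k, k))  # hypothetical attention parameters
h, (a_c, a_i) = personalized_fuse(h_c, h_i, e_u, A)
```

Because the weights depend on e_u, different users get different mixes of curator-side and item-side preferences.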

SLIDE 23

Generate the user-dependent weights α_C and α_I for h_C and h_I via an attention layer.

Personalized Fusing

[Figure: the same attention layer, with the arrow marking the fused output h]

SLIDE 24

CuRe - Curator Recommendation

[Figure: the full CuRe architecture - the joint curator-item DAE with the discriminator (fully-connected layers, adversarial loss for distinguishing h_C and h_I) and the attention layer for personalized fusing over the isolated latent factors and the user embedding E]
SLIDE 25

Experiment: Data

Two Datasets:

Dataset     #User    #Item    #User-User Interactions    #User-Item Interactions
Goodreads   48,208   61,848   528,816                    10,526,215
Spotify     25,471   70,107   227,024                    4,499,741

SLIDE 26

Experiment: Metric

  • F1@K: combination of recall and precision
  • NDCG@K: takes the position of recommendations into consideration
  • K = 5, 10
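Both metrics can be computed with a few lines; this is a standard formulation (binary relevance, log2 discount for NDCG), sketched here for clarity rather than taken from the paper's evaluation code.

```python
import numpy as np

def f1_at_k(recommended, relevant, k):
    """F1@K: harmonic mean of precision@K and recall@K."""
    hits = len(set(recommended[:k]) & set(relevant))
    if hits == 0:
        return 0.0
    precision, recall = hits / k, hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

def ndcg_at_k(recommended, relevant, k):
    """NDCG@K: hits discounted by the log of their rank and normalized
    by the ideal ordering, so position matters."""
    rel = set(relevant)
    dcg = sum(1.0 / np.log2(i + 2) for i, r in enumerate(recommended[:k]) if r in rel)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(rel), k)))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical ranked list of curator ids vs. the curators a user followed.
recommended, relevant = [3, 1, 7, 5, 2], [1, 2]
f1 = f1_at_k(recommended, relevant, k=5)     # 2 hits -> P=0.4, R=1.0
ndcg = ndcg_at_k(recommended, relevant, k=5)
```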
SLIDE 27

Experiment: Baselines

Compare with the widely used recommendation frameworks:

  • MP: Most Popular
  • UCF: User-based collaborative filtering
  • BPR: Matrix Factorization with Bayesian Personalized Ranking
SLIDE 28

Experiment: Baselines

Compare with recommendation frameworks enhanced with an adversarial component or built on autoencoders:

  • AMF: Adversarial Matrix Factorization
  • DAE: Denoising Autoencoder
  • CDAE: Collaborative Denoising Autoencoder
  • VAE: Variational Autoencoder for Collaborative Filtering
SLIDE 29

Experiment: Baselines

Additional approaches considering both user-user and user-item interactions:

  • EMJ: Embedding Factorization Models for Joint Recommendation
  • Joint-DAE: A simplified version of CuRe without the adversarial learning process and the attention layer.

SLIDE 30

CuRe vs Baselines

  • The proposed model outperforms the state-of-the-art in recommending curators (by 18% in Goodreads, 6% in Spotify).
  • Simultaneously, it achieves significant improvements in item recommendation compared with the baselines.
  • Larger improvements under the cold-start setting.
SLIDE 31

Impact of each component?

Utilizing feedback on items can help in inferring preferences on curators.

[Figure: DAE vs. Joint-DAE - incorporating the preference on items]

SLIDE 32

Impact of each component?

The adversarial component enables the model to achieve better performance in fewer epochs.

[Figure: Joint-DAE vs. Adversarial Joint-DAE - adding the discriminator into the training process]

SLIDE 33

Impact of each component?

Providing personalized fusing is important for achieving improved performance in both tasks.

[Figure: Adversarial Joint-DAE with attention layers - with the personalized fusing layer]

SLIDE 34

Conclusion

  • New Problem - Curator Recommendation
  • Joint Recommendation for a primary and a supplementary task.
  • Experiments show that the proposed model outperforms the state-of-the-art in both the primary and the supplementary tasks.

  • The next step…
  • Can we support various types of interactions between users?
  • How to capture the temporally dynamic patterns of curators?