SLIDE 1

CS425: Algorithms for Web Scale Data

Most of the slides are from the Mining of Massive Datasets book. These slides have been modified for CS425. The original slides can be accessed at: www.mmds.org

SLIDE 2

• Training data
  • 100 million ratings, 480,000 users, 17,770 movies
  • 6 years of data: 2000-2005

• Test data
  • Last few ratings of each user (2.8 million)
  • Evaluation criterion: Root Mean Square Error (RMSE), sketched in code at the end of this slide:

$$\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(i,x) \in R} \big( \hat{r}_{xi} - r_{xi} \big)^2}$$

  • Netflix’s system RMSE: 0.9514

• Competition
  • 2,700+ teams
  • $1 million prize for 10% improvement on Netflix
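As an illustration of the metric, here is a minimal NumPy sketch (the arrays hold hypothetical toy values, not the Netflix data):

```python
import numpy as np

# Predicted vs. true ratings on a held-out test set (toy values).
r_hat = np.array([3.8, 2.1, 4.5, 1.0])   # predicted ratings r̂_xi
r_true = np.array([4.0, 2.0, 5.0, 1.0])  # true ratings r_xi

# RMSE = sqrt( (1/|R|) * sum over (i,x) in R of (r̂_xi - r_xi)^2 )
rmse = np.sqrt(np.mean((r_hat - r_true) ** 2))
print(f"RMSE = {rmse:.4f}")
```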

SLIDE 3

[Figure: the utility matrix R, 480,000 users Γ— 17,770 movies, with the known ratings filled in]

SLIDE 4

[Figure: matrix R (480,000 users Γ— 17,770 movies) split into a training data set of known ratings and a test data set whose entries are hidden (β€œ?”); $\hat{r}_{xi}$ is the predicted rating and $r_{xi}$ the true rating of user x on item i]

$$\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(i,x) \in R} \big( \hat{r}_{xi} - r_{xi} \big)^2}$$
SLIDE 5

• The winner of the Netflix Challenge!
• Multi-scale modeling of the data: combine top-level, β€œregional” modeling of the data with a refined, local view:
  • Global: overall deviations of users/movies
  • Factorization: addressing β€œregional” effects
  • Collaborative filtering: extract local patterns

[Diagram: global effects β†’ factorization β†’ collaborative filtering]

SLIDE 6

• Global:
  • Mean movie rating: 3.7 stars
  • The Sixth Sense is 0.5 stars above avg.
  • Joe rates 0.2 stars below avg.
  β‡’ Baseline estimation: Joe will rate The Sixth Sense 4 stars

• Local neighborhood (CF/NN):
  • Joe didn’t like the related movie Signs
  β‡’ Final estimate: Joe will rate The Sixth Sense 3.8 stars

SLIDE 7

• Earliest and most popular collaborative filtering method
• Derive unknown ratings from those of β€œsimilar” movies (item-item variant)
• Define a similarity measure $s_{ij}$ of items i and j
• Select the k nearest neighbors and compute the rating (see the sketch below):

$$\hat{r}_{xi} = \frac{\sum_{j \in N(i;x)} s_{ij} \, r_{xj}}{\sum_{j \in N(i;x)} s_{ij}}$$

  • $s_{ij}$ … similarity of items i and j
  • $r_{xj}$ … rating of user x on item j
  • $N(i;x)$ … set of items similar to item i that were rated by x
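A minimal sketch of this weighted-average prediction, assuming a dense ratings array where 0 marks β€œunrated” and a precomputed item-item similarity matrix (the array names, toy values, the Pearson stand-in for $s_{ij}$, and the clipping of negative similarities are illustration choices, not details from the slides):

```python
import numpy as np

def predict_item_item(R, S, x, i, k=2):
    """Weighted average of user x's ratings on the k items most
    similar to item i, as in the formula above."""
    rated = np.flatnonzero(R[x])                     # items rated by user x
    rated = rated[rated != i]
    order = np.argsort(-S[i, rated])[:k]             # N(i;x): top-k by s_ij
    neighbors = rated[order]
    w = np.clip(S[i, neighbors], 0, None)            # ignore negative similarities
    return w @ R[x, neighbors] / w.sum()

# Toy data: 3 users x 4 items, 0 = unrated (hypothetical).
R = np.array([[5.0, 4.0, 0.0, 1.0],
              [4.0, 5.0, 2.0, 1.0],
              [1.0, 2.0, 4.0, 5.0]])
S = np.corrcoef(R.T)      # stand-in item-item similarity (Pearson on item columns)
print(predict_item_item(R, S, x=0, i=2))
```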

SLIDE 8

• In practice we get better estimates if we model deviations:

$$\hat{r}_{xi} = b_{xi} + \frac{\sum_{j \in N(i;x)} s_{ij} \, (r_{xj} - b_{xj})}{\sum_{j \in N(i;x)} s_{ij}}$$

where $b_{xi} = \mu + b_x + b_i$ is the baseline estimate for $r_{xi}$:
  • ΞΌ = overall mean rating
  • $b_x$ = rating deviation of user x = (avg. rating of user x) βˆ’ ΞΌ
  • $b_i$ = rating deviation of movie i = (avg. rating of movie i) βˆ’ ΞΌ

Problems/Issues:
  1) Similarity measures are β€œarbitrary”
  2) Pairwise similarities neglect interdependencies among users
  3) Taking a weighted average can be restricting
Solution: Instead of $s_{ij}$ use weights $w_{ij}$ that we estimate directly from data

SLIDE 9

• Use a weighted sum rather than a weighted avg.:

$$\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij} \, (r_{xj} - b_{xj})$$

• A few notes:
  • $N(i;x)$ … set of movies rated by user x that are similar to movie i
  • $w_{ij}$ is the interpolation weight (some real number)
  • We allow: $\sum_{j \in N(i;x)} w_{ij} \neq 1$
  • $w_{ij}$ models the interaction between pairs of movies (it does not depend on user x)

SLIDE 10

• $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij} (r_{xj} - b_{xj})$
• How to set $w_{ij}$?
  • Remember, the error metric is $\sqrt{\frac{1}{|R|}\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2}$, or equivalently SSE: $\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2$
  • Find $w_{ij}$ that minimize SSE on training data!
  • Models relationships between item i and its neighbors j
  • $w_{ij}$ can be learned/estimated based on x and all other users that rated i

Why is this a good idea?

SLIDE 11

• Goal: Make good recommendations
  • Quantify goodness using RMSE: lower RMSE β‡’ better recommendations
  • We want to make good recommendations on items the user has not yet seen. We can’t really do this!
  • Let’s build a system such that it works well on known (user, item) ratings, and hope the system will also predict the unknown ratings well

[Figure: the utility matrix R of known ratings]

SLIDE 12

• Idea: Let’s set the values w such that they work well on known (user, item) ratings
• How to find such values w?
• Idea: Define an objective function and solve the optimization problem
• Find $w_{ij}$ that minimize SSE on training data:

$$J(w) = \sum_{x,i} \Big( \underbrace{b_{xi} + \sum_{j \in N(i;x)} w_{ij} (r_{xj} - b_{xj})}_{\text{predicted rating}} - \underbrace{r_{xi}}_{\text{true rating}} \Big)^2$$

• Think of w as a vector of numbers

SLIDE 13

• A simple way to minimize a function $f(x)$:
  • Compute the derivative $\nabla f$
  • Start at some point $y$ and evaluate $\nabla f(y)$
  • Make a step in the reverse direction of the gradient: $y = y - \nabla f(y)$
  • Repeat until converged

[Figure: a curve $f(x)$ with successive gradient steps moving toward the minimum]
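A minimal one-variable illustration of this loop (the function $f(y) = (y-3)^2$ and the step size are hypothetical; the slide's rule omits a step size, but a small one is used here so the iteration converges):

```python
def gradient_descent(df, y, step=0.1, eps=1e-8, max_iter=10_000):
    """Repeat y = y - step * df(y) until the move is tiny."""
    for _ in range(max_iter):
        move = step * df(y)
        y -= move
        if abs(move) < eps:
            break
    return y

# Example: minimize f(y) = (y - 3)^2, whose derivative is 2(y - 3).
print(gradient_descent(lambda y: 2 * (y - 3), y=0.0))  # ~3.0
```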
SLIDE 14

Example: Formulation

• Assume we have a dataset with a single user x and items 0, 1, and 2. We are given all ratings, and we want to compute the weights $w_{01}$, $w_{02}$, and $w_{12}$.
• Rating estimate: $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}(r_{xj} - b_{xj})$

The training dataset already has the correct $r_{xi}$ values. We will use the estimation formula to compute the unknown weights $w_{01}$, $w_{02}$, and $w_{12}$.

• Optimization problem: Compute $w_{ij}$ values to minimize $\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2$
• Plug in the formulas. Minimize:

$$J(w) = \big( b_{x0} + w_{01}(r_{x1} - b_{x1}) + w_{02}(r_{x2} - b_{x2}) - r_{x0} \big)^2 + \big( b_{x1} + w_{01}(r_{x0} - b_{x0}) + w_{12}(r_{x2} - b_{x2}) - r_{x1} \big)^2 + \big( b_{x2} + w_{02}(r_{x0} - b_{x0}) + w_{12}(r_{x1} - b_{x1}) - r_{x2} \big)^2$$

SLIDE 15

Example: Algorithm

Initialize the unknown variables to starting values:

$$\mathbf{w}^{old} = \big( w_{01}^{old},\ w_{02}^{old},\ w_{12}^{old} \big)$$

Iterate:

  while |wnew βˆ’ wold| > Ξ΅:
    wold = wnew
    wnew = wold βˆ’ Ξ·Β·βˆ‡J(wold)

Ξ· is the learning rate (a parameter). How do we compute βˆ‡J(wold)?

SLIDE 16

Example: Gradient-Based Update

$$J(w) = \big( b_{x0} + w_{01}(r_{x1} - b_{x1}) + w_{02}(r_{x2} - b_{x2}) - r_{x0} \big)^2 + \big( b_{x1} + w_{01}(r_{x0} - b_{x0}) + w_{12}(r_{x2} - b_{x2}) - r_{x1} \big)^2 + \big( b_{x2} + w_{02}(r_{x0} - b_{x0}) + w_{12}(r_{x1} - b_{x1}) - r_{x2} \big)^2$$

$$\nabla J(w) = \left( \frac{\partial J(w)}{\partial w_{01}},\ \frac{\partial J(w)}{\partial w_{02}},\ \frac{\partial J(w)}{\partial w_{12}} \right)$$

Update step:

$$\big( w_{01}^{new},\ w_{02}^{new},\ w_{12}^{new} \big) = \big( w_{01}^{old},\ w_{02}^{old},\ w_{12}^{old} \big) - \eta \left( \frac{\partial J(w)}{\partial w_{01}},\ \frac{\partial J(w)}{\partial w_{02}},\ \frac{\partial J(w)}{\partial w_{12}} \right)$$

Each partial derivative is evaluated at $w^{old}$.

SLIDE 17

Example: Computing Partial Derivatives

$$J(w) = \big( b_{x0} + w_{01}(r_{x1} - b_{x1}) + w_{02}(r_{x2} - b_{x2}) - r_{x0} \big)^2 + \big( b_{x1} + w_{01}(r_{x0} - b_{x0}) + w_{12}(r_{x2} - b_{x2}) - r_{x1} \big)^2 + \big( b_{x2} + w_{02}(r_{x0} - b_{x0}) + w_{12}(r_{x1} - b_{x1}) - r_{x2} \big)^2$$

$$\frac{\partial J(w)}{\partial w_{01}} = 2 \big( b_{x0} + w_{01}(r_{x1} - b_{x1}) + w_{02}(r_{x2} - b_{x2}) - r_{x0} \big)(r_{x1} - b_{x1}) + 2 \big( b_{x1} + w_{01}(r_{x0} - b_{x0}) + w_{12}(r_{x2} - b_{x2}) - r_{x1} \big)(r_{x0} - b_{x0})$$

Reminder: $\frac{\partial (ax+b)^2}{\partial x} = 2(ax+b)\,a$

Evaluate each partial derivative at $w^{old}$ to compute the gradient direction.
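Putting slides 14-17 together, a small sketch that runs the update loop on this three-item example (the ratings, baselines, step size, and tolerance are hypothetical toy values):

```python
import numpy as np

# Toy data for one user x and items 0, 1, 2 (hypothetical values).
r = np.array([4.0, 3.0, 5.0])   # true ratings r_x0, r_x1, r_x2
b = np.array([3.5, 3.2, 4.1])   # baseline estimates b_x0, b_x1, b_x2
d = r - b                        # deviations r_xj - b_xj

def grad_J(w):
    """Gradient of J(w) from Slide 17; w = (w01, w02, w12)."""
    w01, w02, w12 = w
    e0 = b[0] + w01 * d[1] + w02 * d[2] - r[0]    # error on item 0
    e1 = b[1] + w01 * d[0] + w12 * d[2] - r[1]    # error on item 1
    e2 = b[2] + w02 * d[0] + w12 * d[1] - r[2]    # error on item 2
    return 2 * np.array([e0 * d[1] + e1 * d[0],   # dJ/dw01
                         e0 * d[2] + e2 * d[0],   # dJ/dw02
                         e1 * d[2] + e2 * d[1]])  # dJ/dw12

w = np.zeros(3)                  # initialize w_old
eta = 0.05                       # learning rate
for _ in range(1000):            # while |w_new - w_old| > eps
    step = eta * grad_J(w)
    w -= step
    if np.linalg.norm(step) < 1e-9:
        break
print(w)                         # learned w01, w02, w12
```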

SLIDE 18

• We have the optimization problem, now what?
• Gradient descent:
  • Iterate until convergence: $w \leftarrow w - \eta \nabla_w J$
  • where $\nabla_w J$ is the gradient (the derivative evaluated on the data):

$$\nabla_w J = \frac{\partial J(w)}{\partial w_{ij}} = 2 \sum_{x,i} \Big( \big[ b_{xi} + \sum_{k \in N(i;x)} w_{ik}(r_{xk} - b_{xk}) \big] - r_{xi} \Big)(r_{xj} - b_{xj}) \quad \text{for } j \in \{N(i;x),\ \forall i, \forall x\}$$

else $\frac{\partial J(w)}{\partial w_{ij}} = 0$

  • Note: We fix movie i, go over all $r_{xi}$, and for every movie $j \in N(i;x)$ we compute $\frac{\partial J(w)}{\partial w_{ij}}$

Ξ· … learning rate

  while |wnew βˆ’ wold| > Ξ΅:
    wold = wnew
    wnew = wold βˆ’ Ξ·Β·βˆ‡J(wold)

$$J(w) = \sum_{x,i} \Big( \big[ b_{xi} + \sum_{j \in N(i;x)} w_{ij}(r_{xj} - b_{xj}) \big] - r_{xi} \Big)^2$$

SLIDE 19

• So far: $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}(r_{xj} - b_{xj})$
  • Weights $w_{ij}$ are derived based on their role; no use of an arbitrary similarity measure ($w_{ij} \neq s_{ij}$)
  • Explicitly accounts for interrelationships among the neighboring movies
• Next: Latent factor model
  • Extract β€œregional” correlations

[Diagram: global effects β†’ factorization β†’ CF/NN]

SLIDE 20

RMSE scoreboard so far:
  • Grand Prize: 0.8563
  • Netflix: 0.9514
  • Movie average: 1.0533
  • User average: 1.0651
  • Global average: 1.1296
  • Basic collaborative filtering: 0.94
  • CF + biases + learned weights: 0.91

SLIDE 21

[Figure: movies plotted in a two-dimensional latent space, one axis running from β€œgeared towards females” to β€œgeared towards males”, the other from β€œserious” to β€œfunny”; titles shown include The Princess Diaries, The Lion King, Braveheart, Lethal Weapon, Independence Day, Amadeus, The Color Purple, Dumb and Dumber, Ocean’s 11, and Sense and Sensibility]

SLIDE 22

• β€œSVD” on Netflix data: R β‰ˆ Q Β· PT
• For now let’s assume we can approximate the rating matrix R as a product of β€œthin” matrices Q Β· PT
  • R has missing entries, but let’s ignore that for now!
  • Basically, we will want the reconstruction error to be small on known ratings, and we don’t care about the values of the missing ones

[Figure: R (items Γ— users) β‰ˆ Q (items Γ— factors) Β· PT (factors Γ— users); SVD: A = U Ξ£ VT]

SLIDE 23

• How to estimate the missing rating of user x for item i?

$$\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if} \cdot p_{xf}$$

  • $q_i$ = row i of Q
  • $p_x$ = column x of PT

[Figure: R (items Γ— users) β‰ˆ Q Β· PT, with the missing entry marked β€œ?”]
SLIDE 24

[Repeat of Slide 23 (animation step): the missing entry of R is estimated via $\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if} \cdot p_{xf}$, highlighting row $q_i$ of Q and column $p_x$ of PT]
SLIDE 25

[Animation step: computing $\hat{r}_{xi} = q_i \cdot p_x$ over the f latent factors for the highlighted row of Q and column of PT gives the estimate 2.4]
SLIDE 26

[Figure: the same movie map as Slide 21, with the two axes now labeled Factor 1 and Factor 2]
SLIDE 27

[Figure: same factor plot as Slide 26 (animation step)]
SLIDE 28

• FYI, SVD: A β‰ˆ U Ξ£ VT
  • A: input data matrix (m Γ— n)
  • U: left singular vectors
  • V: right singular vectors
  • Ξ£: singular values
• So in our case: β€œSVD” on Netflix data: R β‰ˆ Q Β· PT, with A = R, Q = U, PT = Ξ£ VT

$$\hat{r}_{xi} = q_i \cdot p_x$$
SLIDE 29

• We already know that SVD gives the minimum reconstruction error (sum of squared errors):

$$\min_{U,V,\Sigma} \sum_{ij \in A} \big( A_{ij} - [U \Sigma V^T]_{ij} \big)^2$$

• Note two things:
  • SSE and RMSE are monotonically related: $\text{RMSE} = \sqrt{\tfrac{1}{c}\,\text{SSE}}$ (c = number of entries). Great news: SVD is minimizing RMSE!
  • Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating). But our R has missing entries!
SLIDE 30

• SVD isn’t defined when entries are missing!
• Use specialized methods to find P, Q:

$$\min_{P,Q} \sum_{(i,x) \in R} \big( r_{xi} - q_i \cdot p_x \big)^2$$

• Note:
  • We don’t require the columns of P, Q to be orthogonal or unit length
  • P, Q map users/movies to a latent space
  • This was the most popular model among Netflix contestants

[Figure: R (items Γ— users) β‰ˆ Q Β· PT, with $\hat{r}_{xi} = q_i \cdot p_x$]
SLIDE 31

SLIDE 32

General Concept: Overfitting

Almost-linear data is fit with a linear function and with a polynomial function. The polynomial model fits the training data perfectly, while the linear model has some error on the training set. The linear model is nevertheless expected to perform better on test data, because it filters out noise.

Image source: Wikipedia

SLIDE 33

• Our goal is to find P and Q such that:

$$\min_{P,Q} \sum_{(i,x) \in R} \big( r_{xi} - q_i \cdot p_x \big)^2$$

[Figure: R (items Γ— users) β‰ˆ Q (items Γ— factors) Β· PT (factors Γ— users)]
SLIDE 34

• Want to minimize SSE for unseen test data
• Idea: Minimize SSE on training data
  • Want a large k (# of factors) to capture all the signals
  • But SSE on test data begins to rise for k > 2
• This is a classical example of overfitting:
  • With too much freedom (too many free parameters) the model starts fitting noise
  • That is, it fits the training data too well and thus does not generalize well to unseen test data

[Figure: the utility matrix with hidden test entries marked β€œ?”]
SLIDE 35

• To solve overfitting we introduce regularization:
  • Allow a rich model where there is sufficient data
  • Shrink aggressively where data are scarce

$$\min_{P,Q} \underbrace{\sum_{(x,i) \in \text{training}} \big( r_{xi} - q_i p_x \big)^2}_{\text{β€œerror”}} + \underbrace{\lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2}_{\text{β€œlength”}}$$

$\lambda_1, \lambda_2$ … user-set regularization parameters

Note: We do not care about the β€œraw” value of the objective function; we care about the P, Q that achieve the minimum of the objective
SLIDE 36

Regularization

• What happens if user x has rated hundreds of movies?
  The error term will dominate, and we’ll get a rich model. Noise is less of an issue because we have lots of data.
• What happens if user x has rated only a few movies?
  The length term for $p_x$ will have more effect, and we’ll get a simple model.
• The same argument applies for items.

$$\min_{P,Q} \underbrace{\sum_{(x,i) \in \text{training}} \big( r_{xi} - q_i p_x \big)^2}_{\text{β€œerror”}} + \underbrace{\lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2}_{\text{β€œlength”}}$$

$\lambda_1, \lambda_2$ … user-set regularization parameters
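To make the error/length trade-off concrete, a small sketch that evaluates this regularized objective on a toy ratings matrix (the matrix values and Ξ»'s are hypothetical):

```python
import numpy as np

def objective(R, mask, Q, P, lam1, lam2):
    """Regularized SSE: "error" on known ratings + Ξ»-weighted "length"."""
    E = (R - Q @ P.T) * mask                 # residuals on known entries only
    error = np.sum(E ** 2)
    length = lam1 * np.sum(P ** 2) + lam2 * np.sum(Q ** 2)
    return error + length

rng = np.random.default_rng(0)
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0]])             # toy ratings (items x users), 0 = missing
mask = (R > 0).astype(float)                # 1 where a rating is known
Q = rng.normal(size=(2, 2))                 # items x factors
P = rng.normal(size=(3, 2))                 # users x factors
print(objective(R, mask, Q, P, lam1=0.1, lam2=0.1))
```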

SLIDE 37

[Figure: the factor plot of Slide 26 revisited under the regularized objective $\min_{P,Q}$ β€œerror” + Ξ» β€œlength”; axes: geared towards females/males, serious/funny; Factor 1 vs. Factor 2; titles include The Princess Diaries, The Lion King, Braveheart, Lethal Weapon, Independence Day, Amadeus, The Color Purple, Dumb and Dumber, Ocean’s 11, Sense and Sensibility]
SLIDE 38

[Animation step: same factor plot under $\min$ β€œerror” + Ξ» β€œlength”]

SLIDE 39

[Animation step: same factor plot]

SLIDE 40

[Animation step: same factor plot]
SLIDE 41

• Want to find matrices P and Q:

$$\min_{P,Q} \sum_{(x,i) \in \text{training}} \big( r_{xi} - q_i p_x \big)^2 + \lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2$$

• Gradient descent:
  • Initialize P and Q (using SVD, pretend missing ratings are 0)
  • Do gradient descent:
    • P ← P βˆ’ Ξ·Β·βˆ‡P
    • Q ← Q βˆ’ Ξ·Β·βˆ‡Q
  • where βˆ‡Q is the gradient/derivative of matrix Q: $\nabla Q = [\nabla q_{if}]$ and

$$\nabla q_{if} = \sum_{x,i} -2 \big( r_{xi} - q_i \cdot p_x \big) p_{xf} + 2 \lambda_2 q_{if}$$

  • Here $q_{if}$ is entry f of row $q_i$ of matrix Q
  • Observation: Computing gradients is slow!

How to compute the gradient of a matrix? Compute the gradient of every element independently!

SLIDE 42

Example

Assume we want 3 factors per user and item:

$$p_x = (p_{x0}, p_{x1}, p_{x2}), \qquad q_i = (q_{i0}, q_{i1}, q_{i2})$$

Rewrite the objective as:

$$\min \sum_{x,i} \big( r_{xi} - (q_{i0} p_{x0} + q_{i1} p_{x1} + q_{i2} p_{x2}) \big)^2 + \lambda_1 \sum_x \big( p_{x0}^2 + p_{x1}^2 + p_{x2}^2 \big) + \lambda_2 \sum_i \big( q_{i0}^2 + q_{i1}^2 + q_{i2}^2 \big)$$

SLIDE 43

Example

$$\min \sum_{x,i} \big( r_{xi} - (q_{i0} p_{x0} + q_{i1} p_{x1} + q_{i2} p_{x2}) \big)^2 + \lambda_1 \sum_x \big( p_{x0}^2 + p_{x1}^2 + p_{x2}^2 \big) + \lambda_2 \sum_i \big( q_{i0}^2 + q_{i1}^2 + q_{i2}^2 \big)$$

$$p_x = (p_{x0}, p_{x1}, p_{x2}), \qquad q_i = (q_{i0}, q_{i1}, q_{i2})$$

Compute the gradient for variable $q_{i0}$:

$$\nabla q_{i0} = \sum_{x,i} -2 \big( r_{xi} - (q_{i0} p_{x0} + q_{i1} p_{x1} + q_{i2} p_{x2}) \big) p_{x0} + 2 \lambda_2 q_{i0}$$

Do the same for every free variable.
SLIDE 44

Gradient Descent – Computation Cost

• How many free variables do we have?
  (# of users + # of items) Β· (# of factors)
• Which ratings do we process to compute $\nabla q_{if}$?
  All ratings for item i
• Which ratings do we process to compute $\nabla p_{xf}$?
  All ratings for user x
• What is the complexity of one iteration?
  O(# of ratings Β· # of factors)

$$\nabla Q = [\nabla q_{if}] \quad \text{and} \quad \nabla q_{if} = \sum_{x,i} -2 \big( r_{xi} - q_i \cdot p_x \big) p_{xf} + 2 \lambda_2 q_{if}$$
SLIDE 45

Stochastic Gradient Descent

• Gradient Descent (GD): Update all free variables in one step. Need to process all ratings.
• Stochastic Gradient Descent (SGD): Update the free variables associated with a single rating in one step.
  • Needs many more steps to converge
  • Each step is much faster
  • In practice: SGD is much faster than GD
• GD: $Q \leftarrow Q - \eta \sum_{r_{xi}} \nabla Q(r_{xi})$
• SGD: $Q \leftarrow Q - \eta \nabla Q(r_{xi})$
SLIDE 46

Stochastic Gradient Descent

$$\nabla q_{if} = \sum_{x,i} -2 \big( r_{xi} - q_i \cdot p_x \big) p_{xf} + 2 \lambda_2 q_{if}$$

$$\nabla p_{xf} = \sum_{x,i} -2 \big( r_{xi} - q_i \cdot p_x \big) q_{if} + 2 \lambda_1 p_{xf}$$

Which free variables are associated with rating $r_{xi}$?

$$p_x = (p_{x0}, p_{x1}, \ldots, p_{xk}), \qquad q_i = (q_{i0}, q_{i1}, \ldots, q_{ik})$$

SLIDE 47

Stochastic Gradient Descent

For each $r_{xi}$:

  $\varepsilon_{xi} = (r_{xi} - q_i \cdot p_x)$   (derivative of the β€œerror”)
  $q_i \leftarrow q_i + \eta_1 (\varepsilon_{xi}\, p_x - \lambda_2\, q_i)$   (update equation)
  $p_x \leftarrow p_x + \eta_2 (\varepsilon_{xi}\, q_i - \lambda_1\, p_x)$   (update equation)

Note: The operations above are vector operations. For comparison, the full-gradient expressions:

$$\nabla q_{if} = \sum_{x,i} -2 \big( r_{xi} - q_i \cdot p_x \big) p_{xf} + 2 \lambda_2 q_{if}, \qquad \nabla p_{xf} = \sum_{x,i} -2 \big( r_{xi} - q_i \cdot p_x \big) q_{if} + 2 \lambda_1 p_{xf}$$

Ξ· … learning rate

SLIDE 48

• Stochastic gradient descent:
  • Initialize P and Q (using SVD, pretend missing ratings are 0)
  • Then iterate over the ratings (multiple times if necessary) and update the factors. For each $r_{xi}$:
    • $\varepsilon_{xi} = (r_{xi} - q_i \cdot p_x)$   (derivative of the β€œerror”)
    • $q_i \leftarrow q_i + \eta_1 (\varepsilon_{xi}\, p_x - \lambda_2\, q_i)$   (update equation)
    • $p_x \leftarrow p_x + \eta_2 (\varepsilon_{xi}\, q_i - \lambda_1\, p_x)$   (update equation)
• 2 for loops (see the sketch below):
  • For until convergence:
    • For each $r_{xi}$: compute the gradient, do a β€œstep”

Ξ· … learning rate
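A compact sketch of these two loops, assuming a list of (x, i, rating) triples and random initialization instead of the SVD-based one (the toy data and hyperparameters are hypothetical):

```python
import numpy as np

# Toy training triples (user x, item i, rating r_xi) — hypothetical.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
n_users, n_items, k = 3, 3, 2
eta, lam = 0.02, 0.1

rng = np.random.default_rng(42)
P = rng.normal(scale=0.1, size=(n_users, k))    # user factors p_x
Q = rng.normal(scale=0.1, size=(n_items, k))    # item factors q_i

for epoch in range(200):                         # outer loop: until convergence
    for x, i, r in ratings:                      # inner loop: one rating at a time
        err = r - Q[i] @ P[x]                    # Ξ΅_xi = r_xi βˆ’ q_i Β· p_x
        Q[i] += eta * (err * P[x] - lam * Q[i])  # q_i update
        P[x] += eta * (err * Q[i] - lam * P[x])  # p_x update

print(Q @ P.T)   # reconstructed ratings; known entries should be close
```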

SLIDE 49

• Convergence of GD vs. SGD

[Plot: value of the objective function vs. iteration/step, for GD and SGD]

GD improves the value of the objective function at every step. SGD improves the value too, but in a β€œnoisy” way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!

SLIDE 50

[Figure from Koren, Bell, Volinsky, IEEE Computer, 2009]
SLIDE 51

SLIDE 52

$$r_{xi} = \underbrace{\mu}_{\text{overall mean}} + \underbrace{b_x}_{\text{user bias}} + \underbrace{b_i}_{\text{movie bias}} + \underbrace{q_i \cdot p_x}_{\text{user-movie interaction}}$$

• ΞΌ = overall mean rating
• $b_x$ = bias of user x
• $b_i$ = bias of movie i

Baseline predictor ($\mu + b_x + b_i$):
  • Separates users and movies
  • Benefits from insights into users’ behavior
  • Among the main practical contributions of the competition

User-movie interaction ($q_i \cdot p_x$):
  • Characterizes the matching between users and movies
  • Attracts most research in the field
  • Benefits from algorithmic and mathematical innovations

SLIDE 53

• We have expectations on the rating by user x of movie i, even without estimating x’s attitude towards movies like i:
  • Rating scale of user x
  • Values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts)
  • (Recent) popularity of movie i
  • Selection bias; related to the number of ratings the user gave on the same day (β€œfrequency”)
SLIDE 54

• Example:
  • Mean rating: ΞΌ = 3.7
  • You are a critical reviewer: your ratings are 1 star lower than the mean: $b_x = -1$
  • Star Wars gets a mean rating 0.5 higher than the average movie: $b_i = +0.5$
  • Predicted rating for you on Star Wars: 3.7 βˆ’ 1 + 0.5 = 3.2

$$\hat{r}_{xi} = \underbrace{\mu}_{\text{overall mean rating}} + \underbrace{b_x}_{\text{bias for user } x} + \underbrace{b_i}_{\text{bias for movie } i} + \underbrace{q_i \cdot p_x}_{\text{user-movie interaction}}$$
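A one-liner makes the full predictor concrete (the bias values repeat the slide's example; the factor vectors are hypothetical, set to zero here so only the baseline contributes):

```python
import numpy as np

mu = 3.7                      # overall mean rating
b_x = -1.0                    # user bias (critical reviewer)
b_i = 0.5                     # movie bias (Star Wars)
q_i = np.array([0.3, -0.2])   # hypothetical movie factors
p_x = np.array([0.0, 0.0])    # hypothetical user factors (no interaction term)

r_hat = mu + b_x + b_i + q_i @ p_x
print(r_hat)                  # 3.2, matching the slide's baseline estimate
```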

SLIDE 55

• Solve:

$$\min_{Q,P,b} \underbrace{\sum_{(x,i) \in R} \big( r_{xi} - (\mu + b_x + b_i + q_i \cdot p_x) \big)^2}_{\text{goodness of fit}} + \underbrace{\lambda_1 \sum_i \lVert q_i \rVert^2 + \lambda_2 \sum_x \lVert p_x \rVert^2 + \lambda_3 \sum_x b_x^2 + \lambda_4 \sum_i b_i^2}_{\text{regularization}}$$

• Use stochastic gradient descent to find the parameters
  • Note: Both the biases $b_x$, $b_i$ and the interactions $q_i$, $p_x$ are treated as parameters (we estimate them)
• λ is selected via grid search on a validation set
SLIDE 56

[Plot: RMSE (0.885–0.92) vs. millions of parameters (1–1000) for CF (no time bias), Basic Latent Factors, and Latent Factors w/ Biases]
SLIDE 57

RMSE scoreboard so far:
  • Grand Prize: 0.8563
  • Netflix: 0.9514
  • Movie average: 1.0533
  • User average: 1.0651
  • Global average: 1.1296
  • Basic collaborative filtering: 0.94
  • Latent factors: 0.90
  • Latent factors + biases: 0.89
  • Collaborative filtering++: 0.91

SLIDE 58

SLIDE 59

• Sudden rise in the average movie rating (early 2004)
  • Improvements in Netflix
  • GUI improvements
  • Meaning of rating changed
• Movie age
  • Users prefer new movies without any reason
  • Older movies are just inherently better than newer ones

• Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09

SLIDE 60

• Original model: $r_{xi} = \mu + b_x + b_i + q_i \cdot p_x$
• Add time dependence to the biases: $r_{xi} = \mu + b_x(t) + b_i(t) + q_i \cdot p_x$
  • Make the parameters $b_x$ and $b_i$ depend on time
  • (1) Parameterize the time-dependence by linear trends; (2) each bin corresponds to 10 consecutive weeks
• Add temporal dependence to the factors (a small sketch of binned biases follows below)
  • $p_x(t)$ … user preference vector on day t

• Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09
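One simple way to realize option (2), as a hedged sketch: give each movie one bias parameter per 10-week bin (the bin arithmetic, array names, and sizes are assumptions for illustration, not details from the paper):

```python
import numpy as np

WEEKS_PER_BIN = 10

def time_bin(day, days_per_week=7):
    """Map a day index (days since the start of the dataset) to a bin."""
    return day // (days_per_week * WEEKS_PER_BIN)

n_bins = time_bin(6 * 365) + 1           # ~6 years of data -> ~32 bins
b_i_bin = np.zeros((17_770, n_bins))     # one bias per movie per time bin

def b_i(movie, day):
    """Time-dependent movie bias b_i(t): look up the bin's parameter."""
    return b_i_bin[movie, time_bin(day)]

print(time_bin(0), time_bin(100), time_bin(2000))  # 0 1 28
```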
SLIDE 61

[Plot: RMSE (0.875–0.92) vs. millions of parameters (1–10000) for CF (no time bias), Basic Latent Factors, CF (time bias), Latent Factors w/ Biases, + Linear time factors, + Per-day user biases, + CF]

SLIDE 62

RMSE scoreboard so far:
  • Grand Prize: 0.8563
  • Netflix: 0.9514
  • Movie average: 1.0533
  • User average: 1.0651
  • Global average: 1.1296
  • Basic collaborative filtering: 0.94
  • Latent factors: 0.90
  • Latent factors + biases: 0.89
  • Collaborative filtering++: 0.91
  • Latent factors + biases + time: 0.876

Still no prize! Getting desperate. Try a β€œkitchen sink” approach!

SLIDE 63

SLIDE 64

June 26th submission triggers the 30-day β€œlast call”

SLIDE 65

• Ensemble team formed
  • Group of other teams on the leaderboard forms a new team
  • Relies on combining their models
  • Quickly also gets a qualifying score over 10%
• BellKor
  • Continue to get small improvements in their scores
  • Realize that they are in direct competition with Ensemble
• Strategy
  • Both teams carefully monitor the leaderboard
  • The only sure way to check for improvement is to submit a set of predictions
  • This alerts the other team of your latest score
SLIDE 66

• Submissions limited to 1 a day
  • Only 1 final submission could be made in the last 24h
• 24 hours before the deadline…
  • A BellKor team member in Austria notices (by chance) that Ensemble has posted a score slightly better than BellKor’s
• Frantic last 24 hours for both teams
  • Much computer time spent on final optimization
  • Carefully calibrated to end about an hour before the deadline
• Final submissions
  • BellKor submits a little early (on purpose), 40 mins before the deadline
  • Ensemble submits their final entry 20 mins later
  • …and everyone waits…

SLIDE 67

SLIDE 68

SLIDE 69

• Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth
• Further reading:
  • Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09
  • http://www2.research.att.com/~volinsky/netflix/bpc.html
  • http://www.the-ensemble.com/