
SLIDE 1

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

http://cs246.stanford.edu

Announcements:

  • Submit your project group TODAY (Ed Pinned Post)
  • Project Proposal due this Thursday (no late periods)
  • Upload homework on time (by 23:59)!
SLIDE 2

It is always possible to decompose a real matrix A into A = U Σ V^T, where:

 U, Σ, V: unique*
 U, V: column orthonormal

▪ U^T U = I; V^T V = I (I: identity matrix)
▪ (Columns are orthogonal unit vectors)

 Σ: diagonal

▪ Entries (singular values) are positive, and sorted in decreasing order (σ1 ≥ σ2 ≥ ... ≥ 0)

* Up to permutations for redundant singular values and orientation of singular vectors (details)
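The properties listed above can be checked numerically; a minimal sketch with NumPy, where the matrix A is an arbitrary example (not from the slides):

```python
import numpy as np

# Arbitrary example matrix (not from the slides)
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Reduced SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values are non-negative and sorted in decreasing order
assert all(s[i] >= s[i + 1] for i in range(len(s) - 1))
assert all(v >= 0 for v in s)

# U and V are column orthonormal: U^T U = I and V^T V = I
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))

# The factors reconstruct A exactly
assert np.allclose(U @ np.diag(s) @ Vt, A)
```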

4/20/2020 Tim Althoff, UW CS547: Machine Learning for Big Data, http://www.cs.washington.edu/cse547 2

SLIDE 3

High dim. data

Locality sensitive hashing, Clustering, Dimensionality reduction

Graph data

PageRank, SimRank Community Detection Spam Detection

Infinite data

Sampling data streams Filtering data streams Queries on streams

Machine learning

SVM Decision Trees Perceptron, kNN

Apps

Recommender systems, Association Rules, Duplicate document detection

SLIDE 4

 Customer X

▪ Buys Metallica CD ▪ Buys Megadeth CD

 Customer Y

▪ Does search on Metallica ▪ Recommender system suggests Megadeth from data collected about customer X

SLIDE 5


Examples: products, web sites, blogs, news items, …

[Figure: items reached via Search vs. via Recommendations]

SLIDE 6

 Shelf space is a scarce commodity for

traditional retailers

▪ Also: TV networks, movie theaters,…

 Web enables near-zero-cost dissemination of information about products

▪ From scarcity to abundance

 More choice necessitates better filters:

▪ Recommendation engines ▪ Association rules: How Into Thin Air made Touching the Void a bestseller:

http://www.wired.com/wired/archive/12.10/tail.html

SLIDE 7


Source: Chris Anderson (2004)

SLIDE 8


Read http://www.wired.com/wired/archive/12.10/tail.html to learn more!

SLIDE 9

 Editorial and hand curated

▪ List of favorites ▪ Lists of “essential” items

 Simple aggregates

▪ Top 10, Most Popular, Recent Uploads

 Tailored to individual users

▪ Amazon, Netflix, …


Today’s class

SLIDE 10

 X = set of Customers
 S = set of Items
 Utility function u: X × S → R

▪ R = set of ratings
▪ R is a totally ordered set
▪ e.g., 1-5 stars, real number in [0,1]
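A minimal sketch of this formal model in code, storing the utility matrix sparsely; the names and ratings below are illustrative, not from the slides:

```python
# Sparse utility matrix: only known ratings are stored,
# keyed by (customer, item); everything else is unknown.
utility = {
    ("Alice", "Avatar"): 5,   # 1-5 star ratings
    ("Alice", "Matrix"): 2,
    ("Bob", "Pirates"): 4,
}

def u(x, s):
    """Utility function u: X x S -> R; None means the rating is unknown."""
    return utility.get((x, s))

assert u("Alice", "Avatar") == 5
assert u("Bob", "Avatar") is None   # most entries of the matrix are unknown
```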

SLIDE 11

[Utility matrix example: rows Alice, Bob, Carol, David; columns Avatar, LOTR, Matrix, Pirates; a few known ratings (values such as 0.2, 0.3, 0.4, 0.5, 1), most entries unknown]

SLIDE 12

 (1) Gathering “known” ratings for matrix

▪ How to collect the data in the utility matrix

 (2) Extrapolating unknown ratings from the known ones

▪ Mainly interested in high unknown ratings
▪ We are not interested in knowing what you don’t like but what you like

 (3) Evaluating extrapolation methods

▪ How to measure success/performance of recommendation methods

SLIDE 13

 Explicit

▪ Ask people to rate items
▪ Doesn’t work well in practice – people don’t like being bothered
▪ Crowdsourcing: Pay people to label items

 Implicit

▪ Learn ratings from user actions
▪ E.g., purchase implies high rating
▪ E.g., add to playlist, play in full, skip song…
▪ What about low ratings?

SLIDE 14

 Key problem: Utility matrix U is sparse

▪ Most people have not rated most items
▪ Cold Start Problem:
  ▪ New items have no ratings
  ▪ New users have no history

 Three approaches to recommender systems:

▪ 1) Content-based
▪ 2) Collaborative
▪ 3) Latent factor based


Today!

SLIDE 15
SLIDE 16

 Main idea: Recommend items to customer x similar to previous items rated highly by x

Example:

 Movie recommendations

▪ Recommend movies with same actor(s), director, genre, …

 Websites, blogs, news

▪ Recommend other sites with “similar” content

SLIDE 17

[Diagram: from items the user likes, build item profiles (features such as red, circles, triangles), aggregate them into a user profile, then match the user profile against item profiles to recommend new items]

SLIDE 18

 For each item, create an item profile
 Profile is a set (vector) of features

▪ Movies: author, title, actor, director, …
▪ Text: Set of “important” words in document

 How to pick important features?

▪ Usual heuristic from text mining is TF-IDF (Term Frequency * Inverse Doc Frequency)
▪ Term … Feature
▪ Document … Item

SLIDE 19

fij = frequency of term (feature) i in doc (item) j

TFij = fij / maxk fkj

ni = number of docs that mention term i
N = total number of docs

IDFi = log(N / ni)

TF-IDF score: wij = TFij × IDFi

Doc profile = set of words with highest TF-IDF scores, together with their scores

4/20/2020 Tim Althoff, UW CS547: Machine Learning for Big Data, http://www.cs.washington.edu/cse547 19

Note: we normalize TF to discount for “longer” documents

TFij is large when term i appears often in doc j; IDFi is large when term i appears in very few documents
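A minimal sketch of the TF-IDF scoring above, using the usual max-normalized TF and a base-2 log for IDF (the log base is a convention choice); the tiny corpus is illustrative:

```python
import math
from collections import Counter

# Tiny illustrative corpus (not from the slides)
docs = [
    "metallica metallica megadeth rock",
    "megadeth rock concert",
    "classical piano concert",
]

N = len(docs)
tokenized = [doc.split() for doc in docs]
n = Counter()                      # n[i] = number of docs that mention term i
for words in tokenized:
    n.update(set(words))

def tf_idf(term, j):
    counts = Counter(tokenized[j])
    tf = counts[term] / max(counts.values())   # normalize by most frequent term
    idf = math.log2(N / n[term])
    return tf * idf

# "metallica" appears only in doc 0 and is its most frequent term
assert abs(tf_idf("metallica", 0) - math.log2(3)) < 1e-12
# "rock" appears in 2 of 3 docs, so its IDF (and score) is lower
assert tf_idf("rock", 0) < tf_idf("metallica", 0)
```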

SLIDE 20

 User profile possibilities:

▪ Weighted average of rated item profiles
▪ Variation: weight by difference from average rating for item

 Prediction heuristic: Cosine similarity of user and item profiles

▪ Given user profile x and item profile i, estimate u(x, i) = cos(x, i) = (x · i) / (||x|| · ||i||)

 How do you quickly find items closest to x?

▪ Job for LSH!
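The prediction heuristic above can be sketched directly; the feature vectors below are illustrative (e.g., one-hot genre/actor features), not from the slides:

```python
import math

def cos(a, b):
    """Cosine similarity: (a . b) / (||a|| * ||b||)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

user_profile = [0.9, 0.1, 0.0]          # weighted average of rated item profiles
items = {
    "item_A": [1.0, 0.0, 0.0],
    "item_B": [0.0, 1.0, 0.0],
}

# Recommend the item whose profile is closest in angle to the user's
best = max(items, key=lambda i: cos(user_profile, items[i]))
assert best == "item_A"
```

At scale one would not score every item this way; that is the nearest-neighbor search problem LSH addresses.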

SLIDE 21

 +: No need for data on other users

▪ No cold-start or sparsity problems

 +: Able to recommend to users with unique tastes

 +: Able to recommend new & unpopular items

▪ No first-rater problem

 +: Able to provide explanations

▪ Can provide explanations of recommended items by listing content-features that caused an item to be recommended

SLIDE 22

 –: Finding the appropriate features is hard

▪ E.g., images, movies, music

 –: Recommendations for new users

▪ How to build a user profile?

 –: Overspecialization

▪ Never recommends items outside user’s content profile
▪ People might have multiple interests
▪ Unable to exploit quality judgments of other users!

SLIDE 23

Harnessing quality judgments of other users

SLIDE 24

 Consider user x

 Find set N of other users whose ratings are “similar” to x’s ratings

 Estimate x’s ratings based on ratings of users in N


SLIDE 25

 Let rx be the vector of user x’s ratings

 Jaccard similarity metric

▪ Problem: Ignores the value of the rating

 Cosine similarity metric

▪ sim(x, y) = cos(rx, ry) = (rx · ry) / (||rx|| · ||ry||)
▪ Problem: Treats some missing ratings as “negative”

 Better: Pearson correlation coefficient

▪ Sxy = items rated by both users x and y
▪ sim(x, y) = Σs∈Sxy (rxs − r̄x)(rys − r̄y) / (√Σs∈Sxy (rxs − r̄x)² · √Σs∈Sxy (rys − r̄y)²)

4/20/2020 Tim Althoff, UW CS547: Machine Learning for Big Data, http://www.cs.washington.edu/cse547 25

rx = [*, _, _, *, ***]
ry = [*, _, **, **, _]

rx, ry as sets: rx = {1, 4, 5}; ry = {1, 3, 4}
rx, ry as points: rx = (1, 0, 0, 1, 3); ry = (1, 0, 2, 2, 0)

r̄x, r̄y … avg. rating of x, y
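The two metrics can be computed on the slide's example vectors (blanks treated as 0, as on the slide):

```python
import math

rx = [1, 0, 0, 1, 3]
ry = [1, 0, 2, 2, 0]

# Jaccard works on the sets of rated items and ignores the rating values
sx = {i for i, r in enumerate(rx) if r > 0}   # positions of x's ratings
sy = {i for i, r in enumerate(ry) if r > 0}   # positions of y's ratings
jaccard = len(sx & sy) / len(sx | sy)
assert jaccard == 2 / 4

# Cosine uses the values, but treats missing ratings as 0 ("negative")
dot = sum(a * b for a, b in zip(rx, ry))
cosine = dot / (math.sqrt(sum(a * a for a in rx)) *
                math.sqrt(sum(b * b for b in ry)))
assert abs(cosine - 3 / (3 * math.sqrt(11))) < 1e-12
```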

SLIDE 26

 Intuitively we want: sim(A, B) > sim(A, C)
 Jaccard similarity: 1/5 < 2/4
 Cosine similarity: 0.380 > 0.322

▪ Considers missing ratings as “negative”
▪ Solution: subtract the (row) mean

4/20/2020 Tim Althoff, UW CS547: Machine Learning for Big Data, http://www.cs.washington.edu/cse547 26

sim A,B vs. A,C: 0.092 > -0.559

Notice cosine sim. is correlation when data is centered at 0

Cosine sim: sim(x, y) = Σi rxi · ryi / (√Σi rxi² · √Σi ryi²)
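The numbers quoted on this slide can be reproduced. The utility matrix itself is not in this transcript; the vectors below are the standard MMDS textbook example of users A, B, C that yields exactly the quoted values (missing ratings stored as 0):

```python
import math

A = [4, 0, 0, 5, 1, 0, 0]
B = [5, 5, 4, 0, 0, 0, 0]
C = [0, 0, 0, 2, 4, 5, 0]

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) *
                  math.sqrt(sum(b * b for b in y)))

def center(x):
    """Subtract the row mean from the rated entries only; 0s stay 0."""
    rated = [r for r in x if r > 0]
    mean = sum(rated) / len(rated)
    return [r - mean if r > 0 else 0.0 for r in x]

# Plain cosine ranks A,B above A,C, but only barely, because
# missing ratings act as 0 ("negative")
assert round(cosine(A, B), 3) == 0.380
assert round(cosine(A, C), 3) == 0.322

# After subtracting the row mean (= Pearson correlation),
# the separation is much clearer
assert round(cosine(center(A), center(B)), 3) == 0.092
assert round(cosine(center(A), center(C)), 3) == -0.559
```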

SLIDE 27

From similarity metric to recommendations:

 Let rx be the vector of user x’s ratings
 Let N be the set of k users most similar to x who have rated item i

 Prediction for item i of user x:

▪ rxi = (1/k) Σy∈N ryi

▪ Or even better: rxi = Σy∈N sxy · ryi / Σy∈N sxy

 Many other tricks possible…


Shorthand: sxy = sim(x, y)
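The two prediction rules above, side by side; similarities and ratings are illustrative values, not from the slides:

```python
# N: the k users most similar to x who rated item i
# s[y] = sim(x, y); r[y] = user y's rating of item i
N = ["u1", "u2", "u3"]
s = {"u1": 0.9, "u2": 0.5, "u3": 0.1}
r = {"u1": 5, "u2": 3, "u3": 1}

k = len(N)
r_plain = sum(r[y] for y in N) / k                              # (1/k) sum
r_weighted = sum(s[y] * r[y] for y in N) / sum(s[y] for y in N) # sim-weighted

assert r_plain == 3.0
# The weighted rule leans toward the most similar user's rating (5)
assert r_weighted > r_plain
```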

SLIDE 28

 So far: User-user collaborative filtering

 Another view: Item-item

▪ For item i, find other similar items
▪ Estimate rating for item i based on ratings for similar items
▪ Can use same similarity metrics and prediction functions as in user-user model

rxi = Σj∈N(i;x) sij · rxj / Σj∈N(i;x) sij

sij … similarity of items i and j
rxj … rating of user x on item j
N(i; x) … set of items rated by x and similar to i
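A minimal sketch of the item-item rule; the function name, similarities, and ratings are illustrative:

```python
def predict(sim_i, ratings_x, k=2):
    """Item-item prediction r_xi = sum_j s_ij * r_xj / sum_j s_ij.

    sim_i:     {item j: s_ij}, similarities of other items to item i
    ratings_x: {item j: r_xj}, the items user x has rated
    """
    # N(i; x): the k items most similar to i that x has rated
    neighbors = sorted(
        (j for j in sim_i if j in ratings_x),
        key=lambda j: sim_i[j], reverse=True,
    )[:k]
    num = sum(sim_i[j] * ratings_x[j] for j in neighbors)
    den = sum(sim_i[j] for j in neighbors)
    return num / den

sim_to_i = {"m3": 0.41, "m6": 0.59, "m9": -0.10}
x_ratings = {"m3": 2, "m6": 3}
# (0.59*3 + 0.41*2) / (0.59 + 0.41) = 2.59
assert abs(predict(sim_to_i, x_ratings) - 2.59) < 1e-6
```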

SLIDE 29

[Utility matrix: 12 users × 6 movies]

  • blank entry: unknown rating
  • filled entry: rating between 1 to 5
SLIDE 30

[Same 12 × 6 utility matrix; the rating of movie 1 by user 5 is marked “?”]

  • estimate rating of movie 1 by user 5

SLIDE 31

[Same utility matrix with the “?” entry for movie 1, user 5]

Neighbor selection: Identify movies similar to movie 1, rated by user 5

s1,m (similarity of each movie m to movie 1): 1.00, −0.18, 0.41, −0.10, −0.31, 0.59

Here we use Pearson correlation as similarity:
1) Subtract mean rating mi from each movie i
   m1 = (1+3+5+5+4)/5 = 3.6
   row 1: [−2.6, 0, −0.6, 0, 0, 1.4, 0, 0, 1.4, 0, 0.4, 0]
2) Compute dot products between rows

SLIDE 32


Compute similarity weights:

s1,3 = 0.41, s1,6 = 0.59

SLIDE 33

[Same utility matrix; the “?” entry is filled with the predicted value 2.6]

Predict by taking weighted average:

r1,5 = (0.41·2 + 0.59·3) / (0.41 + 0.59) = 2.6

In general: rix = Σj∈N(i;x) sij · rjx / Σj sij
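Checking the worked example: the weighted average of user 5's ratings of the two neighbor movies (movie 3 rated 2, movie 6 rated 3):

```python
# Similarity weights and ratings from the worked example above
s13, s16 = 0.41, 0.59
r53, r56 = 2, 3

r15 = (s13 * r53 + s16 * r56) / (s13 + s16)
assert round(r15, 1) == 2.6
```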

SLIDE 34

 Define similarity sij of items i and j

 Select k nearest neighbors N(i; x)

▪ Items most similar to i, that were rated by x

 Estimate rating rxi as the weighted average:


baseline estimate for rxi: bxi = μ + bx + bi

μ = overall mean movie rating
bx = rating deviation of user x = (avg. rating of user x) – μ
bi = rating deviation of movie i

Before: rxi = Σj∈N(i;x) sij · rxj / Σj∈N(i;x) sij

Now: rxi = bxi + Σj∈N(i;x) sij · (rxj – bxj) / Σj∈N(i;x) sij
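The baseline-corrected rule can be sketched numerically; every value below (mean, deviations, similarities, ratings) is illustrative:

```python
mu = 3.7           # overall mean movie rating
b_x = -0.5         # user x rates 0.5 below average
b = {"i": 0.8, "j1": 0.2, "j2": -0.3}   # per-movie rating deviations

def baseline(item):
    """b_xi = mu + b_x + b_i."""
    return mu + b_x + b[item]

s = {"j1": 0.6, "j2": 0.4}   # s_ij for neighbors of i rated by x
r_x = {"j1": 4, "j2": 3}     # x's actual ratings of those neighbors

# Weighted average of deviations from each neighbor's own baseline
correction = (sum(s[j] * (r_x[j] - baseline(j)) for j in s)
              / sum(s.values()))
r_xi = baseline("i") + correction

assert abs(baseline("i") - 4.0) < 1e-9   # 3.7 - 0.5 + 0.8
assert abs(r_xi - 4.4) < 1e-9            # baseline plus a +0.4 correction
```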

SLIDE 35

[Utility matrix example: rows Alice, Bob, Carol, David; columns Avatar, LOTR, Matrix, Pirates]

 In practice, it has been observed that item-item often works better than user-user

 Why? Items are simpler, users have multiple tastes

SLIDE 36

 + Works for any kind of item

▪ No feature selection needed

 - Cold Start:

▪ Need enough users in the system to find a match

 - Sparsity:

▪ The user/ratings matrix is sparse
▪ Hard to find users that have rated the same items

 - First rater:

▪ Cannot recommend an item that has not been previously rated
▪ New items, esoteric items

 - Popularity bias:

▪ Cannot recommend items to someone with unique taste
▪ Tends to recommend popular items

SLIDE 37

 Implement two or more different recommenders and combine predictions

▪ Perhaps using a linear model

 Add content-based methods to collaborative filtering

▪ Item profiles for new item problem
▪ Demographics to deal with new user problem
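A minimal sketch of combining two recommenders with a linear model; the blend weight is illustrative and would normally be fit on held-out data:

```python
def blend(score_content, score_collab, alpha=0.3):
    """Linear combination of a content-based and a collaborative score."""
    return alpha * score_content + (1 - alpha) * score_collab

# Content-based says 4.0, collaborative says 3.0:
# 0.3*4.0 + 0.7*3.0 = 3.3
assert abs(blend(4.0, 3.0) - 3.3) < 1e-9
```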

SLIDE 38
  • Evaluation
  • Error metrics
  • Complexity / Speed

SLIDE 39

[Utility matrix of users × movies with known ratings]

SLIDE 40

[Same utility matrix; some entries withheld as the Test Data Set, marked “?”]

SLIDE 41

 Compare predictions with known ratings

▪ Root-mean-square error (RMSE)

RMSE = √( (1/N) Σxi (rxi − r*xi)² ), where rxi is the predicted rating and r*xi is the true rating of x on i

▪ N is the number of points we are making comparisons on

▪ Rank Correlation:
  ▪ Spearman’s correlation between system’s and user’s complete rankings

▪ Precision at top 10 (or k):
  ▪ % of those in top 10 (or k)

 Another approach: 0/1 model

▪ Coverage:
  ▪ Number of items/users for which the system can make predictions

▪ Precision:
  ▪ Accuracy of predictions

▪ Receiver operating characteristic (ROC)
  ▪ Tradeoff curve between false positives and false negatives


Idea: ignore lowly-ranked items
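RMSE and precision-at-k can be sketched as follows; the predictions, true ratings, and relevance threshold are illustrative:

```python
import math

pred = [3.5, 4.0, 2.0, 5.0]   # predicted ratings
true = [3.0, 4.0, 1.0, 5.0]   # held-out true ratings

# RMSE = sqrt( (1/N) * sum (pred - true)^2 )
N = len(pred)
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / N)
assert abs(rmse - math.sqrt(1.25 / 4)) < 1e-12

# Precision at top k: fraction of the k highest-predicted items that are
# actually relevant (here, "relevant" means a true rating >= 4)
k = 2
top_k = sorted(range(N), key=lambda i: pred[i], reverse=True)[:k]
precision_at_k = sum(true[i] >= 4 for i in top_k) / k
assert precision_at_k == 1.0
```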


SLIDE 42

 Narrow focus on accuracy sometimes misses the point

▪ Prediction Diversity
▪ Prediction Context
▪ Order of predictions

 In practice, we care only to predict high ratings:

▪ RMSE might penalize a method that does well for high ratings and badly for others

SLIDE 43

 Expensive step is finding k most similar customers: O(|X|)

 Too expensive to do at runtime

▪ Could pre-compute

 Pre-computation takes time O(k · |X|)

▪ X … set of customers

 We already know how to do this!

▪ Near-neighbor search in high dimensions (LSH)
▪ Clustering
▪ Dimensionality reduction

SLIDE 44

 Leverage all the data

▪ Don’t try to reduce data size in an effort to make fancy algorithms work
▪ Simple methods on large data do best

 Add more data

▪ e.g., add IMDB data on genres

 More data beats better algorithms

http://anand.typepad.com/datawocky/2008/03/more-data-usual.html

SLIDE 45
SLIDE 46

 Training data

▪ 100 million ratings, 480,000 users, 17,770 movies

▪ Lots of ratings – still 99% sparsity!

▪ 6 years of data: 2000-2005

 Test data (private)

▪ Last few ratings of each user (2.8 million)
▪ Evaluation criterion: root mean squared error (RMSE)
▪ Netflix Cinematch RMSE (production): 0.9514

 Competition

▪ 2700+ teams
▪ $1 million prize for 10% improvement on Cinematch

SLIDE 47

 Next topic: Recommendations via Latent Factor models


Overview of Coffee Varieties

[Bubble chart of coffee products: x-axis Exoticness / Price, y-axis Complexity of Flavor; clusters labeled Flavored, Exotic, and Popular Roasts and Blends]

The bubbles above represent products sized by sales volume. Products close to each other are recommended to each other.

SLIDE 48

[Scatter plot of movies on two latent dimensions: “Geared towards females” vs. “Geared towards males”, and “serious” vs. “less serious”. Example movies: The Princess Diaries, The Lion King, Braveheart, Independence Day, Amadeus, The Color Purple, Ocean’s 11, Sense and Sensibility, Lethal Weapon, Dumb and Dumber; users Gus and Dave are also placed in the space]

[slide from winning BellKor team]

SLIDE 49

Koren, Bell, Volinsky, IEEE Computer, 2009
