

SLIDE 1

Statistical Learning and Optimization Based on Comparative Judgments


Rob Nowak www.ece.wisc.edu/~nowak OSL, Les Houches, January 10, 2013
SLIDE 2

Learning from Comparative Judgements

[Figure: model space and data space, linked by the question "Is model A better than B?"]

  • L. L. Thurstone (law of comparative judgement)

Humans are much more reliable and consistent at making comparative judgements than at giving numerical ratings or evaluations (Bijmolt and Wedel 1995; Stewart, Brown, and Chater 2005). Answers to comparisons supply bits of information, which can drive active learning.
SLIDE 3

Machine Learning from Human Judgements

Examples: recommendation systems, document classification, and optimizing experimentation (a data scientist choosing experiments and receiving labels).

Challenge: Computing is cheap, but human assistance/guidance is expensive.
Goal: Optimize such systems with as little human involvement as possible.
SLIDE 4

Learning from Paired Comparisons

  • 1. Derivative Free Optimization using Human Subjects: minimizing a convex function
  • 2. Ranking from Pairwise Comparisons: ranking objects that embed into a low-dimensional space
SLIDE 5

Optimization Based on Human Judgements

Human oracles can provide function values or comparisons, but not function gradients. Methods that don't use gradients are called Derivative Free Optimization (DFO). [Figure: convex function to be minimized]

SLIDE 6

A Familiar Application

[Figure: eye exam; "better or worse?" comparisons over spherical and cylindrical corrections lead to the optimal prescription]
SLIDE 7

Personalized Search

Profile vector wA ∈ R^d

Results ← SEARCH(query = "sebastian bach", wA)

With wA = wold:
  • Johann Sebastian Bach (1685-1750), composer

With wA = wnew:
  • Sebastian Bach (1968-present), heavy metal singer and frontman of "Skid Row"
SLIDE 8

Optimization Based on Pairwise Comparisons

  • Assume that the (unknown) function f to be optimized is strongly convex with Lipschitz gradients
  • The function will be minimized by asking pairwise comparisons of the form: is f(x) > f(y)?
  • Assume that the answers are probably correct: for some δ > 0,
    P(answer = sign(f(x) − f(y))) ≥ 1/2 + δ
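To make this model concrete, here is a minimal sketch of such an oracle (the function name, the fixed δ, and the seeded generator are illustrative assumptions):

```python
import numpy as np

def comparison_oracle(f, x, y, delta=0.3, rng=np.random.default_rng(0)):
    """Noisy pairwise comparison: a guess at sign(f(x) - f(y)).

    Correct with probability 1/2 + delta, a simple instance of the
    slide's model P(answer = sign(f(x) - f(y))) >= 1/2 + delta.
    """
    truth = np.sign(f(x) - f(y))
    # report the true answer with probability 1/2 + delta, else flip it
    return truth if rng.random() < 0.5 + delta else -truth
```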

SLIDE 9

Optimization based on Pairwise Comparisons

[Figure: coordinate-descent iterates x0, x1, . . . , x4 on the level sets of a convex function]

Line search iteratively reduces the interval containing the minimum: begin with a large interval [y0−, y0+]; its midpoint y0 is the estimate of the minimizer.

Optimization with Pairwise Comparisons
initialize: x0 = random point
for n = 0, 1, 2, . . .
  1) select one of d coordinates uniformly at random and consider the line along that coordinate passing through xn
  2) minimize along the coordinate using pairwise comparisons and binary search
  3) xn+1 = approximate minimizer
SLIDE 10

Optimization based on Pairwise Comparisons

[Figure: coordinate-descent iterates, as on the previous slide]

Line search, continued: split the intervals [y0−, y0] and [y0, y0+] and compare the function values at the split points with f(y0).
SLIDE 11

Optimization based on Pairwise Comparisons

[Figure: coordinate-descent iterates, as on the previous slide]

Line search, continued: reduce to the smallest interval [y1−, y1+] containing the minimum over these points; its midpoint y1 is the new estimate.
SLIDE 12

Optimization based on Pairwise Comparisons

[Figure: coordinate-descent iterates, as on the previous slide]

Line search, continued: repeat on [y1−, y1+] . . .
SLIDE 13

Optimization based on Pairwise Comparisons

[Figure: coordinate-descent iterates, as on the previous slide]

Line search, continued: repeating yields y2 and the interval [y2−, y2+], and so on until the interval containing the minimum is small enough.
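Putting the pieces together, a minimal runnable sketch of the procedure above (the fixed search box [−1, 1]^d, the iteration counts, and the helper names are illustrative assumptions; `comparison_oracle` is the sketch from Slide 8):

```python
import numpy as np

def pairwise_line_search(f, oracle, x, coord, lo=-1.0, hi=1.0, iters=20):
    """Binary line search along one coordinate using only pairwise comparisons."""
    e = np.zeros_like(x)
    e[coord] = 1.0
    point = lambda t: x + (t - x[coord]) * e   # x with coordinate `coord` set to t
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        left, right = 0.5 * (lo + mid), 0.5 * (mid + hi)
        # compare f at the split points with f(mid); keep the smallest
        # subinterval guaranteed to contain the minimizer (Slides 10-11)
        if oracle(f, point(left), point(mid)) < 0:      # f(left) < f(mid) reported
            hi = mid
        elif oracle(f, point(right), point(mid)) < 0:   # f(right) < f(mid) reported
            lo = mid
        else:
            lo, hi = left, right
    return point(0.5 * (lo + hi))

def comparison_descent(f, oracle, d, steps=50, rng=np.random.default_rng(1)):
    """Coordinate descent driven entirely by pairwise comparisons."""
    x = rng.uniform(-1.0, 1.0, d)                       # x0 = random point
    for _ in range(steps):                              # for n = 0, 1, 2, ...
        coord = int(rng.integers(d))                    # 1) random coordinate
        x = pairwise_line_search(f, oracle, x, coord)   # 2) binary line search
    return x                                            # 3) approximate minimizer
```

For example, `comparison_descent(lambda v: np.sum((v - 0.3)**2), comparison_oracle, d=5)` homes in on the minimizer at 0.3 when δ is close to 1/2; for noisier oracles, each call would be wrapped in the majority vote of Slide 14.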
SLIDE 14

Convergence Analysis

Noiseless Case: each line search requires (1/2) log(d/ε) comparisons
⇒ total of n ≈ d log(1/ε) log(d/ε) comparisons
⇒ ε ≈ exp(−√(n/d))

Noisy Case: probably correct answers to comparisons, P(answer = sign(f(x) − f(y))) ≥ 1/2 + δ; take a majority vote of repeated comparisons to mitigate the noise.

Bounded Noise (δ ≥ δ0 > 0): line searches require C log(d/ε) comparisons, where C > 1/2 depends on δ0
⇒ ε ≈ exp(−√(n/(dC)))

Unbounded Noise (δ ∝ |f(x) − f(y)|): if we want error := E[f(xk) − f(x∗)] ≤ ε, we must solve k ≈ d log(1/ε) line searches (standard coordinate descent bound), and each must be at least √(ε/d)-accurate; line searches then require roughly d²/ε² comparisons
⇒ ε ≈ √(d³/n)
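A sketch of the majority-vote subroutine (the repeat count is an illustrative choice; the Chernoff scaling in the comment is a standard fact):

```python
def majority_vote(f, x, y, oracle, repeats=15):
    """Majority vote over repeated noisy comparisons.

    If each answer is correct with probability >= 1/2 + delta0, the
    majority of O(log(1/alpha) / delta0**2) repeats is wrong with
    probability at most alpha, by a Chernoff bound.
    """
    votes = sum(oracle(f, x, y) for _ in range(repeats))
    return 1.0 if votes > 0 else -1.0
```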

SLIDE 15

Lower Bounds

Consider f0(x) = |x + ε|² and f1(x) = |x − ε|² with ε ∼ n^(−1/4):
  • KL divergence between the response distributions = constant
  • squared distance between the minima ∼ n^(−1/2)

For unbounded noise, δ ∝ |f(x) − f(y)|, the Kullback-Leibler divergence between the responses to "f0(x) > f0(y)?" and "f1(x) > f1(y)?" is O(ε⁴), so the KL divergence between n responses is O(nε⁴).

⇒ P(f(xn) − f(x∗) ≥ n^(−1/2)) ≥ constant

This matches the O(n^(−1/2)) upper bound of the algorithm; in R^d the lower bound is √(d/n). Jamieson, Recht, RN (2012)
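A short worked version of the KL step under the slide's model, where the correct-response probability is 1/2 + δ with δ ∝ |f(x) − f(y)| (the O(ε²) gap between the two response probabilities near the minima is the step the slide leaves implicit):

```latex
\mathrm{KL}\big(\mathrm{Ber}(\tfrac12+\delta_0)\,\big\|\,\mathrm{Ber}(\tfrac12+\delta_1)\big)
  = O\big((\delta_0-\delta_1)^2\big) = O(\epsilon^4)
  \quad\text{since}\quad |\delta_0-\delta_1| = O(\epsilon^2),
\qquad
\mathrm{KL}_n = O(n\epsilon^4) = O(1)\ \text{for}\ \epsilon \sim n^{-1/4}.
```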
SLIDE 16

A Surprise

Could we do better with function evaluations (e.g., ratings instead of comparisons)? Suppose we can obtain noisy function evaluations of the form f(x) + noise.

[Figure: three points with f(x) = 10, f(y) = 9, f(z) = 1; comparisons reveal only f(y) < f(x) and f(z) < f(x)]

Function values seem to provide much more information than comparisons alone. Indeed, if we could measure noisy gradients (and the function is strongly convex), then an O(d/n) convergence rate is possible (Nemirovski et al. 2009).

But √(d²/n) is a lower bound on the optimization error with noisy function evaluations (O. Shamir 2012), against the √(d³/n) upper bound on the optimization error with noisy pairwise comparisons: evaluations give at best a small improvement over comparisons. See Agarwal, Dekel, Xiao (2010) for similar upper bounds for function evaluations.
SLIDE 17

Preference Learning

Bartender: "What beer would you like?"
Philippe: "Hmm... I prefer French wine"
Bartender: "Try these two samples. Do you prefer A or B?"
Philippe: "B"
Bartender: "Ok, try these two: C or D?" ....

SLIDE 18

Ranking Based on Pairwise Comparisons

Consider 10 beers ranked from best to worst: D < G < I < C < J < E < A < H < B < F

Which pairwise comparisons should we ask? How many are needed?

Assumption: responses to pairwise comparisons are consistent with the ranking. The full comparison matrix (entry +1 if the row beer is ranked above the column beer):

     A  B  C  D  E  F  G  H  I  J
A    0  1 -1 -1 -1  1 -1  1 -1 -1
B   -1  0 -1 -1 -1  1 -1 -1 -1 -1
C    1  1  0 -1  1  1 -1  1 -1  1
D    1  1  1  0  1  1  1  1  1  1
E    1  1 -1 -1  0  1 -1  1 -1 -1
F   -1 -1 -1 -1 -1  0 -1 -1 -1 -1
G    1  1  1 -1  1  1  0  1  1  1
H   -1  1 -1 -1 -1  1 -1  0 -1 -1
I    1  1  1 -1  1  1 -1  1  0  1
J    1  1 -1 -1  1  1 -1  1 -1  0
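A tiny sketch reconstructing this matrix from the stated ranking (variable names are illustrative):

```python
import numpy as np

ranking = list("DGICJEAHBF")                  # best to worst, from the slide
beers = sorted(ranking)                       # A, B, ..., J
pos = {b: ranking.index(b) for b in beers}    # smaller position = better
# M[i, j] = +1 if beer i is ranked above beer j, -1 if below, 0 on the diagonal
M = np.array([[np.sign(pos[j] - pos[i]) for j in beers] for i in beers])
```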
SLIDE 19

Ranking Based on Pairwise Comparisons

Consider 10 beers ranked from best to worst: D < G < I < C < J < E < A < H < B < F

Problem: there are n! possible rankings, so identifying one requires n log n bits of information.

Select m pairwise comparisons at random:
  • perfect recovery: almost all pairs must be compared, i.e., about n(n − 1)/2 comparisons. That's a lot of beer!
  • approximate recovery: fraction of pairs misordered ≤ c n log n / m

Adaptive selection: binary insertion sort also requires n log n comparisons
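The n log n figure is the information content of a ranking, spelled out via Stirling's approximation (a standard fact, stated here for completeness):

```latex
\log_2 n! \;=\; n\log_2 n \;-\; n\log_2 e \;+\; O(\log n) \;\approx\; n\log_2 n
\ \text{bits},
\qquad\text{and each yes/no comparison yields at most one bit.}
```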

SLIDE 20

Low-Dimensional Assumption: Beer Space

Suppose beers can be embedded (according to their characteristics) into a low-dimensional Euclidean space.

[Figure: beers A-G and Philippe's latent preference point w in "beer space" (e.g., hoppiness, lightness, maltiness, ...)]

‖xi − w‖ < ‖xj − w‖ ⇔ xi ≻ xj
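As code, the assumed preference model is a one-line sketch (names are illustrative):

```python
import numpy as np

def prefers(x_i, x_j, w):
    """True iff x_i is preferred to x_j under the distance model
    ||x_i - w|| < ||x_j - w||, with w the latent ideal point."""
    return np.linalg.norm(x_i - w) < np.linalg.norm(x_j - w)
```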
SLIDE 21

Ranking According to Distance

[Figure: beers A-G and a preference point w; the induced ranking is C < A < B < E < G < D < F]

SLIDE 22

Ranking According to Distance

[Figure: a different preference point w induces the ranking E < B < F < G < C < A < D]

SLIDE 23

Ranking According to Distance

Goal: Determine the ranking by asking comparisons like "Do you prefer A or B?"

[Figure: a third preference point w induces the ranking D < G < C < E < A < B < F]

... now there are at most n^(2d) rankings (instead of n!), and so in principle no more than 2d log n bits of information are needed.

SLIDE 24

Lazy Binary Search

Consider n objects x1, x2, . . . , xn ∈ R^d. The binary information we can gather is qi,j ≡ "do you prefer xi or xj?". Many comparisons are redundant because the objects embed in R^d, and therefore it may be possible to rank correctly based on a small subset.

input: x1, . . . , xn ∈ R^d
initialize: put x1, . . . , xn in uniformly random order
for k = 2, . . . , n
  for i = 1, . . . , k − 1
    if qi,k is ambiguous given {qi,j : i, j < k}, then ask for the pairwise comparison
    else impute qi,k from {qi,j : i, j < k}
output: ranking of x1, . . . , xn consistent with all pairwise comparisons

Optimal selection of a sequence of qi,j would require a computationally difficult combinatorial search; the ambiguity test above, in contrast, is a simple linear program.
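The slide only says the ambiguity test is a simple linear program; here is one way that test can look under the distance model of Slide 20 (a sketch: the scipy usage, the helper names, and relaxing strict inequalities to ≤ are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def halfspace(xi, xj):
    """Constraint on w implied by 'xi preferred to xj':
    ||xi - w|| < ||xj - w||  <=>  2*(xj - xi)^T w <= ||xj||^2 - ||xi||^2
    (strict inequality relaxed to <= for the LP)."""
    return 2.0 * (xj - xi), float(xj @ xj - xi @ xi)

def feasible(constraints, d):
    """Is there a w in R^d satisfying all halfspace constraints A w <= b?"""
    if not constraints:
        return True
    A = np.array([a for a, _ in constraints])
    b = np.array([c for _, c in constraints])
    res = linprog(np.zeros(d), A_ub=A, b_ub=b, bounds=[(None, None)] * d)
    return res.success

def ambiguous(xi, xk, answered, d):
    """q_{i,k} is ambiguous iff both possible answers are consistent
    with the halfspaces from previously answered comparisons."""
    return (feasible(answered + [halfspace(xi, xk)], d) and
            feasible(answered + [halfspace(xk, xi)], d))
```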
SLIDE 25

Ranking and Geometry

Suppose we have ranked 4 beers. The ranking implies that Philippe's optimal preference point lies in the shaded region. [Figure: hyperplane arrangement with shaded cell]
SLIDE 26

Ranking and Geometry

Suppose we have ranked 4 beers; the ranking implies that Philippe's optimal preference point lies in the shaded region. [Figure: a new beer adds a query hyperplane] Answers to queries that intersect the shaded region are ambiguous; otherwise they are not.

Key Observation: most queries will not be ambiguous, therefore the expected total number of queries made by lazy binary search is about d log n.

  • K. Jamieson and RN (2011)
SLIDE 27

Ranking and Geometry

At the k-th step of the algorithm:
  # of d-cells ≈ k^(2d) / d!    (Coombs 1960; Buck 1943; Cover 1965)
  # of cells intersected by a query hyperplane ≈ k^(2(d−1)) / (d−1)!
⇒ P(ambiguous) ≈ d / k²
⇒ E[# ambiguous] ≈ d / k
⇒ E[# requested] ≈ Σ_{k=2}^{n} d/k ≈ d log n

Robust to noise and non-transitivity: tolerance to erroneous responses using d log² n queries (Jamieson & RN 2011)
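A quick numerical sanity check of the Σ d/k ≈ d log n step (the values of d and n are arbitrary illustrative choices):

```python
import numpy as np

d, n = 3, 100
expected = d * sum(1.0 / k for k in range(2, n + 1))   # sum of d/k for k = 2..n
print(round(expected, 1), round(d * np.log(n), 1))     # both scale as d log n
```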
SLIDE 28

BeerMapper

The BeerMapper app learns a person's ranking of beers by selecting pairwise comparisons using lazy binary search and a low-dimensional embedding based on key beer features.
SLIDE 29

BeerMapper - Under the Hood

The algorithm requires feature representations of the beers {x1, . . . , xn} ⊂ R^d.

Pipeline (example input: Two Hearted Ale, ~2500 natural-language reviews, http://www.ratebeer.com/beer/two-hearted-ale/1502/2/1/):
reviews for each beer → bag of words weighted by TF-IDF → 100 nearest neighbors using cosine distance → non-metric multidimensional scaling → embedding in 3 dimensions
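A sketch of this pipeline with scikit-learn (a minimal sketch assuming a toy corpus; feeding the full cosine-distance matrix to non-metric MDS is a simplification of the slide's 100-nearest-neighbor step):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

# reviews[i] = concatenated natural-language reviews for beer i (toy stand-ins)
reviews = ["citrus hops pine grapefruit bitter ipa",
           "roasty coffee chocolate dark stout",
           "crisp light pilsner grassy noble hops"]

# bag of words weighted by TF-IDF
Z = TfidfVectorizer().fit_transform(reviews)

# cosine distance: d(zi, zj) = 1 - (zi . zj) / (||zi|| ||zj||)
D = cosine_distances(Z)

# non-metric multidimensional scaling into 3 dimensions
X = MDS(n_components=3, metric=False,
        dissimilarity="precomputed").fit_transform(D)
```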

SLIDE 30

BeerMapper - Under the Hood

Pipeline step: bag of words weighted by TF-IDF. [Figure: Two Hearted Ale, weighted bag of words]

SLIDE 31

BeerMapper - Under the Hood

Pipeline step: get 100 nearest neighbors using cosine distance.

Weighted count vector for the ith beer: zi ∈ R^400,000
Cosine distance: d(zi, zj) = 1 − (ziᵀzj) / (‖zi‖ ‖zj‖)

Two Hearted Ale, nearest neighbors: Bear Republic Racer 5, Avery IPA, Stone India Pale Ale (IPA), Founders Centennial IPA, Smuttynose IPA, Anderson Valley Hop Ottin IPA, AleSmith IPA, BridgePort IPA, Boulder Beer Mojo IPA, Goose Island India Pale Ale, Great Divide Titan IPA, New Holland Mad Hatter Ale, Lagunitas India Pale Ale, Heavy Seas Loose Cannon Hop3, Sweetwater IPA, ...

SLIDE 32

BeerMapper - Under the Hood

Pipeline step: non-metric multidimensional scaling of the cosine distances (same nearest-neighbor data as the previous slide).
SLIDE 33

BeerMapper - Under the Hood

Final pipeline step: embedding in 3 dimensions (here via 15 nearest neighbors and non-metric MDS).

[Figure: 3-d embedding colored by style. Red = IPA, Green = Pale Ale, Magenta = Amber Ale, Cyan = Lager + Pilsener, Yellow = Belgians (light + dark), Black = Stout + Porter, Blue = everything else]

Sanity check: styles should cluster together and similar styles should be close.
SLIDE 34

Machine Learning from Comparative Judgements

  • Derivative Free Optimization using Human Subjects
  • Ranking from Pairwise Comparisons

Humans are much more reliable and consistent at making comparative judgements than at giving numerical ratings or evaluations.
Challenge: Computing is cheap, but human assistance/guidance is expensive.
Goal: Optimize such systems with as little human involvement as possible.
"Binary search" procedures can play a role in active learning.
SLIDE 35

References

  • K. Jamieson, B. Recht, and R. Nowak, “Query complexity of derivative free optimization,” NIPS 2012
  • S. Tong and D. Koller, “Support vector machine active learning with applications,” JMLR 2001
  • R. Nowak, “The geometry of generalized binary search,” IEEE Trans. IT 2011
  • R. Castro and R. Nowak, “Minimax bounds for active learning,” IEEE Trans. IT 2008
  • M. Raginsky and A. Rakhlin, "Lower bounds for passive and active learning," NIPS 2011
  • K. Jamieson and R. Nowak, “Active ranking using pairwise comparisons,” NIPS 2011
  • T. Bijmolt and M. Wedel, “The effects of alternative methods of collecting similarity data for
multidimensional scaling,” IJRM 1995
  • N. Stewart, G. Brown and N. Chater, "Absolute identification by relative judgement," Psych. Review 2005
  • A. Agarwal, O. Dekel and L. Xiao, "Optimal algorithms for online convex optimization with multi-point bandit feedback," COLT 2010
  • M. Horstein, “Sequential decoding using noiseless feedback,” IEEE Trans. IT 1963
  • M. Burnashev and K. Zigangirov, "An interval estimation problem for controlled observations," Prob. Info. Transmission 1974
  • R. Karp and R. Kleinberg, “Noisy binary search and its applications,” SODA 2007
  • S. Hanneke, “Rates of convergence in active learning,” Ann. Stat. 2011
  • A. Nemirovski, A. Juditsky, G. Lan and A. Shapiro, “Robust stochastic approximation approach to
stochastic programming,” SIAM J. Opt 2009
  • O. Shamir, "On the complexity of bandit and derivative free stochastic convex optimization," arXiv 2012
  • Y. Yue and T. Joachims, "Interactively Optimizing Information Retrieval Systems as a Dueling Bandits Problem," ICML 2009