Learning various classes of models of lexicographic orderings
SLIDE 1

Learning various classes of models of lexicographic orderings

Richard Booth, Mahasarakham University, Thailand
Yann Chevaleyre, LAMSADE, Université Paris-Dauphine
Jérôme Lang, LAMSADE, Université Paris-Dauphine
Jérôme Mengin, IRIT, Université de Toulouse
Chattrakul Sombattheera, Mahasarakham University, Thailand

SLIDE 2

Introduction

Topic: learn to order objects of a combinatorial domain. E.g. computers, described by
  Type: desktop or laptop
  Color: yellow or black
  Dvd-unit: reader or writer
  . . .

Recommender system: learn how a user orders these objects, in order to suggest the "best" ones among those that are available / that the user can afford.

With n variables whose domains have m values: m^n objects, (m^n)! orderings

⇒ need a compact representation of the orderings:
  • local preferences on each attribute
  • extra structure on the set of variables to "aggregate" them into global preferences
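To see the blow-up concretely (a small sketch; the instance n = 3, m = 2 is my own choice, not from the slides):

```python
from math import factorial

# n variables, each with m values: m**n objects can be formed,
# and those objects admit (m**n)! distinct total orderings.
n, m = 3, 2
objects = m ** n                 # 2**3 = 8 objects
orderings = factorial(objects)   # 8! = 40320 orderings
print(objects, orderings)
```

Already at three binary attributes there are 40320 candidate orderings, which is why a compact model is needed.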

SLIDE 3

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

  • Type is more important than Colour
  • Prefer laptop to desktop
  • Prefer yellow to black

SLIDE 4

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

  lb ≻ dy (decided at node T)
  ly ≻ lb (decided at node C)

SLIDE 5

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

+ comparisons in linear time
+ learning in polynomial time [SM06, DIV07]
− very weak expressive power: cannot express "prefer yellow for laptops, black for desktops"
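The linear-time comparison can be sketched as follows (a minimal Python illustration; the dict encoding and the function name are my own, not from the paper):

```python
# Compare two objects under an unconditional lexicographic model:
# scan the variables in importance order; the first variable on which
# the objects differ decides, according to the local preference.

def lex_compare(x, y, importance, local_pref):
    """Return 1 if x is preferred to y, -1 if y is preferred, 0 if equal.

    importance: list of variables, most important first.
    local_pref: maps each variable to its preferred value.
    x, y: dicts mapping variables to values.
    """
    for var in importance:
        if x[var] != y[var]:
            return 1 if x[var] == local_pref[var] else -1
    return 0  # identical objects

# Slide example: Type (l ≻ d) more important than Colour (y ≻ b).
importance = ["Type", "Colour"]
local_pref = {"Type": "l", "Colour": "y"}

print(lex_compare({"Type": "l", "Colour": "b"},
                  {"Type": "d", "Colour": "y"},
                  importance, local_pref))   # lb ≻ dy, decided at Type
```

One pass over the variables suffices, which is the linear-time claim above.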

SLIDE 6

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

+ comparisons in linear time
+ learning in polynomial time [SM06, DIV07]
− very weak expressive power: cannot express "prefer yellow for laptops, black for desktops"

Conditional Preference Networks (CP-nets): conditional local preferences (dependency graph), e.g.:

  l ≻ d
  l : y ≻ b (for laptops: yellow preferred to black)
  d : b ≻ y

+ ceteris paribus comparisons: ly ≻ lb ≻ db ≻ dy

SLIDE 7

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

+ comparisons in linear time
+ learning in polynomial time [SM06, DIV07]
− very weak expressive power: cannot express "prefer yellow for laptops, black for desktops"

Conditional Preference Networks (CP-nets): conditional local preferences (dependency graph), e.g.:

  l ≻ d
  l : y ≻ b (for laptops: yellow preferred to black)
  d : b ≻ y

+ ceteris paribus comparisons: ly ≻ lb ≻ db ≻ dy
+ very expressive
− comparisons difficult (NP-complete)
− hard to learn [session on CP-net learning at IJCAI'09]

SLIDE 8

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

+ comparisons in linear time
+ learning in polynomial time [SM06, DIV07]
− very weak expressive power: cannot express "prefer yellow for laptops, black for desktops"

Conditional Preference Networks (CP-nets): conditional local preferences (dependency graph), e.g.:

  l ≻ d
  l : y ≻ b (for laptops: yellow preferred to black)
  d : b ≻ y

+ ceteris paribus comparisons: ly ≻ lb ≻ db ≻ dy
+ very expressive
− comparisons difficult (NP-complete)
− hard to learn [session on CP-net learning at IJCAI'09]
  (easy classes of CP-nets / examples, incomplete algorithms)

SLIDE 9

Introduction

Lexicographic orderings: local preferences over the domain of each variable + an importance ordering of the variables.

  T: l ≻ d
  C: y ≻ b

+ comparisons in linear time
+ learning in polynomial time [SM06, DIV07]
− very weak expressive power: cannot express "prefer yellow for laptops, black for desktops"

Conditional Preference Networks (CP-nets): conditional local preferences (dependency graph), e.g.:

  l ≻ d
  l : y ≻ b (for laptops: yellow preferred to black)
  d : b ≻ y

+ ceteris paribus comparisons: ly ≻ lb ≻ db ≻ dy
+ very expressive
− comparisons difficult (NP-complete)
− hard to learn [session on CP-net learning at IJCAI'09]

⇒ find something in between the two formalisms

SLIDE 10

Introduction

Contribution of this paper: it is possible to add conditionality to lexicographic preference models without increasing the complexity of reasoning / learning.

SLIDE 11

Learning unconditional lexicographic preferences

Sample complexity: VC dim = n (when n variables, all binary)

SLIDE 12

Learning unconditional lexicographic preferences

Sample complexity: VC dim = n (when n variables, all binary)

Active learning: the learner asks the "user" queries of the form "Which is preferred between ly and db?" Goal: identify the preference model of the user.

⇒ If the local preferences are fixed, log(n!) queries needed (worst case) [DIV07]
⇒ If the local preferences are to be learnt, n + log(n!) queries needed

SLIDE 13

Learning unconditional lexicographic preferences

Sample complexity: VC dim = n (when n variables, all binary)

Active learning: the learner asks the "user" queries of the form "Which is preferred between ly and db?" Goal: identify the preference model of the user.

⇒ If the local preferences are fixed, log(n!) queries needed (worst case) [DIV07]
⇒ If the local preferences are to be learnt, n + log(n!) queries needed

Passive learning: given a set of examples, e.g. E = {lb ≻ db, . . .}. Goal: output a preference structure consistent with the examples.

SLIDE 14

Learning unconditional lexicographic preferences

Sample complexity: VC dim = n (when n variables, all binary)

Active learning: the learner asks the "user" queries of the form "Which is preferred between ly and db?" Goal: identify the preference model of the user.

⇒ If the local preferences are fixed, log(n!) queries needed (worst case) [DIV07]
⇒ If the local preferences are to be learnt, n + log(n!) queries needed

Passive learning: given a set of examples, e.g. E = {lb ≻ db, . . .}. Goal: output a preference structure consistent with the examples.
Greedy algorithm [DIV07] (returns failure if no consistent structure exists)

⇒ passive learning with fixed local preferences: in P [DIV07]
⇒ passive learning with unknown local preferences: in P

SLIDE 15

Learning unconditional lexicographic preferences

Sample complexity: VC dim = n (when n variables, all binary)

Active learning: the learner asks the "user" queries of the form "Which is preferred between ly and db?" Goal: identify the preference model of the user.

⇒ If the local preferences are fixed, log(n!) queries needed (worst case) [DIV07]
⇒ If the local preferences are to be learnt, n + log(n!) queries needed

Passive learning: given a set of examples, e.g. E = {lb ≻ db, . . .}. Goal: output a preference structure consistent with the examples.
Greedy algorithm [DIV07] (returns failure if no consistent structure exists)

⇒ passive learning with fixed local preferences: in P [DIV07]
⇒ passive learning with unknown local preferences: in P

Model optimization (fewer than k errors):

⇒ NP-complete with fixed local preferences [SM06]
⇒ NP-complete with unknown local preferences

SLIDE 16

Learning unconditional lexicographic preferences

Greedy algorithm [DIV07]:
1. initialize the sequence of variables with the empty sequence;
2. while there remains some unused variable:
   (a) choose a variable and a local preference that do not wrongly order the remaining examples
   (b) remove the examples ordered by this variable

E = {lbr ≻ dyr, lyr ≻ lbw, dyw ≻ dbr}
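The steps above can be sketched in Python as follows (a minimal sketch of the greedy loop of [DIV07]; the data encoding via dicts and pairs is my own assumption):

```python
# Greedy learner for unconditional lexicographic preferences:
# repeatedly pick a variable + local preference that never orders a
# remaining example the wrong way, then discard the examples it decides.

def greedy_lex_learn(variables, domains, examples):
    """examples: list of pairs (x, y), each meaning x is preferred to y.
    Returns (importance_order, local_pref), or None on failure."""
    remaining = list(examples)
    unused = list(variables)
    order, pref = [], {}
    while unused:
        placed = False
        for var in unused:
            for best in domains[var]:
                # (a) var with preferred value `best` must not wrongly
                # order any remaining example
                if all(x[var] == y[var] or x[var] == best
                       for x, y in remaining):
                    order.append(var)
                    pref[var] = best
                    unused.remove(var)
                    # (b) remove the examples now ordered by var
                    remaining = [(x, y) for x, y in remaining
                                 if x[var] == y[var]]
                    placed = True
                    break
            if placed:
                break
        if not placed:
            return None          # no safe variable remains: failure
    return order, pref

# The example from the slide: E = {lbr ≻ dyr, lyr ≻ lbw, dyw ≻ dbr}
domains = {"T": "ld", "C": "yb", "D": "rw"}
def obj(s):                      # "lbr" -> {"T": "l", "C": "b", "D": "r"}
    return dict(zip("TCD", s))
E = [(obj("lbr"), obj("dyr")), (obj("lyr"), obj("lbw")),
     (obj("dyw"), obj("dbr"))]
print(greedy_lex_learn(["T", "C", "D"], domains, E))
```

On this E the loop first places T with l ≻ d, then C with y ≻ b, exactly as the next slides trace by hand.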

SLIDE 18

Learning unconditional lexicographic preferences

Greedy algorithm [DIV07]:
1. initialize the sequence of variables with the empty sequence;
2. while there remains some unused variable:
   (a) choose a variable and a local preference that do not wrongly order the remaining examples
   (b) remove the examples ordered by this variable

E = {lbr ≻ dyr, lyr ≻ lbw, dyw ≻ dbr}

?

SLIDE 19

Learning unconditional lexicographic preferences

Greedy algorithm [DIV07]:
1. initialize the sequence of variables with the empty sequence;
2. while there remains some unused variable:
   (a) choose a variable and a local preference that do not wrongly order the remaining examples
   (b) remove the examples ordered by this variable

E = {lbr ≻ dyr, lyr ≻ lbw, dyw ≻ dbr}

T: l ≻ d

SLIDE 20

Learning unconditional lexicographic preferences

Greedy algorithm [DIV07]:
1. initialize the sequence of variables with the empty sequence;
2. while there remains some unused variable:
   (a) choose a variable and a local preference that do not wrongly order the remaining examples
   (b) remove the examples ordered by this variable

E = {lyr ≻ lbw, dyw ≻ dbr}

T: l ≻ d

?

SLIDE 21

Learning unconditional lexicographic preferences

Greedy algorithm [DIV07]:
1. initialize the sequence of variables with the empty sequence;
2. while there remains some unused variable:
   (a) choose a variable and a local preference that do not wrongly order the remaining examples
   (b) remove the examples ordered by this variable

E = {lyr ≻ lbw, dyw ≻ dbr}

T: l ≻ d
C: y ≻ b

SLIDE 22

Learning unconditional lexicographic preferences

Greedy algorithm [DIV07]:
1. initialize the sequence of variables with the empty sequence;
2. while there remains some unused variable:
   (a) choose a variable and a local preference that do not wrongly order the remaining examples
   (b) remove the examples ordered by this variable

E = { }

T: l ≻ d
C: y ≻ b

success!

SLIDE 23

Conditional local preferences / Unconditional variable importance

"I always prefer laptops to desktops"
"For desktops, I prefer black to yellow"
"For laptops, I prefer yellow to black"

T: l ≻ d
C: l : y ≻ b
   d : b ≻ y

SLIDE 24

Conditional local preferences / Unconditional variable importance

"I always prefer laptops to desktops"
"For desktops, I prefer black to yellow"
"For laptops, I prefer yellow to black"

T: l ≻ d
C: l : y ≻ b
   d : b ≻ y

Sample complexity: VC dim = 2^n − 1
Active learning: 2^n − 1 + log(n!) queries needed (worst case)
Passive learning: in P (the greedy algorithm still works)
Model optimization: NP-hard
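Comparison still takes one pass over the variables; only the lookup of the local preference now depends on the shared prefix of more important variables. A sketch (the encoding of the tables as context-to-value functions is my own, not from the paper):

```python
# Lexicographic comparison with conditional local preferences and an
# unconditional importance order.

def lex_compare_clp(x, y, importance, cpt):
    """cpt[var]: maps the assignment of the more important variables
    (the prefix on which x and y agree) to the preferred value of var."""
    context = {}
    for var in importance:
        if x[var] != y[var]:
            # the first differing variable decides, with a preference
            # conditioned on the values agreed upon so far
            return 1 if x[var] == cpt[var](context) else -1
        context[var] = x[var]
    return 0

# Slide example: l ≻ d; for laptops y ≻ b, for desktops b ≻ y.
importance = ["T", "C"]
cpt = {
    "T": lambda ctx: "l",
    "C": lambda ctx: "y" if ctx["T"] == "l" else "b",
}
print(lex_compare_clp({"T": "d", "C": "b"}, {"T": "d", "C": "y"},
                      importance, cpt))   # db preferred to dy
```

The context T = d flips the colour preference, so db beats dy while ly still beats lb.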

SLIDE 25

Conditional local preferences / Unconditional variable importance

Greedy algorithm:

E = {lbr ≻ dyr, lyr ≻ lbw, dbw ≻ dyr}

SLIDE 26

Conditional local preferences / Unconditional variable importance

Greedy algorithm:

E = {lbr ≻ dyr, lyr ≻ lbw, dbw ≻ dyr}

?

SLIDE 27

Conditional local preferences / Unconditional variable importance

Greedy algorithm:

E = {lbr ≻ dyr, lyr ≻ lbw, dbw ≻ dyr}

T: l ≻ d

SLIDE 28

Conditional local preferences / Unconditional variable importance

Greedy algorithm:

E = {lyr ≻ lbw, dbw ≻ dyr}

T: l ≻ d

?

SLIDE 29

Conditional local preferences / Unconditional variable importance

Greedy algorithm:

E = {lyr ≻ lbw, dbw ≻ dyr}

T: l ≻ d
C: l : y ≻ b
   d : b ≻ y

SLIDE 30

Conditional local preferences / Unconditional variable importance

Greedy algorithm:

E = { }

T: l ≻ d
C: l : y ≻ b
   d : b ≻ y

success!

SLIDE 31

Conditional local preferences & Conditional variable importance

"For desktops, the Dvd-unit (read/write) is more important than the color"
"For laptops, the color is more important than the type of Dvd-unit"

T: l ≻ d
├─ d: D, d : w ≻ r
│   └─ r: C, dr : b ≻ y
└─ l: C, l : y ≻ b
    ├─ y: D, ly : w ≻ r
    └─ b: D, lb : w ≻ r

SLIDE 32

Conditional local preferences & Conditional variable importance

"For desktops, the Dvd-unit (read/write) is more important than the color"
"For laptops, the color is more important than the type of Dvd-unit"

T: l ≻ d
├─ d: D, d : w ≻ r
│   └─ r: C, dr : b ≻ y
└─ l: C, l : y ≻ b
    ├─ y: D, ly : w ≻ r
    └─ b: D, lb : w ≻ r

⇒ variable importance tree + conditional local preference tables

Note: the tree need not be complete (but then the ordering is only partial)
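A sketch of comparison guided by such an importance tree (the `Node` class and encoding are my own; the example tree mirrors the slide):

```python
# Variable-importance tree: each node tests one variable and carries the
# local preference for it in the context fixed by the path so far; the
# value shared by both objects selects the child to descend into.

class Node:
    def __init__(self, var, pref, children=None):
        self.var = var                    # variable tested at this node
        self.pref = pref                  # preferred value of var here
        self.children = children or {}    # value of var -> child node

def tree_compare(x, y, node):
    while node is not None:
        v = node.var
        if x[v] != y[v]:                  # first differing variable decides
            return 1 if x[v] == node.pref else -1
        node = node.children.get(x[v])    # descend along the shared value
    return 0   # the tree examines no variable on which x and y differ

# For desktops, D is more important than C; for laptops the reverse.
tree = Node("T", "l", {
    "d": Node("D", "w", {"r": Node("C", "b")}),
    "l": Node("C", "y", {"y": Node("D", "w"), "b": Node("D", "w")}),
})
print(tree_compare({"T": "l", "C": "y", "D": "r"},
                   {"T": "l", "C": "b", "D": "w"}, tree))  # lyr beats lbw
```

Note that the descent can fall off an incomplete tree before any deciding variable is found, in which case the two objects stay incomparable: the partial-ordering caveat above.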

SLIDE 33

Conditional local preferences & Conditional variable importance

"For desktops, the Dvd-unit (read/write) is more important than the color"
"For laptops, the color is more important than the type of Dvd-unit"

T: l ≻ d
├─ d: D, d : w ≻ r
│   └─ r: C, dr : b ≻ y
└─ l: C, l : y ≻ b
    ├─ y: D, ly : w ≻ r
    └─ b: D, lb : w ≻ r

Sample complexity: VC dim = 2^n − 1
Active learning: 2^n − 1 + Σ_{k=0}^{n−1} 2^k log(n − k) queries needed
Passive learning: in P (the greedy algorithm still works)
Model optimization: NP-complete

SLIDE 34

Unconditional local preferences / Conditional variable importance

T
├─ d: D
│   └─ r: C
└─ l: C
    ├─ y: D
    └─ b: D

Local preferences: l ≻ d, y ≻ b, w ≻ r

SLIDE 35

Unconditional local preferences / Conditional variable importance

T
├─ d: D
│   └─ r: C
└─ l: C
    ├─ y: D
    └─ b: D

Local preferences: l ≻ d, y ≻ b, w ≻ r

⇒ variable importance tree + unconditional local preference table

SLIDE 36

Unconditional local preferences / Conditional variable importance

T
├─ d: D
│   └─ r: C
└─ l: C
    ├─ y: D
    └─ b: D

Local preferences: l ≻ d, y ≻ b, w ≻ r

Sample complexity: ?
Active learning:
  n + Σ_{k=0}^{n−1} 2^k log(n − k) queries needed (unknown local preferences)
  Σ_{k=0}^{n−1} 2^k log(n − k) queries needed (fixed local preferences)
Passive learning: NP-complete!
Model optimization: NP-complete

SLIDE 37

Quick recap

           VC-dim    active learning     passive learning   approx.
UI - FLP             log(n!)             P                  NP-C
UI - ULP   n         n + log(n!)         P                  NP-C
UI - CLP   2^n − 1   2^n − 1 + log(n!)   P                  NP-hard
CI - FLP             g(n)                P                  NP-C
CI - ULP   ≥ n       n + g(n)            NP-C               NP-C
CI - CLP   2^n − 1   2^n − 1 + g(n)      P                  NP-C

where g(n) = Σ_{k=0}^{n−1} 2^k log(n − k)
(UI/CI = unconditional/conditional variable importance; FLP/ULP/CLP = fixed/unconditional/conditional local preferences)
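For concreteness, the query bound g(n) can be computed directly (a small sketch; I assume base-2 logarithms, which the slides leave implicit):

```python
from math import log2

def g(n):
    # g(n) = sum_{k=0}^{n-1} 2^k * log(n - k)
    return sum(2 ** k * log2(n - k) for k in range(n))

print(g(3))   # log2(3) + 2*log2(2) + 4*log2(1) ≈ 3.585
```

So for three binary variables the tree-structured models need only a handful of extra queries over the unconditional case.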

SLIDE 38

Related and future work

• Conditional lexicographic orderings were introduced by [Wilson, ECAI'06] ⇒ approximate CP-nets

• Need to explore heuristics for choosing variables during execution of the greedy algorithm

• Problem if the tree is not complete: the ordering is only partial ⇒ need to explore mixtures of conditional / unconditional structures

• Need to test the algorithms on real / generated data ⇒ how to deal with noisy data?