SLIDE 1

Learning preferences with multiple-criteria models

Olivier Sobrie

Université Paris-Saclay - CentraleSupélec Université de Mons - Faculté polytechnique

June 21, 2016

Learning preferences with multiple-criteria models

  • O. Sobrie - June 21, 2016

1 / 54

SLIDE 2
  • 1. Introduction

Preferences

Preference problems, some examples: sorting of hotels, choice of a pair of shoes
Preference learning, some examples: Google, Amazon

SLIDE 3
  • 1. Introduction

Learning preferences

◮ Hot topic in recent years
◮ Several research communities study the learning of preferences: multiple-criteria decision analysis (MCDA), preference learning (PL), . . .
◮ Examples of sorting problems (ordered classification) treated in MCDA and PL

SLIDE 4
  • 1. Introduction

Example of MCDA sorting problem I

◮ Maria (DM) has to choose an accommodation for her next holidays in Barcelona
◮ She sorts a small subset A∗ of accommodations into two ordered sets: "Good" and "Bad"

A∗ (sorted into Good and Bad): Plaza, Hilton, Travelodge, Majestic, Rambla, Front Maritim, Miramar, Hotel W

◮ She wants to obtain a full sorting of all the hotels in Barcelona
◮ She asks for the support of a decision analyst

SLIDE 5
  • 1. Introduction

Example of MCDA sorting problem II

The decision analyst (DA) asks questions; the decision maker (DM) provides preference information.

◮ The DA helps Maria identify the criteria that matter to her

Four of the alternatives, evaluated on the criteria:

  distance to the beach : 600m, 300m, 50m, 200m
  distance to the center: 500m, 100m, 600m, 300m
  price                 : 150€, 130€, 90€, 80€
  size                  : 45m², 35m², 30m², 25m²
  rating                : . . .

SLIDE 6
  • 1. Introduction

Example of MCDA sorting problem III

[Flowchart of the decision process: Start → choice of a learning set A∗ → learning of a model → model accepted? If yes: End. If no: add information, fix some parameters, or restart the process (globally or partially), with the DA asking questions and the DM providing preference information at each round.]

SLIDE 7
  • 1. Introduction

Example of PL sorting problem I

◮ From a large database, we would like to have a model predicting the health status of a patient before anesthesia
◮ Database built from different data sources
◮ Data generated by a ground truth
◮ The database contains ±1000 patients
◮ Patients are evaluated on attributes and assigned to a category reflecting their health status
◮ Categories are ordered (ASA score): Healthy ≻ Mild systemic disease ≻ Severe systemic disease ≻ Incapacitating systemic disease ≻ Moribund

SLIDE 8
  • 1. Introduction

Example of PL sorting problem II

◮ The database is given as input to a learning algorithm that produces a model
◮ The learned model is then used as a black box for predicting the assignments of other patients: patient evaluation → learned model → assignment (ASA score)
◮ The performance of the model and of the learning algorithm is assessed using indicators such as classification accuracy, area under the curve, etc.
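The two indicators just mentioned are easy to state precisely. Below is a generic plain-Python sketch (not code from the thesis) of classification accuracy and of the binary AUC in its Mann-Whitney form:

```python
# Plain-Python sketch of two common validation indicators: classification
# accuracy and the binary area under the ROC curve (Mann-Whitney form).

def classification_accuracy(y_true, y_pred):
    """Fraction of examples whose predicted category matches the truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """Probability that a random positive is scored above a random
    negative, counting ties as 1/2."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, the AUC only depends on how the model orders the patients, which is why it is a natural companion indicator for sorting models.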

SLIDE 11
  • 1. Introduction

MCDA versus PL

Multiple-criteria decision analysis:
◮ Small datasets (e.g. Maria's learning set A∗ of Good and Bad hotels)
◮ Strong interactions: the DA asks questions, the DM provides preference information
◮ Interpretable models

Preference learning:
◮ Large datasets
◮ No or little interaction: data generated by a ground truth
◮ Blackbox models

SLIDE 12
  • 1. Introduction

Aim of this thesis: make links between MCDA and PL

◮ Use MCDA models to deal with PL problems (outranking models and additive value function models)
◮ Validate the learning algorithms as done in PL
◮ Test the algorithms and models on a real application
◮ Study the expressivity of the MCDA models
◮ Bring new techniques to MCDA and PL

SLIDE 13
  • 1. Introduction

Outline of the presentation

Background and contributions:

1. Introduction
2. Majority rule sorting model (MR-Sort) [background]
3. Metaheuristic for learning a MR-Sort model
4. Application with the MR-Sort metaheuristic
5. Learning a NCS model
6. New veto rule
7. Additive value function model (AVF) [background]
8. UTA-poly and UTA-splines

SLIDE 15
  • 2. Majority rule sorting model

Outline of the presentation


SLIDE 16
  • 2. Majority rule sorting model

Majority rule sorting model

◮ Sorting model (p ordered categories, i.e. C^p ≻ C^(p−1) ≻ . . . ≻ C^1)
◮ Axiomatized by Bouyssou and Marchant (2007a,b)

[Diagram: profiles b^1 and b^2 delimit categories C^1, C^2, C^3 on criteria 1 to 5 with weights w1, . . . , w5]

◮ n weights (w1, . . . , wn)
◮ 1 majority threshold (λ)
◮ p − 1 profiles (b^1, . . . , b^(p−1))

Assignment rule:

a ∈ C^h ⇔ ∑_{j : aj ≥ b^(h−1)_j} wj ≥ λ and ∑_{j : aj ≥ b^h_j} wj < λ
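The assignment rule can be turned into a short function. The sketch below assumes all criteria are recoded so that larger values are better and that the profiles dominate one another (b^h ≥ b^(h−1)), the standard MR-Sort setting, in which case scanning the profiles upwards is equivalent to the displayed rule; the data in the usage lines are hypothetical.

```python
# Sketch of the MR-Sort assignment rule (illustrative data, not actual
# model parameters). Assumes criteria recoded so larger is better and
# profiles that dominate one another (b^h >= b^{h-1}).

def mrsort_assign(a, profiles, weights, lam):
    """Return the 1-based category index of alternative a.

    a: dict criterion -> value; profiles: [b^1, ..., b^{p-1}] as dicts,
    from the lowest boundary to the highest; weights: dict criterion ->
    w_j; lam: majority threshold.
    """
    category = 1
    for h, profile in enumerate(profiles, start=2):
        # weight of the coalition of criteria on which a meets b^{h-1}
        support = sum(w for j, w in weights.items() if a[j] >= profile[j])
        if support >= lam:
            category = h      # a reaches at least category C^h
        else:
            break             # a fails this boundary, hence all higher ones
    return category

# Hypothetical two-category example with five equally weighted criteria:
weights = {f"g{j}": 0.2 for j in range(1, 6)}
b1 = {f"g{j}": 0.5 for j in range(1, 6)}
hotel = {"g1": 0.6, "g2": 0.6, "g3": 0.6, "g4": 0.6, "g5": 0.4}
print(mrsort_assign(hotel, [b1], weights, lam=0.8))  # coalition weight 0.8
```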

SLIDE 17
  • 2. Majority rule sorting model

MR-Sort applied to Maria’s decision problem

◮ Sorting accommodations into two categories: Good and Bad

Profile b^1: beach 200m, center 400m, price 100€, size 25m², rating 3 (criteria scales run from the worst values 600m, 800m, 200€, 5m², 1★ to the best values 0m, 0m, 0€, 45m², 5★)
Weights: wj = 0.2 on each criterion (beach, center, price, size, rating); λ = 0.8

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ

SLIDE 18
  • 2. Majority rule sorting model

MR-Sort applied to Maria’s decision problem

◮ Sorting accommodations into two categories: Good and Bad

Profile b^1: beach 300m, center 400m, price 100€, size 25m², rating 3; weights wj = 0.2 on each criterion; λ = 0.8

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ

Hilton (beach 50m, center 600m, price 90€, size 30m²): ∑_{j : aj ≥ b^1_j} wj = 0.8 ≥ λ ⇒ Hilton ∈ Good

SLIDE 19
  • 2. Majority rule sorting model

MR-Sort applied to Maria’s decision problem

◮ Sorting accommodations into two categories: Good and Bad

Profile b^1: beach 300m, center 400m, price 100€, size 25m², rating 3; weights wj = 0.2 on each criterion; λ = 0.8

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ

Plaza (beach 300m, center 100m, price 130€, size 35m², rating 4): ∑_{j : aj ≥ b^1_j} wj = 0.6 < λ ⇒ Plaza ∈ Bad

SLIDE 21
  • 3. Metaheuristic for learning a MR-Sort model

Outline of the presentation


SLIDE 22
  • 3. Metaheuristic for learning a MR-Sort model

Learning a MR-Sort model

Objective

MR-Sort metaheuristic

[Diagram: a MR-Sort model with profiles b^1, b^2 and weights w1, . . . , w5 delimiting categories C^1, C^2, C^3]

Previous research for learning a MR-Sort model

◮ MIP by Leroy et al. (2011) → inefficient for large datasets
◮ Learning the weights and majority threshold → easy (LP)
◮ Learning the profiles → difficult (MIP)

Strategy: a metaheuristic that takes advantage of the ease of learning the weights and works around the difficulty of learning the profiles

SLIDE 28
  • 3. Metaheuristic for learning a MR-Sort model

Metaheuristic for learning a MR-Sort model

[Flowchart, fed by the learning set:
1. Initialize Nmod MR-Sort models; profiles are initialized with a heuristic with some randomness.
2. LP learning the weights and the majority threshold, with fixed profiles (maximization of the classification accuracy, CA).
3. Heuristic adjusting the profiles, with fixed weights and majority threshold (maximization of CA).
4. Stopping criterion met (a model restores all the assignment examples, or Nit iterations)? If not, reinitialize the Nmod/2 worst models and return to step 2.
5. The best model regarding CA or AUC is returned.]
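The population scheme above can be sketched as follows. This is only a structural sketch for two categories: in the thesis the weight step is a linear program and the profile step a dedicated heuristic, but both are replaced here by accepted-if-better random perturbations.

```python
import random

# Structural sketch of the population-based metaheuristic (two categories).
# The LP for the weights and the profile-adjustment heuristic are replaced
# by random perturbations; only the population scheme is faithful.
# A model is a triple (weights, profile, lambda).

def accuracy(model, examples):
    w, b, lam = model
    hits = sum(
        ((sum(wj for wj, aj, bj in zip(w, a, b) if aj >= bj) >= lam) == good)
        for a, good in examples
    )
    return hits / len(examples)

def random_model(n):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w], [random.random() for _ in range(n)], 0.5

def perturb(model, scale=0.1):
    w, b, lam = model
    w = [max(0.0, x + random.uniform(-scale, scale)) for x in w]
    s = sum(w) or 1.0
    b = [min(1.0, max(0.0, x + random.uniform(-scale, scale))) for x in b]
    return [x / s for x in w], b, lam

def learn(examples, n_mod=10, n_it=50):
    n = len(examples[0][0])
    pop = [random_model(n) for _ in range(n_mod)]
    for _ in range(n_it):
        pop.sort(key=lambda m: accuracy(m, examples), reverse=True)
        if accuracy(pop[0], examples) == 1.0:   # all examples restored
            break
        # improve the best half, reinitialize the Nmod/2 worst models
        best = [max((m, perturb(m)), key=lambda x: accuracy(x, examples))
                for m in pop[: n_mod // 2]]
        pop = best + [random_model(n) for _ in range(n_mod - len(best))]
    return max(pop, key=lambda m: accuracy(m, examples))
```

The alternation matters: with profiles fixed the weight subproblem is easy (an LP in the thesis), so the expensive search effort is concentrated on the profiles.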

SLIDE 29
  • 3. Metaheuristic for learning a MR-Sort model

Tests with PL datasets I

◮ Datasets issued from the PL field
◮ Categories have been binarized by thresholding at the median
◮ Split into learning and test sets

Data set   #instances   #attributes   #categories
DBS        120          8             2
CPU        209          6             4
BCC        286          7             2
MPG        392          7             36
ESL        488          4             9
MMG        961          5             2
ERA        1000         4             4
LEV        1000         4             5
CEV        1728         6             4

SLIDE 30
  • 3. Metaheuristic for learning a MR-Sort model

Tests with PL datasets II

Size   Data set   META           MIP            UTADIS         CR
20 %   DBS        18.97 ± 4.23   19.77 ± 4.81   20.08 ± 5.33   17.13 ± 4.24
       CPU        9.94 ± 3.23    9.00 ± 3.45    6.52 ± 3.62    8.11 ± 1.03
       BCC        28.24 ± 2.73   26.78 ± 2.76   29.15 ± 3.07   27.75 ± 3.35
       MPG        20.25 ± 3.56   20.80 ± 3.26   22.25 ± 3.18   7.09 ± 1.93
       ESL        10.42 ± 1.71   10.75 ± 1.58   8.89 ± 1.60    6.82 ± 1.29
       MMG        16.97 ± 0.87   17.16 ± 1.40   18.40 ± 1.84   17.25 ± 1.20
       ERA        21.36 ± 2.05   20.93 ± 1.74   23.68 ± 1.87   28.89 ± 2.73
       LEV        16.74 ± 1.87   16.08 ± 1.73   16.54 ± 1.60   14.99 ± 1.22
       CEV        9.37 ± 1.12    -              7.94 ± 0.59    4.48 ± 0.89
50 %   DBS        16.23 ± 4.69   16.27 ± 4.26   14.80 ± 4.21   15.72 ± 4.16
       CPU        6.75 ± 2.37    6.40 ± 2.39    2.30 ± 2.38    4.64 ± 2.81
       BCC        27.50 ± 3.17   -              28.54 ± 2.46   26.87 ± 2.82
       MPG        17.81 ± 2.37   -              20.90 ± 2.36   5.77 ± 2.51
       ESL        10.04 ± 1.86   10.18 ± 1.55   7.83 ± 1.63    6.01 ± 1.26
       MMG        17.32 ± 1.51   -              17.58 ± 1.52   16.67 ± 1.44
       ERA        20.56 ± 1.73   19.58 ± 1.37   23.42 ± 1.71   28.44 ± 3.06
       LEV        15.92 ± 1.22   14.22 ± 1.54   15.56 ± 1.32   13.72 ± 1.25
       CEV        9.36 ± 1.19    -              7.99 ± 0.91    3.76 ± 0.59
80 %   DBS        15.92 ± 6.98   14.80 ± 8.11   12.80 ± 5.01   14.16 ± 6.81
       CPU        6.40 ± 3.04    5.98 ± 3.15    1.52 ± 2.14    2.12 ± 3.01
       BCC        26.77 ± 5.47   -              29.13 ± 5.10   24.96 ± 4.85
       MPG        16.86 ± 3.69   -              20.80 ± 3.88   5.51 ± 1.60
       ESL        10.01 ± 2.97   10.08 ± 2.47   7.44 ± 2.35    5.42 ± 2.18
       MMG        16.98 ± 2.79   -              17.34 ± 2.65   15.84 ± 2.51
       ERA        20.31 ± 2.50   18.56 ± 2.60   23.56 ± 2.92   28.13 ± 2.80
       LEV        16.16 ± 2.22   13.59 ± 1.85   15.72 ± 2.22   13.14 ± 1.76
       CEV        9.66 ± 1.74    -              7.99 ± 1.32    2.73 ± 0.89

(- : no result)

SLIDE 31
  • 3. Metaheuristic for learning a MR-Sort model

Contributions

◮ Sobrie, O., Mousseau, V., and Pirlot, M. (2012). Learning the parameters of a multiple criteria sorting method from large sets of assignment examples. In DA2PL 2012 Workshop: From Multiple Criteria Decision Aid to Preference Learning, pages 21–31, Mons, Belgium.

◮ Sobrie, O., Mousseau, V., and Pirlot, M. (2013). Learning a majority rule model from large sets of assignment examples. In Perny, P., Pirlot, M., and Tsoukiàs, A., editors, Algorithmic Decision Theory, volume 8176 of Lecture Notes in Artificial Intelligence, pages 336–350, Brussels, Belgium. Springer.

SLIDE 33
  • 4. Application with MR-Sort metaheuristic

Outline of the presentation


SLIDE 35
  • 4. Application with MR-Sort metaheuristic

Application

◮ Medical application: prediction of the ASA score and of the acceptance or refusal for surgery, from a database containing 898 patients

[Diagram: 16 criteria → MR-Sort Model 1 → ASA score; ASA score + 2 criteria → MR-Sort Model 2' → acceptance or refusal for surgery]

◮ Results have been compared to other machine learning algorithms

Learning algorithm   ASA score   A/R (3 criteria)   A/R (18 criteria)
SVM                  0.8752      0.9142
C4.5                 0.9154      0.9012
KNN                  0.8468      0.9085
MLP                  0.8927      0.9292
RBF                  0.8333      0.8981
Majority voting      0.9259      0.9407
MR-Sort              0.9615      0.9235             0.9525

SLIDE 36
  • 4. Application with MR-Sort metaheuristic

Contributions

◮ Sobrie, O., Lazouni, M. E. A., Mahmoudi, S., Mousseau, V., and Pirlot, M. (2016b). A new decision support model for preanesthetic evaluation. Computer Methods and Programs in Biomedicine. Accepted.

SLIDE 38
  • 5. Learning a NCS model

Outline of the presentation


SLIDE 41
  • 5. Learning a NCS model

NCS model - Learning and expressivity I

Improvement of the expressivity of MR-Sort

◮ MR-Sort is not able to take criteria interactions into account
◮ We added capacities in the outranking rule → NCS model

Learning a NCS model

◮ MIP: only usable for small datasets
◮ Metaheuristic: modification of the MR-Sort metaheuristic
◮ Tests with PL datasets → performances are not much improved

Study of the expressivity of the model

◮ What proportion of NCS outranking rules cannot be represented by 1-additive weights and a threshold?
◮ How can we approximate non-1-additive rules by a set of 1-additive weights and a threshold?
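The mechanism behind the NCS rule can be sketched for two categories: the coalition of criteria on which the alternative meets the profile must be sufficient according to a monotone capacity, instead of an additive weight sum. The capacity below (a positive interaction between g1 and g2) is illustrative, not taken from the thesis.

```python
from itertools import combinations

# NCS-style check for two categories: Good iff the coalition of criteria
# meeting the profile is sufficient for a monotone capacity mu, rather
# than for an additive weight sum. Illustrative capacity on 3 criteria.

criteria = ("g1", "g2", "g3")
mu = {}
for r in range(len(criteria) + 1):
    for subset in combinations(criteria, r):
        s = frozenset(subset)
        value = 0.2 * len(s)
        if {"g1", "g2"} <= s:
            value += 0.3          # g1 and g2 reinforce each other
        mu[s] = min(value, 1.0)   # monotone: supersets never score lower

def ncs_good(a, profile, lam=0.6):
    coalition = frozenset(j for j in a if a[j] >= profile[j])
    return mu[coalition] >= lam

profile = {"g1": 0.5, "g2": 0.5, "g3": 0.5}
print(ncs_good({"g1": 1, "g2": 1, "g3": 0}, profile))  # mu({g1,g2}) = 0.7
print(ncs_good({"g1": 1, "g2": 0, "g3": 1}, profile))  # mu({g1,g3}) = 0.4
```

With 1-additive weights, two coalitions of the same size always get the same total; the capacity lets {g1, g2} count for more than {g1, g3}.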

SLIDE 42
  • 5. Learning a NCS model

NCS model - Learning and expressivity II

Proportion of k-additive rules

Proportion of all families of NCS outranking rules (in %), by number of criteria:

#criteria   1-additive   2-additive   3-additive
3           100          -            -
4           89           11           -
5           43           57           -
6           3            95           2

SLIDE 43
  • 5. Learning a NCS model

NCS model - Learning and expressivity III

Approximation of a k-additive rule (k > 1) by a 1-additive rule

◮ Generation of all possible inputs (2^n) with respect to a fixed profile
◮ Assignment of these inputs to two categories using a k-additive rule
◮ MIP inferring a 1-additive rule

               n = 4         n = 5           n = 6
Restored       15 (93.8%)    30.74 (96.1%)   61.27 (95.7%)
Not restored   1             1.26            2.73

SLIDE 44
  • 5. Learning a NCS model

Contributions

◮ Sobrie, O., Mousseau, V., and Pirlot, M. (2015). Learning the parameters of a non compensatory sorting model. In Walsh, T., editor, Algorithmic Decision Theory, volume 9346 of Lecture Notes in Artificial Intelligence, pages 153–170, Lexington, KY, USA. Springer.

◮ Ersek Uyanık, E., Sobrie, O., Mousseau, V., and Pirlot, M. (2016). Families of sufficient coalitions of criteria involved in ordered classification procedures. Submitted.

SLIDE 46
  • 6. New veto rule

Outline of the presentation


SLIDE 47
  • 6. New veto rule

MR-Sort without veto

◮ Best possible MR-Sort model (CA) regarding the learning set

Profile b^1: beach 300m, center 400m, price 100€, size 25m², rating 3; weights wj = 0.2 on each criterion; λ = 0.6

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ

Rambla (beach 50m, center 200m, price 150€, size 30m², rating 2) ⇒ Rambla ∈ Bad

SLIDE 48
  • 6. New veto rule

Binary veto rule

◮ Veto if the alternative is worse than the veto profile on any criterion

Profile b^1: beach 300m, center 400m, price 100€, size 25m², rating 3; weights wj = 0.2 on each criterion; λ = 0.6
Veto profile v^1: beach 550m, center 700m, price 125€

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ and ∄j : aj ≤ v^1_j

Rambla (beach 50m, center 200m, price 150€, size 30m², rating 2) ⇒ Rambla ∈ Bad
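The binary veto check is a one-line addition to the MR-Sort condition. The sketch below uses hypothetical data on recoded scales (larger is better); v1 holds veto thresholds only for the criteria that carry a veto.

```python
# Sketch of the binary veto rule, with hypothetical data. Scales recoded
# so larger is better; v1 maps only the vetoed criteria to thresholds.

def good_with_binary_veto(a, b1, v1, weights, lam):
    support = sum(w for j, w in weights.items() if a[j] >= b1[j])
    vetoed = any(a[j] <= v1[j] for j in v1)   # one bad criterion suffices
    return support >= lam and not vetoed
```

A single criterion at or below its veto threshold sends the alternative to Bad even when the weighted coalition would otherwise reach the majority threshold.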

SLIDE 51
  • 6. New veto rule

Binary veto rule

◮ Veto if the alternative is worse than the veto profile on any criterion

Profile b^1: beach 300m, center 400m, price 100€, size 25m², rating 3; weights wj = 0.2 on each criterion; λ = 0.6
Veto profile v^1: beach 550m, center 700m

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ and ∄j : aj ≤ v^1_j

Rambla (beach 50m, center 200m, price 150€, size 30m², rating 2) ⇒ Rambla ∈ Bad
Majestic (beach 150m, center 100m, price 175€, size 35m², rating 4) ⇒ Majestic ∈ Good
Travelodge (price 50€, size 15m²) ⇒ Travelodge ∈ Good

SLIDE 55
  • 6. New veto rule

Coalitional veto rule

◮ Veto if the alternative is worse than the veto profile on a subset of criteria

Profile b^1: beach 300m, center 400m, price 100€, size 25m², rating 3
Weights wj = 0.2 and veto weights zj = 0.2 on each criterion; λ = 0.6, Λ = 0.4
Veto profile v^1: beach 550m, center 700m, price 125€

Assignment rule: hotel ∈ Good ⇔ ∑_{j : aj ≥ b^1_j} wj ≥ λ and ∑_{j : aj ≤ v^1_j} zj < Λ

Rambla (beach 50m, center 200m, price 150€, size 30m², rating 2) ⇒ Rambla ∈ Bad
Majestic (beach 150m, center 100m, price 175€, size 35m², rating 4) ⇒ Majestic ∈ Good
Travelodge (price 50€, size 15m²) ⇒ Travelodge ∈ Good
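The coalitional rule replaces the existential veto by a weighted one: a veto now needs a coalition of bad criteria whose veto weight reaches Λ. A minimal sketch with hypothetical data, on recoded scales where larger is better:

```python
# Sketch of the coalitional veto rule: the alternative is vetoed only if
# the veto weight of the criteria at or below the veto profile reaches
# Lam (capital lambda). All data here are illustrative.

def good_with_coalitional_veto(a, b1, v1, w, z, lam, Lam):
    support = sum(w[j] for j in w if a[j] >= b1[j])
    veto_weight = sum(z[j] for j in v1 if a[j] <= v1[j])
    return support >= lam and veto_weight < Lam
```

Setting Λ to the smallest veto weight recovers the binary veto, so the coalitional rule strictly generalizes it.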

SLIDE 56
  • 6. New veto rule

Learning a MR-Sort model with coalitional veto

Problem size

◮ The number of parameters to learn is doubled compared to a classical MR-Sort model without veto

Mixed integer program

◮ Adapted to small problems
◮ Tested on a small example

Adaptation of the MR-Sort metaheuristic

◮ Outline of an approach for integrating the veto in the metaheuristic

SLIDE 57
  • 6. New veto rule

Contributions

◮ Sobrie, O., Mousseau, V., and Pirlot, M. (2014). New veto rules for sorting models. In 20th Conference of the International Federation of Operational Research Societies, Barcelona, Spain.

SLIDE 59
  • 7. Additive value function model

Outline of the presentation


SLIDE 60
  • 7. Additive value function model

Additive value function model I

◮ A marginal value function is associated with each criterion

[Plots: monotone marginal value functions uj(aj) ∈ [0, 1] for dist. to beach (0-600m), dist. to center (0-800m), price (0-200€), size (5-45m²), rating (1-5★)]

◮ Marginal value functions are monotone
◮ A weight wj is associated with each criterion j
◮ A score U(a) can be computed for an alternative a

SLIDE 62
  • 7. Additive value function model

Additive value function model I

◮ A marginal value function is associated with each criterion

[Plots: weighted marginals u*_j(aj) for the five criteria]

◮ Marginal value functions are monotone
◮ A weight wj is associated with each criterion j
◮ A score U(a) can be computed for an alternative a:

u*_j(aj) = wj · uj(aj),   U(a) = ∑_{j=1}^{5} u*_j(aj)

Scores: Miramar 0.51, Plaza 0.53, Hilton 0.43, Hotel W 0.41
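The score U(a) is straightforward to compute once the marginals are fixed. Below is a sketch with piecewise-linear monotone marginals; the breakpoints, values, and weights are illustrative, not Maria's actual model.

```python
import bisect

# AVF score with piecewise-linear monotone marginals, matching
# U(a) = sum_j w_j * u_j(a_j). All numbers below are illustrative.

def marginal(xs, us, x):
    """Piecewise-linear interpolation of u_j through (xs[i], us[i])."""
    if x <= xs[0]:
        return us[0]
    if x >= xs[-1]:
        return us[-1]
    i = bisect.bisect_right(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return us[i - 1] + t * (us[i] - us[i - 1])

def score(a, marginals, weights):
    """U(a) = sum_j w_j * u_j(a_j)."""
    return sum(w * marginal(*marginals[j], a[j]) for j, w in weights.items())

# Two criteria: distance to the beach (decreasing marginal) and rating:
marginals = {"beach": ([0, 600], [1.0, 0.0]), "rating": ([1, 5], [0.0, 1.0])}
weights = {"beach": 0.5, "rating": 0.5}
print(score({"beach": 300, "rating": 5}, marginals, weights))  # -> 0.75
```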

SLIDE 63
  • 7. Additive value function model

Additive value function model II

Ranking: Plaza (0.53) ≻ Miramar (0.51) ≻ Hilton (0.43) ≻ Hotel W (0.41)

Sorting (threshold 0.5): Good = {Plaza (0.53), Miramar (0.51)}; Bad = {Hilton (0.43), Hotel W (0.41)}

SLIDE 64
  • 7. Additive value function model

Outline of the presentation

Background Contributions

Introduction AVF UTA-poly UTA-splines 8 7 MR-Sort New veto rule 6 NCS 5 Metaheuristic Application 4 3 2 1


slide-65
SLIDE 65
  • 8. UTA-poly and UTA-splines

Outline of the presentation

Background Contributions

[Outline diagram listing the eight parts: Introduction, AVF, UTA-poly/UTA-splines, MR-Sort, New veto rule, NCS, Metaheuristic, Application]


slide-66
SLIDE 66
  • 8. UTA-poly and UTA-splines

Learning an AVF model

Existing methods

◮ UTA : LP for learning the parameters of an AVF-ranking model
◮ UTADIS : LP for learning the parameters of an AVF-sorting model
◮ Other methods : UTA*, ACUTA, . . .
◮ Monotonicity of the marginals is ensured
◮ Marginals are modeled with piecewise linear functions

[Figure: piecewise linear marginal uj, increasing from uj(aj) = 0 at the worst level of criterion j to uj(aj) = 1 at the best level]
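Such a piecewise linear marginal can be sketched with ordinary linear interpolation. The breakpoints and values below are hypothetical, chosen only to respect the monotonicity requirement stated above.

```python
from bisect import bisect_right

# Sketch of a piecewise linear marginal value function as used by
# UTA/UTADIS.  The breakpoints below are illustrative assumptions;
# uj increases from 0 at the criterion's worst level to 1 at its best.

breakpoints = [0.0, 250.0, 500.0, 750.0, 1000.0]  # criterion levels
values      = [0.0, 0.15, 0.40, 0.80, 1.00]       # uj at each breakpoint

def marginal(a_j):
    """Evaluate the piecewise linear marginal by linear interpolation."""
    if a_j <= breakpoints[0]:
        return values[0]
    if a_j >= breakpoints[-1]:
        return values[-1]
    i = bisect_right(breakpoints, a_j) - 1
    t = (a_j - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return values[i] + t * (values[i + 1] - values[i])

print(marginal(500.0))   # exactly at a breakpoint -> 0.4
print(marginal(625.0))   # halfway between 500 and 750 -> 0.6
```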


slide-67
SLIDE 67
  • 8. UTA-poly and UTA-splines

UTA-poly and UTA-splines

◮ Marginals are modeled by polynomials or splines (continuity of the marginals up to the second derivative)
◮ Use of semi-definite programming
◮ Monotonicity guaranteed if the first derivative is nonnegative
◮ Hilbert’s theorems

[Figure: smooth marginal uj, increasing from uj(aj) = 0 to uj(aj) = 1]

Theorem (Hilbert)
A polynomial F : Rn → R is nonnegative if it can be decomposed as a sum of squares (SOS) :

F(z) = ∑s fs²(z), with z ∈ Rn.

Theorem (Hilbert)
A nonnegative polynomial in one variable is always a SOS.
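Hilbert's univariate result can be illustrated on a small hand-picked example (our own, not from the slides): F(v) = v⁴ − 2v² + 1 is nonnegative on all of R because it admits the sum-of-squares decomposition (v² − 1)².

```python
# Illustration of Hilbert's univariate theorem with a hand-picked
# example (ours, not from the slides): F(v) = v^4 - 2v^2 + 1 is
# nonnegative everywhere because it equals the square (v^2 - 1)^2.

def F(v):
    return v ** 4 - 2 * v ** 2 + 1

def sos(v):
    return (v ** 2 - 1) ** 2

samples = [k / 10.0 for k in range(-50, 51)]
print(all(abs(F(v) - sos(v)) < 1e-9 for v in samples))  # the two forms agree
print(all(F(v) >= 0 for v in samples))                  # hence F is nonnegative
```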


slide-68
SLIDE 68
  • 8. UTA-poly and UTA-splines

UTA-poly - Example I

        x    y
a1     10    7
a2      6    8
a3      7    5

a1 ≻ a2 ≻ a3

◮ We define u∗1(x) and u∗2(y) as third-degree polynomials :

u∗1(x) = px,0 + px,1 · x + px,2 · x² + px,3 · x³,
u∗2(y) = py,0 + py,1 · y + py,2 · y² + py,3 · y³.

◮ Scores of a1, a2 and a3 are given by :

U(a1) = px,0 + 10px,1 + 100px,2 + 1000px,3 + py,0 + 7py,1 + 49py,2 + 343py,3,
U(a2) = px,0 + 6px,1 + 36px,2 + 216px,3 + py,0 + 8py,1 + 64py,2 + 512py,3,
U(a3) = px,0 + 7px,1 + 49px,2 + 343px,3 + py,0 + 5py,1 + 25py,2 + 125py,3.
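The scoring step can be sketched directly. The coefficient values in `px` and `py` below are hypothetical (chosen so that the preference order a1 ≻ a2 ≻ a3 happens to hold); only the alternative values (10, 7), (6, 8) and (7, 5) come from the slide.

```python
# Sketch of the UTA-poly scoring step: each marginal is a third-degree
# polynomial and U(a) = u*_1(x) + u*_2(y).  The coefficients px, py are
# hypothetical; only the alternatives (10,7), (6,8), (7,5) are from
# the slide.

def poly3(coeffs, v):
    """Evaluate p0 + p1*v + p2*v^2 + p3*v^3."""
    p0, p1, p2, p3 = coeffs
    return p0 + p1 * v + p2 * v ** 2 + p3 * v ** 3

px = (0.0, 0.05, 0.001, 0.0002)   # hypothetical coefficients of u*_1
py = (0.0, 0.02, 0.003, 0.0001)   # hypothetical coefficients of u*_2

def score(x, y):
    return poly3(px, x) + poly3(py, y)

for name, (x, y) in [("a1", (10, 7)), ("a2", (6, 8)), ("a3", (7, 5))]:
    print(name, round(score(x, y), 4))
```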


slide-69
SLIDE 69
  • 8. UTA-poly and UTA-splines

UTA-poly - Example II

◮ Scores of a1, a2 and a3 are given by :

U(a1) = px,0 + 10px,1 + 100px,2 + 1000px,3 + py,0 + 7py,1 + 49py,2 + 343py,3,
U(a2) = px,0 + 6px,1 + 36px,2 + 216px,3 + py,0 + 8py,1 + 64py,2 + 512py,3,
U(a3) = px,0 + 7px,1 + 49px,2 + 343px,3 + py,0 + 5py,1 + 25py,2 + 125py,3.

◮ We have a1 ≻ a2 and a2 ≻ a3, which implies :

U(a1) − U(a2) + σ+(a1) − σ−(a1) − σ+(a2) + σ−(a2) > 0,
U(a2) − U(a3) + σ+(a2) − σ−(a2) − σ+(a3) + σ−(a3) > 0.

◮ By replacing U(a1), U(a2) and U(a3), we obtain :

4px,1 + 64px,2 + 784px,3 − py,1 − 15py,2 − 169py,3 + σ+(a1) − σ−(a1) − σ+(a2) + σ−(a2) > 0,
−px,1 − 13px,2 − 127px,3 + 3py,1 + 39py,2 + 387py,3 + σ+(a2) − σ−(a2) − σ+(a3) + σ−(a3) > 0.
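Expanding these differences by hand is error-prone, so here is a quick numerical sanity check (our own; the coefficient tuples are arbitrary test values). For any coefficients, the score difference U(a1) − U(a2) computed directly must match the expanded form, since e.g. its px,3 coefficient is 10³ − 6³ = 784 and its py,3 coefficient is 7³ − 8³ = −169, and similarly for U(a2) − U(a3).

```python
# Numerical check that the expanded preference constraints agree with the
# score differences, for arbitrary polynomial coefficients.

def U(px, py, x, y):
    """Score with cubic marginals: sum_k px[k]*x^k + sum_k py[k]*y^k."""
    return sum(p * x ** k for k, p in enumerate(px)) + \
           sum(p * y ** k for k, p in enumerate(py))

px = (0.1, 0.2, 0.03, 0.004)   # arbitrary test coefficients (px,0 .. px,3)
py = (0.5, 0.1, 0.02, 0.001)   # arbitrary test coefficients (py,0 .. py,3)

d12 = U(px, py, 10, 7) - U(px, py, 6, 8)   # a1 vs a2, computed directly
d23 = U(px, py, 6, 8) - U(px, py, 7, 5)    # a2 vs a3, computed directly

# Expanded coefficient forms (e.g. 1000 - 216 = 784, 343 - 512 = -169).
e12 = 4*px[1] + 64*px[2] + 784*px[3] - py[1] - 15*py[2] - 169*py[3]
e23 = -px[1] - 13*px[2] - 127*px[3] + 3*py[1] + 39*py[2] + 387*py[3]

print(abs(d12 - e12) < 1e-9, abs(d23 - e23) < 1e-9)
```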


slide-70
SLIDE 70
  • 8. UTA-poly and UTA-splines

UTA-poly - Example III

◮ We impose the derivatives of u∗1 and u∗2 to be SOS :

u∗1′(x) = [1 x] Q [1 x]ᵀ = q0,0 + (q0,1 + q1,0) · x + q1,1 · x²,
u∗2′(y) = [1 y] R [1 y]ᵀ = r0,0 + (r0,1 + r1,0) · y + r1,1 · y².

◮ Q and R have to be positive semi-definite, in conjunction with :

px,1 = q0,0,          py,1 = r0,0,
2px,2 = q0,1 + q1,0,  2py,2 = r0,1 + r1,0,
3px,3 = q1,1,         3py,3 = r1,1.
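For a single cubic marginal this condition is checkable by hand: a symmetric 2×2 matrix is PSD iff both diagonal entries and the determinant are nonnegative, and a PSD Q makes the derivative a sum of squares, hence nonnegative. The sketch below uses a hypothetical matrix Q (not learned values) and recovers the marginal's coefficients through the linking constraints above.

```python
# Monotonicity check behind UTA-poly, sketched for one cubic marginal:
# u*'(v) = [1 v] Q [1 v]^T = q00 + (q01 + q10)*v + q11*v^2.
# The matrix Q below is a hypothetical example.

def is_psd_2x2(q00, q01, q10, q11, eps=1e-12):
    """PSD test for a symmetric 2x2 matrix (q01 == q10)."""
    return q00 >= -eps and q11 >= -eps and q00 * q11 - q01 * q10 >= -eps

def derivative(q00, q01, q10, q11, v):
    """Evaluate u*'(v) = [1 v] Q [1 v]^T."""
    return q00 + (q01 + q10) * v + q11 * v * v

Q = (1.0, -0.5, -0.5, 1.0)   # hypothetical symmetric PSD matrix
print(is_psd_2x2(*Q))        # diagonal entries and determinant nonnegative
print(min(derivative(*Q, v / 10.0) for v in range(-1000, 1001)) >= 0)

# Linking constraints of the slide recover the marginal's coefficients:
# px,1 = q00, 2*px,2 = q01 + q10, 3*px,3 = q11.
p1, p2, p3 = Q[0], (Q[1] + Q[2]) / 2, Q[3] / 3
print(p1, p2, p3)
```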


slide-71
SLIDE 71
  • 8. UTA-poly and UTA-splines

UTA-poly - Example IV

◮ We add normalization constraints :

px,0 = 0,
py,0 = 0,
10px,1 + 100px,2 + 1000px,3 + 10py,1 + 100py,2 + 1000py,3 = 1.


slide-72
SLIDE 72
  • 8. UTA-poly and UTA-splines

UTA-poly - Example V

min σ+(a1) + σ−(a1) + σ+(a2) + σ−(a2) + σ+(a3) + σ−(a3)

such that :

4px,1 + 64px,2 + 784px,3 − py,1 − 15py,2 − 169py,3
    + σ+(a1) − σ−(a1) − σ+(a2) + σ−(a2) > 0,
−px,1 − 13px,2 − 127px,3 + 3py,1 + 39py,2 + 387py,3
    + σ+(a2) − σ−(a2) − σ+(a3) + σ−(a3) > 0,
px,0 = 0,
py,0 = 0,
10px,1 + 100px,2 + 1000px,3 + 10py,1 + 100py,2 + 1000py,3 = 1,
px,1 = q0,0,  2px,2 = q0,1 + q1,0,  3px,3 = q1,1,
py,1 = r0,0,  2py,2 = r0,1 + r1,0,  3py,3 = r1,1,

with : Q, R positive semi-definite, and σ+(a1), σ−(a1), σ+(a2), σ−(a2), σ+(a3), σ−(a3) ≥ 0.


slide-73
SLIDE 73
  • 8. UTA-poly and UTA-splines

Example of marginals learning with UTA-poly

[Figure: real vs. learned marginals with UTA-poly for u1 (euro), u2 (km) and u3 (m²), with polynomial degrees D = 2, 6 and 10]


slide-74
SLIDE 74
  • 8. UTA-poly and UTA-splines

Example of marginals learning with UTA-splines

[Figure: real vs. learned marginals with UTA-splines for u1 (euro), u2 (km) and u3 (m²), with spline degrees D = 1, 2 and 3]


slide-75
SLIDE 75
  • 8. UTA-poly and UTA-splines

Experiments with UTA-poly and UTA-splines

Artificial datasets

◮ Artificial datasets built on the basis of various types of additive value functions (exponentials, polynomials, etc.)
◮ UTA-poly and UTA-splines models learned
◮ UTA(DIS)-poly and UTA(DIS)-splines computing times of the same order of magnitude as UTA(DIS)
◮ Model retrieval

Real datasets

◮ Datasets issued from the preference learning field
◮ Results at least as good as with UTADIS
◮ Overfitting if too many degrees of freedom are left to the semi-definite program


slide-76
SLIDE 76
  • 8. UTA-poly and UTA-splines

Contributions

◮ Sobrie, O., Gillis, N., Mousseau, V., and Pirlot, M. (2016a). UTA-poly and UTA-splines: additive value functions with polynomial marginals. Submitted.


slide-77
SLIDE 77
  • 8. UTA-poly and UTA-splines

Outline of the presentation

Background Contributions

[Outline diagram listing the eight parts: Introduction, AVF, UTA-poly/UTA-splines, MR-Sort, New veto rule, NCS, Metaheuristic, Application]


slide-78
SLIDE 78
  • 9. Conclusion

Conclusion and further research I

Use MCDA models to deal with PL problems (outranking models and additive value function models)

◮ MR-Sort and NCS outranking methods
◮ Algorithms for learning MR-Sort and NCS models from large datasets
◮ Methods for learning AVF models


slide-79
SLIDE 79
  • 9. Conclusion

Conclusion and further research II

Validation of the learning algorithms as done in PL

◮ Tests with PL datasets
◮ Statistical tests (learning and test sets)


slide-80
SLIDE 80
  • 9. Conclusion

Conclusion and further research III

Test the algorithms and models on a real application

◮ Test of MR-Sort with the ASA dataset
◮ Results comparable to other machine learning algorithms
◮ MR-Sort easier to explain than other algorithms


slide-81
SLIDE 81
  • 9. Conclusion

Conclusion and further research IV

Study the expressivity of the MCDA models

◮ Expressivity of MR-Sort and NCS has been studied
◮ Proportion of rules that can be represented by a set of k-additive weights, for models involving fewer than 7 criteria
◮ Extension of the expressivity with coalitional veto


slide-82
SLIDE 82
  • 9. Conclusion

Conclusion and further research V

Bring new techniques in MCDA and PL

◮ UTA-poly and UTA-splines
◮ Semi-definite programming


slide-83
SLIDE 83
  • 9. Conclusion

Further research

◮ Use of relaxation techniques for learning the models
◮ Improvement of the interpretability of MR-Sort (weights and cut thresholds)
◮ Study of rules that can be represented by k-additive weights for models involving 7 criteria
◮ Analysis of the complexity of the MR-Sort model (e.g. VC dimension)
◮ Algorithm for learning an MR-Sort model using coalitional veto
◮ Extension of semi-definite programming to other MCDA methods (MACBETH, GAI networks)
◮ Improvement of the UTA(DIS)-poly/splines objective function


slide-84
SLIDE 84

Thank you for your attention !

slide-85
SLIDE 85

References I

Bouyssou, D. and Marchant, T. (2007a). An axiomatic approach to noncompensatory sorting methods in MCDM, I: The case of two categories. European Journal of Operational Research, 178(1):217–245.

Bouyssou, D. and Marchant, T. (2007b). An axiomatic approach to noncompensatory sorting methods in MCDM, II: More than two categories. European Journal of Operational Research, 178(1):246–276.

Ersek Uyanık, E., Sobrie, O., Mousseau, V., and Pirlot, M. (2016). Families of sufficient coalitions of criteria involved in ordered classification procedures. Submitted.

Leroy, A., Mousseau, V., and Pirlot, M. (2011). Learning the parameters of a multiple criteria sorting method. In Brafman, R., Roberts, F., and Tsoukiàs, A., editors, Algorithmic Decision Theory, volume 6992 of Lecture Notes in Artificial Intelligence, pages 219–233. Springer.

slide-86
SLIDE 86

References II

Sobrie, O., Gillis, N., Mousseau, V., and Pirlot, M. (2016a). UTA-poly and UTA-splines: additive value functions with polynomial marginals. Submitted.

Sobrie, O., Lazouni, M. E. A., Mahmoudi, S., Mousseau, V., and Pirlot, M. (2016b). A new decision support model for preanesthetic evaluation. Computer Methods and Programs in Biomedicine. Accepted.

Sobrie, O., Mousseau, V., and Pirlot, M. (2012). Learning the parameters of a multiple criteria sorting method from large sets of assignment examples. In DA2PL 2012 Workshop: From Multiple Criteria Decision Aid to Preference Learning, pages 21–31, Mons, Belgium.

Sobrie, O., Mousseau, V., and Pirlot, M. (2013). Learning a majority rule model from large sets of assignment examples. In Perny, P., Pirlot, M., and Tsoukiás, A., editors, Algorithmic Decision Theory, volume 8176 of Lecture Notes in Artificial Intelligence, pages 336–350, Brussels, Belgium. Springer.
slide-87
SLIDE 87

References III

Sobrie, O., Mousseau, V., and Pirlot, M. (2014). New veto rules for sorting models. In 20th Conference of the International Federation of Operational Research Societies, Barcelona, Spain.

Sobrie, O., Mousseau, V., and Pirlot, M. (2015). Learning the parameters of a non-compensatory sorting model. In Walsh, T., editor, Algorithmic Decision Theory, volume 9346 of Lecture Notes in Artificial Intelligence, pages 153–170, Lexington, KY, USA. Springer.