Adaptive Elicitation of Rank-Dependent Aggregation Models based on Bayesian Linear Regression (PowerPoint PPT Presentation)



SLIDE 1

Adaptive Elicitation of Rank-Dependent Aggregation Models based on Bayesian Linear Regression

Nadjet Bourdache, Patrice Perny & Olivier Spanjaard. LIP6, Sorbonne Université, Paris, France. DA2PL, Poznań, Poland

SLIDE 2

Context

X: set of alternatives, explicitly defined (solutions, candidates, …). p evaluation functions: x ∈ X → (x1, …, xp).

Multicriteria decision problem: xi is the performance of x on criterion i.
Multiagent decision problem: xi is the utility of x for agent i.
Robust optimization problem: xi is the utility of x in scenario i.

A decision maker (DM) with imprecisely known preferences. Objective: find a solution maximizing the DM's satisfaction.


SLIDE 3

Rank-dependent models for preference representation

Use of parameterized rank-dependent aggregation functions to model the DM’s decision behaviour:

OWA_λ(x) = Σ_{i=1}^p λ_i x_(i)

C_v(x) = Σ_{i=1}^p (v(X_(i)) − v(X_(i+1))) x_(i) = Σ_{i=1}^p [x_(i) − x_(i−1)] v(X_(i))

where x_(1) ≤ … ≤ x_(p), X_(i) = {x_(i), …, x_(p)}, x_(0) = 0 and X_(p+1) = ∅.

The parameters λ and v make it possible to model the DM's preferences. ⇒ incremental elicitation of these parameters.
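The two rank-dependent aggregators above can be illustrated with a minimal Python sketch (helper names are ours; the capacity v is represented here as a dictionary from frozensets of criterion indices to [0, 1]):

```python
import numpy as np

def owa(x, lam):
    """Ordered weighted average: the weights lam are applied to the
    components of x sorted in increasing order, x_(1) <= ... <= x_(p)."""
    return float(np.dot(lam, np.sort(x)))

def choquet(x, v):
    """Discrete Choquet integral of x w.r.t. the capacity v, following
    C_v(x) = sum_i [x_(i) - x_(i-1)] v(X_(i)) with x_(0) = 0.
    v maps frozensets of criterion indices to [0, 1]."""
    order = np.argsort(x)          # criteria sorted by increasing value
    xs = np.sort(x)
    total, x_prev = 0.0, 0.0
    for i in range(len(x)):
        X_i = frozenset(int(j) for j in order[i:])  # criteria valued >= x_(i)
        total += (xs[i] - x_prev) * v[X_i]
        x_prev = xs[i]
    return total
```

For an additive capacity the Choquet integral reduces to a weighted sum, which gives a quick sanity check of the implementation.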

SLIDE 4

Interactive preference elicitation: example

[Figure: alternatives in the criterion space (c1, c2), with candidate parameter values α1, α2]

SLIDE 5

Interactive preference elicitation: example

[Figure: alternatives in the criterion space (c1, c2), with candidate parameter values α1, α2]

⇒ Sensitivity to possible errors (fα(red) = 0.4 < fα(green) = 0.6)

SLIDE 6

Interactive preference elicitation: example

[Figure: alternatives in the criterion space (c1, c2), with candidate parameter values α1, α2]

⇒ Sensitivity to possible errors (fα(red) = 0.4 < fα(green) = 0.6)
⇒ Bayesian incremental preference elicitation.

SLIDE 7

Bayesian incremental preference elicitation

Associate a probability distribution with the set of possible parameters to model the uncertainty about them. Bayesian updating of this distribution according to the DM's answers.

[Figure: three plots of the probability distribution over the parameters (α1, α2) on [0, 1]², updated after each answer]

SLIDE 8

Related work

Incremental elicitation of linear and non-linear models [Wang and Boutilier 03, Benabbou et al. 17]. Bayesian elicitation of utilities [Chajewska et al. 00, Viappiani and Boutilier 03, Guo and Sanner 10].

Our contribution: Bayesian incremental elicitation of the parameters of rank-dependent (non-linear) aggregation functions: OWA and 2-additive Choquet integrals.

SLIDE 9

Bayesian linear regression method

Bayesian updating:

p(w | y(i)) ∝ p(y(i) | w) p(w | y(1), …, y(i−1)),  with likelihood p(y(i) | w) = B(f(w, d(i)))

where y(i) ∈ {0, 1} is the answer to question i (a(i) ≻ b(i)?) and d(i) = a(i) − b(i).
→ Hard to compute analytically.
Probit model: z(i) = wᵀd(i) + ε(i)

SLIDE 10

Bayesian linear regression method

Probit model: z(i) = wᵀd(i) + ε(i). Reformulation of p(w | y(i)):

p(w | y(i)) = ∫ p(w, z | y(i)) dz = ∫ p(w | z) p(z | y(i)) dz

⇒ if the prior is Gaussian and ε(i) ∼ N(0, 1), then w | y(i) ∼ N(µ, S) [Albert and Chib 93]
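A runnable sketch of that data-augmentation scheme, in the spirit of Albert and Chib (1993): function and variable names are ours, and for simplicity we return posterior samples from a Gibbs chain rather than the closed-form N(µ, S) of the slide.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(D, y, m0, S0, n_iter=400, seed=0):
    """Gibbs sampler for the probit model y_i = 1[w^T d_i + eps_i > 0],
    eps_i ~ N(0, 1), with Gaussian prior w ~ N(m0, S0).
    Returns posterior samples of w (second half of the chain)."""
    rng = np.random.default_rng(seed)
    n, q = D.shape
    S0_inv = np.linalg.inv(S0)
    S = np.linalg.inv(S0_inv + D.T @ D)   # covariance of w | z (fixed design)
    w = np.array(m0, dtype=float)
    samples = []
    for it in range(n_iter):
        # Step 1: latent z_i | w, y_i is a unit-variance truncated normal,
        # truncated to z_i > 0 when y_i = 1 and z_i <= 0 when y_i = 0.
        mean = D @ w
        z = np.empty(n)
        for i in range(n):
            a, b = (-mean[i], np.inf) if y[i] == 1 else (-np.inf, -mean[i])
            z[i] = truncnorm.rvs(a, b, loc=mean[i], scale=1.0,
                                 random_state=rng)
        # Step 2: w | z is Gaussian (conjugate linear-Gaussian step)
        mu = S @ (S0_inv @ m0 + D.T @ z)
        w = rng.multivariate_normal(mu, S)
        if it >= n_iter // 2:
            samples.append(w)
    return np.array(samples)
```

With enough answers, the posterior samples concentrate around weights consistent with the observed preferences.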

SLIDE 11

Basis functions for rank-dependent models

Use basis functions to extend linear regression to nonlinear functions:

f_w(x) = Σ_{i=1}^q w_i g_i(x)

where (w_1, …, w_q) is a weighting vector and {g_1(x), …, g_q(x)} is a set of non-linear functions defined from the criterion values. What are the basis functions for OWA and Choquet integrals?

SLIDE 12

Basis functions for rank-dependent models

OWA with decreasing weights:

OWA_λ(x) = Σ_{i=1}^p λ_i x_(i) = Σ_{i=1}^{p−1} (λ_i − λ_{i+1}) L_i(x) + λ_p L_p(x)

with L(x) = (x_(1), x_(1) + x_(2), …, x_(1) + ⋯ + x_(p))
→ w_i = λ_i − λ_{i+1} for i ∈ {1, …, p}, with λ_{p+1} = 0
→ g_i(x) = Σ_{k=1}^i x_(k)
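This decomposition is easy to check numerically; a small sketch (helper names ours):

```python
import numpy as np

def owa(x, lam):
    """OWA with weights lam applied to the increasingly sorted components."""
    return float(np.dot(lam, np.sort(x)))

def owa_basis(x):
    """g_i(x) = x_(1) + ... + x_(i): cumulative sums of the sorted vector,
    i.e. the components of L(x)."""
    return np.cumsum(np.sort(x))

def owa_to_linear_weights(lam):
    """w_i = lam_i - lam_{i+1}, with lam_{p+1} = 0; all w_i are
    nonnegative exactly when the OWA weights are decreasing."""
    lam = np.asarray(lam, dtype=float)
    return lam - np.append(lam[1:], 0.0)
```

With decreasing weights λ = (0.5, 0.3, 0.2) and x = (3, 1, 2), both OWA_λ(x) and Σ_i w_i g_i(x) evaluate to 1.7.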

SLIDE 13

Basis functions for rank-dependent models

2-additive Choquet integrals: g_i(x) = C_{v_i}(x), where the v_i are:

  • unanimity games: v_i(X) = 1 if Y_i ⊆ X, 0 otherwise, for i ∈ {1, …, p(p+1)/2}, where Y_i ⊆ C is any nonempty subset of size at most 2;
  • conjugates of unanimity games: v_i(X) = 1 if Y_i ∩ X ≠ ∅, 0 otherwise, for i ∈ {p(p+1)/2 + 1, …, p²}, where Y_i is any subset of size exactly 2.

SLIDE 14

Basis functions for Choquet: example

For 3 criteria, any capacity v can be written as a convex combination of v1, …, v9:

X      {1}  {2}  {3}  {1,2}  {1,3}  {2,3}
v1(X)   1    0    0    1      1      0
v2(X)   0    1    0    1      0      1
v3(X)   0    0    1    0      1      1
v4(X)   0    0    0    1      0      0
v5(X)   0    0    0    0      1      0
v6(X)   0    0    0    0      0      1
v7(X)   1    1    0    1      1      1
v8(X)   1    0    1    1      1      1
v9(X)   0    1    1    1      1      1

Example: the capacity v with

X      {1}  {2}  {3}  {1,2}  {1,3}  {2,3}
v(X)   0.1  0.2  0.3  0.5    0.5    0.6

decomposes as v = 0.1 v2 + 0.3 v3 + 0.3 v4 + 0.1 v5 + 0.1 v6 + 0.1 v7.
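The table and the decomposition can be reproduced programmatically; a sketch for p = 3 (names ours), checking that the convex combination above reproduces v:

```python
from itertools import combinations

def unanimity(Y):
    """Unanimity game u_Y: u_Y(X) = 1 iff Y is a subset of X."""
    return lambda X: 1.0 if Y <= X else 0.0

def conjugate(Y):
    """Conjugate of u_Y: 1 iff Y intersects X."""
    return lambda X: 1.0 if Y & X else 0.0

C = [1, 2, 3]
# v1..v6: unanimity games on the nonempty subsets of size at most 2
Ys = [frozenset(s) for k in (1, 2) for s in combinations(C, k)]
basis = [unanimity(Y) for Y in Ys]
# v7..v9: conjugates for the subsets of size exactly 2
basis += [conjugate(Y) for Y in Ys if len(Y) == 2]

# coefficients of the slide's example:
# v = 0.1 v2 + 0.3 v3 + 0.3 v4 + 0.1 v5 + 0.1 v6 + 0.1 v7
coeffs = [0.0, 0.1, 0.3, 0.3, 0.1, 0.1, 0.1, 0.0, 0.0]

def v(X):
    return sum(c * g(frozenset(X)) for c, g in zip(coeffs, basis))
```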

SLIDE 15

Algorithm

A: set of alternatives
δ: acceptance threshold
p0: Gaussian prior distribution on the vectors w

1. Choose two alternatives a∗ and b∗
2. Ask the DM if a∗ is preferred to b∗
3. y(k) ← 1 if the answer is yes, 0 otherwise
4. p(w) ← p(w | y(k)) (Bayesian updating)
5. If the stopping criterion holds, STOP; else go to 1

SLIDE 16

Elicitation by expected regret minimization

Pairwise expected regret:

PER(a, b, p) = ∫ max{0, f_w(b) − f_w(a)} p(w) dw

Max expected regret:

MER(a, A, p) = max_{b∈A} PER(a, b, p)

Minimax expected regret:

MMER(A, p) = min_{a∈A} MER(a, A, p)

Choice of the query to ask:
a∗ ← arg min_{a∈A} MER(a, A, p)
b∗ ← arg max_{b∈A} PER(a∗, b, p)
Compare a∗ to b∗
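With a sample-based representation of p(w), these integrals become averages over posterior samples. A minimal sketch of the query-selection rule (function names ours; f(w, x) stands for the parameterized aggregator, illustrated with a linear one in the usage below):

```python
import numpy as np

def per(a, b, w_samples, f):
    """Pairwise expected regret PER(a, b, p), approximated by averaging
    max(0, f_w(b) - f_w(a)) over samples drawn from p(w)."""
    return float(np.mean([max(0.0, f(w, b) - f(w, a)) for w in w_samples]))

def select_query(A, w_samples, f):
    """a* minimizes the max expected regret MER over A; b* is the
    alternative achieving that max against a* (strongest challenger)."""
    mer = [max(per(a, b, w_samples, f) for b in A) for a in A]
    a_star = A[int(np.argmin(mer))]
    b_star = max(A, key=lambda b: per(a_star, b, w_samples, f))
    return a_star, b_star, min(mer)  # min(mer) is the current MMER
```

For instance, with two equally weighted samples w = (0.9, 0.1) and w = (0.1, 0.9) and a linear f, the balanced alternative (0.5, 0.5) minimizes MER among {(1, 0), (0, 1), (0.5, 0.5)}.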

SLIDE 17

Algorithm

A: set of alternatives
δ: acceptance threshold
p0: Gaussian prior distribution on the vectors w

1. a∗ ← arg min_{a∈A} MER(a, A, p)
2. b∗ ← arg max_{b∈A} PER(a∗, b, p)
3. Ask the DM if a∗ is preferred to b∗
4. y(k) ← 1 if the answer is yes, 0 otherwise
5. p(w) ← p(w | y(k)) (Bayesian updating)
6. If MMER(A, p) ≤ δ, STOP; else go to 1
7. Return a∗ ← arg min_{a∈A} MER(a, A, p)
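Putting the pieces together, a self-contained sketch of this loop (names ours throughout; for simplicity the Gaussian posterior of the talk is replaced by a weighted-particle approximation, reweighted by the probit likelihood Φ(±wᵀd), and the DM is simulated with noisy true weights dm_w):

```python
import numpy as np
from scipy.stats import norm

def elicit(A, dm_w, delta=0.05, n_particles=2000, max_queries=30, seed=0):
    """Sketch of the interactive elicitation loop.  The posterior over w
    is a set of weighted particles; each answer multiplies the particle
    weights by the probit likelihood of that answer.  dm_w simulates the
    decision maker's true weights (answers are noisy)."""
    rng = np.random.default_rng(seed)
    q = len(A[0])
    W = rng.normal(size=(n_particles, q))         # particles ~ prior N(0, I)
    pw = np.full(n_particles, 1.0 / n_particles)  # particle weights

    def per(a, b):  # pairwise expected regret under the current posterior
        return float(pw @ np.maximum(0.0, W @ (np.array(b) - np.array(a))))

    a_star = A[0]
    for _ in range(max_queries):
        mer = [max(per(a, b) for b in A) for a in A]
        a_star = A[int(np.argmin(mer))]
        if min(mer) <= delta:                     # MMER stopping criterion
            break
        b_star = max(A, key=lambda b: per(a_star, b))
        d = np.array(a_star) - np.array(b_star)
        y = 1 if float(dm_w @ d) + rng.normal() > 0 else 0  # simulated DM
        # Bayesian updating: multiply by the probit likelihood of the answer
        pw = pw * (norm.cdf(W @ d) if y == 1 else norm.cdf(-(W @ d)))
        pw = pw / pw.sum()
    return a_star
```

The acceptance threshold δ trades off the number of queries against the guarantee on the recommended alternative's expected regret.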

SLIDE 18

Bayesian interactive preference elicitation: back to the example

[Figure: set of alternatives in the criterion space (c1, c2)]

SLIDE 19

Bayesian interactive preference elicitation: back to the example

Query: a∗ ≻ b∗?

[Figure: alternatives in the criterion space (c1, c2), with the query pair a∗ and b∗ highlighted]


SLIDE 25

Bayesian interactive preference elicitation: back to the example

Optimal solution: a∗

[Figure: criterion space (c1, c2) with the returned optimal solution a∗ highlighted]

SLIDE 26

Experimental results


SLIDE 27

Conclusion and perspectives

Adaptive preference elicitation method for multicriteria decision support with OWA and Choquet integrals. The approach is tolerant to local inconsistencies of the DM. It is also tolerant to inconsistencies that may result from the inability of the model to represent the DM's preferences exactly. Future work: extend the approach to larger classes of capacities; extend the approach to combinatorial domains.