Adaptive Elicitation of Rank-Dependent Aggregation Models based on Bayesian Linear Regression

  1. Adaptive Elicitation of Rank-Dependent Aggregation Models based on Bayesian Linear Regression
     Nadjet Bourdache, Patrice Perny and Olivier Spanjaard, LIP6, Sorbonne Université, Paris, France
     DA2PL, Poznań, Poland

  2. Context
     $X$: set of alternatives, explicitly defined (solutions, candidates, ...).
     $p$ evaluation functions: $x \in X \mapsto (x_1, \ldots, x_p)$.
     Multicriteria decision problem: $x_i$ is the performance of $x$ on criterion $i$.
     Multiagent decision problem: $x_i$ is the utility of $x$ for agent $i$.
     Robust optimization problem: $x_i$ is the utility of $x$ in scenario $i$.
     A decision maker (DM) with imprecisely known preferences.
     Objective: find a solution maximizing the DM's satisfaction.

  3. Rank-dependent models for preference representation
     Use of parameterized rank-dependent aggregation functions to model the DM's decision behaviour:

     $OWA_\lambda(x) = \sum_{i=1}^{p} \lambda_i \, x_{(i)}$

     $C_v(x) = \sum_{i=1}^{p} \big(v(X_{(i)}) - v(X_{(i+1)})\big)\, x_{(i)} = \sum_{i=1}^{p} \big[x_{(i)} - x_{(i-1)}\big]\, v(X_{(i)})$

     where $x_{(1)} \le \ldots \le x_{(p)}$, $X_{(i)} = \{(i), \ldots, (p)\}$ is the set of criteria with the $p - i + 1$ largest values, and by convention $x_{(0)} = 0$ and $v(X_{(p+1)}) = v(\emptyset) = 0$.
     The parameters ($\lambda$ and $v$) allow to model the DM's preferences.
     ⇒ incremental elicitation of these parameters.
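The two aggregators of this slide can be checked numerically. The following sketch is illustrative and not part of the talk: direct implementations of $OWA_\lambda$ and of the Choquet integral, with the capacity $v$ passed as a Python function on frozensets of criterion indices.

```python
def owa(x, lam):
    """OWA_lambda(x) = sum_i lam[i] * x_(i), with x_(1) <= ... <= x_(p)."""
    xs = sorted(x)
    return sum(l * xi for l, xi in zip(lam, xs))

def choquet(x, v):
    """C_v(x) = sum_i (x_(i) - x_(i-1)) * v(X_(i)), with x_(0) = 0.

    v maps a frozenset of criterion indices to a number; X_(i) is the set
    of criteria whose value is among the p - i + 1 largest.
    """
    p = len(x)
    order = sorted(range(p), key=lambda i: x[i])  # indices by increasing value
    total, prev = 0.0, 0.0
    for rank in range(p):
        X_i = frozenset(order[rank:])             # criteria ranked >= rank
        total += (x[order[rank]] - prev) * v(X_i)
        prev = x[order[rank]]
    return total
```

With an additive capacity $v(S) = |S|/p$, the Choquet integral reduces to the arithmetic mean, which gives a quick sanity check.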

  4. Interactive preference elicitation: example
     [Figure: two plots, the alternatives in the criterion space $(c_1, c_2)$ and the admissible weights in the parameter space $(\alpha_1, \alpha_2)$; each answer restricts the admissible region]

  5. Interactive preference elicitation: example
     [Figure: same example, with one query answered incorrectly]
     ⇒ Sensitivity to possible errors ($f_\alpha(\text{red}) = 0.4 < f_\alpha(\text{green}) = 0.6$)

  6. Interactive preference elicitation: example
     [Figure: same example, with one query answered incorrectly]
     ⇒ Sensitivity to possible errors ($f_\alpha(\text{red}) = 0.4 < f_\alpha(\text{green}) = 0.6$)
     ⇒ Bayesian incremental preference elicitation.

  7. Bayesian incremental preference elicitation
     Associate a probability distribution to model the uncertainty about the set of possible parameters.
     Bayesian updating of the probability distribution according to the DM's answers.
     [Figure: probability distribution over the parameter space $(\alpha_1, \alpha_2)$]

  8. Related work
     Incremental elicitation of linear and non-linear models [Wang and Boutilier 03, Benabbou et al. 17].
     Bayesian elicitation of utilities [Chajewska et al. 00, Viappiani and Boutilier 03, Guo and Sanner 10].
     Our contribution: Bayesian incremental elicitation of the parameters of rank-dependent (non-linear) aggregation functions: OWA and 2-additive Choquet integrals.

  9. Bayesian linear regression method
     Bayesian updating:

     $p(w \mid y^{(i)}) \propto \underbrace{p(y^{(i)} \mid w)}_{\mathcal{B}(f(w,\, d^{(i)}))} \; p(w \mid y^{(1)}, \ldots, y^{(i-1)})$

     where $y^{(i)} \in \{0, 1\}$ is the answer to question $i$ ($a^{(i)} \succ b^{(i)}$?) and $d^{(i)} = a^{(i)} - b^{(i)}$.
     → Hard to compute analytically.
     Probit model: $z^{(i)} = w^T d^{(i)} + \varepsilon^{(i)}$

  10. Bayesian linear regression method
     Probit model: $z^{(i)} = w^T d^{(i)} + \varepsilon^{(i)}$
     Reformulation of $p(w \mid y^{(i)})$:

     $p(w \mid y^{(i)}) = \int p(w, z \mid y^{(i)})\, dz = \int p(w \mid z)\, p(z \mid y^{(i)})\, dz$

     ⇒ if the prior is a Gaussian and $\varepsilon^{(i)} \sim \mathcal{N}(0, 1)$ then $w \mid y^{(i)} \sim \mathcal{N}(\mu, S)$ [Albert and Chib 93]
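The talk uses the Gaussian closed form of [Albert and Chib 93]. As a simpler illustration of the same probit update, here is a sketch (names and the scalar setting are illustrative) that discretizes a one-dimensional parameter $w$ on a grid and applies Bayes' rule directly, with the standard normal CDF as likelihood.

```python
import math

def phi(t):
    """Standard normal CDF, used as the probit link."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def update(prior, grid, d, y):
    """One Bayesian update p(w | y) ∝ p(y | w) p(w) on a discretized 1-D grid.

    y = 1 means the DM answered 'a is preferred to b'; d = a - b is the
    (here scalar) difference of the two alternatives' features.
    """
    like = [phi(w * d) if y == 1 else 1.0 - phi(w * d) for w in grid]
    post = [l * p for l, p in zip(like, prior)]
    z = sum(post)
    return [p / z for p in post]
```

Starting from a uniform prior, an answer $y = 1$ with $d > 0$ shifts the posterior mass toward positive values of $w$, as expected.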

  11. Basis functions for rank-dependent models
     Use basis functions to extend linear regression to non-linear functions:

     $f_w(x) = \sum_{i=1}^{q} w_i \, g_i(x)$

     $(w_1, \ldots, w_q)$ is a weighting vector.
     $\{g_1(x), \ldots, g_q(x)\}$ is a set of non-linear functions defined from criterion values.
     What are the basis functions for OWA and Choquet integrals?

  12. Basis functions for rank-dependent models
     OWA with decreasing weights:

     $OWA_\lambda(x) = \sum_{i=1}^{p} \lambda_i \, x_{(i)} = \sum_{i=1}^{p-1} (\lambda_i - \lambda_{i+1}) L_i(x) + \lambda_p L_p(x)$

     with $L(x) = \big(x_{(1)},\; x_{(1)} + x_{(2)},\; \ldots,\; x_{(1)} + \cdots + x_{(p)}\big)$
     → $w_i = \lambda_i - \lambda_{i+1}$ for $i \in \{1, \ldots, p\}$, with $\lambda_{p+1} = 0$
     → $g_i(x) = \sum_{k=1}^{i} x_{(k)}$
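The rewriting above is an Abel summation and can be verified numerically. This is an illustrative sketch (not from the talk) comparing the direct definition of OWA with its basis-function form, valid when the weights $\lambda$ are non-increasing.

```python
def owa_direct(x, lam):
    """OWA_lambda(x) = sum_i lam[i] * x_(i), with components sorted increasingly."""
    xs = sorted(x)
    return sum(l * xi for l, xi in zip(lam, xs))

def owa_basis(x, lam):
    """Same OWA as f_w(x) = sum_i w_i g_i(x), with w_i = lam_i - lam_{i+1}
    (lam_{p+1} = 0) and g_i(x) = sum of the i smallest components of x."""
    p = len(x)
    xs = sorted(x)
    g = [sum(xs[:i + 1]) for i in range(p)]                    # cumulative sums L_i(x)
    w = [lam[i] - (lam[i + 1] if i + 1 < p else 0.0) for i in range(p)]
    return sum(wi * gi for wi, gi in zip(w, g))
```

The decomposition matters because the $g_i$ are fixed non-linear functions of $x$, so eliciting the OWA reduces to linear regression on the weights $w_i$.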

  13. Basis functions for rank-dependent models
     2-additive Choquet integrals: $g_i(x) = C_{v_i}(x)$
     Unanimity games, for $i \in \{1, \ldots, \frac{p(p+1)}{2}\}$:

     $v_i(X) = \begin{cases} 1 & \text{if } Y_i \subseteq X \\ 0 & \text{otherwise} \end{cases}$

     where $Y_i \subseteq C$ is any nonempty subset of size at most 2.
     Conjugates of unanimity games, for $i \in \{\frac{p(p+1)}{2} + 1, \ldots, p^2\}$:

     $v_i(X) = \begin{cases} 1 & \text{if } Y_i \cap X \neq \emptyset \\ 0 & \text{otherwise} \end{cases}$

  14. Basis functions for Choquet: example
     For 3 criteria, any $v$ is written as a convex combination of $v_1, \ldots, v_9$:

         X        {1}  {2}  {3}  {1,2}  {1,3}  {2,3}
         v_1(X)    1    0    0    1      1      0
         v_2(X)    0    1    0    1      0      1
         v_3(X)    0    0    1    0      1      1
         v_4(X)    0    0    0    1      0      0
         v_5(X)    0    0    0    0      1      0
         v_6(X)    0    0    0    0      0      1
         v_7(X)    1    1    0    1      1      1
         v_8(X)    1    0    1    1      1      1
         v_9(X)    0    1    1    1      1      1

     Example capacity:

         X        {1}  {2}  {3}  {1,2}  {1,3}  {2,3}
         v(X)     0.1  0.2  0.3  0.5    0.5    0.6

     $v = 0.1\, v_2 + 0.3\, v_3 + 0.3\, v_4 + 0.1\, v_5 + 0.1\, v_6 + 0.1\, v_7$
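The decomposition claimed on this slide can be checked mechanically. Below is an illustrative sketch that builds the nine basis capacities for $p = 3$ ($v_1$ to $v_6$ unanimity games on the nonempty subsets of size at most 2, $v_7$ to $v_9$ conjugates of the pair games) and evaluates the convex combination.

```python
from itertools import combinations

def unanimity(Y):
    """Unanimity game: v(X) = 1 iff Y is a subset of X."""
    return lambda X: 1.0 if Y <= X else 0.0

def conjugate(Y):
    """Conjugate of a unanimity game: v(X) = 1 iff Y intersects X."""
    return lambda X: 1.0 if Y & X else 0.0

# The 9 basis capacities of the slide, in the order v_1 .. v_9.
singles = [frozenset({i}) for i in (1, 2, 3)]
pairs = [frozenset(s) for s in combinations((1, 2, 3), 2)]
basis = [unanimity(Y) for Y in singles + pairs] + [conjugate(Y) for Y in pairs]

# Convex combination from the slide: v = 0.1 v2 + 0.3 v3 + 0.3 v4
#                                        + 0.1 v5 + 0.1 v6 + 0.1 v7
coeffs = [0.0, 0.1, 0.3, 0.3, 0.1, 0.1, 0.1, 0.0, 0.0]

def v(X):
    return sum(c * vi(X) for c, vi in zip(coeffs, basis))
```

Evaluating `v` on every nonempty proper subset of {1, 2, 3} reproduces the second table of the slide, and the coefficients sum to 1 as a convex combination should.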

  15. Algorithm
     $\mathcal{A}$: set of alternatives; $\delta$: acceptance threshold; $p_0$: Gaussian prior distribution on vectors $w$
     1. Choose 2 alternatives $a^*$ and $b^*$
     2. Ask the DM if $a^*$ is preferred to $b^*$
     3. $y^{(k)} \leftarrow 1$ if the answer is yes, 0 otherwise
     4. $p(w) \leftarrow p(w \mid y^{(k)})$ (Bayesian updating)
     5. If the stopping criterion holds then STOP, else go to 1

  16. Elicitation by expected regret minimization
     Pairwise expected regret:

     $PER(a, b, p) = \int \max\{0,\; f_w(b) - f_w(a)\}\; p(w)\, dw$

     Max expected regret: $MER(a, \mathcal{A}, p) = \max_{b \in \mathcal{A}} PER(a, b, p)$
     Minimax expected regret: $MMER(\mathcal{A}, p) = \min_{a \in \mathcal{A}} MER(a, \mathcal{A}, p)$
     Choice of the query to ask:
     $a^* \leftarrow \arg\min_{a \in \mathcal{A}} MER(a, \mathcal{A}, p)$
     $b^* \leftarrow \arg\max_{b \in \mathcal{A}} PER(a^*, b, p)$
     Compare $a^*$ to $b^*$.
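These three quantities and the query-selection rule are easy to sketch when the integral over $p(w)$ is approximated by an average over samples drawn from the current posterior. The code below is an illustrative approximation, not the talk's exact computation; `fw_samples` is a list of functions $f_w$, one per sampled parameter vector.

```python
def per(a, b, fw_samples):
    """Pairwise expected regret, approximated over posterior samples."""
    return sum(max(0.0, fw(b) - fw(a)) for fw in fw_samples) / len(fw_samples)

def mer(a, A, fw_samples):
    """Max expected regret of a over the set of alternatives A."""
    return max(per(a, b, fw_samples) for b in A)

def mmer(A, fw_samples):
    """Minimax expected regret over A."""
    return min(mer(a, A, fw_samples) for a in A)

def next_query(A, fw_samples):
    """Query-selection rule of the slide: the MMER-optimal a*,
    compared to its strongest adversary b*."""
    a_star = min(A, key=lambda a: mer(a, A, fw_samples))
    b_star = max(A, key=lambda b: per(a_star, b, fw_samples))
    return a_star, b_star
```

With two extreme linear samples and three alternatives, the balanced alternative has the smallest max expected regret, so it is the one proposed to the DM.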

  17. Algorithm
     $\mathcal{A}$: set of alternatives; $\delta$: acceptance threshold; $p_0$: Gaussian prior distribution on vectors $w$
     1. $a^* \leftarrow \arg\min_{a \in \mathcal{A}} MER(a, \mathcal{A}, p)$
     2. $b^* \leftarrow \arg\max_{b \in \mathcal{A}} PER(a^*, b, p)$
     3. Ask the DM if $a^*$ is preferred to $b^*$
     4. $y^{(k)} \leftarrow 1$ if the answer is yes, 0 otherwise
     5. $p(w) \leftarrow p(w \mid y^{(k)})$ (Bayesian updating)
     6. If $MMER(\mathcal{A}, p) \le \delta$ then STOP, else go to 1
     7. Return $a^* \leftarrow \arg\min_{a \in \mathcal{A}} MER(a, \mathcal{A}, p)$
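Putting the pieces together, here is a self-contained toy simulation of this loop, under simplifying assumptions that are not in the talk: a linear $f_w$ with a single scalar parameter discretized on a grid (instead of the Gaussian posterior of slide 10), probit likelihoods, and a noise-free simulated DM. All names and the instance are illustrative.

```python
import math

def phi(t):
    """Standard normal CDF (probit link)."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def elicit(A, w_true, delta=0.01, max_queries=20, sigma=0.1):
    grid = [i / 100 for i in range(101)]        # candidate values of w
    post = [1 / len(grid)] * len(grid)          # uniform prior

    def f(w, x):                                # toy linear model
        return w * x[0] + (1 - w) * x[1]

    def per(a, b):                              # expected regret on the grid
        return sum(p * max(0.0, f(w, b) - f(w, a)) for w, p in zip(grid, post))

    def mer(a):
        return max(per(a, b) for b in A)

    for _ in range(max_queries):
        a = min(A, key=mer)                     # step 1: MMER-optimal alternative
        if mer(a) <= delta:                     # step 6: stopping criterion
            break
        b = max(A, key=lambda c: per(a, c))     # step 2: strongest adversary
        y = 1 if f(w_true, a) > f(w_true, b) else 0   # steps 3-4: simulated DM
        like = [phi((f(w, a) - f(w, b)) / sigma) for w in grid]
        if y == 0:
            like = [1 - l for l in like]
        post = [l * p for l, p in zip(like, post)]     # step 5: Bayesian updating
        z = sum(post)
        post = [p / z for p in post]
    return min(A, key=mer)                      # step 7: recommended alternative
```

On a small instance with hidden weight $w = 0.8$, the truthful answers concentrate the posterior on high values of $w$ and the loop recommends the alternative that is best there.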

  18. Bayesian interactive preference elicitation: back to example
     [Figure: set of alternatives in the criterion space $(c_1, c_2)$]

  19.–24. Bayesian interactive preference elicitation: back to example
     Query: $a^* \succ b^*$?
     [Figures: at each of the six steps, the current $a^*$ and $b^*$ are highlighted in the criterion space and compared]

  25. Bayesian interactive preference elicitation: back to example
     Optimal solution: $a^*$
     [Figure: the returned alternative $a^*$ in the criterion space]

  26. Experimental results

  27. Conclusion and perspectives
     Adaptive preference elicitation method for multicriteria decision support with OWA and Choquet integrals.
     The approach is tolerant to local inconsistencies of the DM.
     The approach is also tolerant to inconsistencies that may result from the inability of the model to represent the DM's preferences.
     Future work:
     Extend the approach to larger classes of capacities.
     Extend the approach to combinatorial domains.
