Goals and Preferences


1. Goals and Preferences
Alice . . . went on “Would you please tell me, please, which way I ought to go from here?” “That depends a good deal on where you want to get to,” said the Cat. “I don’t much care where —” said Alice. “Then it doesn’t matter which way you go,” said the Cat.
Lewis Carroll, 1832–1898, Alice’s Adventures in Wonderland, 1865, Chapter 6
©D. Poole and A. Mackworth 2010, Artificial Intelligence, Lecture 9.1

2. Learning Objectives
At the end of the class you should be able to: justify the use and semantics of utility; estimate the utility of an outcome; build a decision network for a domain; compute the optimal policy of a decision network.

3. Preferences
Actions result in outcomes. Agents have preferences over outcomes. A rational agent will do the action that has the best outcome for them. Sometimes agents don’t know the outcomes of the actions, but they still need to compare actions. Agents have to act (doing nothing is often an action).

4. Preferences Over Outcomes
If o₁ and o₂ are outcomes:
o₁ ⪰ o₂ means o₁ is at least as desirable as o₂.
o₁ ∼ o₂ means o₁ ⪰ o₂ and o₂ ⪰ o₁.
o₁ ≻ o₂ means o₁ ⪰ o₂ and not o₂ ⪰ o₁.
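To make the three relations concrete, here is a minimal sketch in Python (an addition of this write-up, not from the slides); `weakly_prefers` is a hypothetical stand-in for the agent’s o₁ ⪰ o₂ judgement, and the other two relations are derived from it exactly as defined above.

```python
def weakly_prefers(o1, o2):
    """Stub for o1 ⪰ o2; replace with a real preference model."""
    raise NotImplementedError

def indifferent(o1, o2):
    """o1 ∼ o2: each outcome is at least as desirable as the other."""
    return weakly_prefers(o1, o2) and weakly_prefers(o2, o1)

def strictly_prefers(o1, o2):
    """o1 ≻ o2: o1 is weakly preferred, but o2 is not weakly preferred back."""
    return weakly_prefers(o1, o2) and not weakly_prefers(o2, o1)
```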

5. Lotteries
An agent may not know the outcomes of their actions, but only have a probability distribution over the outcomes. A lottery is a probability distribution over outcomes. It is written [p₁ : o₁, p₂ : o₂, ..., pₖ : oₖ], where the oᵢ are outcomes and pᵢ ≥ 0 such that Σᵢ pᵢ = 1. The lottery specifies that outcome oᵢ occurs with probability pᵢ. When we talk about outcomes, we will include lotteries.
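A lottery is easy to represent directly; the sketch below (an assumption of this write-up, not code from the slides) stores it as a list of (probability, outcome) pairs and checks that it is a valid distribution.

```python
import math

def make_lottery(*pairs):
    """Build a lottery [p1:o1, ..., pk:ok]; probabilities must form a distribution."""
    probs = [p for p, _ in pairs]
    assert all(p >= 0 for p in probs), "probabilities must be non-negative"
    assert math.isclose(sum(probs), 1.0), "probabilities must sum to 1"
    return list(pairs)

# The example lottery from the later slides: even chance of $0 or $2,000,000.
money_gamble = make_lottery((0.5, 0), (0.5, 2_000_000))
```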

6–8. Properties of Preferences
Completeness: agents have to act, so they must have preferences: ∀o₁ ∀o₂, o₁ ⪰ o₂ or o₂ ⪰ o₁.
Transitivity: preferences must be transitive: if o₁ ⪰ o₂ and o₂ ≻ o₃ then o₁ ≻ o₃ (similarly for other mixtures of ≻ and ⪰).
Rationale: otherwise we could have o₁ ⪰ o₂, o₂ ≻ o₃ and o₃ ⪰ o₁. If the agent is prepared to pay to get o₂ instead of o₃, is happy to have o₁ instead of o₂, and is happy to have o₃ instead of o₁, then it can be led around the cycle o₃ → o₂ → o₁ → o₃ indefinitely, paying each time: a money pump.
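To see the money pump concretely, the following illustrative simulation (an invented scenario, not from the slides) runs an agent with the cyclic preferences above around the o₃ → o₂ → o₁ → o₃ loop; it pays a fee on every lap and ends up where it started, only poorer.

```python
# Illustrative money pump: the agent's (intransitive) preferences are
# o1 ⪰ o2, o2 ≻ o3, o3 ⪰ o1. It pays a fee to upgrade o3 -> o2, and accepts
# the swaps o2 -> o1 and o1 -> o3 at no cost.
def money_pump(rounds=3, fee=1.0):
    holding, wealth = "o3", 0.0
    for _ in range(rounds):
        holding, wealth = "o2", wealth - fee   # o2 ≻ o3: pays to trade up
        holding = "o1"                          # o1 ⪰ o2: free swap accepted
        holding = "o3"                          # o3 ⪰ o1: free swap accepted
    return wealth  # strictly decreases every round

print(money_pump())  # -3.0: back to holding o3, but poorer
```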

9. Properties of Preferences (cont.)
Monotonicity: an agent prefers a larger chance of getting a better outcome to a smaller chance: if o₁ ≻ o₂ and p > q then [p : o₁, 1 − p : o₂] ≻ [q : o₁, 1 − q : o₂].

10. Consequence of axioms
Suppose o₁ ≻ o₂ and o₂ ≻ o₃. Consider whether the agent would prefer o₂ or the lottery [p : o₁, 1 − p : o₃], for different values of p ∈ [0, 1]. Plot which one is preferred as a function of p.
[Figure: preference as a function of p on [0, 1]; o₂ is preferred for smaller p, the lottery for larger p.]
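The sweep over p can be sketched numerically; the utilities below are invented for illustration and use the expected-utility rule that the later slides justify.

```python
# Assumed utilities for illustration only: u(o1)=1.0, u(o2)=0.6, u(o3)=0.0.
u = {"o1": 1.0, "o2": 0.6, "o3": 0.0}

def prefers_lottery(p):
    """Does the agent prefer [p: o1, 1-p: o3] to o2 under these utilities?"""
    return p * u["o1"] + (1 - p) * u["o3"] > u["o2"]

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(p, "lottery" if prefers_lottery(p) else "o2")
# The preference flips at p = u(o2) = 0.6, which is exactly the indifference
# point that the Continuity property on the next slide guarantees.
```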

11. Properties of Preferences (cont.)
Continuity: suppose o₁ ≻ o₂ and o₂ ≻ o₃; then there exists a p ∈ [0, 1] such that o₂ ∼ [p : o₁, 1 − p : o₃].

12. Properties of Preferences (cont.)
Decomposability ("no fun in gambling"): an agent is indifferent between lotteries that have the same probabilities over the same outcomes. This includes lotteries over lotteries. For example: [p : o₁, 1 − p : [q : o₂, 1 − q : o₃]] ∼ [p : o₁, (1 − p)q : o₂, (1 − p)(1 − q) : o₃].
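Decomposability amounts to saying that a nested lottery can be flattened without changing anything the agent cares about; a minimal sketch (an assumption of this write-up, using the (probability, outcome) representation from above):

```python
def flatten(lottery):
    """Flatten a possibly nested lottery into an equivalent flat one."""
    flat = []
    for p, o in lottery:
        if isinstance(o, list):                      # o is itself a lottery
            flat.extend((p * q, o2) for q, o2 in flatten(o))
        else:
            flat.append((p, o))
    return flat

nested = [(0.5, "o1"), (0.5, [(0.4, "o2"), (0.6, "o3")])]
print(flatten(nested))   # [(0.5, 'o1'), (0.2, 'o2'), (0.3, 'o3')]
```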

13. Properties of Preferences (cont.)
Substitutability: if o₁ ∼ o₂ then the agent is indifferent between lotteries that differ only by o₁ and o₂: [p : o₁, 1 − p : o₃] ∼ [p : o₂, 1 − p : o₃].

14. Alternative Axiom for Substitutability
Substitutability: if o₁ ⪰ o₂ then the agent weakly prefers lotteries that contain o₁ instead of o₂, everything else being equal. That is, for any number p and outcome o₃: [p : o₁, (1 − p) : o₃] ⪰ [p : o₂, (1 − p) : o₃].

15–16. What we would like
We would like a measure of preference that can be combined with probabilities, so that value([p : o₁, 1 − p : o₂]) = p × value(o₁) + (1 − p) × value(o₂).
Money does not act like this. What would you prefer: $1,000,000 or the lottery [0.5 : $0, 0.5 : $2,000,000]?
It may seem that preferences are too complex and multi-faceted to be represented by single numbers.
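For instance, under a concave utility for money (the square-root shape below is purely an assumption of this write-up), the sure $1,000,000 beats the lottery even though both have the same expected monetary value.

```python
import math

def utility(dollars):
    # Assumed concave (risk-averse) utility; not specified by the slides.
    return math.sqrt(dollars)

sure_thing = utility(1_000_000)                       # 1000.0
gamble = 0.5 * utility(0) + 0.5 * utility(2_000_000)  # ≈ 707.1

print(sure_thing > gamble)  # True: same expected money, different expected utility
```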

17. Theorem
If preferences follow the preceding properties, then preferences can be measured by a function utility : outcomes → [0, 1] such that o₁ ⪰ o₂ if and only if utility(o₁) ≥ utility(o₂). Utilities are linear with probabilities: utility([p₁ : o₁, p₂ : o₂, ..., pₖ : oₖ]) = Σᵢ pᵢ × utility(oᵢ).
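The linearity property translates directly into code; the utility table below is an invented example.

```python
def lottery_utility(lottery, utility):
    """utility([p1:o1, ..., pk:ok]) = sum_i p_i * utility(o_i)."""
    return sum(p * utility[o] for p, o in lottery)

utility = {"best": 1.0, "middling": 0.6, "worst": 0.0}
print(lottery_utility([(0.3, "best"), (0.5, "middling"), (0.2, "worst")], utility))
# 0.3*1.0 + 0.5*0.6 + 0.2*0.0 = 0.6
```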

18–20. Proof
If all outcomes are equally preferred, set utility(oᵢ) = 0 for all outcomes oᵢ. Otherwise, suppose the best outcome is best and the worst outcome is worst. For any outcome oᵢ, define utility(oᵢ) to be the number uᵢ such that oᵢ ∼ [uᵢ : best, 1 − uᵢ : worst]. This exists by the Continuity property.
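The construction in the proof can be sketched as a bisection search against the agent’s preferences; `weakly_prefers` below is a hypothetical oracle for o ⪰ o′ judgements (outcomes include lotteries, as on slide 5), and the single flip of preference as u grows is what Monotonicity and Continuity guarantee.

```python
def utility_of(o_i, weakly_prefers, best, worst, tol=1e-6):
    """Find u_i with o_i ∼ [u_i: best, 1-u_i: worst] by bisection on u_i."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        u = (lo + hi) / 2
        if weakly_prefers(o_i, [(u, best), (1 - u, worst)]):
            lo = u   # o_i still at least as good: the lottery needs more weight on best
        else:
            hi = u   # the lottery is now better: u overshoots u_i
    return (lo + hi) / 2
```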

21–23. Proof (cont.)
Suppose o₁ ⪰ o₂ and utility(oᵢ) = uᵢ. Then, by Substitutability, [u₁ : best, 1 − u₁ : worst] ⪰ [u₂ : best, 1 − u₂ : worst], which by Completeness and Monotonicity implies u₁ ≥ u₂.

24. Proof (cont.)
Suppose p = utility([p₁ : o₁, p₂ : o₂, ..., pₖ : oₖ]) and utility(oᵢ) = uᵢ. We know oᵢ ∼ [uᵢ : best, 1 − uᵢ : worst]. By Substitutability, we can replace each oᵢ by [uᵢ : best, 1 − uᵢ : worst], so
p = utility([p₁ : [u₁ : best, 1 − u₁ : worst], ..., pₖ : [uₖ : best, 1 − uₖ : worst]]).

25. Proof (cont.)
By Decomposability, this is equivalent to
p = utility([p₁u₁ + ··· + pₖuₖ : best, p₁(1 − u₁) + ··· + pₖ(1 − uₖ) : worst]).
Thus, by the definition of utility, p = p₁ × u₁ + ··· + pₖ × uₖ.
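A small numeric check of this chain of steps, with invented probabilities and utilities: substitute each outcome by its standard lottery, flatten by Decomposability, and the probability of best equals Σᵢ pᵢuᵢ.

```python
# Invented utilities u_i and probabilities p_i, used only to verify the algebra.
u = {"a": 0.9, "b": 0.4, "c": 0.1}
lottery = [(0.2, "a"), (0.5, "b"), (0.3, "c")]

# Substitute each outcome by [u_i: best, 1-u_i: worst], then flatten.
substituted = [(p, [(u[o], "best"), (1 - u[o], "worst")]) for p, o in lottery]
flat = [(p * q, o) for p, inner in substituted for q, o in inner]

prob_best = sum(p for p, o in flat if o == "best")
expected_utility = sum(p * u[o] for p, o in lottery)
print(prob_best, expected_utility)   # both 0.41
```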

26. Utility as a function of money
[Figure: utility (vertical axis, 0 to 1) as a function of money (horizontal axis, $0 to $2,000,000), showing a concave risk-averse curve, a straight risk-neutral line, and a convex risk-seeking curve.]
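The three curve shapes imply different answers to the earlier $1,000,000 question; the functional forms below are illustrative assumptions, not taken from the figure.

```python
import math

M = 2_000_000
curves = {
    "risk averse": lambda x: math.sqrt(x / M),   # concave
    "risk neutral": lambda x: x / M,             # linear
    "risk seeking": lambda x: (x / M) ** 2,      # convex
}

for name, u in curves.items():
    sure = u(1_000_000)
    gamble = 0.5 * u(0) + 0.5 * u(M)
    choice = "sure $1M" if sure > gamble else ("indifferent" if sure == gamble else "lottery")
    print(f"{name}: {choice}")
# risk averse -> sure $1M, risk neutral -> indifferent, risk seeking -> lottery
```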

27. Possible utility as a function of money
Someone who really wants a toy worth $30, but who would also like one worth $20.
[Figure: utility (vertical axis, 0 to 1) as a function of dollars (horizontal axis, 0 to 100).]
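One way such a utility might be sketched in code (the jump locations and sizes below are purely hypothetical, chosen only to match the story of wanting the $30 toy more than the $20 one):

```python
def toy_utility(dollars):
    """Hypothetical step-like utility: value jumps once each toy becomes affordable."""
    u = 0.002 * min(dollars, 100)   # small background value of money
    if dollars >= 20:
        u += 0.25                   # can afford the $20 toy
    if dollars >= 30:
        u += 0.45                   # can afford the $30 toy (wanted more)
    if dollars >= 50:
        u += 0.10                   # can afford both toys
    return min(u, 1.0)

print([round(toy_utility(d), 2) for d in range(0, 101, 10)])
```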
