
Decision-Making
Paolo Turrini
Department of Computing, Imperial College London
Introduction to Artificial Intelligence (2nd Part)


Humans and Expected Utility

Tversky and Kahneman's Prospect Theory: humans have complex utility estimates, involving risk aversion and satisfaction levels.

Warning, controversial statement: Prospect Theory does not refute the principle of maximization of expected utility. We can incorporate risk aversion and satisfaction as properties of outcomes.

[Figure: typical empirical data]
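
To make that warning concrete, here is a minimal Python sketch (not from the slides) of a prospect-theory-style value function, using the parameter estimates reported by Tversky and Kahneman (1992). It shows how risk aversion over gains and loss aversion can be folded into the utilities of the outcomes themselves:

```python
# A sketch of a prospect-theory-style value function. Outcomes are valued
# relative to a reference point (here 0); sensitivity diminishes with
# magnitude, and losses loom larger than gains (loss aversion).
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # concave for gains -> risk aversion
    return -lam * ((-x) ** beta)     # convex and steeper for losses

# Losing 100 hurts more than winning 100 pleases:
print(prospect_value(100))    # ~57.5
print(prospect_value(-100))   # ~-129.4
```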

Preferences

A preference relation is a relation ⪰ ⊆ L × L over the set of lotteries.

A ⪰ B means that lottery A is weakly preferred to lottery B.

A ≻ B = (A ⪰ B and not B ⪰ A) means that lottery A is strictly preferred to lottery B.

A ∼ B = (A ⪰ B and B ⪰ A) means that lottery A is valued the same as lottery B (indifference).
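
A minimal sketch of how the derived relations follow from the weak one, assuming weak preference is given as a Boolean function weak(A, B) meaning A ⪰ B (the names are illustrative):

```python
def strictly_prefers(weak, A, B):
    # A ≻ B: A ⪰ B and not B ⪰ A
    return weak(A, B) and not weak(B, A)

def indifferent(weak, A, B):
    # A ∼ B: A ⪰ B and B ⪰ A
    return weak(A, B) and weak(B, A)
```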

Rational preferences

Let A, B, C be three states and let p, q ∈ [0, 1]. A preference relation ⪰ makes sense if it satisfies the following constraints:

Orderability: (A ≻ B) ∨ (B ∼ A) ∨ (B ≻ A)
Transitivity: (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C)
Continuity: A ≻ B ≻ C ⇒ ∃p [p, A; 1 − p, C] ∼ B
Substitutability: A ∼ B ⇒ [p, A; 1 − p, C] ∼ [p, B; 1 − p, C]
Monotonicity: A ≻ B ⇒ (p ≥ q ⇔ [p, A; 1 − p, B] ⪰ [q, A; 1 − q, B])
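
For a finite set of alternatives, the non-lottery constraints are mechanically checkable. A sketch for two of them, assuming the weak relation is given as a set rel of ordered pairs (illustrative names, not from the slides):

```python
from itertools import product

def strict(rel, A, B):
    # A ≻ B under the weak relation rel
    return (A, B) in rel and (B, A) not in rel

def orderable(rel, alts):
    # (A ≻ B) ∨ (B ∼ A) ∨ (B ≻ A): every pair is comparable
    return all(A == B or (A, B) in rel or (B, A) in rel
               for A, B in product(alts, repeat=2))

def transitive(rel, alts):
    # (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C)
    return all(strict(rel, A, C)
               for A, B, C in product(alts, repeat=3)
               if strict(rel, A, B) and strict(rel, B, C))

# A cyclic relation is comparable everywhere yet fails transitivity:
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
print(orderable(cycle, "ABC"))   # True
print(transitive(cycle, "ABC"))  # False
```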

Rational preferences contd.

Violating the constraints leads to self-evident irrationality. Take transitivity, and suppose an agent's strict preferences form a cycle: A ≻ B, B ≻ C, C ≻ A.

If B ≻ C, then an agent who has C would pay (say) 1 cent to get B.
If A ≻ B, then an agent who has B would pay (say) 1 cent to get A.
If C ≻ A, then an agent who has A would pay (say) 1 cent to get C.

The agent ends up holding C again, three cents poorer, and the cycle can be repeated forever: a "money pump".
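
A toy simulation of the money pump (names and price are illustrative):

```python
# Each swap looks like an improvement to the agent with cyclic preferences
# A ≻ B, B ≻ C, C ≻ A, yet every three swaps return it to where it started.
def money_pump(rounds, price_cents=1):
    wealth, holding = 0, "C"
    upgrade = {"C": "B", "B": "A", "A": "C"}
    for _ in range(rounds):
        holding = upgrade[holding]
        wealth -= price_cents
    return holding, wealth

print(money_pump(3))    # ('C', -3): back to C, three cents poorer
print(money_pump(300))  # ('C', -300): the pumping never has to stop
```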

Representation Theorem

Theorem (Ramsey, 1931; von Neumann and Morgenstern, 1944). A preference relation ⪰ makes sense if and only if there exists a real-valued function u such that:

A ⪰ B ⇔ u(A) ≥ u(B)

u([p₁, S₁; …; pₙ, Sₙ]) = Σᵢ pᵢ u(Sᵢ)

[⇐] By contraposition: e.g., pick transitivity and show that if the relation is not transitive there is no way of associating numbers to outcomes.

[⇒] We use the axioms to show that there are infinitely many functions that satisfy them, but they are all "equivalent" (they differ only by positive affine transformations) to a unique real-valued utility function.
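
The second condition is exactly the expected-utility formula, and it is one line of code. A sketch, with an illustrative utility function on outcomes:

```python
def expected_utility(lottery, u):
    # u([p1, S1; ...; pn, Sn]) = Σ_i p_i · u(S_i)
    return sum(p * u(s) for p, s in lottery)

u = {"gold": 1000, "pit": -1000, "nothing": 0}.get   # illustrative utilities
L = [(0.3, "gold"), (0.7, "nothing")]
print(expected_utility(L, u))                        # 300.0
```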

Michael Maschler, Eilon Solan and Shmuel Zamir, Game Theory (Ch. 2), Cambridge University Press, 2013.

The main message: give me any order on outcomes that makes sense and I can turn it into a utility function!
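
In the finite case, over sure outcomes, the construction can be as simple as ranking. A sketch (the full theorem additionally pins down behaviour on lotteries, and the resulting u only up to positive affine transformation):

```python
def utility_from_order(outcomes_worst_to_best):
    # Rank-based utilities: any strictly increasing relabelling of these
    # numbers represents the same preferences over sure outcomes.
    return {s: i for i, s in enumerate(outcomes_worst_to_best)}

print(utility_from_order(["pit", "nothing", "gold"]))
# {'pit': 0, 'nothing': 1, 'gold': 2}
```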

Multicriteria decision-making

Certain outcomes seem difficult to compare:

what factors are more important?
have we considered all the relevant ones?
do factors interfere with one another?

In other situations the utility function may be updated because of new incoming information (e.g., when evaluating non-terminal positions in a long extensive game like Chess or Go).

[Figure: Deep Blue–Kasparov 1996, final game. Material favours Black, but the position is hopeless.]

How can we handle utility functions of many variables X₁, …, Xₙ? E.g., what is U(king safety, material advantage, control of the centre)?

We need to find ways to compare bundles of factors, but this might be difficult in general (strict dominance, stochastic dominance).

Search methods can avoid multicriteria comparisons altogether: Monte Carlo Tree Search generates random endgames.

We assume there is a way of assigning a utility function to bundles of factors, and therefore of comparing them; one common way is sketched below.
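
One common assumption, not stated on the slides, is that the utility of a bundle is an additive weighted sum of its factors, as in classic chess evaluation functions. The weights and feature names below are purely illustrative:

```python
# Illustrative weights for the factors mentioned on the slide.
WEIGHTS = {"king_safety": 0.5, "material_advantage": 1.0, "centre_control": 0.3}

def bundle_utility(features, weights=WEIGHTS):
    # U(x1, ..., xn) = Σ_i w_i · x_i : an additive multiattribute utility
    return sum(weights[f] * v for f, v in features.items())

print(bundle_utility({"king_safety": -2.0,
                      "material_advantage": 3.0,
                      "centre_control": 1.0}))   # 2.3
```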

Rationality and expected utility

"A person's behavior is rational if it is in his best interests, given his information."
— Robert J. Aumann, Nobel Prize winner in Economics

Choose an action that maximises the expected utility.
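
Aumann's slogan as code: a sketch of picking the action with maximal expected utility, assuming each available action is mapped to a lottery over states (names are illustrative):

```python
def best_action(actions, u):
    # actions: {action: [(p, state), ...]} — each action yields a lottery
    return max(actions, key=lambda a: sum(p * u(s) for p, s in actions[a]))

u = {"gold": 1000, "pit": -1000, "safe": 0}.get
actions = {"left":  [(0.9, "safe"), (0.1, "pit")],    # EU = -100
           "right": [(0.5, "gold"), (0.5, "pit")]}    # EU = 0
print(best_action(actions, u))   # 'right'
```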

Beliefs and Expected Utility

Rewards: −1000 for dying, 0 for any other square.

What is the expected utility of going to [3,1], [2,2], [1,3]?

Using conditional independence contd.

P(P₁,₃ | known, b) = α′ ⟨0.2 (0.04 + 0.16 + 0.16), 0.8 (0.04 + 0.16)⟩ ≈ ⟨0.31, 0.69⟩

P(P₂,₂ | known, b) ≈ ⟨0.86, 0.14⟩
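
The normalisation constant α′ rescales the vector so that its entries sum to one; spelling out the arithmetic behind the quoted numbers:

```latex
% alpha' normalises the unnormalised vector:
\alpha' \,\langle 0.2(0.04+0.16+0.16),\; 0.8(0.04+0.16)\rangle
  = \alpha' \,\langle 0.072,\; 0.16\rangle
  = \left\langle \tfrac{0.072}{0.232},\; \tfrac{0.16}{0.232}\right\rangle
  \approx \langle 0.31,\; 0.69\rangle
```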

Beliefs and expected utility

The expected utility u(1,3) of the action (1,3) of going to [1,3] from an explored adjacent square is:

u(1,3) = u[0.31, −1000; 0.69, 0] = −310
u(3,1) = u(1,3)
u(2,2) = u[0.86, −1000; 0.14, 0] = −860

Clearly going to [2,2] from either [1,2] or [2,1] is irrational; going to either [1,3] or [3,1] is the rational choice.
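
Checking the slide's arithmetic with the one-line expected-utility computation (the rewards double as utilities here):

```python
def expected_utility(lottery):
    # outcomes are rewards, so u is the identity
    return sum(p * r for p, r in lottery)

print(expected_utility([(0.31, -1000), (0.69, 0)]))  # -310.0 for [1,3] and [3,1]
print(expected_utility([(0.86, -1000), (0.14, 0)]))  # -860.0 for [2,2]
```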

Risky moves

The Wumpus World

Sensors: Breeze, Glitter, Smell
Actuators: Turn L/R, Go, Grab, Release, Shoot, Climb
Rewards: 1000 for escaping with the gold, −1000 for dying, −10 for using the arrow, −1 per step walking
Environment:
squares adjacent to the Wumpus are smelly
squares adjacent to a pit are breezy
glitter iff the gold is in the same square
shooting kills the Wumpus if you are facing it
shooting uses up the only arrow
grabbing picks up the gold if in the same square
releasing drops the gold in the same square

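
The same specification as a plain configuration sketch (names are illustrative, not an established API):

```python
# Reward scheme, sensor set and action set from the slide.
REWARDS = {"escape_with_gold": 1000, "die": -1000, "shoot": -10, "walk": -1}
SENSORS = ("Breeze", "Glitter", "Smell")
ACTIONS = ("TurnL", "TurnR", "Go", "Grab", "Release", "Shoot", "Climb")
```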

Deterministic actions

Actions in the Wumpus World are deterministic. If I want to go from [2,3] to [2,2], I just go:

P([2,2] | [2,3], (2,2)) = 1

Stochastic actions

The result of performing action a in state s is a lottery over S, i.e., a probability distribution over the set of all possible states:

(s, a) = [p₁, A₁; p₂, A₂; …; pₙ, Aₙ]

E.g., the agent decides to go from [2,1] to [2,2] but:

goes to [2,2] with probability 0.5
goes to [3,1] with probability 0.3
goes back to [1,1] with probability 0.1
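
A sketch of such a transition model for this example. Note that the three probabilities quoted on the slide sum to 0.9; the remaining 0.1 is not specified in the source and is assigned here, as an assumption, to staying put:

```python
def transition(state, action):
    # Returns the lottery (s, a) = [p1, A1; ...; pn, An] over next states.
    if state == (2, 1) and action == (2, 2):
        return [(0.5, (2, 2)),   # intended move
                (0.3, (3, 1)),   # slips sideways
                (0.1, (1, 1)),   # slides back
                (0.1, (2, 1))]   # stays put — assumed, not on the slide
    return [(1.0, action)]       # deterministic fallback: reach the target

print(transition((2, 1), (2, 2)))
```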
