
Making Complex Decisions
Paolo Turrini, Department of Computing, Imperial College London
Introduction to Artificial Intelligence (2nd Part)


Making plans

In this case, the probability of accidental success doesn't play a significant role. It might very well do so, however, under different decision models, rewards, environments, etc. 0.32776 is still less than 1/3, so we don't seem to be doing very well.

Rewards

We introduce a utility function r : S → R. The r stands for rewards; to avoid confusion with established terminology, we also call it a reward function.
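In code, a reward function is nothing more than a map from states to numbers. A minimal sketch in Python; the state names and values here are invented for illustration:

```python
# A reward function r : S -> R as a plain mapping.
# States and values are made up for illustration.
rewards = {"start": -0.04, "corridor": -0.04, "goal": +1.0, "trap": -1.0}

def r(state):
    return rewards[state]
```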

Terminology

- rewards: local utilities, assigned to states (denoted r)
- values: global, long-range utilities, also assigned to states (denoted v)
- utility and expected utility: general terms applied to actions, states, sequences of states, etc. (denoted u)

Rewards

Consider now the following. The reward is +1 at the state labelled +1, −1 at the state labelled −1, and −0.04 in all other states.

What's the expected utility of [Up, Up, Right, Right, Right]? IT DEPENDS on how we are going to put rewards together! A sketch of the computation follows.
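To make the question concrete, here is a sketch of the grid world these slides build on. We assume the standard 4x3 layout from Russell and Norvig (start at the bottom left, a blocked square at (1, 1), the +1 and −1 terminals on the right, and the 0.8 / 0.1 / 0.1 motion model); all helper names are ours. Note that the sketch already commits to one way of putting rewards together, namely summing them:

```python
# Assumed 4x3 grid world, 0-indexed with (0, 0) at the bottom left.
TERMINALS = {(3, 2): +1.0, (3, 1): -1.0}   # terminal squares and their rewards
BLOCKED = {(1, 1)}                          # the wall square
WIDTH, HEIGHT = 4, 3
STEP_REWARD = -0.04

MOVES = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}
PERP = {"Up": ("Left", "Right"), "Down": ("Left", "Right"),
        "Left": ("Up", "Down"), "Right": ("Up", "Down")}

def r(state):
    """Reward: +1 / -1 at the terminals, -0.04 everywhere else."""
    return TERMINALS.get(state, STEP_REWARD)

def step(state, action):
    """Successor distribution [(prob, next_state), ...] for one action."""
    outcomes = []
    for prob, a in [(0.8, action), (0.1, PERP[action][0]), (0.1, PERP[action][1])]:
        dx, dy = MOVES[a]
        nxt = (state[0] + dx, state[1] + dy)
        if nxt in BLOCKED or not (0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT):
            nxt = state  # bumping into a wall leaves the agent in place
        outcomes.append((prob, nxt))
    return outcomes

def expected_utility(plan, start=(0, 0)):
    """Exact expected sum of rewards for a fixed action sequence."""
    dist = {start: 1.0}          # distribution over the current state
    total = r(start)             # reward collected in the start state
    for action in plan:
        new_dist = {}
        for s, p in dist.items():
            if s in TERMINALS:   # terminals are absorbing: no further rewards
                continue
            for q, s2 in step(s, action):
                new_dist[s2] = new_dist.get(s2, 0.0) + p * q
        total += sum(p * r(s) for s, p in new_dist.items())
        dist = new_dist
    return total

print(expected_utility(["Up", "Up", "Right", "Right", "Right"]))
```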

Utility of state sequences

We need to compare sequences of states. Look at the following: u([s_1, s_2, ..., s_n]) is the utility of the sequence s_1, s_2, ..., s_n. Does it remind you of anything? Multi-criteria decision making.

There are many ways of comparing sequences:

- summing all the rewards
- giving priority to the immediate rewards
- ...

Utility of state sequences

We are going to assume only one axiom, stationary preferences on reward sequences:

[r, r_0, r_1, r_2, ...] ≻ [r, r'_0, r'_1, r'_2, ...] ⇔ [r_0, r_1, r_2, ...] ≻ [r'_0, r'_1, r'_2, ...]

In words: if two reward sequences begin with the same reward, their ranking is decided by the rest of the sequences.

Utility of state sequences

Theorem. Under stationary preferences, there are only two ways to combine rewards over time.

Additive utility function:

u([s_0, s_1, s_2, ...]) = r(s_0) + r(s_1) + r(s_2) + ···

Discounted utility function:

u([s_0, s_1, s_2, ...]) = r(s_0) + γ r(s_1) + γ² r(s_2) + ···

where γ ∈ [0, 1] is the discount factor. (Additive utility is the special case γ = 1.)
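As a small sketch, both combinations in one function; γ = 1 recovers the additive case (the example sequence is arbitrary):

```python
# Discounted utility of a reward sequence; gamma = 1.0 gives the
# additive special case. Not tied to any particular MDP.
def utility(reward_seq, gamma=1.0):
    return sum(gamma ** t * r_t for t, r_t in enumerate(reward_seq))

seq = [-0.04, -0.04, -0.04, 1.0]
print(utility(seq))             # additive: 3 * (-0.04) + 1.0 = 0.88
print(utility(seq, gamma=0.5))  # discounted: the late +1 counts for much less
```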

Discount factor

γ is a measure of the agent's patience: how much more she values a gain of 5 today than the same gain tomorrow, the day after, and so on.

- Used everywhere in AI, game theory, and cognitive psychology
- A lot of experimental research on it
- Variants: hyperbolic discounting (compared with geometric discounting in the sketch below)
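For a feel of the difference, a tiny comparison of geometric discounting with one common form of hyperbolic discounting, value / (1 + k·t); the constants here are arbitrary:

```python
# Geometric vs hyperbolic discounting of a gain of 5 at delay t.
# gamma and k are arbitrary illustrative constants.
gamma, k = 0.9, 0.5
for t in range(5):
    geometric = gamma ** t * 5
    hyperbolic = 5 / (1 + k * t)
    print(t, round(geometric, 3), round(hyperbolic, 3))
```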

Discounting

With discounted rewards, the utility of an infinite sequence is finite. In fact, if γ < 1 and rewards are bounded by r, we have:

u([s_0, s_1, s_2, ...]) = Σ_{t=0}^∞ γ^t r(s_t) ≤ Σ_{t=0}^∞ γ^t r = r / (1 − γ)
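A quick numeric check of the bound, truncating the infinite sum at a large horizon (the constants are arbitrary):

```python
# Check: sum_t gamma^t * r(s_t) <= r_max / (1 - gamma) for bounded rewards.
import random

gamma, r_max, horizon = 0.9, 1.0, 10_000
reward_seq = [random.uniform(-r_max, r_max) for _ in range(horizon)]
discounted_sum = sum(gamma ** t * r_t for t, r_t in enumerate(reward_seq))
print(discounted_sum, "<=", r_max / (1 - gamma))  # bound is 10.0 here
```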

Markov Decision Process

A Markov Decision Process is a sequential decision problem for a:

- fully observable environment
- with stochastic actions
- with a Markovian transition model
- and with discounted (possibly additive) rewards

MDPs formally

Definition:

- States s ∈ S, actions a ∈ A
- Model P(s′ | s, a) = probability that doing a in s leads to s′
- Reward function R(s) (or R(s, a), R(s, a, s′)):
  R(s) = −0.04 (small penalty) for nonterminal states, ±1 for terminal states
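Packaged as a data structure, the definition reads as follows. A minimal sketch; the field names and the toy instance are ours, with the model stored as P[s][a] = list of (probability, successor) pairs:

```python
from dataclasses import dataclass

@dataclass
class MDP:
    """A finite MDP: states, actions, transition model, rewards, discount."""
    states: list
    actions: dict   # actions[s] = actions available in s
    P: dict         # P[s][a] = [(prob, s2), ...], probabilities summing to 1
    R: dict         # R[s] = reward for being in s
    gamma: float = 1.0  # discount factor (1.0 = purely additive rewards)

# A toy two-state instance, purely illustrative.
toy = MDP(
    states=["a", "b"],
    actions={"a": ["go"], "b": []},
    P={"a": {"go": [(1.0, "b")]}},
    R={"a": -0.04, "b": 1.0},
    gamma=0.9,
)
```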

Value of plans

The utility of executing a plan p from state s is given by:

v_p(s) = E[ Σ_{t=0}^∞ γ^t r(S_t) ]

where S_t is a random variable and the expectation is taken with respect to the probability distribution over state sequences determined by s and p. A Monte Carlo sketch follows.
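The expectation can be estimated by sampling state sequences and averaging the discounted returns. A minimal Monte Carlo sketch on an invented two-state chain; all names and numbers are ours:

```python
import random

# Toy chain (invented for illustration): from "a", action "go" reaches the
# terminal "b" with probability 0.8 and stays in "a" with probability 0.2.
P = {"a": {"go": [(0.8, "b"), (0.2, "a")]}}
R = {"a": -0.04, "b": 1.0}
TERMINAL = {"b"}

def sample_return(start, plan, gamma):
    """One sampled discounted return for a fixed action sequence."""
    s, total, discount = start, R[start], 1.0
    for a in plan:
        if s in TERMINAL:
            break                       # terminal reached: stop earning
        x = random.random()             # sample the next state from P[s][a]
        for prob, s2 in P[s][a]:
            x -= prob
            if x <= 0:
                s = s2
                break
        discount *= gamma
        total += discount * R[s]
    return total

def v(start, plan, gamma=0.9, n=100_000):
    """Monte Carlo estimate of v_p(start) = E[sum_t gamma^t r(S_t)]."""
    return sum(sample_return(start, plan, gamma) for _ in range(n)) / n

print(v("a", ["go", "go", "go"]))
```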
