Making Decisions Under Uncertainty
D. Poole and A. Mackworth, Artificial Intelligence, Lecture 9.2 (2008)


1. Making Decisions Under Uncertainty

What an agent should do depends on:
- The agent’s ability: what options are available to it.
- The agent’s beliefs: the ways the world could be, given the agent’s knowledge. Sensing the world updates the agent’s beliefs.
- The agent’s preferences: what the agent actually wants, and the tradeoffs when there are risks.

Decision theory specifies how to trade off the desirability and probabilities of the possible outcomes for competing actions.

2. Decision Variables

- Decision variables are like random variables that an agent gets to choose the value of.
- A possible world specifies the value of each decision variable and each random variable.
- For each assignment of values to all decision variables, the measures of the worlds satisfying that assignment sum to 1.
- The probability of a proposition is undefined unless you condition on the values of all decision variables.
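As a sketch of this semantics in Python, the normalization condition can be checked directly. The representation of a possible world as a (probability, assignment) pair is an assumption for illustration, not from the slides:

```python
from collections import defaultdict

def check_decision_normalization(worlds, decision_vars):
    """For each assignment of values to all decision variables, the measures
    of the worlds satisfying that assignment should sum to 1."""
    totals = defaultdict(float)
    for prob, assignment in worlds:
        key = tuple(assignment[d] for d in decision_vars)
        totals[key] += prob
    return all(abs(total - 1.0) < 1e-9 for total in totals.values())
```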

3. Decision Tree for the Delivery Robot

The robot can choose to wear pads to protect itself or not. It can choose to go the short way past the stairs or a long way that reduces the chance of an accident. There is one random variable: whether there is an accident.

World  Wear pads?       Which way   Accident?     Outcome
w0     wear pads        short       accident      moderate damage
w1     wear pads        short       no accident   quick, extra weight
w2     wear pads        long        accident      moderate damage
w3     wear pads        long        no accident   slow, extra weight
w4     don’t wear pads  short       accident      severe damage
w5     don’t wear pads  short       no accident   quick, no weight
w6     don’t wear pads  long        accident      severe damage
w7     don’t wear pads  long        no accident   slow, no weight

4. Expected Values

The expected value of a function of possible worlds is its average value, weighting possible worlds by their probability. Suppose f(ω) is the value of function f on world ω.

- The expected value of f is

  E(f) = Σ_{ω ∈ Ω} P(ω) × f(ω).

- The conditional expected value of f given evidence e is

  E(f | e) = Σ_{ω ⊨ e} P(ω | e) × f(ω).
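A minimal Python sketch of both definitions, reusing the assumed (probability, assignment) representation of worlds; here `e` is any predicate on worlds:

```python
def expected_value(worlds, f):
    """E(f) = Σ_{ω ∈ Ω} P(ω) × f(ω)."""
    return sum(prob * f(w) for prob, w in worlds)

def conditional_expected_value(worlds, f, e):
    """E(f | e) = Σ_{ω ⊨ e} P(ω | e) × f(ω), using P(ω | e) = P(ω) / P(e)."""
    p_e = sum(prob for prob, w in worlds if e(w))
    return sum((prob / p_e) * f(w) for prob, w in worlds if e(w))
```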

5. Utility

Utility is a measure of the desirability of worlds to an agent. Let u be a real-valued function such that u(ω) represents how good world ω is to the agent.

Simple goals can be specified by: worlds that satisfy the goal have utility 1; other worlds have utility 0.

Often utilities are more complicated: for example, some function of the amount of damage to a robot, how much energy is left, what goals are achieved, and how much time it has taken.
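For instance, a goal-based utility is one line; the weighted robot utility below is purely illustrative (the weights and world keys are assumptions, not from the slides):

```python
def goal_utility(world, goal):
    """Worlds that satisfy the goal have utility 1; other worlds have utility 0."""
    return 1.0 if goal(world) else 0.0

def robot_utility(world):
    """A more complicated utility: a made-up weighted combination of damage,
    remaining energy, goals achieved, and time taken."""
    return (-2.0 * world["damage"] + 0.5 * world["energy_left"]
            + 10.0 * world["goals_achieved"] - 1.0 * world["time_taken"])
```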

6. Single Decisions

With a single decision variable, the agent can choose D = d_i for any d_i ∈ dom(D).

The expected utility of decision D = d_i is E(u | D = d_i).

An optimal single decision is the decision D = d_max whose expected utility is maximal:

E(u | D = d_max) = max_{d_i ∈ dom(D)} E(u | D = d_i).
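A sketch of choosing an optimal single decision, in the same assumed world representation; the key `"D"` is a placeholder for the decision variable’s name:

```python
def expected_utility(worlds, u, d):
    """E(u | D = d): condition on the decision variable taking value d."""
    p_d = sum(prob for prob, w in worlds if w["D"] == d)
    return sum((prob / p_d) * u(w) for prob, w in worlds if w["D"] == d)

def optimal_single_decision(worlds, u, domain):
    """Return d_max ∈ dom(D) maximizing E(u | D = d)."""
    return max(domain, key=lambda d: expected_utility(worlds, u, d))
```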

7. Single-Stage Decision Networks

Extend belief networks with:
- Decision nodes, whose values the agent gets to choose. The domain is the set of possible actions. Drawn as a rectangle.
- A utility node, whose parents are the variables on which the utility depends. Drawn as a diamond.

[Figure: decision nodes Which Way and Wear Pads, random variable Accident, and a Utility node. The network shows explicitly which nodes affect whether there is an accident.]

8. Finding the Optimal Decision

Suppose the random variables are X_1, …, X_n, and the utility depends on X_{i_1}, …, X_{i_k}:

E(u | D) = Σ_{X_1,…,X_n} P(X_1, …, X_n | D) × u(X_{i_1}, …, X_{i_k})
         = Σ_{X_1,…,X_n} Π_{i=1}^{n} P(X_i | parents(X_i)) × u(X_{i_1}, …, X_{i_k})

To find the optimal decision:
- Create a factor for each conditional probability and for the utility.
- Sum out all of the random variables.
- This creates a factor on D that gives the expected utility for each value of D.
- Choose the value of D with the maximum value in the factor.
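Rather than a full factor implementation, here is a brute-force sketch of the same computation: sum out every random variable for each value of D. The interfaces `prob` and `util` are assumptions; `prob(x, d)` stands for the product of the conditional probabilities P(X_i | parents(X_i)):

```python
from itertools import product

def expected_utility_factor(decision_domain, rand_domains, prob, util):
    """Return {d: E(u | D = d)}: for each decision value d, sum
    P(x | d) × u(x, d) over all joint assignments x of the random variables."""
    names = list(rand_domains)
    factor = {}
    for d in decision_domain:
        total = 0.0
        for values in product(*(rand_domains[n] for n in names)):
            x = dict(zip(names, values))
            total += prob(x, d) * util(x, d)
        factor[d] = total
    return factor
```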

9. Example Initial Factors

P(Accident | Which Way):

Which Way   Accident   Value
long        true       0.01
long        false      0.99
short       true       0.2
short       false      0.8

Utility(Which Way, Accident, Wear Pads):

Which Way   Accident   Wear Pads   Value
long        true       true        30
long        true       false       0
long        false      true        75
long        false      false       80
short       true       true        35
short       true       false       3
short       false      true        95
short       false      false       100
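Plugging these factors in and summing out Accident gives the expected utility of each of the four decisions; the arithmetic below follows directly from the tables:

```python
# P(Accident = true | Which Way) from the first factor.
p_accident = {"long": 0.01, "short": 0.2}

# Utility(Which Way, Accident, Wear Pads) from the second factor.
utility = {
    ("long", True, True): 30,   ("long", True, False): 0,
    ("long", False, True): 75,  ("long", False, False): 80,
    ("short", True, True): 35,  ("short", True, False): 3,
    ("short", False, True): 95, ("short", False, False): 100,
}

for way in ("long", "short"):
    for pads in (True, False):
        p = p_accident[way]
        eu = p * utility[(way, True, pads)] + (1 - p) * utility[(way, False, pads)]
        print(f"{way:5} pads={pads}: E(u) = {eu}")

# long  pads=True:  0.01*30 + 0.99*75 = 74.55
# long  pads=False: 0.01*0  + 0.99*80 = 79.2
# short pads=True:  0.2*35  + 0.8*95  = 83.0   <- optimal decision
# short pads=False: 0.2*3   + 0.8*100 = 80.6
```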

10. Sequential Decisions

An intelligent agent doesn’t make a multi-step decision and carry it out without considering revising it based on future information. A more typical scenario is one in which the agent observes, acts, observes, acts, …

- Subsequent actions can depend on what is observed.
- What is observed depends on previous actions.
- Often the sole reason for carrying out an action is to provide information for future actions. For example: diagnostic tests, spying.

11. Sequential Decision Problems

A sequential decision problem consists of a sequence of decision variables D_1, …, D_n. Each D_i has an information set of variables parents(D_i), whose values will be known at the time decision D_i is made.

12. Decision Networks

A decision network is a graphical representation of a finite sequential decision problem. Decision networks extend belief networks to include decision variables and utility.

A decision network specifies:
- what information is available when the agent has to act;
- which variables the utility depends on.

13. Decision Networks

- A random variable is drawn as an ellipse. Arcs into the node represent probabilistic dependence.
- A decision variable is drawn as a rectangle. Arcs into the node represent information available when the decision is made.
- A utility node is drawn as a diamond. Arcs into the node represent the variables that the utility depends on.

14. Umbrella Decision Network

[Figure: random variables Weather and Forecast, decision variable Umbrella, and a Utility node.]

You don’t get to observe the weather when you have to decide whether to take your umbrella. You do get to observe the forecast.

15. Decision Network for the Alarm Problem

[Figure: random variables Tampering, Fire, Alarm, Smoke, Leaving, SeeSmoke, and Report; decision variables Check Smoke and Call; and a Utility node.]

16. No-Forgetting

A no-forgetting decision network is a decision network in which:
- The decision nodes are totally ordered. This is the order in which the actions will be taken.
- All decision nodes that come before D_i are parents of decision node D_i. Thus the agent remembers its previous actions.
- Any parent of a decision node is a parent of all subsequent decision nodes. Thus the agent remembers its previous observations.

17. What Should an Agent Do?

- What an agent should do at any time depends on what it will do in the future.
- What an agent does in the future depends on what it did before.

18. Policies

A policy specifies what an agent should do under each circumstance. A policy is a sequence δ_1, …, δ_n of decision functions

δ_i : dom(parents(D_i)) → dom(D_i).

This means that when the agent has observed O ∈ dom(parents(D_i)), it will do δ_i(O).
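A policy can be represented directly as one decision function per decision variable. The umbrella network above makes a concrete example; the forecast value "rainy" is an illustrative assumption:

```python
# For the umbrella network: the only decision is Umbrella, whose single
# parent is Forecast, so its decision function maps forecasts to take/leave.
umbrella_policy = {
    "Umbrella": lambda obs: obs["Forecast"] == "rainy",  # take it iff rain is forecast
}

def decide(policy, decision_var, observations):
    """Apply δ_i to the observed values of parents(D_i)."""
    return policy[decision_var](observations)
```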

19. Expected Utility of a Policy

Possible world ω satisfies policy δ, written ω ⊨ δ, if the world assigns to each decision node the value that the policy specifies.

The expected utility of policy δ is

E(u | δ) = Σ_{ω ⊨ δ} u(ω) × P(ω).

An optimal policy is one with the highest expected utility.
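Continuing the sketch, ω ⊨ δ and E(u | δ) translate almost literally (same assumed world representation as the earlier blocks):

```python
def satisfies(world, policy):
    """ω ⊨ δ: each decision node has the value its decision function
    prescribes, given the observed values in the world."""
    return all(world[d] == delta(world) for d, delta in policy.items())

def policy_expected_utility(worlds, u, policy):
    """E(u | δ) = Σ_{ω ⊨ δ} u(ω) × P(ω)."""
    return sum(prob * u(w) for prob, w in worlds if satisfies(w, policy))
```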

20. Finding the Optimal Policy

- Remove all variables that are not ancestors of the utility node.
- Create a factor for each conditional probability table and a factor for the utility.
- Sum out variables that are not parents of a decision node.
- Select a decision variable D that appears only in a factor f with (some of) its parents. Eliminate D by maximizing. This returns:
  - the optimal decision function for D: argmax_D f
  - a new factor to use in variable elimination: max_D f
- Repeat until there are no more decision nodes.
- Eliminate the remaining random variables. Multiply the remaining factors: this gives the expected utility of the optimal policy.
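The variable-elimination algorithm above is the efficient route. As a baseline, an optimal policy for a single decision variable can also be found by brute force, enumerating every decision function; this is a sketch under the same assumptions as the earlier blocks, with `policy_value` computing E(u | δ):

```python
from itertools import product

def all_decision_functions(parent_assignments, domain):
    """Yield every function dom(parents(D)) → dom(D), represented as a dict
    from parent-value tuples to a chosen decision value."""
    for choices in product(domain, repeat=len(parent_assignments)):
        yield dict(zip(parent_assignments, choices))

def optimal_policy(parent_assignments, domain, policy_value):
    """Enumerate all decision functions and keep the one maximizing E(u | δ)."""
    return max(all_decision_functions(parent_assignments, domain),
               key=policy_value)
```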
