CS 188: Artificial Intelligence
Decision Networks and Value of Perfect Information


  1. CS 188: Artificial Intelligence. Decision Networks and Value of Perfect Information. Instructor: Anca Dragan, University of California, Berkeley. [These slides were created by Dan Klein, Pieter Abbeel, and Anca. http://ai.berkeley.edu.]

  2. Recap: Bayesian Inference (Exact). Query P(L) in the chain R → T → L.
  - Inference by Enumeration: join on r, join on t, then eliminate r and eliminate t:
    P(L) = Σ_t Σ_r P(L | t) P(r) P(t | r)
  - Variable Elimination: interleave joins and eliminations, pushing sums inward (join on r, eliminate r, join on t, eliminate t):
    P(L) = Σ_t P(L | t) Σ_r P(r) P(t | r)

  3. Recap: Bayesian Inference (Sampling). Network: Cloudy → {Sprinkler, Rain} → WetGrass.
  - P(C): +c 0.5, -c 0.5
  - P(S | C): +c: +s 0.1, -s 0.9; -c: +s 0.5, -s 0.5
  - P(R | C): +c: +r 0.8, -r 0.2; -c: +r 0.2, -r 0.8
  - P(W | S, R): +s,+r: +w 0.99, -w 0.01; +s,-r: +w 0.90, -w 0.10; -s,+r: +w 0.90, -w 0.10; -s,-r: +w 0.01, -w 0.99
  - Samples: +c, -s, +r, +w; -c, +s, -r, +w; …
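The prior-sampling procedure behind this slide can be sketched in a few lines (an illustration; the helper names and the 10,000-sample experiment are mine, not from the slides):

```python
import random

# CPTs for the slide's sprinkler network (True = "+", False = "-").
P_C, P_S, P_R = 0.5, {True: 0.1, False: 0.5}, {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}

def prior_sample(rng):
    """Sample (c, s, r, w) in topological order, each from P(X | parents)."""
    c = rng.random() < P_C
    s = rng.random() < P_S[c]
    r = rng.random() < P_R[c]
    w = rng.random() < P_W[(s, r)]
    return c, s, r, w

rng = random.Random(0)
samples = [prior_sample(rng) for _ in range(10_000)]
est = sum(w for *_, w in samples) / len(samples)
print(est)    # estimate of P(+w); the exact marginal works out to 0.65
```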

  4. Recap: Bayesian Inference (Sampling - Rejection). Query P(C | +s). Prior samples drawn:
  +c, -s, +r, +w; +c, +s, +r, +w; -c, +s, +r, -w; +c, -s, +r, +w; -c, -s, -r, +w

  5. Recap: Bayesian Inference (Sampling - Rejection). Samples inconsistent with the evidence +s can be rejected as soon as S is sampled; only +c, +s, +r, +w and -c, +s, +r, -w survive.
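A minimal sketch of rejection sampling for the query P(+c | +s) (function and variable names are mine; the slide only shows the sampled tuples):

```python
import random

# CPTs for the slide's sprinkler network (True = "+", False = "-").
P_C, P_S, P_R = 0.5, {True: 0.1, False: 0.5}, {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}

def prior_sample(rng):
    c = rng.random() < P_C
    s = rng.random() < P_S[c]
    r = rng.random() < P_R[c]
    w = rng.random() < P_W[(s, r)]
    return c, s, r, w

def rejection_estimate(query, consistent, n, rng):
    """Estimate P(query | evidence) by throwing away inconsistent samples."""
    kept = [x for x in (prior_sample(rng) for _ in range(n)) if consistent(x)]
    return sum(query(x) for x in kept) / len(kept)

# P(+c | +s): keep only samples with S = +s, count C = +c among them.
est = rejection_estimate(lambda x: x[0], lambda x: x[1],
                         20_000, random.Random(0))
print(est)    # should land near the exact answer 1/6
```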

  6. Recap: Bayesian Inference (Sampling - Likelihood Weighting). Same network and CPTs as slide 3. Evidence variables are clamped rather than sampled, and each sample is weighted by the product of P(evidence | parents). Sample shown: +c, +s, +r, +w.
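Likelihood weighting can be sketched as below, assuming W = +w is the clamped evidence (the slide's exact evidence set is not fully legible in this transcript; the names are mine):

```python
import random

# CPTs for the slide's sprinkler network (True = "+", False = "-").
P_C, P_S, P_R = 0.5, {True: 0.1, False: 0.5}, {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}

def weighted_sample(rng):
    """Clamp the evidence W = +w; sample the rest topologically and
    weight by the likelihood of the evidence given its parents."""
    c = rng.random() < P_C
    s = rng.random() < P_S[c]
    r = rng.random() < P_R[c]
    weight = P_W[(s, r)]          # P(+w | s, r): evidence is not sampled
    return (c, s, r, True), weight

rng = random.Random(0)
num = den = 0.0
for _ in range(20_000):
    (c, *_), wt = weighted_sample(rng)
    num += wt * c                 # weighted count of samples with C = +c
    den += wt
est = num / den                   # estimate of P(+c | +w)
print(est)
```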

  7. Recap: Bayesian Inference (Sampling – Gibbs). Query P(S | +r):
  - Step 1: Fix evidence: R = +r
  - Step 2: Initialize other variables randomly
  - Step 3: Repeat: choose a non-evidence variable X; resample X from P(X | all other variables)
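The three steps above can be sketched as follows for P(+s | +r) (a toy implementation; resampling from P(X | all other variables) is done by renormalizing the full joint, which is fine at this scale, and the helper names are mine):

```python
import random

# Full joint of the slide's sprinkler network (True = "+", False = "-").
P_C, P_S, P_R = 0.5, {True: 0.1, False: 0.5}, {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.01}

def joint(c, s, r, w):
    def p(pt, val):                      # P(X = val) given P(X = True) = pt
        return pt if val else 1.0 - pt
    return p(P_C, c) * p(P_S[c], s) * p(P_R[c], r) * p(P_W[(s, r)], w)

def gibbs_p_s_given_r(n, rng):
    """Estimate P(+s | +r): clamp R = +r, then repeatedly resample each
    non-evidence variable from P(X | all other variables)."""
    state = {"c": True, "s": True, "r": True, "w": True}   # arbitrary init
    hits = 0
    for _ in range(n):
        for var in ("c", "s", "w"):                        # r stays clamped
            probs = {v: joint(**dict(state, **{var: v}))
                     for v in (True, False)}
            state[var] = (rng.random()
                          < probs[True] / (probs[True] + probs[False]))
        hits += state["s"]
    return hits / n

est = gibbs_p_s_given_r(20_000, random.Random(0))
print(est)    # the exact posterior P(+s | +r) is 0.18
```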

  8. Decision Networks

  9. Decision Networks. Example network: chance node Weather with child Forecast, action node Umbrella, and utility node U depending on Umbrella and Weather.

  10. Decision Networks
  - MEU: choose the action which maximizes the expected utility given the evidence
  - Can directly operationalize this with decision networks: Bayes nets with nodes for utility and actions; lets us calculate the expected utility for each action
  - New node types:
    - Chance nodes (just like BNs)
    - Actions (rectangles, cannot have parents, act as observed evidence)
    - Utility node (diamond, depends on action and chance nodes)
  (Example network: Umbrella, Weather, Forecast, U)

  11. Decision Networks. Action selection:
  - Instantiate all evidence
  - Set action node(s) each possible way
  - Calculate posterior for all parents of utility node, given the evidence
  - Calculate expected utility for each action
  - Choose maximizing action

  12. Maximum Expected Utility
  - U(A, W): leave/sun 100, leave/rain 0, take/sun 20, take/rain 70
  - P(W): sun 0.7, rain 0.3
  - EU(leave) = 0.7·100 + 0.3·0 = 70; EU(take) = 0.7·20 + 0.3·70 = 35
  - Optimal decision = leave
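The slide's numbers can be checked directly from the two tables (a sketch; the variable names are mine):

```python
# Expected utilities straight from the slide's tables.
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
P_W = {"sun": 0.7, "rain": 0.3}

eu = {a: sum(P_W[w] * U[(a, w)] for w in P_W) for a in ("leave", "take")}
best = max(eu, key=eu.get)
print(eu, best)    # EU(leave) = 70, EU(take) = 35, so the agent leaves it
```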

  13. Decisions as Outcome Trees. Root {}: choose take or leave; each action leads to a chance node Weather | {} with branches sun and rain, ending in leaves U(t,s), U(t,r), U(l,s), U(l,r).
  - Almost exactly like expectimax / MDPs
  - What's changed?

  14. Maximum Expected Utility
  - U(A, W): leave/sun 100, leave/rain 0, take/sun 20, take/rain 70
  - Evidence: Forecast = bad. Posterior P(W | F=bad): sun 0.34, rain 0.66, computed by Bayes' rule:
    P(W | F) = P(W, F) / Σ_w P(w, F) = P(F | W) P(W) / Σ_w P(F | w) P(w)

  15. Maximum Expected Utility
  - U(A, W): leave/sun 100, leave/rain 0, take/sun 20, take/rain 70
  - P(W | F=bad): sun 0.34, rain 0.66
  - EU(leave | F=bad) = 0.34·100 + 0.66·0 = 34; EU(take | F=bad) = 0.34·20 + 0.66·70 = 53
  - Optimal decision = take
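The same calculation, now with beliefs conditioned on the bad forecast (a sketch; names are mine):

```python
# Same utilities, but the belief over Weather is P(W | F=bad) from the slide.
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
P_W_bad = {"sun": 0.34, "rain": 0.66}

eu = {a: sum(P_W_bad[w] * U[(a, w)] for w in P_W_bad)
      for a in ("leave", "take")}
best = max(eu, key=eu.get)
print(eu, best)    # EU(leave) = 34, EU(take) = 53: the decision flips to take
```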

  16. Decisions as Outcome Trees. Root {b} (Forecast = bad): choose take or leave; chance nodes W | {b} branch sun and rain, ending in U(t,s), U(t,r), U(l,s), U(l,r).

  17. Video of Demo Ghostbusters with Probability

  18. Ghostbusters Decision Network (Demo: Ghostbusters with probability). Nodes: action Bust, utility U, chance node Ghost Location, and evidence nodes Sensor(1,1), Sensor(1,2), …, Sensor(m,n).

  19. Value of Information

  20. Value of Information
  - Idea: compute value of acquiring evidence; can be done directly from decision network
  - Example: buying oil drilling rights
    - Two blocks A and B, exactly one has oil, worth k
    - You can drill in one location
    - Prior probabilities 0.5 each, mutually exclusive
    - U(DrillLoc, OilLoc): (a,a) = k, (a,b) = 0, (b,a) = 0, (b,b) = k; P(OilLoc): a 1/2, b 1/2
    - Drilling in either A or B has EU = k/2, so MEU = k/2
  - Question: what's the value of information of OilLoc?
    - Value of knowing which of A or B has oil; value is expected gain in MEU from new info
    - Survey may say "oil in a" or "oil in b", prob 0.5 each
    - If we know OilLoc, MEU is k (either way)
    - Gain in MEU from knowing OilLoc: VPI(OilLoc) = k - k/2 = k/2
    - Fair price of information: k/2
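The oil argument can be verified numerically for, say, k = 1 (a sketch; the generic meu helper is mine, not from the slides):

```python
# VPI for the oil example with k = 1.
k = 1.0
U = {("a", "a"): k, ("a", "b"): 0.0, ("b", "a"): 0.0, ("b", "b"): k}
prior = {"a": 0.5, "b": 0.5}

def meu(belief):
    """Best expected utility over drill locations under a belief on OilLoc."""
    return max(sum(belief[o] * U[(d, o)] for o in belief) for d in ("a", "b"))

meu_now = meu(prior)                                  # k/2: drill anywhere
# With perfect information we observe OilLoc, then act: MEU is k either way.
meu_informed = sum(prior[o] * meu({"a": float(o == "a"),
                                   "b": float(o == "b")}) for o in prior)
vpi = meu_informed - meu_now
print(vpi)    # k/2, the fair price of the survey
```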

  21. Value of Perfect Information
  - U(A, W): leave/sun 100, leave/rain 0, take/sun 20, take/rain 70
  - Forecast distribution P(F): good 0.59, bad 0.41
  - Compare: MEU with no evidence; MEU if forecast is bad; MEU if forecast is good
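Putting the pieces together for VPI(Forecast): the transcript does not show P(W | F=good), but it is forced by consistency with the prior, since P(sun) = P(good) P(sun | good) + P(bad) P(sun | bad). A sketch using that derived value (the helper names are mine):

```python
# VPI of the forecast. P(sun | good) is derived, not read off the slide.
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
P_W = {"sun": 0.7, "rain": 0.3}
P_F = {"good": 0.59, "bad": 0.41}
P_W_bad = {"sun": 0.34, "rain": 0.66}

# Consistency: P(sun) = P(good) P(sun|good) + P(bad) P(sun|bad).
p_sun_good = (P_W["sun"] - P_F["bad"] * P_W_bad["sun"]) / P_F["good"]
P_W_good = {"sun": p_sun_good, "rain": 1 - p_sun_good}

def meu(belief):
    return max(sum(belief[w] * U[(a, w)] for w in belief)
               for a in ("leave", "take"))

vpi = (P_F["good"] * meu(P_W_good)
       + P_F["bad"] * meu(P_W_bad)) - meu(P_W)
print(round(vpi, 2))    # near 7.8: the fair price of seeing the forecast
```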

  22. Value of Information
  - Assume we have evidence E = e. Value if we act now: MEU(e) = max_a Σ_s P(s | e) U(s, a)
  - Assume we see that E' = e'. Value if we act then: MEU(e, e') = max_a Σ_s P(s | e, e') U(s, a)
  - BUT E' is a random variable whose value is unknown, so we don't know what e' will be
  - Expected value if E' is revealed and then we act: MEU(e, E') = Σ_{e'} P(e' | e) MEU(e, e')
  - Value of information: how much MEU goes up by revealing E' first then acting, over acting now: VPI(E' | e) = MEU(e, E') - MEU(e)

  23. Value of Information (continued). Expected value of observing E' and then acting:
  MEU(e, E') = Σ_{e'} P(e' | e) max_a Σ_s P(s | e, e') U(s, a)
  Acting now commits to a single action before E' is revealed:
  MEU(e) = max_a Σ_s P(s | e) U(s, a) = max_a Σ_{e'} Σ_s P(s, e' | e) U(s, a) = max_a Σ_{e'} P(e' | e) Σ_s P(s | e, e') U(s, a)
  Moving the max inside the sum over e' can only increase the value, so VPI(E' | e) = MEU(e, E') - MEU(e) ≥ 0.

  24. VPI Properties
  - Nonnegative: VPI(E' | e) ≥ 0
  - Nonadditive (think of observing E_j twice): in general VPI(E_j, E_k | e) ≠ VPI(E_j | e) + VPI(E_k | e)
  - Order-independent: VPI(E_j, E_k | e) = VPI(E_j | e) + VPI(E_k | e, E_j) = VPI(E_k | e) + VPI(E_j | e, E_k)

  25. Quick VPI Questions
  - The soup of the day is either clam chowder or split pea, but you wouldn't order either one. What's the value of knowing which it is?
  - There are two kinds of plastic forks at a picnic. One kind is slightly sturdier. What's the value of knowing which?
  - You're playing the lottery. The prize will be $0 or $100. You can play any number between 1 and 100 (chance of winning is 1%). What is the value of knowing the winning number?

  26. Value of Imperfect Information?
  - No such thing
  - Information corresponds to the observation of a node in the decision network
  - If data is "noisy" that just means we don't observe the original variable, but another variable which is a noisy version of the original one

  27. VPI Question (nodes: DrillLoc, U, Scout, OilLoc, ScoutingReport)
  - VPI(OilLoc)?
  - VPI(ScoutingReport)?
  - VPI(Scout)?
  - VPI(Scout | ScoutingReport)?
  - Generally: if Parents(U) ⊥ Z | CurrentEvidence, then VPI(Z | CurrentEvidence) = 0

  28. POMDPs

  29. POMDPs
  - MDPs have:
    - States S
    - Actions A
    - Transition function P(s' | s, a) (or T(s, a, s'))
    - Rewards R(s, a, s')
  - POMDPs add:
    - Observations O
    - Observation function P(o | s) (or O(s, o))
  - POMDPs are MDPs over belief states b (distributions over S)
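The belief-state update that makes a POMDP an MDP over beliefs, b'(s') ∝ P(o | s') Σ_s P(s' | s, a) b(s), can be sketched on a tiny two-state model (the model values below are made up for illustration, not from the slides):

```python
# Belief update for a POMDP on an illustrative two-state model.
S = ("s1", "s2")
T = {("s1", "go"): {"s1": 0.3, "s2": 0.7},   # P(s' | s, a)
     ("s2", "go"): {"s1": 0.4, "s2": 0.6}}
O = {"s1": {"beep": 0.9, "quiet": 0.1},      # P(o | s')
     "s2": {"beep": 0.2, "quiet": 0.8}}

def update(belief, action, obs):
    """b'(s') ∝ P(o | s') * sum_s P(s' | s, a) b(s)."""
    unnorm = {s2: O[s2][obs] * sum(T[(s1, action)][s2] * belief[s1]
                                   for s1 in S) for s2 in S}
    z = sum(unnorm.values())                 # P(o | b, a), the normalizer
    return {s: p / z for s, p in unnorm.items()}

b = update({"s1": 0.5, "s2": 0.5}, "go", "beep")
print(b)    # belief shifts toward s1, which explains the beep better
```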
