
Long-Range Planning and Behavioral Biases: A Computational Approach

  1. Long-Range Planning and Behavioral Biases: A Computational Approach. Jon Kleinberg, Cornell University. Including joint work with Manish Raghavan and Sigal Oren.

  2. Long-Range Planning. Growth in on-line systems where users and groups have long, visible careers and set long-range goals: reputation, promotion, status, individual achievement. On-line groups that create multi-step tasks and set timelines and deadlines.

  3. Badges on Stack Overflow. [Plots: number of actions per day (questions, answers, question-votes, answer-votes) versus number of days relative to the badge win, for the Civic Duty and Electorate badges.] Badges, Milestones, and Incentives. The Placement Problem: given a desired mixture of actions, how should one define milestones to (approximately) induce these actions? How do badges and milestones derive their value? Social / Motivational / Transactional? [Antin-Churchill 2011, Deterding et al. 2011, Chawla-Hartline-Sivan 2012, Easley-Ghosh 2013, Anderson-Huttenlocher-Kleinberg-Leskovec 2013]

  4. Planning and Time-Inconsistency. [Image: Tacoma Public School System.] Fundamental behavioral process: making plans for the future. Plans can be multi-step. Natural model: the agent chooses an optimal sequence given costs and benefits. What could go wrong? Costs and benefits are unknown, and/or genuinely changing over time. Time-inconsistency.

  7. Why did George Akerlof not make it to the post office? An agent must ship a package sometime in the next n days. One-time effort cost c to ship it. Loss-of-use cost x for each day it hasn't been shipped.

  8. Why did George Akerlof not make it to the post office? An agent must ship a package sometime in the next n days. One-time effort cost c to ship it. Loss-of-use cost x for each day it hasn't been shipped. An optimization problem: if shipped on day t, the cost is c + tx. Goal: min_{1 ≤ t ≤ n} (c + tx). Optimized at t = 1.

  9. Why did George Akerlof not make it to the post office? An agent must ship a package sometime in the next n days. One-time effort cost c to ship it. Loss-of-use cost x for each day it hasn't been shipped. An optimization problem: if shipped on day t, the cost is c + tx. Goal: min_{1 ≤ t ≤ n} (c + tx). Optimized at t = 1. In Akerlof's story, he was the agent, and he procrastinated: each day he planned that he'd do it tomorrow. Effect: he waited until day n, when the package had to be shipped, and did it then, at a significantly higher cumulative cost.

  10. Why did George Akerlof not make it to the post office? An agent must ship a package sometime in the next n days. One-time effort cost c to ship it. Loss-of-use cost x for each day it hasn't been shipped. A model based on present bias [Akerlof 91; cf. Strotz 55, Pollak 68]: costs incurred today are more salient, raised by a factor b > 1. On day t: the remaining cost if sent today is bc; the remaining cost if sent tomorrow is bx + c. Tomorrow is preferable if (b − 1)c > bx.
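To make the comparison concrete, here is a minimal Python sketch of this decision rule; the numeric values of c, x, and b are illustrative choices, not numbers from the talk.

# One-step present-bias comparison from this slide: today's costs are scaled by
# the salience factor b > 1, so the agent weighs b*c (ship today) against
# b*x + c (incur today's loss-of-use x and plan to ship tomorrow).
# The numeric values below are illustrative, not from the talk.

def prefers_tomorrow(c: float, x: float, b: float) -> bool:
    """True if the present-biased agent would rather ship tomorrow than today."""
    return b * x + c < b * c       # equivalently, (b - 1) * c > b * x

if __name__ == "__main__":
    print(prefers_tomorrow(c=10.0, x=1.0, b=2.0))   # True: 12 < 20, so wait
    print(prefers_tomorrow(c=10.0, x=1.0, b=1.0))   # False: no bias, ship today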

  11. Why did George Akerlof not make it to the post office? An agent must ship a package sometime in the next n days. One-time effort cost c to ship it. Loss-of-use cost x for each day it hasn't been shipped. A model based on present bias [Akerlof 91; cf. Strotz 55, Pollak 68]: costs incurred today are more salient, raised by a factor b > 1. On day t: the remaining cost if sent today is bc; the remaining cost if sent tomorrow is bx + c. Tomorrow is preferable if (b − 1)c > bx. General framework: quasi-hyperbolic discounting [Laibson 1997]. A cost or reward c realized t units in the future has present value βδ^t c. Special case: δ = 1, b = β⁻¹, and the agent is naive about its bias. Can model procrastination, task abandonment [O'Donoghue-Rabin 08], and the benefits of choice reduction [Ariely-Wertenbroch 02, Kaur-Kremer-Mullainathan 10].
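A similarly small sketch of the quasi-hyperbolic present value, again with illustrative parameters; with δ = 1 and β = 1/b it reproduces the same ship-today-versus-tomorrow comparison as above.

# Quasi-hyperbolic discounting [Laibson 1997]: a cost or reward of size c
# realized t periods in the future has present value beta * delta**t * c for
# t >= 1, and value c at t = 0.  With delta = 1 and beta = 1/b, scaling both
# plans by b recovers the comparison above.  Parameter values are illustrative.

def present_value(c: float, t: int, beta: float, delta: float = 1.0) -> float:
    return c if t == 0 else beta * (delta ** t) * c

if __name__ == "__main__":
    beta = 0.5                                       # corresponds to b = 1/beta = 2
    ship_today = present_value(10.0, 0, beta)                                   # 10.0
    ship_tomorrow = present_value(1.0, 0, beta) + present_value(10.0, 1, beta)  # 6.0
    print(ship_today, ship_tomorrow)                 # tomorrow again looks cheaper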

  12. Cost Ratio. Cost ratio = (cost incurred by the present-biased agent) / (minimum cost achievable). Across all stories in which present bias has an effect, what's the worst cost ratio? That is, max_{stories S} cost-ratio(S).

  13. Cost Ratio. Cost ratio = (cost incurred by the present-biased agent) / (minimum cost achievable). Across all stories in which present bias has an effect, what's the worst cost ratio? That is, max_{stories S} cost-ratio(S). ???
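For the post-office story itself, both quantities in the ratio can be read off the earlier slides: the optimum ships on day 1 at cost c + x, while a procrastinating agent waits until day n and pays c + nx. A small sketch with illustrative values, chosen so that the waiting condition (b − 1)c > bx holds:

# Cost ratio for the post-office story: the optimum ships on day 1 (cost c + x),
# while a sufficiently biased agent waits until the deadline (cost c + n*x).
# The values of c, x, n below are illustrative.

def post_office_cost_ratio(c: float, x: float, n: int) -> float:
    biased_cost = c + n * x       # ship on the last possible day
    optimal_cost = c + x          # ship on day 1
    return biased_cost / optimal_cost

if __name__ == "__main__":
    # With c = 10, x = 1, b = 2 the waiting condition (b - 1)*c > b*x holds on
    # every day, so the agent really does wait until day n.
    print(post_office_cost_ratio(c=10.0, x=1.0, n=60))    # 70 / 11, roughly 6.36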

  14. A Graph-Theoretic Framework. [Figure: a directed graph from s to t with intermediate nodes a, b, c, d, e and edge costs of 2, 8, and 16.] Use graphs as the basic structure for representing scenarios [Kleinberg-Oren 2014]. The agent plans to follow the cheapest path from s to t. From a given node, the costs of immediately outgoing edges are multiplied by b > 1.
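One natural reading of this rule gives a simple traversal: standing at a node, the agent scores each outgoing edge at b times its cost plus the true shortest-path cost from its head to t, takes the best-looking step, and then re-plans. The sketch below follows that reading; the graph, costs, and bias are illustrative and do not correspond to the figure.

# Naive present-biased traversal: from the current node, each candidate path is
# evaluated as b * (cost of its first edge) + (true cost of the rest); the agent
# follows the best path for one step, then re-plans at the next node.

import heapq
from typing import Dict, List, Tuple

Graph = Dict[str, List[Tuple[str, float]]]   # node -> list of (neighbor, edge cost)

def dist_to_target(graph: Graph, target: str) -> Dict[str, float]:
    """True shortest-path distance from every node to `target` (Dijkstra on the reverse graph)."""
    rev: Graph = {u: [] for u in graph}
    rev.setdefault(target, [])
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist = {u: float("inf") for u in rev}
    dist[target] = 0.0
    heap = [(0.0, target)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, c in rev[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(heap, (dist[v], v))
    return dist

def biased_walk(graph: Graph, s: str, t: str, b: float) -> Tuple[List[str], float]:
    """Path actually taken by the naive agent, and the real (unbiased) cost incurred."""
    dist = dist_to_target(graph, t)
    path, total, u = [s], 0.0, s
    while u != t:
        # Perceived cost of continuing via neighbor v: b * c(u, v) + dist(v, t).
        v, c = min(graph[u], key=lambda e: b * e[1] + dist[e[0]])
        total += c
        u = v
        path.append(u)
    return path, total

if __name__ == "__main__":
    g: Graph = {                 # small illustrative instance (not the graph in the figure)
        "s": [("u", 2.0), ("v", 8.0)],
        "u": [("t", 16.0)],
        "v": [("t", 8.0)],
        "t": [],
    }
    # With b = 3 the agent takes s -> u -> t (real cost 18), although s -> v -> t costs 16.
    print(biased_walk(g, "s", "t", b=3.0))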

  17. Example: Akerlof's Story as a Graph. [Figure: starting node s and day-nodes v1, …, v5 linked in a chain by cost-x edges, each day-node with a cost-c edge to t.] Node v_i = reaching day i without sending the package.
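Reading the figure as a chain of day-nodes (a reconstruction of its edges, not a transcription), the cheapest completion from any day is simply to ship now, so the agent's daily comparison reduces to bc versus bx + c from the earlier slide. A compact sketch with illustrative values:

# Akerlof's story on the chain graph: node v_i = reaching day i with the package
# unsent; a "wait" edge of cost x leads to v_{i+1} and a "ship" edge of cost c
# leads to t (this edge structure is a reading of the figure, not taken from it
# verbatim).  On such a chain the biased agent's comparison is the same every
# day: b*c to ship today versus b*x + c to wait and plan to ship tomorrow.

def akerlof_ship_day(c: float, x: float, b: float, n: int):
    """Day on which the naive present-biased agent ships, and the real cost incurred."""
    # The comparison does not depend on the day, so the agent either ships on
    # day 1 or procrastinates all the way to the forced deadline on day n.
    ship_day = n if b * x + c < b * c else 1
    return ship_day, c + ship_day * x

if __name__ == "__main__":
    print(akerlof_ship_day(c=10.0, x=1.0, b=2.0, n=60))   # (60, 70.0): waits until the deadline
    print(akerlof_ship_day(c=10.0, x=1.0, b=1.0, n=60))   # (1, 11.0): no bias, ships on day 1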

  18. Paths with Rewards. [Figure: a small graph from s to t, via a or b, with edge costs and a reward at t.] Variation: the agent continues along the path only if its (perceived) cost is at most the reward at t. Can model abandonment: the agent stops partway, never completing the path. Can model benefits of choice reduction: deleting nodes can sometimes make the graph traversable.
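A sketch of this reward variant under the same reading as above: before each step the naive agent checks whether its biased estimate of the cost to finish exceeds the reward at t, and abandons if it does. The small graph, the bias, and the reward are illustrative, not the figure's.

# Reward variant: at node u the agent abandons if b * (cost of the chosen edge)
# plus the true cost of the rest exceeds the reward waiting at t.

GRAPH = {                       # a tiny illustrative DAG from "s" to "t"
    "s": [("a", 1.0), ("b", 8.0)],
    "a": [("t", 10.0)],
    "b": [("t", 4.0)],
    "t": [],
}

def dist(u: str) -> float:
    """True shortest-path cost from u to 't' (simple recursion; GRAPH is a DAG)."""
    return 0.0 if u == "t" else min(c + dist(v) for v, c in GRAPH[u])

def biased_walk_with_reward(s: str, b: float, reward: float):
    u, incurred, path = s, 0.0, [s]
    while u != "t":
        v, c = min(GRAPH[u], key=lambda e: b * e[1] + dist(e[0]))
        if b * c + dist(v) > reward:        # best continuation looks too costly
            return path, incurred, "abandoned"
        u, incurred = v, incurred + c
        path.append(u)
    return path, incurred, "finished"

if __name__ == "__main__":
    # With b = 2 and reward = 15 the agent starts via the cheap first edge to "a"
    # (perceived 2*1 + 10 = 12 <= 15), then at "a" sees 2*10 = 20 > 15 and quits,
    # having sunk a cost of 1 without collecting the reward.
    print(biased_walk_with_reward("s", b=2.0, reward=15.0))

In this instance a naive agent starts, sinks cost along the way, and then quits partway, which is the abandonment pattern the slide describes.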

  22. A More Elaborate Example. [Figure: the state graph on nodes v_ij between s and t, annotated with 2 + 4 + 4 = 10.] A three-week short course with two projects. Reward of 16 from finishing the course. Effort cost in a given week: 1 for doing no project, 4 for doing one, 9 for doing both. v_ij = the state in which i weeks of the course are done and the student has completed j projects.

  24. A More Elaborate Example. [Figure: the same state graph, here annotated with 2 + 9 = 11.] A three-week short course with two projects. Reward of 16 from finishing the course. Effort cost in a given week: 1 for doing no project, 4 for doing one, 9 for doing both. v_ij = the state in which i weeks of the course are done and the student has completed j projects.
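A sketch of the course example as a planning problem, assuming that in each week the student may do zero, one, or both of the remaining projects at the per-week costs above, and that the reward of 16 is collected only once all three weeks and both projects are done; the traversal and abandonment rules are the same reconstruction used in the earlier sketches.

# The course example as a planning problem.  States are (weeks_done,
# projects_done); in each week the student does 0, 1, or 2 of the remaining
# projects at effort cost 1, 4, or 9, and the reward of 16 is collected only at
# t = (3, 2).  The edge set and the traversal/abandonment rules are a
# reconstruction, not taken verbatim from the figure.

WEEKS, PROJECTS, REWARD = 3, 2, 16.0
WEEK_COST = {0: 1.0, 1: 4.0, 2: 9.0}          # effort of a week doing k projects
START, TARGET = (0, 0), (WEEKS, PROJECTS)

def edges(state):
    """Outgoing edges (next_state, cost): one week passes, k more projects get done."""
    weeks, projects = state
    if weeks == WEEKS:
        return []
    return [((weeks + 1, projects + k), WEEK_COST[k])
            for k in range(PROJECTS - projects + 1)]

def dist(state) -> float:
    """True minimum remaining effort to reach TARGET (infinite if unreachable)."""
    if state == TARGET:
        return 0.0
    return min((c + dist(nxt) for nxt, c in edges(state)), default=float("inf"))

def naive_walk(b: float):
    """Naive present-biased traversal with abandonment against the reward."""
    state, incurred, trace = START, 0.0, [START]
    while state != TARGET:
        nxt, c = min(edges(state), key=lambda e: b * e[1] + dist(e[0]))
        if b * c + dist(nxt) > REWARD:        # perceived cost exceeds the reward
            return trace, incurred, "abandoned"
        state, incurred = nxt, incurred + c
        trace.append(state)
    return trace, incurred, "finished"

if __name__ == "__main__":
    for b in (1.0, 2.0):
        print(b, naive_walk(b))

Under these assumptions the sketch finishes at total effort 9 when b = 1, while at b = 2 it spends the first two weeks doing no project and then abandons; the exact numbers in the figure may differ from this reconstruction.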

