Data Science methods for treatment personalization in Persuasive Technology

  1. Data Science methods for treatment personalization in Persuasive Technology. Prof. dr. M.C. Kaptein, Professor of Data Science & Health, Principal Investigator @ JADS. 12 April 2019.

  2. Maurits Kaptein and Edwin van den Heuvel, Statistics for Data Scientists: An introduction to probability, statistics, and data analysis (Undergraduate Topics in Computer Science).

  3. Personalization. With personalization we try to find the “right content for the right person at the right time” [10]. Applications in communication, persuasive technology, marketing, healthcare, etc. More formally, we assume a population of N units, which present themselves sequentially. For each unit i = 1, ..., N we first observe their properties x_i, and subsequently, using some decision policy π(), we choose a treatment a_i (i.e., π : (x_i, d) → a_i). After the content is shown, we observe the associated outcome, or reward, r_i, and our aim is to choose π() such that we maximize ∑_{i=1}^N r_i.
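To fix the notation, here is a minimal sketch of this sequential loop in Python; the uniform-random policy and the simulated context and rewards are placeholders for illustration only, not part of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 3  # hypothetical: N units, K candidate treatments

def pi(x, d):
    """Placeholder policy: ignores the context x and the data d, picks uniformly."""
    return int(rng.integers(K))

d, total_reward = [], 0.0
for i in range(N):
    x_i = rng.normal(size=2)         # observe the unit's properties x_i
    a_i = pi(x_i, d)                 # choose a treatment: pi(x_i, d) -> a_i
    r_i = int(rng.binomial(1, 0.5))  # observe the reward r_i (simulated here)
    d.append((x_i, a_i, r_i))
    total_reward += r_i              # sum_{i=1}^N r_i, the quantity to maximize
```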

  4. Overview ◮ Selecting persuasive interventions ◮ Selecting personalized persuasive interventions ◮ Applications in persuasive technology design ◮ Available software

  5. Section 1 Selecting persuasive interventions

  6. The multi-armed bandit problem. For i = 1, ..., N: ◮ We select an action a_i (often from a discrete set of actions k = 1, ..., K, but not always). ◮ We observe a reward r_i. Actions are selected according to some policy π : {a_1, ..., a_{i−1}, r_1, ..., r_{i−1}} ↦ a_i. Aim: maximize the (expected) cumulative reward ∑_{i=1}^N r_i (or, equivalently, minimize the regret, which is simply ∑_{i=1}^N (π_max − π())) [3].
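In simulation, where the true arm means are known, the (expected) regret of a policy can be tracked directly; a sketch with made-up Bernoulli arm probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.6, 0.4, 0.4])      # assumed true success probabilities per arm
N = 1000

regret = 0.0
for i in range(N):
    a = int(rng.integers(len(p)))  # placeholder policy: uniform random
    r = rng.binomial(1, p[a])      # observed reward (not needed for regret)
    regret += p.max() - p[a]       # expected reward lost vs. the best arm

print(f"Cumulative expected regret after {N} steps: {regret:.1f}")
```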

  7. The canonical solution: the “experiment”. For i = 1, ..., n (where n << N): ◮ Choose k with Pr(a_i = k) = 1/K. ◮ Observe the reward. Compute r̄_1, ..., r̄_K and create a guideline / business rule. For i > n: ◮ Choose a_i = argmax_k (r̄_1, ..., r̄_K) [12, 6, 9].
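A sketch of this experiment-then-exploit (“epsilon-first”) policy under the same assumed Bernoulli arms; n and the probabilities are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.6, 0.4, 0.4])    # assumed true success probabilities per arm
N, n, K = 1000, 150, len(p)      # n << N exploration rounds

counts, sums = np.zeros(K), np.zeros(K)
for i in range(N):
    if i < n:
        a = int(rng.integers(K))              # explore: Pr(a_i = k) = 1/K
    else:
        means = sums / np.maximum(counts, 1)  # estimated mean reward per arm
        a = int(np.argmax(means))             # exploit the "business rule"
    r = rng.binomial(1, p[a])
    counts[a] += 1
    sums[a] += r
```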

  8. Alternative solutions. 1. ε-Greedy: For i = 1, ..., N: ◮ With probability ε choose k with Pr(a_i = k) = 1/K. ◮ With probability 1 − ε choose a_i = argmax_k (r̄_1, ..., r̄_K) (given the data up to that point) [2]. 2. Thompson sampling: Set up a Bayesian model for r_1, ..., r_K. For i = 1, ..., N: ◮ Play each arm with probability proportional to your belief that it is the best arm. ◮ Update the model parameters. Easily implemented by taking a draw from the posterior [4, 1].
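Both policies take only a few lines for Bernoulli rewards. In the Thompson sampling sketch below, Beta(1, 1) priors make the posterior draws Beta(successes + 1, failures + 1); the environment is the same made-up one as above:

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.6, 0.4, 0.4])    # assumed true success probabilities per arm
N, K, eps = 1000, len(p), 0.1

# 1. epsilon-greedy
counts, sums = np.zeros(K), np.zeros(K)
for i in range(N):
    if rng.random() < eps:
        a = int(rng.integers(K))                          # explore
    else:
        a = int(np.argmax(sums / np.maximum(counts, 1)))  # exploit
    r = rng.binomial(1, p[a])
    counts[a] += 1
    sums[a] += r

# 2. Thompson sampling with Beta(1, 1) priors
succ, fail = np.ones(K), np.ones(K)
for i in range(N):
    draws = rng.beta(succ, fail)   # one draw per arm from its posterior
    a = int(np.argmax(draws))      # play the arm whose draw is largest
    r = rng.binomial(1, p[a])
    succ[a] += r                   # conjugate posterior update
    fail[a] += 1 - r
```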

  9. Performance of different content selection policies. [Figure: cumulative regret over 1,000 time steps for EpsilonFirst, EpsilonGreedy, and ThompsonSampling.] Caption: Comparison in terms of regret between three different bandit policies on a 3-arm Bernoulli bandit problem with true probabilities p_1 = .6, p_2 = p_3 = .4. Averages over m = 10,000 simulation runs. Thompson sampling outperforms the other policies.

  10. Intuition behind a well-performing allocation policy. A good policy effectively balances exploration and exploitation: ◮ Exploration: try out the content we are unsure about: learn. ◮ Exploitation: use the knowledge we have / choose the content we think is effective: earn. We can think of the experiment as moving all exploration up front. In that case, (a) it is hard to determine how much we need to explore (since there is no outcome data yet), and (b) we might make a wrong decision.

  11. Section 2 Selecting personalized persuasive interventions

  12. The problem. For i = 1, ..., N: ◮ We observe the context x_i. ◮ We select an action a_i. ◮ We observe a reward r_i. The aim remains the same, but the problem is more challenging: the best action might depend on the context.
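In simulation this shows up as arm means that differ per context; a minimal made-up environment with one binary context variable (mirroring the comparison on slide 16):

```python
import numpy as np

rng = np.random.default_rng(4)
# assumed success probabilities: rows = context (0 or 1), columns = arms
p = np.array([[0.7, 0.3, 0.3],
              [0.3, 0.7, 0.3]])

for i in range(5):
    x = int(rng.integers(2))      # observe the context x_i
    a = int(rng.integers(3))      # placeholder policy: uniform random
    r = rng.binomial(1, p[x, a])  # the reward depends on (context, action)
    print(f"context={x} action={a} reward={r}")
```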

  13. The current approach. ◮ Do experiments within subgroups of users (or re-analyze existing RCT data to find heterogeneity). ◮ Subgroup selection is driven by a theoretical understanding of the underlying mechanism. ◮ Effectively, solve a non-contextual problem within each context (see the sketch below). Thus, we treat the problem as many separate problems. In the limit, there is no room for exploration when users are fully unique! (N = 1)
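A sketch of this subgroup approach on the made-up two-context environment from above: each context gets its own independent epsilon-first experiment. Note that with fully unique users each "subgroup" would receive only one observation, so the exploration phase would never end:

```python
import numpy as np

rng = np.random.default_rng(5)
p = np.array([[0.7, 0.3, 0.3],
              [0.3, 0.7, 0.3]])   # assumed per-context arm probabilities
N, n, K = 1000, 150, 3            # n exploration rounds *per subgroup*

counts, sums = np.zeros((2, K)), np.zeros((2, K))
for i in range(N):
    x = int(rng.integers(2))              # the unit's subgroup
    if counts[x].sum() < n:               # still exploring in this subgroup
        a = int(rng.integers(K))
    else:                                 # exploit this subgroup's estimates
        a = int(np.argmax(sums[x] / np.maximum(counts[x], 1)))
    r = rng.binomial(1, p[x, a])
    counts[x, a] += 1
    sums[x, a] += r
```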

  14. Searching the context × action space. [Figure: survival as a function of dose and weight (3D surface), with slices of survival vs. dose at Weight = 20 and Weight = 60.] ◮ A different outcome for each action for each covariate. ◮ We need to learn this relation efficiently.

  15. An alternative approach. It is easy to extend Thompson sampling to include a context. For i = 1, ..., N: ◮ Create a model to predict E(r_t) = f(a_t, x_t) and quantify your uncertainty (e.g., using Bayes). ◮ Exploration: choose actions with uncertain outcomes. ◮ Exploitation: choose actions with high expected outcomes. Very flexible models are available for E(r_t) = f(a_t, x_t) [8, 7, 13], and efficient procedures are available for incorporating uncertainty: LinUCB [5], Thompson sampling [11], Bootstrap Thompson sampling [6], etc.
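As one concrete instance, a compact sketch of disjoint LinUCB [5] on the same made-up environment; the exploration parameter alpha and the intercept-plus-indicator feature encoding are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
p = np.array([[0.7, 0.3, 0.3],
              [0.3, 0.7, 0.3]])   # assumed per-context arm probabilities
N, K, d, alpha = 1000, 3, 2, 1.0  # d = feature dimension, alpha = exploration

A = np.stack([np.eye(d)] * K)     # per-arm A_k = I + sum of x x^T
b = np.zeros((K, d))              # per-arm b_k = sum of r * x

for i in range(N):
    ctx = int(rng.integers(2))
    x = np.array([1.0, float(ctx)])  # intercept + binary context feature
    ucb = np.empty(K)
    for k in range(K):
        A_inv = np.linalg.inv(A[k])
        theta = A_inv @ b[k]         # ridge estimate of E(r | a = k, x)
        ucb[k] = theta @ x + alpha * np.sqrt(x @ A_inv @ x)  # optimism bonus
    a = int(np.argmax(ucb))          # act optimistically
    r = rng.binomial(1, p[ctx, a])
    A[a] += np.outer(x, x)           # update the chosen arm's statistics
    b[a] += r * x
```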

  16. Performance. [Figure: two panels over 400 time steps, cumulative reward rate (left) and average reward (right), for EpsilonFirst, EpsilonGreedy, and LinUCBDisjoint.] Caption: Simple comparison of LinUCB (“frequentist Thompson sampling”) with non-contextual approaches for a 3-armed Bernoulli bandit with a single, binary, context variable. Already in this very simple case the difference between the contextual and non-contextual approaches is very large.

  17. Section 3 Applications in persuasive technology design
