
Introduction to Adaptive Designs: FUNDAMENTAL DESIGN PRINCIPLES, CASE STUDIES, AND HANDS-ON PRACTICE
Minh Huynh, Ph.D., Aaron Heuser, Ph.D., Chunxiao Zhou, Ph.D.
Course Objectives: gain familiarity with the basic principles of adaptive design.


  1. Case Study #3: Simon's 2-Stage Design
  ● Originally designed for phase II clinical trials with a binary response
  ● Suitable for a one-arm (uncontrolled) study testing H0: p ≤ p0
  ● At the end of the first stage, the study may be stopped for futility
  ● This method minimizes the expected sample size under H0 for a given α-level
  (Flowchart: Stage 1 → rejection rule met? If yes, reject treatment; if no, Stage 2 → rejection rule met? If yes, reject treatment.)

  2. Case Study #3: Simon's 2-Stage Design
  ● Suppose you have a new treatment that either works or does not work at all
  ● This treatment is administered to everyone (uncontrolled)
  ● You currently have a treatment that works 20% of the time (p0 = .2)
  ● Your policy advisors tell you that your new treatment is worth pursuing only if it can hit the 40% mark (p1 = .4)
  ● Since you do not know whether this will work, you would like the minimal sample size required to test the new treatment

  3. Case Study #3: Simon's 2-Stage Design
  ● Simon's method allows you to enroll some subjects, administer the treatment, stop and take a look, and decide whether to proceed.
  ● Can we do better than the traditional (fixed-sample) design?

  4. Case Study #3: Simon's 2-Stage Design
  Let n1 = number of subjects in the 1st stage and n2 = number of subjects in the 2nd stage.
  Y1 = number of successes among the n1, with Y1 ~ Bin(n1, p);
  Y2 = number of successes among the n2, with Y2 ~ Bin(n2, p).
  r1 = number of stage-1 successes at or below which we terminate the study;
  r = number of total successes at or below which we reject the treatment.

  5. Case Study #3: Simon's 2-Stage Design, Type 1 and Type 2 Error Constraints
  The treatment is defined to be non-promising if y1 ≤ r1, or if (y1 > r1) and (y1 + y2 ≤ r); it is promising if (y1 > r1) and (y1 + y2 > r).

  6. Case Study #3: Simon's 2-Stage Design, Type 1 and Type 2 Error Constraints
  Thus we need
  Prob(promising | p ≤ p0) ≤ α and Prob(promising | p ≥ p1) ≥ 1 - β.
  At the boundaries, we need
  Prob((y1 > r1) and (y1 + y2 > r) | p = p0) ≤ α;
  Prob((y1 > r1) and (y1 + y2 > r) | p = p1) ≥ 1 - β.   (2)

  7. Case Study #3: Simon's 2-Stage Design
  Assuming independence between Y1 and Y2,

  \[
  \Pr\{(Y_1 > r_1)\ \text{and}\ (Y_1 + Y_2 > r) \mid p\}
  = \sum_{x_1 > r_1} \sum_{x_2 > r - x_1} b(x_1 \mid p)\, b(x_2 \mid p)
  = \sum_{x_1 > r_1} \sum_{x_2 > r - x_1} \binom{n_1}{x_1} p^{x_1} (1-p)^{n_1 - x_1} \binom{n_2}{x_2} p^{x_2} (1-p)^{n_2 - x_2}
  \]
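The double sum above maps directly to code. The following is a minimal Python sketch (not from the original slides) that evaluates this rejection probability; the design it checks, (n1, n2, r1, r) = (19, 35, 4, 15), is the one the calculator reports later for p0 = .2 and p1 = .4:

```python
from math import comb

def reject_prob(p, n1, n2, r1, r):
    """Prob{(Y1 > r1) and (Y1 + Y2 > r) | p}: the double binomial sum above."""
    def b(k, n, q):                      # Bin(n, q) probability mass at k
        return comb(n, k) * q**k * (1 - q)**(n - k)
    total = 0.0
    for y1 in range(r1 + 1, n1 + 1):     # stage 1 continued: y1 > r1
        lo = max(r - y1 + 1, 0)          # overall success: y1 + y2 > r
        total += b(y1, n1, p) * sum(b(y2, n2, p) for y2 in range(lo, n2 + 1))
    return total

# Evaluated at p0 this is the attained Type 1 error; at p1, the attained power.
alpha = reject_prob(0.20, 19, 35, 4, 15)
power = reject_prob(0.40, 19, 35, 4, 15)
```

This is how the constraints in (2) are checked in practice: the same sum is evaluated at p = p0 against α and at p = p1 against 1 - β.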

  8. Case Study #3: Simon's 2-Stage Design
  Simon's method chooses the set (n1, n2, r1, r) to minimize sample size, subject to the Type 1 and Type 2 error probability constraints in (2).

  9. Case Study #3: Simon's 2-Stage Design
  (n1, n2, r1, r) can be chosen to meet one of two optimality criteria:
  (1) Optimal: minimize the expected sample size under the null,
      E(N | H0) = n1 + n2 · Prob(proceed to Stage 2 | H0) = n1 + n2 · Prob(y1 > r1 | p = p0)
  (2) Minimax: minimize the maximum sample size in the trial, n = n1 + n2.

  10. Case Study #3: Simon's 2-Stage Design
  Simon's method chooses the set (n1, n2, r1, r) to minimize expected sample size, subject to the Type 1 and Type 2 error probability constraints in (2):
  Step 1: Specify p0, p1, α, and β.
  Step 2: For each value of the total sample size n, and each value of n1 in the range (1, n - 1), determine the integer values of r1 and r that satisfy the error constraints and minimize E(N) when p = p0:
    a. For each value of r1 in (0, n1), find the maximum value of r that satisfies the Type 2 constraint.
    b. Determine whether the identified set of parameters (n, n1, r1, r) satisfies the Type 1 constraint.
    c. If yes, compute E(N), compare it to previously identified feasible designs, and continue the search over r1.
    d. Keeping n fixed, search over the range of n1 to find the optimal two-stage design.
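The inner loop of this search (steps a through d, with n held fixed) can be sketched in Python as below. This is an illustrative reimplementation, not Simon's original program; it uses the facts that E(N|H0) depends only on (n1, r1), and that both the power and the Type 1 error shrink as r grows, so the largest r meeting the power constraint is the best candidate for also meeting the Type 1 constraint. The value n = 54 is the total the calculator reports later for p0 = .2, p1 = .4, α = .05, β = .1:

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def reject_prob(p, n1, n2, r1, r):
    # Prob{(Y1 > r1) and (Y1 + Y2 > r) | p}, with a precomputed
    # survival function for Y2 to keep the search fast
    sf = [0.0] * (n2 + 2)
    for k in range(n2, -1, -1):
        sf[k] = sf[k + 1] + pmf(k, n2, p)
    return sum(pmf(y1, n1, p) * sf[min(max(r - y1 + 1, 0), n2 + 1)]
               for y1 in range(r1 + 1, n1 + 1))

def best_design_for_n(n, p0, p1, alpha, beta):
    """Steps (a)-(d) for a fixed total n: scan n1 and r1, take the largest r
    meeting the power constraint, keep feasible designs, minimize E(N|H0)."""
    best = None
    for n1 in range(1, n):
        n2 = n - n1
        for r1 in range(0, n1):
            r_max = None
            for r in range(r1, n):       # power decreases as r grows
                if reject_prob(p1, n1, n2, r1, r) >= 1 - beta:
                    r_max = r
                else:
                    break
            if r_max is None:
                continue                  # (a) failed: no r meets the power
            if reject_prob(p0, n1, n2, r1, r_max) > alpha:
                continue                  # (b) failed: Type 1 error too large
            pet = sum(pmf(y1, n1, p0) for y1 in range(0, r1 + 1))
            en = n1 + n2 * (1 - pet)      # (c) expected size under H0
            if best is None or en < best[0]:
                best = (en, n1, r1, r_max)
    return best

en, n1, r1, r = best_design_for_n(54, 0.2, 0.4, 0.05, 0.10)
```

The full Simon search additionally iterates n upward and keeps the overall E(N|H0) minimizer (optimal criterion) or the smallest feasible n (minimax criterion).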

  11. Online calculator: http://cancer.unc.edu/biostatistics/program/ivanova/SimonsTwoStageDesign.aspx
  (Enter alpha, power, the current treatment's success rate, and the new treatment's success rate.)

  12. Calculator output:
  Total sample size = 54; total Stage 1 sample size = 19.
  End the study and reject the treatment after Stage 1 if successes ≤ 4/19.
  Reject the treatment after Stage 2 if successes ≤ 15/54.
  Expected sample size under H0 is 30.4; probability of stopping early under H0 is .6733.

  13. Case Study #3: Simon's 2-Stage Design
  From these calculations, the resulting adaptive design is:
  Step 1: Observe 19 subjects.
  Step 2: If 4 or fewer subjects are treated successfully, stop and reject the new treatment.
  Step 3: If 5 or more subjects are treated successfully, go to Stage 2.
  Step 4: Observe 35 more subjects and count the total successes.
  Step 5: If 15 or fewer subjects in total are treated successfully, fail to reject H0 and reject the new treatment.
  Step 6: If 16 or more subjects in total are treated successfully, reject H0, suggesting the new treatment is better than the existing treatment.

  14. Case Study #3: Simon's 2-Stage Design
  If these steps and stopping rules are followed, the expected sample size is only 30.4 subjects, and the probability of stopping early under H0 is .6733. This is a very good efficiency gain!
  NOTE: This design stops early for futility, not for efficacy.
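These two operating characteristics are easy to reproduce. A small Python check (assuming, as above, n1 = 19, n2 = 35, r1 = 4, p0 = .2):

```python
from math import comb

def binom_cdf(r, n, p):
    """P(Y <= r) for Y ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

n1, n2, r1, p0 = 19, 35, 4, 0.20
pet = binom_cdf(r1, n1, p0)   # probability of stopping early under H0
en = n1 + n2 * (1 - pet)      # expected sample size under H0

print(round(pet, 4), round(en, 1))  # 0.6733 30.4, matching the slide
```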

  15. Case Study #3: Hands-on Practice
  Suppose you conduct a Phase II clinical trial to test whether a new drug is effective in treating the Zika virus. You would like α = 0.05 and power = 90%. Your Section Chief informs you that the previous candidate drug had a 20% response rate, and for the new drug to move to Phase III you need a response rate of at least 40%. You have a research budget that can enroll no more than 100 subjects.
  (a) Design a Simon 2-stage trial using both the optimal and minimax criteria.
  (b) How far below the 100-subject mark do you expect to be?
  (c) What is the probability that this will happen (i.e., that you will have an early stop)?

  16. Questions ?

  17. Case Study #4: Bayesian Phase II Design with Posterior Probability
  ● Uses successive predictive probabilities of success as stopping rules
  ● Data accrual is monitored continuously to make adaptive decisions
  ● Can be stopped for either efficacy or futility
  ● Suited for situations where a new treatment is being considered for further study, but you do not have (or do not want to enroll) a large number of subjects
  ● Simple to implement for studies with binary outcomes

  18. Case Study #4: Bayesian Phase II Design with Posterior Probability
  Let p1 = response rate for the new, experimental treatment, p1 ~ Beta(a1, b1).
  Let p0 = response rate for the currently existing treatment, p0 ~ Beta(a0, b0).
  At any stage of the study, let Y = number of successes among the n subjects treated, Y ~ Bin(n, p1).
  By the conjugacy between the binomial and beta distributions, the posterior distribution of p1 given Y = y is again a beta distribution:
  p1 | Y = y ~ Beta(a1 + y, b1 + n - y).

  19. Case Study #4: Bayesian Phase II Design with Posterior Probability
  Let f(p; a, b) be the p.d.f. of p ~ Beta(a, b) and F(p; a, b) its corresponding c.d.f. Then we can compute

  \[
  \Pr(p_1 > p_0 + \delta \mid y)
  = \int_0^{1-\delta} \bigl[\, 1 - F(p + \delta;\ a_1 + y,\ b_1 + n - y) \,\bigr]\, f(p;\ a_0, b_0)\, dp
  \]

  where 0 < δ < 1 is the minimally acceptable improvement in the new treatment's response rate over the standard treatment.
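The integral has no closed form, but it is straightforward to estimate by Monte Carlo: draw p1 from its posterior and p0 from its prior, and count how often p1 exceeds p0 + δ. A sketch using only the Python standard library; the priors below borrow the Beta(0.6, 1.4) and Beta(15, 35) example coming up, and the data values are hypothetical:

```python
import random

random.seed(1)

def prob_new_beats_standard(y, n, a1, b1, a0, b0, delta=0.0, draws=50_000):
    """Monte Carlo estimate of Pr(p1 > p0 + delta | y successes in n), with
    p1 | y ~ Beta(a1 + y, b1 + n - y) and p0 kept at its Beta(a0, b0) prior."""
    hits = 0
    for _ in range(draws):
        p1 = random.betavariate(a1 + y, b1 + n - y)
        p0 = random.betavariate(a0, b0)
        if p1 > p0 + delta:
            hits += 1
    return hits / draws

low = prob_new_beats_standard(3, 20, 0.6, 1.4, 15, 35)    # weak data
high = prob_new_beats_standard(15, 20, 0.6, 1.4, 15, 35)  # strong data
```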

  20. Case Study #4: Bayesian Phase II Design with Posterior Probability
  Now we are almost ready to specify the design. Let
  θU = upper probability cut-off, θU ∈ [.95, .99];
  θL = lower probability cut-off, θL ∈ [.01, .05];
  Un = smallest integer y such that Pr(p1 > p0 | y) ≥ θU;
  Ln = largest integer y such that Pr(p1 > p0 + δ | y) ≤ θL.

  21. Case Study #4: Bayesian Phase II Design with Posterior Probability
  Step 0: Let N = the maximum number of subjects you can enroll.
  Step 1: Enroll n subjects and observe y, the number of successes.
  Step 2: If y ≥ Un, end Phase II for efficacy: the treatment is promising. If y ≤ Ln, terminate the study for futility: the treatment is not promising.
  Step 3: If Ln < y < Un and n < N, continue and enroll the next subject.
  Step 4: If n reaches N before y crosses either stopping boundary, the study is inconclusive.
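Given a routine for Pr(p1 > p0 + δ | y), the boundaries Un and Ln above are found by scanning the integers 0..n. A Monte Carlo sketch; the priors and cut-offs are illustrative, borrowed from the Beta(0.6, 1.4) / Beta(15, 35) example that follows:

```python
import random

random.seed(7)

def post_prob(y, n, a1, b1, a0, b0, delta=0.0, draws=20_000):
    """Monte Carlo Pr(p1 > p0 + delta | y of n), as on the previous slides."""
    hits = sum(random.betavariate(a1 + y, b1 + n - y) >
               random.betavariate(a0, b0) + delta
               for _ in range(draws))
    return hits / draws

def boundaries(n, a1, b1, a0, b0, theta_U, theta_L, delta=0.0):
    """U_n = smallest y with Pr(p1 > p0 | y) >= theta_U;
    L_n = largest y with Pr(p1 > p0 + delta | y) <= theta_L.
    Either is None if no such integer exists."""
    U = next((y for y in range(n + 1)
              if post_prob(y, n, a1, b1, a0, b0) >= theta_U), None)
    L = next((y for y in range(n, -1, -1)
              if post_prob(y, n, a1, b1, a0, b0, delta) <= theta_L), None)
    return U, L

# Illustrative: 18 subjects enrolled, Beta(0.6, 1.4) new-treatment prior,
# Beta(15, 35) standard-treatment prior, theta_U = .95, theta_L = .05.
U, L = boundaries(18, 0.6, 1.4, 15, 35, 0.95, 0.05)
```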

  22. Case Study #4a: Bayesian Phase II Design with Posterior Probability
  Example: Suppose the most we can enroll in a study is N = 40, the prior beta distribution for the current treatment is p0 ~ Beta(a0, b0) = Beta(15, 35), and the prior beta distribution for the new treatment is p1 ~ Beta(a1, b1) = Beta(0.6, 1.4).

  23. Case Study #4a: Bayesian Phase II Design with Posterior Probability
  Using θL = .05 and δ = 0 for futility stopping, we enroll our subjects and monitor each of them. Our stopping rules are:

    n  :  6  13  18  24  29  35  40
    Ln :  0   1   2   3   4   5   6

  Each pair (Ln, n) is a stopping rule: stop the study if the number of successes after enrolling n subjects is less than or equal to Ln. For example, (2, 18) means that after enrolling 18 subjects, if there are 2 or fewer successes, the study should be terminated for futility.

  24. Questions ?

  25. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  One particular feature of the Bayesian paradigm is that one can obtain predictions from the posterior predictive distribution. Frequentist predictive methods use conditional probability based on a particular value of a model parameter; Bayesian predictive methods average these probabilities over the parameter space given the observed data. Lee and Liu (2008) used this idea to derive a method called Predictive Probability Monitoring.

  26. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  Let n = current number of subjects, 1 ≤ n ≤ Nmax, where Nmax is the maximum sample size planned.
  Let X = the number of successes among the n treated patients, X ~ Bin(n, p1). Assume that the prior distribution of the success rate, π(p1), follows Beta(a0, b0). The posterior distribution for p1 thus follows
  p1 | X = x ~ Beta(a0 + x, b0 + n - x).

  27. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  Let Y denote the number of successes among the m = Nmax - n future recruits, 0 ≤ Y ≤ m. Given the current data x, Y follows a beta-binomial distribution, Y | x ~ Beta-Bin(m, a0 + x, b0 + n - x), with probability mass function

  \[
  \Pr(Y = i \mid x)
  = \int_0^1 \binom{m}{i} p^{i} (1-p)^{m-i}\,
    \frac{p^{a_0 + x - 1} (1-p)^{b_0 + n - x - 1}}{B(a_0 + x,\ b_0 + n - x)}\, dp
  = \binom{m}{i} \frac{B(a_0 + x + i,\ b_0 + n - x + m - i)}{B(a_0 + x,\ b_0 + n - x)}
  \]

  28. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  Thus the posterior distribution of the success rate, given both x and y, is
  p1 | Y = y, X = x ~ Beta(a0 + x + y, b0 + Nmax - x - y).
  The criterion for declaring the treatment promising is Prob(p1 > p0 | x, y) ≥ θT, with p0 the previous success rate and θT some threshold.

  29. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  Lee and Liu next defined the Predictive Probability (PP) as

  \[
  PP \equiv \sum_{i=0}^{m} \Pr(Y = i \mid X = x)\ \times\ I\bigl[\, \Pr(p_1 > p_0 \mid Y = i,\ X = x) \ge \theta_T \,\bigr]
  \]

  where I[·] is an indicator function and Pr(Y = i | X = x) is the probability of observing i responses among the m future patients given the current data X = x.

  30. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  ● PP is the predictive probability of obtaining a positive result by the end of the trial, based on the current cumulative information.
  ● A high PP means that the treatment is likely to be declared efficacious by the end of the study, given the current data.
  ● A low PP suggests that the treatment may not have sufficient activity.
  ● Therefore, PP can be used to determine whether the trial should be stopped early for efficacy or futility based on the current data.
  We are almost there…

  31. Case Study #5: Bayesian Predictive Monitoring with Posterior Probability
  Next, choose two cut-offs θL (a small number) and θU (a large number), θL, θU ∈ (0, 1):
  1. If PP > θU, stop the study: the new treatment is promising.
  2. If PP < θL, stop the study: the new treatment is not promising.
  3. Otherwise, continue the study until Nmax is reached.
  To illustrate, we use a hands-on example and software from the MD Anderson Cancer Center Software Repository.
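A sketch of the PP calculation in Python, with the beta-binomial weights computed exactly (via log-gamma) and the posterior tail probability estimated by Monte Carlo. Here p0 is treated as a fixed historical rate, consistent with the criterion two slides back; the inputs (x = 4 of n = 10, N = 50, Beta(0.6, 0.4) prior, θT = 0.9) echo the hands-on example, while p0 = 0.3 is an assumed value:

```python
import math
import random

random.seed(3)

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(i, m, a, b):
    """Exact Pr(Y = i) for Y ~ Beta-Binomial(m, a, b), via log-gamma."""
    log_choose = math.lgamma(m + 1) - math.lgamma(i + 1) - math.lgamma(m - i + 1)
    return math.exp(log_choose + log_beta(a + i, b + m - i) - log_beta(a, b))

def prob_exceeds(a, b, p0, draws=10_000):
    """Monte Carlo Pr(p1 > p0) for p1 ~ Beta(a, b) and a fixed rate p0."""
    return sum(random.betavariate(a, b) > p0 for _ in range(draws)) / draws

def predictive_probability(x, n, N, a0, b0, p0, theta_T):
    """PP = sum_i Pr(Y = i | x) * I[Pr(p1 > p0 | x, i) >= theta_T]."""
    m = N - n
    a, b = a0 + x, b0 + n - x              # posterior after x successes in n
    return sum(beta_binom_pmf(i, m, a, b)
               for i in range(m + 1)
               if prob_exceeds(a + i, b + m - i, p0) >= theta_T)

pp = predictive_probability(4, 10, 50, 0.6, 0.4, 0.3, 0.9)
```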

  32. Case Study #5: Hands-On, Part A
  Suppose you have a Phase II trial in which you have observed the first 10 subjects. Your trial has a prior distribution of success rates of Beta(.6, .4), and you have a budget for a maximum of 50 subjects. Suppose you set the threshold θT at 0.9 and would like the Type I error to be 0.05 and the power to be 0.9. You also would not stop early for efficacy (θU = 1.0). Use the Lee and Liu (2008) predictive probability method to design your trial.

  33. the number of patients in the first cohort being evaluated for response when PP interim decision starts to be implemented

  34. How many incoming subjects arrive at any given time

  35. Maximum sample size

  36. Choose one of these

  37. Set your thresholds here; Note the flexible starting point, ending point, and step size

  38. Success rate of current treatment

  39. Target success rate for your trial

  40. Set desired alpha and power levels

  41. Information from previous trial(s) What if you do not have any information?

  42. Case Study #5: What if your trial is the first, and you do not have any prior information?
  ● Recall our assumption that the prior distribution of the response rate, π(p), follows a beta distribution, namely Beta(a0, b0).
  ● The quantity a0/(a0 + b0) reflects the prior mean, while the size of a0 + b0 indicates how informative the prior is, with a0 = the number of prior successes and b0 = the number of prior failures.
  ● Thus, a0 + b0 can be considered a measure of the amount of information contained in the prior.
  ● You can always start the trial and use the first observed successes and failures as a0 and b0.

  43. Click to see output Click to run

  44. Suitable ranges of θL and θT under an Nmax that satisfy the constrained Type I and Type II error rates

  45. Subject number

  46. First (Negative) Rejection Region: This shows the maximum number of patients with positive response to terminate a trial. If the number of responses is less than or equal to this boundary, stop the trial and reject the alternative hypothesis

  47. Second (Positive) Rejection Region: the minimum number of patients with a positive response needed to terminate the trial and reject the null hypothesis. If the number of responses is greater than this boundary, stop the trial and reject the null hypothesis. If this number is greater than what you have in the sample, keep going; you cannot reject the null.

  48. Probability of making the negative decision under the null hypothesis

  49. Probability of making the positive decision under the null hypothesis

  50. Probability of continuing the trial under the null hypothesis Same information is also computed under the alternative

  51. PET = probability of early termination: for futility (PP < θL), for efficacy (PP > θU), and in total

  52. Expected number of subjects under the null

  53. Highest Type I and Type II error rates within the suitable range of θL and θT

  54. Case Study #6: What are some major weaknesses of the methods we discussed?
  Erratic Accrual Rate: Suppose a few subjects have been treated at a leisurely pace and then a long line of patients appears. The outcomes of a small number of patients would affect the computed probabilities for everyone, and additional information does not get to play a role.
  Accrual Rate Mismatch: If subjects are accruing faster than information is accruing, adaptive learning is compromised; if subjects are treated at a faster rate than outcomes can be recorded, there can be no adaptation to the data.

  55. Case Study #6: What are some major weaknesses of the methods we discussed?
  Single-Arm Biases: Many promising drugs eventually fail in Phase III trials even though they showed efficacy in Phase II. The reason: single-arm trials can introduce bias, because there can be significant but unobserved differences in patient populations, study criteria, and medical facilities between the current and previous studies. For a better assessment, predictive monitoring in randomized Phase II trials should be used.

  56. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  Main idea: the posterior predictive probability can be used to monitor a study by predicting its outcome at the planned end, after all subjects are enrolled. If there is a high predictive probability that a definitive conclusion (e.g., superiority or futility) would be reached by the end of the study, then the study can be stopped earlier.

  57. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  Suppose you have a two-arm trial, and let
  pk = the success rate for treatment k, pk ~ Beta(ak, bk), k = 1, 2;
  Nk = the maximum sample size planned for arm k; and
  Yk = the number of successes among the nk treated subjects in arm k, 1 ≤ nk ≤ Nk, with Yk ~ Bin(nk, pk).
  Thus, as before, the posterior distribution of pk is
  pk | Yk = yk ~ Beta(ak + yk, bk + nk - yk), k = 1, 2.

  58. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  Let Xk = the number of future successes among the remaining Nk - nk subjects in arm k. Then, as before, the posterior predictive distribution of Xk given Yk = yk is beta-binomial:

  \[
  \Pr(X_k = x_k \mid y_k)
  = \binom{N_k - n_k}{x_k} \frac{B(a_k + y_k + x_k,\ b_k + N_k - y_k - x_k)}{B(a_k + y_k,\ b_k + n_k - y_k)}
  \]

  59. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  Next, let H0: p1 = p2 versus H1: p1 ≠ p2. For each pair of future data (X1 = x1, X2 = x2), we can determine whether this hypothesis test would show a significant difference by the time the study concludes.

  60. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  The predictive probability of rejecting H0 is found by summing over all possible future outcomes (x1, x2):

  \[
  \Pr(\text{significant difference at end of study} \mid \text{data})
  = \sum_{x_1 = 0}^{N_1 - n_1} \sum_{x_2 = 0}^{N_2 - n_2} P(x_1 \mid y_1)\, P(x_2 \mid y_2)\ I[\text{Rejecting } H_0]
  \tag{6}
  \]

  where I[·] is an indicator of whether a binomial test of two proportions for H0: p1 = p2 is significant.

  61. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  We would reject H0 if |Z| ≥ z_{α/2}, where

  \[
  Z = \frac{\hat p_1 - \hat p_2}{\sqrt{\hat p (1 - \hat p)\left(\frac{1}{N_1} + \frac{1}{N_2}\right)}},
  \qquad \hat p_k = \frac{y_k + x_k}{N_k},
  \]

  \(\hat p\) is the pooled proportion, and z_{α/2} is the 100(1 - α/2)th percentile of the standard normal distribution. This is a hybrid frequentist/Bayesian approach; see Yin (2012).
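Because both the weights and the indicator are computable, equation (6) can be evaluated exactly with a double loop. A Python sketch of the hybrid rule, using beta-binomial predictive weights and a pooled two-proportion z-test at α = 0.05 as the end-of-study verdict; the interim data 5/10 vs. 2/10 with 20 per arm and Beta(0.2, 0.8) priors are borrowed from the illustration that follows:

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(i, m, a, b):
    """Exact beta-binomial predictive mass for i successes in m remaining."""
    log_choose = math.lgamma(m + 1) - math.lgamma(i + 1) - math.lgamma(m - i + 1)
    return math.exp(log_choose + log_beta(a + i, b + m - i) - log_beta(a, b))

def z_rejects(y1, x1, N1, y2, x2, N2, z_crit=1.96):
    """Pooled two-proportion z-test of H0: p1 = p2 on the completed trial."""
    p1, p2 = (y1 + x1) / N1, (y2 + x2) / N2
    pbar = (y1 + x1 + y2 + x2) / (N1 + N2)
    se2 = pbar * (1 - pbar) * (1 / N1 + 1 / N2)
    return se2 > 0 and abs(p1 - p2) / math.sqrt(se2) >= z_crit

def predictive_prob_significant(y1, n1, N1, y2, n2, N2, a, b):
    """Equation (6): weight the end-of-study test verdict by the two
    beta-binomial predictive distributions of the future counts."""
    m1, m2 = N1 - n1, N2 - n2
    return sum(beta_binom_pmf(x1, m1, a + y1, b + n1 - y1) *
               beta_binom_pmf(x2, m2, a + y2, b + n2 - y2)
               for x1 in range(m1 + 1)
               for x2 in range(m2 + 1)
               if z_rejects(y1, x1, N1, y2, x2, N2))

pp = predictive_prob_significant(5, 10, 20, 2, 10, 20, 0.2, 0.8)
```

Swapping the z-test indicator for the Bayesian condition Pr(p1 > p2 | y1, y2, x1, x2) ≥ θU on the next slides gives the fully Bayesian version of the same double sum.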

  62. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  We can also employ a fully Bayesian interim monitoring procedure using predictive probability. Given current data (y1, y2) and future data (x1, x2), we compute the posterior probability

  \[
  \Pr(p_1 > p_2 \mid y_1, y_2, x_1, x_2)
  = \int_0^1 \int_{p_2}^{1} f(p_1 \mid y_1, x_1)\, f(p_2 \mid y_2, x_2)\, dp_1\, dp_2 \ \ge\ \theta_U
  \]

  where f(pk | yk, xk) is the probability density function of pk with the distribution
  pk ~ Beta(ak + yk + xk, bk + Nk - yk - xk), k = 1, 2.

  63. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  Treatment 1 is superior if Pr(p1 > p2 | y1, y2, x1, x2) ≥ θU, where θU is the usual threshold. But since the future data (x1, x2) have not yet been observed, we use the predictive probability

  \[
  \sum_{x_1 = 0}^{N_1 - n_1} \sum_{x_2 = 0}^{N_2 - n_2} P(x_1 \mid y_1)\, P(x_2 \mid y_2)\ I\bigl[\Pr(p_1 > p_2 \mid y_1, y_2, x_1, x_2) \ge \theta_U \bigr]
  \]

  64. Case Study #6: Predictive Monitoring in Randomized Phase II Trials
  Yin (2012) illustration: for the frequentist hypothesis test, use a two-sided binomial test at α = 0.05. For the Bayesian method, use Beta(0.2, 0.8) priors for p1 and p2, a threshold of 0.95 in (5.5), and θU = 0.95.

         Arm 1   Arm 2     Pr(favoring Arm 1)       Pr(favoring Arm 2)
    N    y1/n1   y2/n2   Frequentist  Bayesian    Frequentist  Bayesian
    40    5/10    2/10      0.5062      0.6702      <0.0001     <0.0001
    60    5/10    2/10      0.6266      0.7225       0.0005      0.0013
    80    5/10    2/10      0.6915      0.7567       0.0020      0.0037
   100    5/10    2/10      0.7291      0.7815       0.0040      0.0065
   100   10/20    4/20      0.8415      0.8999      <0.0001     <0.0001
   100   15/30    6/30      0.9306      0.9735      <0.0001     <0.0001
   100   20/40    8/40      0.9910      0.9993      <0.0001     <0.0001

  65. Case Study #6
  N = 40: Enroll 20 subjects in each arm to compare the two treatments. Suppose that after 10 patients were treated in each arm, 5 patients responded in Arm 1 and 2 patients responded in Arm 2. The predictive probability of picking Arm 1 is 51% for the frequentist method and 67% for the Bayesian method; the predictive probability of picking Arm 2 is close to zero.

  66. Case Study #6
  N = 60 to 100: As Nmax increases, the predictive probability of choosing Arm 1 increases but reaches a plateau. These calculations show that there is value in adding data, but beyond a certain level of success the conclusion is unchanged.

  67. Case Study #6: Adaptive Randomization
  ● For a more objective comparison of different treatments, subjects should be randomized to the two (or more) arms.
  ● Randomization can use a fixed probability or an outcome-based adaptive probability; the latter is called adaptive randomization (AR).
  ● AR can benefit participants, as each new subject has a higher probability of receiving the currently favored treatment.
  ● Yin, Chen and Lee (2012) proposed a method that combines PP and AR for trial monitoring.

  68. Case Study #6: Adaptive Randomization
  Yin, Chen and Lee (2012) proposed the use of a tuning parameter λ. Each new subject is randomized into Arm 1 with probability

  \[
  \rho(\pi, \lambda) = \frac{\pi^{\lambda}}{\pi^{\lambda} + (1 - \pi)^{\lambda}}
  \]

  where π = Pr(p1 > p2 | y1, y2) and λ ≥ 0 is the tuning parameter. This method is implemented in the AR software by the M.D. Anderson Cancer Center, Department of Biostatistics.
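The randomization rule is a one-liner. A sketch, assuming the power-tuned form shown above (π raised to λ and renormalized):

```python
def ar_prob(pi, lam):
    """Randomization probability for Arm 1: pi^lam / (pi^lam + (1-pi)^lam),
    where pi = Pr(p1 > p2 | data) and lam is the tuning parameter."""
    num = pi ** lam
    return num / (num + (1 - pi) ** lam)

# lam = 0 ignores the data (balanced 50/50 randomization); lam = 1 uses pi
# itself; larger lam pushes assignments harder toward the favored arm.
print(ar_prob(0.5, 2.0))   # 0.5: no evidence either way
print(ar_prob(0.7, 2.0))   # > 0.7: evidence amplified
```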

  69. What we covered thus far Basic Concepts in Adaptive Design Key Design Principles Simple Adaptive Stopping Rule Gehan’s 2-Stage Design Simon’s 2 -Stage Design Bayesian Predictive Monitoring with Posterior Probability Predictive Monitoring in Randomized Phase II trials Adaptive Randomization

  70. Next Adaptive Sampling Methods Dose-escalation Designs Hands-on Practice General Concerns with Adaptive Design Starter Kit

  71. Adaptive Survey Design: Key Concepts in Adaptive Survey Design
  ● In the adaptive survey methodology literature, "adaptive surveys" means two different things.
  ● One is the use of innovative adaptive sampling methods to improve survey quality metrics such as response rate, sample balance, non-response error, stability/quality of estimates, and sampling errors.
  ● This is referred to as adaptive survey design.

  72. Adaptive Survey Design: Key Concepts for Dynamic Adaptive Survey Management
  ● The other meaning is the use of accumulated data to improve the conduct of the survey. At pre-determined points in the survey implementation, these data are examined to guide the rest of the implementation.
  ● Based on information in the collected data, adjustments are made to improve cost, efficiency, and precision. This is sometimes referred to as dynamic survey management.
  ● We will introduce you to both concepts today, but neither in depth.

  73. Dynamic Survey Management: Key Concepts
  The data collected and examined fall into four groups:
  ● Response data: estimates of key variables
  ● Frame data: type of structure, block-group demographic statistics, alternative modes, previous response data
  ● Paradata: data accrual rate, contact history, effort and response propensity, interviewer observations, time and travel, survey progress rate, Web survey metrics
  ● Quality metrics: R-indicators, sample balance, response rate, stability/quality of estimates

  74. Dynamic Survey Management: Key Concepts
  After the data are examined, the following survey design features can be changed:
  ● Case priorities ● Mode priorities ● Oversampling ● Timing of the data-collection stop ● Incentives ● Speed of data collection ● Processing flow and priorities ● Timing of benchmarking

  75. Dynamic Survey Management: R-indicators and Their Use
  An important tool for dynamic survey management is the R-indicator. First introduced by Schouten and Cobben (2007), it measures the extent to which survey response deviates from representativeness. Schouten and Cobben introduced three measures, and many variants have been introduced since. R-indicators are used to track how a survey is performing over time, so that corrections or adjustments can be made.
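A common form of the R-indicator is R = 1 - 2·S(ρ̂), where S(ρ̂) is the standard deviation of the estimated response propensities; R = 1 indicates perfectly representative response. A sketch under that assumption (using the population standard deviation; published variants differ in the variance estimator):

```python
import math

def r_indicator(propensities):
    """R = 1 - 2 * S(rho): standard-deviation-based R-indicator over the
    estimated response propensities. 1.0 means identical propensities
    (perfectly representative response); lower values mean worse balance."""
    n = len(propensities)
    mean = sum(propensities) / n
    var = sum((p - mean) ** 2 for p in propensities) / n  # population variance
    return 1 - 2 * math.sqrt(var)

print(r_indicator([0.5] * 10))        # 1.0: every case equally likely to respond
print(r_indicator([0.1, 0.9] * 5))    # far lower: response is unbalanced
```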
