Exploring changes in treatment effects across design stages in adaptive trials


1. Exploring changes in treatment effects across design stages in adaptive trials
Tim Friede, Warwick Medical School, The University of Warwick, UK, t.friede@warwick.ac.uk
Joint work with Robin Henderson (University of Newcastle, UK)
EMEA / EFPIA Adaptive Designs 2007

2. Heterogeneity in Treatment Effect Estimates
• When treatment effects differ across design stages . . .
  – results might be difficult to interpret
  – did information 'leak out' at the interim?
• Minimum requirement (CHMP guideline, Section 4.2.1): "[. . . ] the same careful investigation of heterogeneity and justification to combine the results of different stages as is usually required for the combination of individual trials in a meta-analysis."

3. Investigation of Heterogeneity in Meta-Analyses
Basic procedure:
• formal hypothesis test: do the treatment effects differ across stages?
• if significant, the studies are not combined in the meta-analysis
• significance levels of α = 0.10 or 0.15 are common, since the power of the heterogeneity test is generally low
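To make the meta-analytic procedure concrete, here is a minimal sketch (not from the slides) of a between-stage heterogeneity check using Cochran's Q statistic; the stage-wise estimates, standard errors and the 0.30/0.55 numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

def cochran_q(effects, std_errors):
    """Cochran's Q test for heterogeneity of stage-wise treatment effect estimates."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2  # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)      # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)             # Q statistic
    p_value = stats.chi2.sf(q, df=len(effects) - 1)           # chi-square with k - 1 df
    return q, p_value

# Hypothetical two-stage example: estimates 0.30 and 0.55 with equal standard errors
q, p = cochran_q([0.30, 0.55], [0.12, 0.12])
print(f"Q = {q:.2f}, p = {p:.3f}")  # compare p with a liberal level such as 0.10 or 0.15
```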

4. Applying the MA Procedure to Adaptive Trials
What are the consequences for adaptive trials?

5. Heterogeneity Test Confounded by Calendar Time
[Figure: two panels plotting the treatment effect (0 to 0.5) against calendar time, from the first patient through the interim analysis to the last patient. Left panel: gradual change in treatment effect; right panel: step change.]

6. An Investigation into Heterogeneity Testing in Adaptive Trials
• situation considered
  – two-stage trials with equally sized first and second stages
  – equally sized treatment arms
  – continuous (normal) outcomes
  – significance levels: heterogeneity α = 0.15, efficacy α⋆ = 0.025
• 'successful study': non-significant heterogeneity test plus significant efficacy test (the probability of success is called power here)
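The following Monte Carlo sketch mirrors this setup under assumptions of my own (known unit variance, z-tests for heterogeneity and efficacy, 50 patients per arm per stage); it is meant only to illustrate how the success probability can be simulated, not to reproduce the talk's exact results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2007)

def success_prob(theta1, theta2, n_per_arm_stage=50, n_sim=50_000,
                 alpha_het=0.15, alpha_eff=0.025):
    """P(non-significant heterogeneity test AND significant one-sided efficacy test)."""
    se_stage = np.sqrt(2.0 / n_per_arm_stage)            # SE of a stage-wise effect estimate
    d1 = rng.normal(theta1, se_stage, n_sim)             # stage-1 estimates
    d2 = rng.normal(theta2, se_stage, n_sim)             # stage-2 estimates
    z_het = (d1 - d2) / (se_stage * np.sqrt(2.0))        # between-stage heterogeneity z
    z_eff = (d1 + d2) / 2.0 / (se_stage / np.sqrt(2.0))  # pooled efficacy z
    het_ok = 2.0 * stats.norm.sf(np.abs(z_het)) > alpha_het  # heterogeneity not significant
    eff_ok = stats.norm.sf(z_eff) < alpha_eff                # efficacy significant (one-sided)
    return float(np.mean(het_ok & eff_ok))

print(success_prob(0.5, 0.5))  # constant effect across stages
print(success_prob(0.4, 0.6))  # step change at the interim analysis
```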

7. Relative Loss in Power due to Heterogeneity Test
• loss in success probability (power) due to the heterogeneity test
• relative power loss = power of the heterogeneity test
• change in effect expressed as a fraction f of the average effect
• the effect change could be due to calendar time effects unrelated to the interim analysis (e.g. learning effects)
• power 1 − β⋆ of the efficacy test
[Figure: relative power loss (0 to 0.6) plotted against f (0 to 1), with one curve per legend value of 1 − β⋆: 0.05, 0.10, 0.20.]
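Since the slide equates the relative power loss with the power of the heterogeneity test, the sketch below approximates that power analytically for a change of f times the average effect θ; the per-arm stage sample size, unit variance and z-test approximation are my assumptions, not values from the talk.

```python
import numpy as np
from scipy import stats

def het_test_power(f, theta=0.5, n_per_arm_stage=50, alpha_het=0.15):
    """Approximate power of a two-sided z-test for a between-stage change of f * theta."""
    se_diff = 2.0 / np.sqrt(n_per_arm_stage)  # SE of (stage-1 minus stage-2) estimate
    shift = f * theta / se_diff               # non-centrality of the test statistic
    z_crit = stats.norm.isf(alpha_het / 2.0)  # two-sided critical value
    return stats.norm.sf(z_crit - shift) + stats.norm.sf(z_crit + shift)

for f in (0.0, 0.2, 0.5, 1.0):
    print(f"f = {f:.1f}  approx. relative power loss = {het_test_power(f):.2f}")
```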

8. Can the power loss be compensated by larger samples?
• power of the procedure with heterogeneity test
• total sample size n
• average treatment effect θ = 0.5
• change in treatment effect ∆
• the effect change could be due to calendar time effects unrelated to the interim analysis (e.g. learning effects)
[Figure: power (0 to 1) plotted against total sample size n (0 to 1000), with one curve per change in treatment effect ∆ = 0.00, 0.10, 0.25, 0.50.]

9. Motivating the Use of Change Point Methods
• simulated trial
  – 100 patients per stage
  – step change after 50 patients, with the effect changing from 0.25 to 0.75
• heterogeneity test: p = 0.01
• change point methods
  – search for the maximum test statistic
  – adjust the critical value accordingly
  – used for calendar time confounding in studies with historic controls (Heuer & Abel 1998)
[Figure: log likelihood (approximately −278 to −274) plotted against the candidate change point (50 to 150).]
The CP method suggests a change before the IA.
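As an illustration of the search-and-adjust idea, here is a simplified change-point scan of my own (not the method used in the talk): it slides a candidate split through the data in enrolment order, computes a two-sample z-type statistic at each split, and keeps the maximum; the critical value for that maximum statistic would need adjustment, e.g. by simulation.

```python
import numpy as np

def changepoint_scan(effects, min_seg=20):
    """Scan candidate change points; return (best index, maximum |z| statistic).

    `effects` are per-patient treatment-effect contributions in enrolment order
    (an idealisation; in practice stage-wise or blocked estimates would be used).
    """
    effects = np.asarray(effects, dtype=float)
    n = len(effects)
    best_k, best_z = None, 0.0
    for k in range(min_seg, n - min_seg):  # avoid tiny edge segments
        a, b = effects[:k], effects[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        z = abs(a.mean() - b.mean()) / se
        if z > best_z:
            best_k, best_z = k, z
    return best_k, best_z

rng = np.random.default_rng(9)
# Data mimicking the simulated trial: step from 0.25 to 0.75 after 50 of 200 patients
x = np.concatenate([rng.normal(0.25, 1.0, 50), rng.normal(0.75, 1.0, 150)])
print(changepoint_scan(x))
```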

10. Alternative Testing Procedure
• Initial heterogeneity test at level α1: if significant, then . . .
• Considering only the data of the first stage, search for a change point and test whether it is significant at level α2.1.
  – if not, then conclude "change due to IA"
  – if yes, then . . .
• Carry out a test at level α2.2 comparing the treatment effects in the first stage after the change point and in the second stage.
  – if significant, conclude "change due to IA"; if not, conclude "change not due to IA"
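A direct transcription of this decision flow might look like the sketch below; the p-value arguments are hypothetical placeholders for whatever tests are used at each step, and the default levels are only examples.

```python
def classify_change(p_het, p_changepoint_stage1, p_postcp_vs_stage2,
                    alpha1=0.15, alpha21=0.10, alpha22=0.10):
    """Classify an observed change in treatment effect following the slide's procedure."""
    if p_het >= alpha1:                  # step 1: overall heterogeneity test
        return "no heterogeneity detected"
    if p_changepoint_stage1 >= alpha21:  # step 2: no significant change point in stage 1
        return "change due to IA"
    if p_postcp_vs_stage2 < alpha22:     # step 3: post-change-point stage 1 vs stage 2
        return "change due to IA"        # effects still differ across the IA
    return "change not due to IA"        # change happened before the IA

print(classify_change(p_het=0.01, p_changepoint_stage1=0.03, p_postcp_vs_stage2=0.60))
```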

11. Simulated Probability of a "Change due to IA" Conclusion
[Figure: two panels (change point at the IA; change point before the IA) showing the probability P(D) of concluding "change due to IA" (0 to 1) against the difference in treatment effects (0 to 0.5). Curves: the heterogeneity test alone and the alternative procedure with (α2.1, α2.2) equal to (.1, .1), (.1, .5), (.5, .1) and (.5, .5).]

12. Investigation of Calendar Time Effects
• Altman & Royston (1988) suggest the use of CUSUM plots
  – a popular tool in quality control (Grigg et al 2003)
• patient number as a predictor in a linear model (Senn 2000)
• a critical issue in adaptive randomisation; see e.g. Coad (1994), Hu & Rosenberger (2000)
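Two minimal sketches of these diagnostics, under my own simplifications: a CUSUM of centred outcomes in enrolment order (a persistent drift suggests a calendar-time trend) and the slope of outcome on patient number from a simple linear fit. The simulated data and variable names are illustrative only.

```python
import numpy as np

def cusum(y):
    """CUSUM of deviations from the overall mean, in enrolment order."""
    y = np.asarray(y, dtype=float)
    return np.cumsum(y - y.mean())

def time_trend_slope(y):
    """Slope of outcome on patient number (patient number as a linear predictor)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(1, len(y) + 1)
    slope, _intercept = np.polyfit(t, y, 1)
    return slope

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, 200) + 0.002 * np.arange(200)  # outcomes with a mild time trend
print(cusum(y)[-5:], time_trend_slope(y))
```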

13. What can be learned from meta-analysis?
• investigating heterogeneity: stage-level vs. patient-level covariates
  – a small number of stages makes the investigation of stage-level covariates difficult
  – patient-level data are available in adaptive trials (unlike in publication-based meta-analysis)
  – individual patient data allow interactions of prognostic factors with the treatment effect to be examined (subgroup analyses)
• importance of the treatment effect scale: multiplicative vs. additive model (see Sutton et al 2000, Sec. 3.5.1, or Hand 1994, Ex. 6)

14. Conclusions and Discussion
• heterogeneity test approach
  – leads to a large loss in power that cannot be compensated for by larger sample sizes
  – calendar time effects unrelated to the IA make matters worse
• alternative approaches that allow for calendar time effects need more attention
• design: careful consideration and discussion in the planning phase

15. References
• Altman DG, Royston P (1988) The hidden effect of time. Statistics in Medicine 7: 629–637.
• Committee for Medicinal Products for Human Use (2007) Reflection paper on methodological issues in confirmatory clinical trials with flexible design and analysis plan. London, 18 October 2007, Doc. Ref. CHMP/EWP/2459/02.
• Coad DS (1994) Sequential estimation for two-stage and three-stage clinical trials. Journal of Statistical Planning and Inference 42: 343–351.
• Grigg OA et al (2003) Use of risk-adjusted CUSUM and RSPRT charts for monitoring in medical contexts. Statistical Methods in Medical Research 12: 147–170.
• Hand DJ (1994) Deconstructing statistical questions. Journal of the Royal Statistical Society, Series A 157: 317–356.
• Heuer C, Abel U (1998) The analysis of intervention effects using observational data bases. In: Abel U, Koch A (eds.) Nonrandomized comparative clinical studies. Duesseldorf: Symposium Publishing; pp 101–107.

16. References (continued)
• Hu F, Rosenberger WF (2000) Analysis of time trends in adaptive designs with application to a neurophysiology experiment. Statistics in Medicine 19: 2067–2075.
• Senn S (2000) Consensus and controversy in pharmaceutical statistics. The Statistician 49: 135–176.
• Sutton AJ et al (2000) Methods for meta-analysis in medical research. Wiley.
