Reproducibility in Research - Dr Alexandra Bannach-Brown - PowerPoint PPT Presentation



SLIDE 1

Reproducibility in Research

Dr Alexandra Bannach-Brown Institute for Evidence-Based Healthcare Bond University

@ABannachBrown

SLIDE 2

Background

  • Systematic review of biomedical literature describing animal models
  • Translating systematic review findings
  • Research Quality
  • Open Science & Open Access
SLIDE 3

Research

  • Aim: to improve scientific theories for increased understanding of natural phenomena
  • Analysis and interpretation of observations
  • Experimental or spontaneous
  • Leads to knowledge claim
  • Usually involves statistical analysis
SLIDE 4

Research Cycle

Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

“Manifesto for Reproducible Science”, Munafo et al., 2017

SLIDE 5

Bench to Bedside

Nonclinical Development

  • Formulation
  • Laboratory development
  • Quality control & assurance

Preclinical Development

  • Animal studies
  • Bioavailability
  • Pharmacokinetic & pharmacodynamic studies

Clinical Development

  • Phase I
  • Phase II
  • Phase III
  • Phase IV

Research cycle repeated at each stage: Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

SLIDE 6

Reproducibility and Replication

“Reproducibility” - re-analysis of existing data using the same analytical procedures. “Replication” - collection of new data, following the same methods.

Peng (2011), “Reproducible Research in Computational Science”, Science; 1226-1227

Reproducibility Spectrum (from “not reproducible” to “gold standard”):

Publication only → Publication + fully reported methods → Publication + methods & data → Publication + linked methods, data & analysis code → Fully replicable

SLIDE 7

Research Quality

  • How credible are the findings presented?
  • Is the design appropriate?
  • Rigorous
  • Designed to mitigate risk of bias
SLIDE 8

SLIDE 9

Replication

Ioannidis et al., 2014; “Reproducibility in Science”, Begley & Ioannidis, Circulation Research 2015;116:116-126

SLIDE 10

Replication

“Reproducibility in Science”, Begley & Ioannidis, Circulation Research. 2015;116:116-126 “Estimating the reproducibility of psychological science”, Open Science Collaboration, Science, 2015; 349(6251)

The average neuroscience study is powered between 8% and 31% (Button et al., 2013)
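To make the Button et al. power figures concrete, here is a sketch of a textbook normal-approximation power calculation for a two-group comparison. The effect size and group sizes below are assumed illustrative values, not numbers from the paper:

```python
from math import sqrt
from statistics import NormalDist  # Python 3.8+ standard library

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison for a
    standardised effect size d with n_per_group subjects per group
    (normal approximation; ignores the slightly heavier t-tails)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)                # noncentrality parameter
    return 1 - NormalDist().cdf(z_crit - ncp)      # dominant upper-tail term

# A small study of a medium effect (d = 0.5, n = 10 per group) lands
# near 20% power; about 64 per group is needed for the conventional 80%.
print(round(power_two_sample(0.5, 10), 2))
print(round(power_two_sample(0.5, 64), 2))
```

Small group sizes combined with modest true effects are what push typical studies into the 8-31% range.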

SLIDE 11

Threats to reproducible science

“Manifesto for Reproducible Science”, Munafo et al., 2017

SLIDE 12

Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

SLIDE 13

Research Plan

  • Your research proposal is based on what is already known
  • How do you know that this knowledge is reliable?
  • Were the experiments on which this knowledge is based at risk of bias?
  • Is the summary of knowledge skewed by publication bias?
SLIDE 14

SLIDE 15
Bias in in vivo stroke studies

  • Infarct volume
  • 11 publications, 29 experiments, 408 animals
  • Improved outcome by 44% (35-53%)

Efficacy by: randomisation, blinded assessment of outcome, blinded conduct of experiment

Macleod et al., 2008

SLIDE 16

SAINT II Phase 3 Clinical Trial

SLIDE 17

Bias in other in vivo domains

Multiple Sclerosis Parkinson´s disease Alzheimer´s disease Stroke

SLIDE 18

Reporting of measures to reduce risk of bias in laboratory studies

Reporting of measures to reduce bias in 254 reports of in vivo, ex vivo or in vitro studies involving non-human animals, identified from a random sample of 2,000 publications from PubMed

SLIDE 19

Reporting of measures to reduce the risk of bias in publications from ‘leading’ UK institutions

Reporting of measures to reduce bias in 1,173 in vivo studies involving non-human animals from leading UK institutions, published in 2009 and 2010

SLIDE 20

Reporting of risk of bias by journal impact factor (chart axes: journal impact factor vs prevalence of reporting)

SLIDE 21

Is the summary of knowledge skewed by publication bias?

SLIDE 22

Different patterns of publication bias in different fields

  • Disease models (benefit): observed improvement 40%, corrected 30% (less improvement)
  • Toxicology models (harm): observed harm 0.32, corrected 0.56 (more harm)

SLIDE 23

                          n expts   Estimated unpublished   Reported efficacy   Corrected efficacy
Stroke – infarct volume      1359                     214               31.3%                23.8%
EAE – neurobehaviour         1892                     505               33.1%                15.0%
EAE – inflammation            818                      14               38.2%                37.5%
EAE – demyelination           290                      74               45.1%                30.5%
EAE – axon loss               170                      46               54.8%                41.7%
AD – Water Maze                80                      15            0.688 sd             0.498 sd
AD – plaque burden            632                     154            0.999 sd             0.610 sd


Publication bias

SLIDE 24

Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

SLIDE 25

Is the current study at risk of bias?

SLIDE 26

Expectancy effects

  • 12 graduate psychology students
  • 5 day experiment: rats in T maze with dark arm alternating at random, and the dark arm always reinforced

  • 2 groups – “Maze Bright” and “Maze dull”

Group           Day 1   Day 2   Day 3   Day 4   Day 5
“Maze bright”    1.33    1.60    2.60    2.83    3.26
“Maze dull”      0.72    1.10    2.23    1.83    1.83
Δ               +0.60   +0.50   +0.37   +1.00   +1.43

Rosenthal and Fode, Behav Sci 8, 183-9

SLIDE 27

It’s not just in the measurement

[Chart: improvement in behavioural outcome (standardised effect size, 0.0-1.2), with vs without blinded assessment of behavioural outcome]

SLIDE 28

Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

SLIDE 29

Lack of an a priori plan (protocol) leaves you chasing shadows

SLIDE 30

Perils of testing non-prespecified hypotheses

  • International Study of Infarct Survival-2 (ISIS-2)
      – Aspirin improves outcome in myocardial infarction, BUT
      – Non-significant worsening of outcome for patients born under Gemini or Libra
  • What if it was patients with migraine?

Baigent et al., 1998. ISIS-2. BMJ. Peto, R., 2011. Current misconception 3. Br J Cancer

SLIDE 31

Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

SLIDE 32

The final paper doesn’t provide useful information

SLIDE 33

Poor Reporting & Quality

  • Clinical Trials (Cochrane): high/unclear risk of bias
  • TIDieR: poor reporting of interventions (Hoffmann et al., 2014)
  • Animal Studies: poor reporting of measures to reduce the risk of bias
  • Poor reporting of general methods

Poor Reporting in 20,920 RCTs – Dechartres et al., 2017, BMJ;357:j2490

SLIDE 34

Robustness

Crabbe et al., 1999, Science

SLIDE 35

Researchers are different …

[Word cloud: number, quality, F.F.P., HARKing, Open Science, Preregistration, Risks of bias]

SLIDE 36

How we do research integrity

SLIDE 37

Research Improvement Strategy

SLIDE 38

Biomedical Research Investment

  • $300bn globally, €50bn in Europe
  • Glasziou & Chalmers estimated that 85% of research is wasted
  • Even if waste is only 50%, improvements which reduced that by 1% would free $3bn globally every year
  • Investing ~1% of research expenditure in improvement activity would go a long way
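The arithmetic behind these figures can be checked in a few lines (a sketch using the slide's own numbers; "reduced by 1%" is read here as one percentage point of global spend):

```python
# Figures from the slide; all amounts in $bn per year.
global_spend = 300          # global biomedical research investment
waste_fraction = 0.50       # conservative assumption (vs the estimated 85%)

wasted = global_spend * waste_fraction       # $bn wasted per year
freed = global_spend * 0.01                  # one percentage point of spend
improvement_budget = global_spend * 0.01     # ~1% invested in improvement

print(wasted, freed, improvement_budget)
```

Recovering even one percentage point of a $300bn annual spend covers a ~1% improvement budget by itself, which is the slide's point.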

SLIDE 39

Research Cycle

Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment

“Manifesto for Reproducible Science”, Munafo et al., 2017

SLIDE 40

Take home messages

Before the Experiment:

  • 1. Thoroughly check the quality of the prior knowledge base
  • 2. Ensure an a priori plan or protocol, including a priori hypotheses

During:

  • 3. Implement measures to reduce the risk of bias in experiments; record keeping

After:

  • 4. Transparent reporting: report the study in full, make the data available on e.g. Zenodo, preprint at bioRxiv

SLIDE 41

Executing Best Practice: Before

  • 1. Thoroughly check the quality of your knowledge base

Systematic review of the field + critical appraisal

SLIDE 42

Executing Best Practice

  • 2. Make an a priori plan or protocol
  • Register the protocol
  • Exploratory or confirmatory experiment?
  • Study population, intervention, primary outcome, sample size calculation, hypothesis, statistical analysis plan

  • Time-stamped, read-only, with persisting unique digital identifier
  • Before beginning data collection
  • Can remain private until work is published
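The sample size calculation listed above can be pre-specified in the protocol. As an illustration, a minimal normal-approximation sketch for a two-group design (the effect size, power, and alpha values are assumed for illustration, not taken from the talk):

```python
from math import ceil, sqrt
from statistics import NormalDist  # Python 3.8+ standard library

def n_per_group(d, power=0.80, alpha=0.05):
    """Approximate group size for a two-sided, two-sample comparison at
    standardised effect size d (normal approximation; real protocols
    should also allow for t-corrections and expected drop-outs)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))    # medium effect
print(n_per_group(0.25))   # halving the effect size ~quadruples the n
```

Registering the calculation (with its assumed effect size) before data collection is what makes the eventual analysis confirmatory rather than exploratory.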
SLIDE 43

Executing Best Practice: During

  • 1. Implement measures to reduce the risk of bias in your experiments

Randomisation, Blinding, handling of drop-outs, etc

  • 2. Traceability of materials, antibodies, code, etc
  • 3. Record deviations from the study protocol
SLIDE 44

Executing Best Practice: After

  • 4. Transparent reporting:
  • Ensure study is reported in full – methods, reagents, intervention
  • Make the data and analysis code available: "as open as possible, as closed as necessary"

  • Link all with DOIs

Reporting Guidelines: Before & After

https://www.equator-network.org/

SLIDE 45

Preprints

  • A version of your final manuscript before peer review
  • Preprint archives

○ medRxiv ○ bioRxiv ○ MetaArXiv ○ PsyArXiv ○ arXiv

  • Free & DOI = citable
  • Compatible with all (but 1) major publishing groups

○ Link to published article from preprint

SLIDE 46

Threats to reproducible science

“Manifesto for Reproducible Science”, Munafo et al., 2017

SLIDE 47

Dr Alexandra Bannach-Brown Institute for Evidence-Based Healthcare Bond University Abannach@bond.edu.au

@ABannachBrown