  1. Open Science – Better Science? Verena Heise Open Science Day @ MRC CBU 20 November 2018 Slides: osf.io/5rfm6

  2. What’s Open Science?

  3. Andreas E. Neuhold, Six Open Science Principles, https://en.wikipedia.org/wiki/Open_science

  4. Why Open Science? Citations • Collaborations • Reuse • Impact • Transparency • Accountability • Validation

  5. Why do we care?

  6. Ethics • Waste of animal lives • Waste of patients’ time, hope and lives • Waste of money (up to 85% of research funding; Chalmers and Glasziou, Lancet 2009, https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(09)60329-9)

  7. Trust in evidence

  8. How to work reproducibly?

  9. Andreas E. Neuhold, Six Open Science Principles, https://en.wikipedia.org/wiki/Open_science

  10. Good Research Practice Hypotheses → Design → Data collection → Data analysis → Interpretation → Publishing (all images on CC0 license from https://pixabay.com/)

  11. Ask the right question • Do a proper literature review, look for systematic reviews and meta-analyses, where are the gaps? • Talk to your colleagues • Speak to clinicians • Involve patients and the public e.g. James Lind Alliance, social media, Citizen Science projects => Be open and collaborate

  12. Good Research Practice Hypotheses → Design

  13. Do you need to collect new data?

  14. Study design http://www.equator-network.org/

  15. Study design Cause → Effect? Consider: Confounder? Bias? Chance? All images on CC0 license from https://pixabay.com/
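A minimal simulation (illustrative, not from the slides) of the confounding problem the slide depicts: a shared cause can manufacture an apparent exposure-outcome association where no direct link exists, and adjusting for it removes the association.

```python
# Illustrative sketch: a confounder (e.g. age) drives both the exposure
# and the outcome, producing a spurious association between them.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
confounder = rng.normal(size=n)
exposure = confounder + rng.normal(size=n)   # no direct exposure->outcome link
outcome = confounder + rng.normal(size=n)

print(np.corrcoef(exposure, outcome)[0, 1])  # ~0.5, despite no causal link

# Adjusting for the confounder (simple residualising, valid here because the
# simulated confounder effect is exactly 1) removes the association:
print(np.corrcoef(exposure - confounder, outcome - confounder)[0, 1])  # ~0.0
```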

  16. Statistical power and sample size

  17. Statistical power • Median statistical power in neuroscience: 21% • Median statistical power in animal studies: between 18% and 31% • Median statistical power in neuroimaging: 8% => chance of a false negative = 1 − power => “the likelihood that any nominally significant finding actually reflects a true effect is small” (low positive predictive value)
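The link between power and positive predictive value (PPV) can be made concrete with a small sketch. The prior (the fraction of tested hypotheses that are actually true) is an illustrative assumption; the power values are the medians quoted above.

```python
# Sketch of the power -> positive predictive value (PPV) argument.
# The prior of 0.2 is an assumption for illustration, not from the slides.

def ppv(power, alpha=0.05, prior=0.2):
    """P(effect is real | result is significant)."""
    true_pos = power * prior           # true effects correctly detected
    false_pos = alpha * (1 - prior)    # null effects passing alpha by chance
    return true_pos / (true_pos + false_pos)

for p in (0.08, 0.21, 0.80):           # neuroimaging, neuroscience, "ideal"
    print(f"power = {p:.2f} -> PPV = {ppv(p):.2f}")
# power = 0.08 -> PPV = 0.29
# power = 0.21 -> PPV = 0.51
# power = 0.80 -> PPV = 0.80
```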

  18. Sample size calculation – example • Effect size = Cohen’s d of 0.5 (medium effect) • Statistical power to find effect = 90% • alpha = 0.05 • One-sample one-tailed t test → 36 participants • Independent-groups two-tailed t test → 86 participants per group
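The slide’s numbers can be reproduced with standard power-analysis tooling; here is a sketch assuming Python with statsmodels installed (G*Power gives the same answers).

```python
# Sketch reproducing the slide's sample sizes with statsmodels' power tools.
import math
from statsmodels.stats.power import TTestPower, TTestIndPower

# One-sample, one-tailed t test: d = 0.5, power = 0.90, alpha = 0.05
n_one = TTestPower().solve_power(effect_size=0.5, power=0.90, alpha=0.05,
                                 alternative='larger')
print(math.ceil(n_one))        # -> 36 participants

# Independent groups, two-tailed t test: same d, power and alpha
n_ind = TTestIndPower().solve_power(effect_size=0.5, power=0.90, alpha=0.05,
                                    alternative='two-sided')
print(math.ceil(n_ind))        # -> 86 participants per group
```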

  19. Pre-register https://osf.io/tvyxz/wiki/home/

  20. Pre-registration • Write up your hypothesis, study design and detailed analysis pipeline (introduction and methods) • Pre-register it online (e.g. on the OSF) or use a registered report • A registered report means your study will be published IRRESPECTIVE of results, purely based on scientific merit => you get feedback before you collect the data => you can put it on your CV before you have finished the study • More info on pre-registration: https://tomstafford.staff.shef.ac.uk/?p=573 and registered reports: https://cos.io/rr/

  21. Pre-registration Munafò et al., Nature Human Behaviour, 2017, 1:0021

  22. Pre-registration of analysis pipeline Poldrack et al., Nat Rev Neurosci, 2017, 18:115-126

  23. Good Research Practice Hypotheses → Design → Data collection → Data analysis

  24. Do you need a new method?

  25. Reproducible measures • Validity (Can I get the right answer?) • Reliability (Can I get the same answer twice?) [Figure: dartboard illustrations of the four combinations: reliable but not valid; valid but not reliable; neither reliable nor valid; both reliable and valid]
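As an illustrative sketch (assumed names and numbers, not from the slides), test-retest reliability can be quantified as the correlation between two measurement sessions of the same participants.

```python
# Illustrative sketch: test-retest reliability as the correlation between
# two sessions measuring the same (simulated) participants.
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(100, 15, 50)          # participants' true values
session1 = true_scores + rng.normal(0, 5, 50)  # measurement noise, day 1
session2 = true_scores + rng.normal(0, 5, 50)  # measurement noise, day 2

print(f"test-retest reliability: {np.corrcoef(session1, session2)[0, 1]:.2f}")
# ~0.9 here; low values signal an unreliable measure
```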

  26. Reproducible measures • How reliable and valid are your tests? • Can you compare with a gold standard or well-established tests? • Are you doing any quality control of your tools (experimental setup, acquired data, etc.)? • Analysis pipelines: • Use well-established tools • Follow good programming practice • Test your code using simulations (see the sketch below)
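A minimal example of the simulation idea: run the analysis on data generated under the null, where the truth is known in advance, and check the false-positive rate behaves as expected. All names here are illustrative.

```python
# Illustrative sketch: validate an analysis pipeline on simulated null data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def analyse(group_a, group_b):
    """The analysis step under test; here a two-sample t test."""
    return stats.ttest_ind(group_a, group_b).pvalue

# 2000 null datasets: about 5% of runs should come out "significant".
p_values = [analyse(rng.normal(0, 1, 30), rng.normal(0, 1, 30))
            for _ in range(2000)]
rate = np.mean(np.array(p_values) < 0.05)
print(f"false-positive rate under the null: {rate:.3f}")
# Expect ~0.05; a much larger value would flag a bug in the pipeline.
```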

  27. There are no miracles! Sidney Harris, What’s so funny about Science? (1977)

  28. Reproducible workflows https://en.wikipedia.org/wiki/List_of_ELN_software_packages

  29. Good statistical practice

  30. Good Research Practice Hypotheses → Design → Data collection → Data analysis → Interpretation → Publishing

  31. Open reporting • Publish ALL the analyses you did (pre-registered and exploratory) • Publish ALL results (not just “significant” ones) • Publish according to best practice guidelines • Use preprints

  32. Preprints • bioRxiv: https://www.biorxiv.org/ • OSF: https://osf.io/preprints/ • PsyArXiv: https://psyarxiv.com/

  33. Open reporting • Publish ALL the analyses you did (pre-registered and exploratory) • Publish ALL results (not just “significant” ones) • Publish according to best practice guidelines • Be honest about your biases and conflicts of interest • Use preprints • Publish Open Access

  34. Publish data and materials https://osf.io/tvyxz/wiki/home/

  35. Robust Research - summary • Open Research • Data • Materials • Reporting (and pre-registration) • Good Research Practice • Relevant research question • Robust study design • Reproducible measures • Reproducible workflows

  36. [Image: Edvard Munch, The Scream] https://en.wikipedia.org/wiki/The_Scream

  37. Change the system Scientific Ecosystem: Researchers • Institutions • The Public • Professional Societies • Industry • Publishers • Funders

  38. Original clipart on CC0 license from https://openclipart.org/

  39. Reproducible Research Oxford • Started initiative in September 2017 • Mainly early career researchers • Disciplines: • experimental psychology • biomedical sciences (preclinical to clinical) • social sciences (archaeology, anthropology) • bioethics

  40. Journal Club, @ReproducibiliT • Seminar series on Reproducibility and Open Research • Software/Data Carpentry workshops • Berlin-Oxford summer school: https://www.bihealth.org/de/aktuell/berlin-oxford-summer-school/ • Provide speakers for lab meetings • Develop skills training for DTCs, MSc programmes, etc.

  41. Education • Ethics of reproducible research • Open data / materials / reporting / publishing / workflows • Experimental design (incl. bias and confounding) • Statistics • Programming skills • Critical thinking and peer review • Pre-registration of research projects

  42. Next: the institution • Incentives - HR policies • Infrastructure • Research ethics review • And many other plans

  43. And the UK

  44. Acknowledgements Reproducible Research Oxford (core people): Dorothy Bishop, Laura Fortunato, David Gavaghan, Amy Orben, Sam Parsons, Thees Spreckelsen, Jackie Thompson • Chris Chambers (Cardiff, UKRN) • Marcus Munafò (Bristol, UKRN) • Uli Dirnagl and the QUEST team (Berlin)

  45. Thanks for your attention! Get in touch: verena.heise@ndph.ox.ac.uk
