

  1. Creating a “Learning Health System” in Breast Imaging Elizabeth Burnside, MD, MPH, MS Departments: Radiology Population Health Biostatistics and Medical Informatics Industrial and Systems Engineering

  2. Learning Health System? • General Overview – Motivation • Methodological Considerations – Algorithms & metrics to measure performance • Projects – Improving mammographic predictions – Improving image-guided core biopsy

  3. Motivation • Information overload – Medical articles in PubMed – EHR information – Genetic risk factors • Human decision making involves heuristics that may not scale up alone • We are not using this valuable resource

  6. The Gail Model • Uses BCDDP data • Predicts breast cancer – Five-year/lifetime risk • Low-signal predictors http://www.cancer.gov/bcrisktool/Default.aspx

  7. Predictive Information [figure: age and breast cancer]

  8. Human-Computer Interaction • COMMUNICATION: Structured or Free Text Report ↔ Risk Score/Probability

  9. The Mammography Risk Prediction Project Elizabeth Burnside, MD, MPH, MS C. David Page, PhD Jude Shavlik, PhD Charles Kahn, MD (MCW)

  10. Background-Opportunity • ~200,000 breast cancers diagnosed in the US • 20 million mammograms per year – False positives • Millions of diagnostic mammograms in the US • Hundreds of thousands of biopsies – False negatives • 10-30% of breast cancers not detected on mammography • Variability of practice impacts many women • Evidence-based decision support has the potential to drive substantial improvement

  11. BI-RADS lexicon [chart of mammography descriptors]
  • Mass – Shape: round, oval, lobular, irregular – Margins: circumscribed, microlobulated, obscured, indistinct, spiculated – Density: high, equal, low, fat-containing
  • Calcifications – Typically benign: skin, vascular, coarse/popcorn, rod-like, round, lucent-centered, eggshell/rim, milk of calcium, suture, dystrophic, punctate – Intermediate: amorphous, coarse heterogeneous – Higher probability of malignancy: fine pleomorphic, fine linear/branching – Distribution: clustered, linear, segmental, regional, diffuse/scattered
  • Special cases: tubular density, lymph node, asymmetric breast tissue, focal asymmetric density
  • Associated findings: skin thickening, trabecular thickening, skin retraction, nipple retraction, axillary adenopathy, architectural distortion

  12. Breast Cancer Probability Based on BI-RADS Category BI-RADS 0: Needs Additional Imaging BI-RADS 1: Negative BI-RADS 2: Benign BI-RADS 3: Probably Benign BI-RADS 4: Suspicious for malignancy BI-RADS 5: Highly suggestive of malignancy

  13. [Bayesian network: a central Breast Disease node linked to demographic and imaging features] Nodes: Age, FHx, HRT, Breast Density, Mass Size, Mass Shape, Mass Margins, Mass Density, Mass Stability, Skin Lesion, Tubular Density, LN, Asymmetric Density, Architectural Distortion, and Ca++ descriptors (lucent-centered, milk of calcium, dermal, round, dystrophic, popcorn, fine/linear, eggshell, pleomorphic, punctate, amorphous, rod-like)

  14. Case Example • Findings: pleomorphic microcalcifications, clustered microcalcifications • Prediction: Malignant .760, Benign .239, Atypical .001 • Differential: Ductal carcinoma in situ .48, Fibrocystic change .21, DC/DCIS .16, Ductal carcinoma (NOS) .12
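
The posterior probabilities on this slide come from a Bayesian network; as a minimal sketch of the idea (assuming conditional independence between findings, with made-up priors and likelihoods rather than the trained network's values):

```python
# Minimal naive-Bayes-style posterior: NOT the talk's trained network;
# all probabilities below are hypothetical.
def posterior(priors, likelihoods, findings):
    """P(class | findings), assuming findings are conditionally independent."""
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for f in findings:
            p *= likelihoods[cls][f]   # multiply in P(finding | class)
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}  # normalize

priors = {"malignant": 0.05, "benign": 0.95}
likelihoods = {
    "malignant": {"pleomorphic_ca": 0.40, "clustered_ca": 0.60},
    "benign":    {"pleomorphic_ca": 0.02, "clustered_ca": 0.10},
}
post = posterior(priors, likelihoods, ["pleomorphic_ca", "clustered_ca"])
```

With these made-up numbers the two findings raise the malignancy posterior from the 5% prior to roughly 86% — the same qualitative behavior as the slide's case.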

  15. Training on Data • Motivation – Accurate probabilities are critical – Some are not available in the literature – Modeling the relevant patient population is possible with training • Approaches: expert rule-based and machine learning

  16. Idea: Data-Driven Decisions [linked database schema]
  • Abnormality Table: mass shape, mass margins, mass density, mass size, mass stability, microCa++ shape, microCa++ distribution, BI-RADS category, …
  • Patient Table: age, personal Hx breast CA, family Hx breast CA, …
  • Biopsy Table: needle size, number of samples, post-proc appearance, accurate clip position, prior radiation, recommendation, …
  • Pathology Table: margin status, grade, pathology result, concordance, …
  • Registry Table: patient ID, …
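
Operationally, "data-driven decisions" means joining these tables into one training row per abnormality. A sketch with in-memory dicts (table contents and field names here are illustrative, not the actual schema):

```python
# Hypothetical in-memory stand-ins for the linked tables; a real
# system would pull these from the clinical databases.
patients = {7: {"age": 62, "family_hx": True}}
abnormalities = [
    {"patient_id": 7, "mass_shape": "irregular", "birads": 5},
]
pathology = {7: {"result": "malignant", "concordance": True}}

def training_rows():
    """Join abnormality features to patient facts and the pathology outcome."""
    rows = []
    for ab in abnormalities:
        pid = ab["patient_id"]
        row = dict(ab)                      # imaging features
        row.update(patients.get(pid, {}))   # demographics / history
        outcome = pathology.get(pid)
        row["outcome"] = outcome["result"] if outcome else None
        rows.append(row)
    return rows
```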

  17. Data • Our dataset contains – 350 malignancies – 65,630 benign abnormalities • Linked to cancer registry data – Outcomes (benign/malignant)

  18. Training the BN • Data divided into training set, tuning set, and test set • Standard machine learning – Use known cases to train – Use the tuning set for optimal training – Performance based on held-out test set
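
The training/tuning/test split described above can be sketched as a single shuffle followed by three slices (the 60/20/20 fractions and the fixed seed are arbitrary illustration choices):

```python
import random

def split(cases, train_frac=0.6, tune_frac=0.2, seed=0):
    """Shuffle once, then carve into training, tuning, and held-out test sets."""
    rng = random.Random(seed)   # fixed seed for a reproducible split
    cases = list(cases)
    rng.shuffle(cases)
    n_train = int(len(cases) * train_frac)
    n_tune = int(len(cases) * tune_frac)
    return (cases[:n_train],
            cases[n_train:n_train + n_tune],
            cases[n_train + n_tune:])

train_set, tune_set, test_set = split(range(1000))
```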

  19. Performance • AUC 0.960 vs. 0.939 – P < 0.002 • Sensitivity – 90.0% vs. 85.3% – P < 0.001 • Specificity – 93.9% vs. 88.1% – P < 0.001

  20. What does that mean? • At a specificity of 90%: 38 conversions FN → TP • At a sensitivity of 85%: 4226 conversions FP → TN
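
The conversion counts are just the performance gain applied to the relevant case count at a fixed operating point. A sketch with hypothetical numbers (the slide's 38 and 4226 depend on the study's own case mix, which is not reproduced here):

```python
def converted_cases(rate_old, rate_new, n_cases):
    """Cases whose classification flips when a rate improves across n_cases."""
    return round((rate_new - rate_old) * n_cases)

# Hypothetical: sensitivity rising from 85.3% to 90.0% over 1,000 cancers
# turns about 47 false negatives into true positives.
fn_to_tp = converted_cases(0.853, 0.900, 1000)
```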

  21. Ultimately Decision Support Aids the Physician • Output of the system is – Advisory – Utilized in the clinical context • System performance alone is not the point – Combined system/physician performance is the key to improvement of care

  22. Collaborative Experiment [ROC curves, TPF vs. FPF: Radiologist AUC .916, Bayes Net AUC .919, Combined AUC .948]

  23. Results [ROC curves: Radiologist AUC .916, Bayes Net AUC .919, Combined AUC .948; p = .03]

  24. Results [ROC curves: Radiologist AUC .916, Bayes Net AUC .919, Combined AUC .948; p = .065]

  25. Results [ROC curves: Radiologist AUC .916, Bayes Net AUC .919, Combined AUC .948; p = .99]

  26. Calibration Curves [observed disease frequency vs. predicted risk, in bins <25%, 25-50%, 50-75%, >75%]


  28. Calibration • Hosmer-Lemeshow goodness of fit Ayer T, et al. Breast cancer risk estimation with artificial neural networks revisited: discrimination and calibration. Cancer, 2010;116(14):3310-21.
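
The Hosmer-Lemeshow statistic behind those calibration curves compares observed and expected event counts per risk bin. A minimal sketch using the same fixed bins as the curves (<25%, 25-50%, 50-75%, >75%; the cited work may bin differently, e.g., by deciles):

```python
def hosmer_lemeshow(preds, outcomes, edges=(0.25, 0.50, 0.75)):
    """Chi-square over fixed risk bins: sum of (O-E)^2 / (E*(1-p_bar))."""
    bins = [[] for _ in range(len(edges) + 1)]
    for p, y in zip(preds, outcomes):
        i = sum(p >= e for e in edges)      # which risk bin this case falls in
        bins[i].append((p, y))
    chi2 = 0.0
    for group in bins:
        if not group:
            continue
        n = len(group)
        p_bar = sum(p for p, _ in group) / n   # mean predicted risk in bin
        observed = sum(y for _, y in group)    # observed events in bin
        expected = n * p_bar                   # expected events in bin
        chi2 += (observed - expected) ** 2 / (expected * (1 - p_bar))
    return chi2
```

A small chi-square means observed frequencies track predicted risk, i.e., good calibration.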

  29. Creating a Learning Health System • Capturing data directly from the EHR • Using it to inform future practice • Can it be done?

  30. UW Dataset Date range: from Oct 1, 2005 to Mar 30, 2012 Number of patients: 30,024 Number of mammograms: 89,610 Number of screening mammograms: 69,484 Number of diagnostic mammograms: 20,126 Number of MRIs: ~ 3000 Number of US: ~10,000

  31. What is the Key? [repeats the linked database schema from slide 16: Abnormality, Patient, Biopsy, Pathology, and Registry tables linked by patient ID]

  32. The Breast Biopsy Project Elizabeth Burnside, MD, MPH, MS Heather Neuman, MD, MS Ines Dutra, PhD C. David Page, PhD Jude Shavlik, PhD

  33. ILP • Abnormality A in mammogram M, for biopsy B in patient P: Malignant(A) IF – A has mass present – A has stability increasing – P has family history of breast cancer – B has atypia

  34. How does it work? • Learn if-then rules that will become features in a predictive model – Inductive logic programming (ILP) to learn the rules – Integrated search strategy for constructing and selecting rules for the classification algorithm
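
The rules-become-features step can be sketched directly: each learned rule is a boolean test whose truth value becomes one binary input to the downstream classifier. Rule 1 echoes slide 33; rule 2 and all field names are hypothetical:

```python
# Each learned if-then rule becomes one 0/1 feature for the classifier.
RULES = [
    # Rule 1 (echoing slide 33): mass present, increasing stability,
    # family history, and atypia on biopsy.
    lambda c: (c["mass_present"] and c["stability"] == "increasing"
               and c["family_hx"] and c["atypia"]),
    # Rule 2: a second, made-up rule for illustration.
    lambda c: c["birads"] >= 4 and c["age"] > 60,
]

def rule_features(case):
    """Evaluate every rule on a case -> 0/1 feature vector."""
    return [int(bool(r(case))) for r in RULES]

case = {"mass_present": True, "stability": "increasing",
        "family_hx": True, "atypia": False, "birads": 5, "age": 67}
```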

  35. Human-Computer Interaction • COMMUNICATION: Logical Rules

  36. Breast Biopsy • Biopsy: single most costly component of a breast cancer screening program • Annual breast biopsy utilization in 2010: 62.6/10,000 women → ~700,000 women • ~35,000-105,000 non-definitive
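
A quick back-of-envelope check ties the slide's numbers together (pure arithmetic on the figures shown; the implied population size is an inference, not stated on the slide):

```python
biopsies = 700_000                        # ~annual breast biopsies (slide)
rate_per_10k = 62.6                       # biopsies per 10,000 women (slide)
women = biopsies / rate_per_10k * 10_000  # implied ~112 million women

# 35,000-105,000 non-definitive biopsies out of 700,000:
nondef_low = 35_000 / biopsies            # 5% non-definitive rate
nondef_high = 105_000 / biopsies          # 15% non-definitive rate
```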

  37. Non-Definitive Breast Biopsy [flow: screening mammography → diagnostic work-up/biopsy → non-definitive] • Screening mammography: 1000 women, 5 cancers • Diagnostic work-up/biopsy (false positives): 115 women, 4 cancers • Non-definitive: 10 women, 1 cancer • What should I tell my patient?

  38. Breast Biopsy at UW • 6-year experience at UW – 2808 consecutive image-guided core biopsies • 30% malignant; 70% benign • 238 were deemed non-definitive → excision • Hypothesis: ILP rules from the data and from physicians could improve the accuracy of upgrade prediction

  39. All biopsies (2006-2011): 2808 core biopsies, by guidance
  • US: 1743 – 124 non-definitive (61 D, 34 I, 24 ARS, 5 N)
  • Stereo: 892 – 96 non-definitive (23 D, 4 I, 65 ARS, 4 N)
  • MRI: 173 – 18 non-definitive (5 D, 1 I, 12 ARS, 0 N)

  40. Biopsies in Practice (2006-11) • 2808 core biopsies • 1,909 breast core biopsies with diagnostic mammograms – 601 M, 1,308 B/HR – 130 non-definitive

  41. [workflow: Physician rules and Machine rules → Evaluate → Incorporate]
