
Eliot Siegel, MD, FSIIM, FACR, Professor of Radiology, University of Maryland School of Medicine; Chief, Imaging Services, VA Maryland Healthcare System. Dwyer AI Session Outline. Tanveer F. Syeda-Mahmood, PhD, Chief Scientist, Medical Sieve


  1. Many Many AIs and Dr. Evil • They were asked whether their plan to freely share this technology would actually empower bad actors, whether they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk • They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”

  2. AI/Machine Learning Basic Terms

  3. Deep Learning Falls Within Machine Learning Within AI

  4. Artificial Intelligence • Basically an umbrella term for a variety of applications and techniques • Artificial intelligence refers to "a broad set of methods, algorithms and technologies that make software 'smart' in a way that may seem human-like to an outside observer” » Lynne Parker, director of the division of Information and Intelligent Systems for the National Science Foundation • John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”

  5. Artificial Intelligence • Machine learning, computer vision, natural language processing, robotics and related topics are all part of A.I. • Also referred to as “machine intelligence” or “computational intelligence” • Can distinguish different types of AI • When will AI Arrive? – It’s here already!!!

  6. Is There A General Equation for Winning at AI? • The action-value function is the maximum sum of rewards r_t, discounted by γ per time step, achievable by a behavior policy π = P(a|s) after making an observation (s) and taking an action (a) • Can be optimized using a deep convolutional neural network • Key to winning at Atari video games • Key to “happiness”? • Key to LIFE?
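This appears to be the optimal action-value function popularized by the Atari deep Q-network work; a reconstruction in standard notation, using the slide's own symbols (s, a, r_t, γ, π), would read:

```latex
Q^{*}(s,a) \;=\; \max_{\pi}\ \mathbb{E}\!\left[\, r_t + \gamma\, r_{t+1} + \gamma^{2} r_{t+2} + \cdots \;\middle|\; s_t = s,\ a_t = a,\ \pi \right]
```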

  7. Where is AI? Everywhere. My Monday Morning Diary • Wake up by iPhone, One Dance/Drake • Check indoor temperature on Nest • Amazon Echo checks out weather and traffic on the way to work and turns on the lights • Google Now says flying to Portland tomorrow with itinerary • Set temperature in the car on iPhone • Get ready for work while Siri plays latest unread messages and e-mails • Car displays today’s schedule from Google calendar, goes on autopilot, and does 95% of driving to work autonomously • Arrive at work at VA Hospital • Big stack of papers on desk to be signed; rummage through drawers to find pen and move papers from one side of the desk to the other • Take 10 minutes to sign into EMR to check consults after waiting • Take another 8 minutes to sign into PACS • Take 10 minutes to play messages on phone machine • Grab stack of paper requisitions to protocol

  8. Artificial Intelligence (Narrow) • Also referred to as Weak AI • AI that specializes in one area • There’s AI that can beat the world chess champion in chess, but that’s the only thing it does – Speech recognition – Translation – Self-driving cars – Siri, Alexa, Cortana, Google Now

  9. Artificial General Intelligence • Sometimes referred to as Strong AI, or Human-Level AI • Computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can • Creating AGI is a much harder task than creating ANI, and we are nowhere near close to it

  10. Artificial General Intelligence (AGI) • Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to: – Reason – Plan – Solve problems – Think abstractly – Comprehend complex ideas – Learn quickly – Learn from experience”

  11. When Will AGI Arrive? • A study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results: • By 2030: 42% of respondents • By 2050: 25% • By 2100: 20% • After 2100: 10% • Never: 2%

  12. Artificial Superintelligence • Oxford philosopher and leading AI thinker and author Nick Bostrom defines super-intelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

  13. Machine Learning • Also a blanket term that covers multiple technologies • Doesn’t necessarily have to actually “learn” as we think of it, and doesn’t necessarily provide feedback over time; it just refers to a class of statistical techniques used to characterize, discover, and classify data • The vast majority of these have been around for many years or decades

  14. Machine Learning • As a part of A.I., machine learning refers to a wide variety of algorithms and methodologies that can also enable software to improve its performance over time as it obtains more data • "Fundamentally, all of machine learning is about recognizing trends from data or recognizing the categories that the data fit in so that when the software is presented with new data, it can make proper predictions," (Parker)
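A minimal sketch of the fit-then-predict pattern Parker describes, assuming scikit-learn; the dataset and model here are illustrative choices, not anything referenced in the talk:

```python
# Learn categories from known data, then predict on new, unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)            # recognize the categories the training data fit in
print(model.score(X_test, y_test))     # make predictions when presented with new data
```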

  15. Neural Networks • Neural networks are a type of machine learning, and deep learning refers to one particular kind • Neural networks -- also known as "artificial" neural networks -- are one type of machine learning that's loosely based on how neurons work in the brain, though "the actual similarity is very minor”

  16. Neural Networks • There are many kinds of neural networks, but in general they consist of systems of nodes with weighted interconnections among them • Nodes, also known as "neurons," are arranged in multiple layers, including an input layer where the data is fed into the system; an output layer where the answer is given; and one or more hidden layers, which is where the learning takes place • Typically, neural networks learn by updating the weights of their interconnections
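As a deliberately tiny illustration of nodes arranged in input, hidden, and output layers that learn by updating their interconnection weights, here is a hedged NumPy sketch; the XOR task, layer sizes, and learning rate are arbitrary choices for demonstration only:

```python
# Toy network: 2 input nodes -> 4 hidden nodes -> 1 output node, trained on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # data fed to the input layer
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired answers at the output layer

W1 = rng.normal(size=(2, 4))   # input -> hidden interconnection weights
W2 = rng.normal(size=(4, 1))   # hidden -> output interconnection weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                    # hidden layer, where the learning takes place
    out = sigmoid(h @ W2)                  # output layer, where the answer is given
    d_out = (out - y) * out * (1 - out)    # backpropagated error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated error at the hidden layer
    W2 -= 0.5 * h.T @ d_out                # learning = updating the weights
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))
```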

  17. Examples of Neural Network Types

  18. Types of Neural Networks: Autoencoder • An autoencoder is a simple 3-layer neural network in which the output units are trained to reconstruct the input units • Relatively simple and intuitive
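A minimal sketch of such a 3-layer autoencoder, assuming Keras; the layer sizes and the random placeholder data are made up for illustration:

```python
# Input layer -> small hidden layer (compressed code) -> output layer that reconstructs the input.
import numpy as np
from tensorflow import keras

input_dim, hidden_dim = 64, 8
autoencoder = keras.Sequential([
    keras.layers.Input(shape=(input_dim,)),
    keras.layers.Dense(hidden_dim, activation="relu"),    # compressed hidden representation
    keras.layers.Dense(input_dim, activation="sigmoid"),  # reconstruction of the input
])
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, input_dim)          # placeholder data
autoencoder.fit(X, X, epochs=5, verbose=0)   # the training target is the input itself
```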

  19. Restricted Boltzmann Machine • The intuition behind RBMs is that there are some visible random variables (e.g. film reviews from different users) and some hidden variables (like film genres or other internal features), and the task of training is to find out how these two sets of variables are actually connected to each other
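A hedged sketch of that visible/hidden split, using scikit-learn's BernoulliRBM: each row stands in for one user's binary film ratings (visible units) and the hidden units play the role of latent "genres". The data and component count are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

ratings = np.random.randint(0, 2, size=(500, 20))   # 500 users x 20 films, binary "liked it" flags
rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(ratings)                                     # learn how visible and hidden units connect

hidden = rbm.transform(ratings)   # per-user activation of the 4 latent features
print(hidden.shape)               # (500, 4)
```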

  20. Convolutional Neural Networks • Like autoencoders and RBMs, CNNs translate many low-level features (e.g. user reviews or image pixels) into a compressed high-level representation (e.g. film genres or edges), but now weights are learned only from neurons that are spatially close to each other • CNNs are particularly well suited to image recognition; most of today’s top image-recognition algorithms are based on CNNs in some way
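A minimal convolutional network sketch in Keras, showing how the convolution layers learn only from spatially neighboring pixels; the input shape, layer sizes, and class count are illustrative and are not taken from any system mentioned in the slides:

```python
from tensorflow import keras

cnn = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),                      # small grayscale image
    keras.layers.Conv2D(16, kernel_size=3, activation="relu"),  # weights span only 3x3 local patches
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),               # class probabilities
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
cnn.summary()
```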

  21. Purpose of These Neural Networks is Dimensionality Reduction • Autoencoders and RBMs both take a vector in n-dimensional space and translate it into an m-dimensional one, trying to keep as much important information as possible while removing noise • If training of the autoencoder/RBM was successful, each element of the resulting vector (i.e. each hidden unit) represents something important about the object: the shape of an eyebrow in an image, the genre of a film, the field of study of a scientific article, etc. • You take lots of noisy data as input and produce much less data in a much more efficient representation

  22. Neural Networks Used for Pretraining, then Another Classifier Used • None of the models mentioned here work as classification algorithms per se • Instead, they are used for pre-training: learning transformations from a low-level, hard-to-consume representation (like pixels) to a high-level one • Once the deep (or maybe not-so-deep) network is pretrained, input vectors are transformed into a better representation, and the resulting vectors are finally passed to a real classifier (such as an SVM or logistic regression)
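A sketch of this "pretrain, then hand off to a real classifier" pattern, assuming scikit-learn: an RBM learns a better representation of raw pixels and logistic regression does the actual classification. The digits dataset and hyperparameters are illustrative:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixel values to [0, 1]

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.06, n_iter=15, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),   # the "real" classifier on top
])
model.fit(X, y)
print(model.score(X, y))   # accuracy of the pretrain-then-classify pipeline
```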

  23. Deep Learning Neural Networks

  24. Deep Learning • Deep learning refers to what's sometimes called a "deep neural network," or one that includes a large system of neurons arranged in several hidden layers – A "shallow" neural network, by contrast, will typically have just one or two hidden layers. • The idea behind deep learning is not new, but it has been popularized more recently because we now have lots of data and fast processors that can achieve successful results on hard problems

  25. Commonly Used Machine Learning Techniques • Regression techniques • Neural networks • Support vector machines • Decision trees • Bayesian belief networks • k-nearest neighbors • Self-organizing maps • Case-based reasoning • Instance-based learning • Hidden Markov models

  26. Machine Learning Vs. Data Mining • Machine learning focuses on prediction, based on known properties learned from the training data. • Data mining focuses on the discovery of (previously) unknown properties in the data

  27. Machine Learning vs. Optimization • Optimization algorithms can minimize the loss on a training set • Machine learning is concerned with minimizing the loss on unseen samples
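A small illustration of that distinction, assuming scikit-learn: the optimizer drives the loss on the training set toward zero, while what machine learning cares about is the loss on held-out samples. The dataset and model are arbitrary choices:

```python
from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)  # fits the training set almost perfectly
print("train loss:", mean_squared_error(y_train, tree.predict(X_train)))  # what the optimizer minimized
print("test loss:",  mean_squared_error(y_test,  tree.predict(X_test)))   # what actually matters
```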

  28. Machine Learning and Statistics and “Statistical Learning” • Machine learning and statistics are closely related fields, and machine learning can be considered a statistical technique • Leo Breiman distinguished two statistical modeling paradigms, the data model and the algorithmic model, where 'algorithmic model' means more or less machine learning algorithms like random forests • Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning

  29. What is Deep Learning? • DL consists of multiple hidden layers in an artificial neural network • This approach tries to model the way the human brain processes light and sound into vision and hearing • Two very successful applications of deep learning are computer vision and speech recognition • Falling hardware prices and the development of GPUs for personal use in the last few years have contributed to the development of the concept of Deep Learning (DL)

  30. Deep Learning vs. Machine Learning

  31. [Image slide]

  32. ImageNet Large Scale Visual Recognition Challenge, Started in 2010

  33. • Computers have always had trouble identifying objects in real images so it is not hard to believe that the winners of these competitions have always performed poorly compared to humans. • But all that changed in 2012 when a team from the University of Toronto in Canada entered an algorithm called SuperVision , which wiped the floor with the opposition.

  34. SuperVision • SuperVision, for example, consists of some 650,000 neurons arranged in five convolutional layers • It has around 60 million parameters that must be fine-tuned during the learning process to recognize objects in particular categories.

  35. [Image slide]

  36. Speech Recognition Deep Learning Breakthrough

  37. Human Vision: The Hardest Task for Computers? • The Turing Test was introduced by Alan Turing in his 1950 paper “Computing Machinery and Intelligence” • Opens with the words “I propose to consider the question, ‘Can machines think?’” • Asks whether a computer could fool a human being in another room into thinking it was a human being • A modified Dr. Watson Turing Test might ask: can a computer fool a human being into thinking it was a doctor?

  38. What’s Wrong with this Picture?

  39. Ultimate Challenge: Medical Imaging • Scientific American, June 2011: “Testing for Consciousness,” an alternative to the Turing Test, by Christof Koch and Giulio Tononi • Highlights for Kids’ “What’s Wrong with this Picture?”

  40. Imaging May Be Ultimate/Future Frontier For “AI” Software

  41. Machine Learning Algorithms: Like Standards, So Many to Choose From!

  42. Can We Apply Those Incredible Advances in Object Recognition to Diagnostic Radiology? • These image challenges have used big 24-bit “RGB” color images, with no experience with gray scale imaging in medicine • They can identify a chair but can’t tell if it’s – Broken – Something is missing – Something extra is there – Comfortable – Beautiful or ugly – Dirty or clean • Black box – Can’t explain why something is identified as abnormal • Adrenal challenge, 5th Grader – Need to know anatomy

  43. “Magic” Aspect of Deep Learning • One major challenge is that we don’t understand what’s inside the black box of deep learning when it solves a visual recognition challenge • Don’t need deep learning for Tic Tac Toe or Checkers, or even chess, because we can use brute force to look at every possible move (in chess, 20 to 30 or more moves ahead, and further at the end game) • But for a game like Go, or for playing video games, brute force isn’t feasible; the system can instead learn by trial and error, even though it remains a black box with no understanding of why it works, like magic

  44. • No general-purpose learning system exists for diagnostic imaging the way we train our residents • Our eyes and brains have evolved over millions of years to detect patterns, and we layer on top of that our knowledge of medicine, physiology, the a priori likelihood of disease, and recognition of trends

  45. Challenges for Machine Learning Algorithms • Which one to choose? • How do we evaluate different machine learning algorithms and determine which is most efficient for a particular problem? • Black box? • How to optimize parameters? – Practical Bayesian optimization of machine learning algorithms • Computational time • Very specific MLAs do a good job at different tasks, which makes it difficult to select a single one as a generalized deep AI approach for image analysis or for data analysis • Problems with high-dimensional datasets like the electronic medical record require a different approach

  46. Applications of “Machine Learning” in Medical Imaging

  47. These Machine Learning Techniques Have Been Utilized in Imaging for Decades With Tens of Thousands of Published Papers

  48. 10,000s of Narrow Machine Learning Applications in Medicine: The Challenge is Binding These Together, and Can One Develop a General Learning Theory? • Fracture detection • Brain hemorrhage • Mammography • MS diagnosis and quantification • Bone age determination • Lung nodule detection • Liver mass determination • Meniscal tear • Brain segmentation and diagnosis • Bone mineral density on CT • Carotid stenosis evaluation • Coronary artery stenosis evaluation • Cardiac function evaluation • MRI Mammography CAD

  49. Eliot Siegel, M.D., Prof. and Vice Chair, University of Maryland; Chief, Imaging, VA Maryland Healthcare System

  50. Black Box of CAD • The “black box” nature of CAD is seen as a substantial issue by many radiologists • If my residents and fellows told me they thought this right upper lobe nodule was cancer, and when I asked why they wouldn’t say why or how confident they were, I’d: – Be less confident – Be suspicious about their analysis – Be frustrated

  51. What Made You Circle the Lesion? • Lesion size • Lesion morphology (shape: smooth, spiculated) • Density distribution (solid, ground glass, partially calcified) • Location (subpleural, which lobe) • Connectedness (is it connected to vessels or other structures?)

  52. Level of Confidence and Quality of Evidence • What was your level of confidence in the magic-box formula of the above that made you circle it? – Did it have to meet size, morphology, density and connectedness, or even location characteristics? – What database did you use to determine the level of suspicion for cancer, and how many cases were in it, or was it based on expert opinion? • 10 • 100 • 1000 • 10,000 • More?
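A purely hypothetical sketch of what answering the confidence question could look like: a classifier trained on nodule descriptors like those listed two slides back (size, morphology, density, location, connectedness) can report a probability for each new lesion rather than a bare circle. The features, data, and model below are invented for illustration; no real CAD product is implied:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# columns: size_mm, spiculation_score, density_HU, subpleural_flag, vessel_contact_flag
X = rng.normal(size=(200, 5))                                     # stand-in for a labeled nodule database
y = (X[:, 0] + X[:, 1] + rng.normal(size=200) > 0).astype(int)    # stand-in malignancy labels

cad = LogisticRegression().fit(X, y)
new_lesion = rng.normal(size=(1, 5))
print("P(malignant) =", cad.predict_proba(new_lesion)[0, 1])      # a confidence, not just a mark
```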

  53. Left Upper Lobe Lung Nodule

  54. CAD is Pretty Sure It’s There

  55. CAD is Not So Confident

  56. Where Are We Today With Clinical Use of CAD? • Mammography is far and away the most utilized application • But what do radiologists really think of Mammography CAD?

  57. Is CAD gaining momentum in clinical practice? – It seems to be happening slowly, more slowly than most of us had anticipated – In cases (unlike mammography) where there is no reimbursement for CAD, radiologists and practices feel their margins are already low, there is major pressure from decreased reimbursement, and the impression is that reimbursement will continue to drop • Difficult to make the business case to radiologists for added expenditure on CAD

  58. Is CAD Gaining Momentum In Clinical Practice? – There is much skepticism among my colleagues about the added value of CAD, and many only use it for mammography because of the reimbursement model – Colleagues will not pay any significant amount for, say, CAD lung nodule detection for chest radiography, even with a hypothetical scenario of a 10 or even 20% increase in sensitivity – I believe they would pay more for something that increased their efficiency and productivity than for something that increased their accuracy

  59. MATERIALS AND METHODS: • Separate links to an online survey were posted on the website of the Society of Breast Imaging and circulated to subscribers of DiagnosticImaging.com, in order to evaluate opinions regarding CAD use and its underlying legal issues

  60. RESULTS: Use and Reliance on CAD? • 89% indicated they always use CAD when reading screening mammograms • 4% indicated that they rarely or never use CAD

  61. Use and Reliance on CAD and Reimbursement • However, the extent to which clinicians rely on CAD to provide an accurate diagnosis is split – 2% indicated that they always rely on CAD to provide an accurate diagnosis – 49% indicated they sometimes rely on CAD – 49% indicated they rarely or never rely on CAD

  62. – It is likely that the mismatch between use of and reliance on CAD relates to the reimbursement radiologists receive when using CAD (about $12 per case, roughly $1,000 per 83 cases, or $2,400 per day for 200 screening mammograms)

  63. RESULTS: Use and Reliance on CAD? • Most radiologists have not changed a read based on the results of CAD • Only 2% indicated they always alter their opinion after CAD • 36% sometimes change their interpretation based on CAD • 61.7% rarely or never change their interpretation based on CAD

  64. Use and Reliance on CAD • 15% found that CAD was often helpful • 49% considered it sometimes helpful • 36% considered it rarely or never helpful

  65. What Do I Need from Next Generation CAD Clinically? • Improves my efficiency/productivity • Increases my accuracy/reliability without compromising efficiency • Affordable • Increases my confidence • Allows me to measure things I couldn’t measure otherwise, such as liver or pulmonary “texture” • Provides an imaging “physical exam”

  66. Next Generation of CAD • The next generation of CAD will reflect the trend toward big data and personalized medicine and shift away from the current second-reader approach, toward one in which CAD algorithms increasingly serve as visualization, image measurement/annotation, and quantification tools – For example, probability maps rather than just the binary yes/no output that FDA requirements shaped for the second-reader model – Tracking lesions over time – Highlighting certain types of findings to draw the reader’s attention

  67. CAD Tools Requirements and Challenges • CAD applications must be able to be integrated into the image acquisition, display and interpretation workflow • They will not be adopted if they constrain the throughput of the radiologist • Need a high level of accuracy in a single patient; need to do more than just demonstrate efficacy in separating two groups • Commercialization and U.S. Food and Drug Administration (FDA) clearance is a big hurdle and needs to be revisited

  68. Next Generation CAD: Apps Store for CAD Algorithms? • Want to be able to utilize all of these on a single platform, e.g. using the API specified by DICOM Working Group 23 • Would like to see a business ecosystem such as GRID that could provide a payment model for these, so you could download an algorithm on the fly, send images up to a web service, or get a consensus from multiple CAD programs

  69. Israeli Start-Up • Platform to create and integrate a variety of algorithms to test against 12 million anonymized, indexed and catalogued imaging studies

  70. • Supports multiple coding languages and libraries, including machine learning convolutional network libraries such as Torch, Caffe and Theano, and image processing libraries • All work is saved, and projects can be collaborated on by several users • In addition, it provides high-end, dedicated GPUs and CPUs to run algorithms

  71. Second Start-up Company Creates Preliminary Report for Chest CT

  72. IBM Medical Sieve

  73. Recommendations for CAD • I believe FDA requirements have often limited CAD to the role of a second reader rather than a tool that can be toggled on and off – Would like to see highlighted images, like a spell checker, that could also color-code the probability that a finding is real/the confidence of the CAD algorithm – I’d like to see CAD do image recognition before a study is reviewed, screening for things such as rib fractures, compression fractures of the spine, pneumothorax, etc.: the equivalent of an imaging physical exam
