Concept/Patient Representation - Following Edward Choi's Ideas



SLIDE 1

Concept/Patient Representation

  • Following Edward Choi's Ideas
SLIDE 2

Introduction: Concept Representation and Patient Representation

  • Diagnosis, treatment, and medication codes create thousands of dummy variables (one-hot encoding) -> sparse matrix; see the sketch after this list.
  • Statistical models usually re-group (coarse classing) the dummy variables.
  • NLP techniques (word embedding) can be applied to medical concepts instead.
  • ‘Word2Vec’ provides a few interesting features such as vector operations.
  • Prediction models in general require patient-level data (e.g. disease prediction).
  • Concept representations can be transformed into patient representations.

✓ However, summation/average of concept vectors loses temporal information as well as interpretability.

  • E. Choi tries to incorporate sequential information whilst making the models interpretable at the same time.

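To make the sparsity point concrete, here is a minimal sketch of multi-hot encoding visits over a large code vocabulary; the codes, the code-to-index map, and the 40,000-code vocabulary size are all hypothetical:

```python
# Minimal sketch: each visit becomes a huge, almost-empty binary row vector.
import numpy as np
from scipy.sparse import csr_matrix

num_codes = 40_000                                   # hypothetical vocabulary size
vocab = {"ICD9_250.00": 0, "ICD9_401.9": 1, "NDC_12345": 2}  # toy code -> column

visits = [["ICD9_250.00", "NDC_12345"],              # toy visit 1
          ["ICD9_401.9"]]                            # toy visit 2

rows, cols = [], []
for i, visit in enumerate(visits):
    for code in visit:
        rows.append(i)
        cols.append(vocab[code])

X = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(visits), num_codes))
print(X.shape, X.nnz)                                # (2, 40000), only 3 non-zeros
```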

SLIDE 3

Concept Representation

Medical ‘Word2Vec’
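A minimal sketch of what a medical ‘Word2Vec’ can look like: treat each visit's code list as a "sentence" and learn skip-gram embeddings with gensim. The visits and hyperparameters are illustrative, not the settings from Choi's papers:

```python
from gensim.models import Word2Vec

# Each "sentence" is the list of medical codes observed in one visit (toy data).
visit_sentences = [
    ["ICD9_250.00", "NDC_12345", "CPT_99213"],
    ["ICD9_401.9", "NDC_67890", "ICD9_250.00"],
]

# sg=1 selects skip-gram; vector_size is the concept-embedding dimension.
model = Word2Vec(sentences=visit_sentences, vector_size=100,
                 window=5, min_count=1, sg=1)

vec = model.wv["ICD9_250.00"]                       # dense concept vector
neighbours = model.wv.most_similar("ICD9_250.00")   # nearest concepts in the space
```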

SLIDE 4

Concept Representation to Patient Representation

No interpretability! No time sequence!

Word Embedding (NLP)
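The naive transformation this slide criticizes takes only a few lines: average (or sum) the concept vectors of everything in a patient's history. The result is order-invariant, which is exactly why temporal information disappears. `model` refers to the hypothetical Word2Vec sketch above:

```python
import numpy as np

patient_codes = ["ICD9_250.00", "NDC_12345", "ICD9_401.9"]   # toy patient history
patient_vec = np.mean([model.wv[c] for c in patient_codes], axis=0)

# Any re-ordering of the history produces the identical patient vector,
# so "code A then code B" and "code B then code A" look the same.
shuffled = ["ICD9_401.9", "ICD9_250.00", "NDC_12345"]
assert np.allclose(patient_vec,
                   np.mean([model.wv[c] for c in shuffled], axis=0))
```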

SLIDE 5

Multi-Layer Representation Learning for Medical Concepts

[Figure: visit data containing medical concepts (e.g. a gastritis diagnosis) -> visit representation + demographic data -> predict the codes of the pre- and post-visits]

Mr. Choi names this architecture Med2Vec! It is probably difficult to build a sequential model using medical concepts only (lots of duplicated concepts), so let's bring a ‘visit’ layer into the concept representation learning.

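A minimal PyTorch sketch of that idea, assuming a simplified two-layer form: a multi-hot visit is embedded at the concept level, demographics are appended at the visit level, and the visit representation is trained to predict the codes of the surrounding (pre and post) visits. All dimensions are illustrative and the training loss is left as a comment:

```python
import torch
import torch.nn as nn

class Med2VecSketch(nn.Module):
    def __init__(self, num_codes, demo_dim, code_dim=200, visit_dim=200):
        super().__init__()
        self.code_layer = nn.Linear(num_codes, code_dim)   # concept-level weights
        self.visit_layer = nn.Linear(code_dim + demo_dim, visit_dim)
        self.decoder = nn.Linear(visit_dim, num_codes)     # predicts neighbour codes

    def forward(self, x, demo):
        u = torch.relu(self.code_layer(x))                 # codes -> intermediate rep
        v = torch.relu(self.visit_layer(torch.cat([u, demo], dim=-1)))
        return self.decoder(v)                             # one logit per code

model = Med2VecSketch(num_codes=40_000, demo_dim=2)
x = torch.zeros(1, 40_000)
x[0, [17, 523]] = 1.0                                      # toy multi-hot visit
demo = torch.tensor([[63.0, 1.0]])                         # e.g. age, sex
logits = model(x, demo)                                    # shape (1, 40000)

# Training (not shown): BCEWithLogitsLoss against the multi-hot vectors of
# the visits inside a context window around x, i.e. the pre- and post-visits.
```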

SLIDE 6

AutoEncoder Patient Representation – Deep Patient

Stacked Denoising AutoEncoder

  • Good idea, but no interpretability and no temporal info!
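A minimal sketch of one denoising autoencoder layer of the kind Deep Patient stacks, assuming masking noise and sigmoid units; the sizes and noise rate are illustrative:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=40_000, hidden_dim=500, noise=0.05):
        super().__init__()
        self.noise = noise
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        # Randomly mask a fraction of the inputs, then reconstruct the clean x.
        mask = (torch.rand_like(x) > self.noise).float()
        h = self.encoder(x * mask)          # h is the learned patient feature
        return self.decoder(h), h

# Stacking: train layer 1 on the raw patient vectors, train layer 2 on the
# hidden codes of layer 1, and so on; the top hidden layer is the "deep
# patient" representation fed to downstream disease-prediction models.
```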
SLIDE 7

Let's have interpretability (attention) and sequential information (RNN).

RETAIN – Interpretable and Predictive Model
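A minimal PyTorch sketch of the RETAIN idea, assuming GRUs and a binary outcome: two RNNs read the visit sequence (the paper runs them in reverse time order), one producing a scalar attention per visit (alpha) and one a per-coordinate attention vector (beta); the patient context is an attention-weighted sum of visit embeddings, so every visit's influence can be read off. Dimensions are illustrative:

```python
import torch
import torch.nn as nn

class RetainSketch(nn.Module):
    def __init__(self, num_codes, emb_dim=128):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim, bias=False)  # v_t = W_emb x_t
        self.rnn_alpha = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.rnn_beta = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.w_alpha = nn.Linear(emb_dim, 1)        # scalar attention per visit
        self.w_beta = nn.Linear(emb_dim, emb_dim)   # vector attention per coordinate
        self.out = nn.Linear(emb_dim, 1)            # binary outcome logit

    def forward(self, x):                           # x: (batch, T, num_codes)
        v = self.embed(x)                           # visit embeddings
        rev = torch.flip(v, dims=[1])               # reverse time, as in the paper
        g, _ = self.rnn_alpha(rev)
        h, _ = self.rnn_beta(rev)
        alpha = torch.softmax(self.w_alpha(torch.flip(g, dims=[1])), dim=1)
        beta = torch.tanh(self.w_beta(torch.flip(h, dims=[1])))
        context = (alpha * beta * v).sum(dim=1)     # c = sum_t alpha_t * beta_t ⊙ v_t
        return torch.sigmoid(self.out(context)), alpha, beta

model = RetainSketch(num_codes=40_000)
y_hat, alpha, beta = model(torch.rand(1, 5, 40_000))  # toy input, 5 visits
```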

SLIDE 8

RETAIN – Interpretable and Predictive Model

Let’s unfold the RNN model.
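Unfolding the model is what makes it interpretable: the context vector is a weighted sum over visits and embedding coordinates, so the contribution of a single code at a single visit comes out in closed form, roughly alpha_t * w_out^T (beta_t ⊙ W_emb[:, i]). A hypothetical helper against the RetainSketch above:

```python
import torch

def code_contribution(model, alpha, beta, t, code_idx):
    """Contribution of code `code_idx` at visit `t` to the output logit
    (hypothetical helper; batch index 0, single-output model assumed)."""
    w_emb_col = model.embed.weight[:, code_idx]   # embedding column of that code
    w_out = model.out.weight.squeeze(0)           # output weights, (emb_dim,)
    return (alpha[0, t, 0] * (w_out * beta[0, t] * w_emb_col).sum()).item()

# Ranking these contributions per visit and per code is what lets RETAIN
# highlight which past diagnoses drove a given prediction.
```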

SLIDE 9

RETAIN – Interpretable and Predictive Model

SLIDE 10

APPENDIX

SLIDE 11

Stacked Denoising AutoEncoder