SLIDE 1

Tensor Decomposition for Healthcare Analytics

Matteo Ruffini

Laboratory for Relational Algorithmics, Complexity and Learning
matteo.ruffini@estudiant.upc.edu

November 5, 2017

Matteo Ruffini (UPC) Tensor Decomposition for Healthcare November 5, 2017 1 / 36

SLIDE 2

Overview

1. Overview

2. Clustering
   - Mixture Model Clustering
   - Tensor Decomposition
   - Mixture of independent Bernoulli

3. Applications to Healthcare Analytics
   - Data and objectives
   - Results

SLIDE 3

Overview

Task: segment patients into groups with similar clinical profiles.

1. Similar patients → similar care.
2. Find recurrent comorbidities.
3. Assign and plan resources: drugs and doctors.

SLIDE 4

Overview

Task: segment patients into groups with similar clinical profiles.

1. Similar patients → similar care.
2. Find recurrent comorbidities.
3. Assign and plan resources: drugs and doctors.

Data: Electronic Healthcare Records (EHR).
Objective: use these data to create clusters of patients.

SLIDE 5

Example: ICD-9 EHR

In the ICD-9 code, each disease is assigned a number: 278 → Obesity, 401 → Hypertension.

SLIDE 6

Example: ICD-9 EHR

In the ICD-9 code, each disease is assigned a number: 278 → Obesity, 401 → Hypertension.
Records: a list of patients with their diseases → a patient-disease matrix.

Patient     Diseases
Patient 1   820, 401
Patient 2   401, 278
Patient 3   560, 820, 278

            820  401  278  560
Patient 1    1    1
Patient 2         1    1
Patient 3    1         1    1

Objective: cluster the rows of the patient-disease matrix. The data are sparse and high dimensional.
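The mapping from record lists to the binary matrix can be sketched in Python; the toy records are the three patients from this slide (variable names are illustrative):

```python
import numpy as np

# Each row of `records` lists the ICD-9 codes diagnosed for one patient.
records = [
    [820, 401],        # Patient 1
    [401, 278],        # Patient 2
    [560, 820, 278],   # Patient 3
]

# One column per distinct code, in sorted order: 278, 401, 560, 820.
codes = sorted({c for r in records for c in r})
col = {c: j for j, c in enumerate(codes)}

# Binary patient-disease matrix: X[i, j] = 1 iff patient i has code j.
X = np.zeros((len(records), len(codes)), dtype=int)
for i, r in enumerate(records):
    for c in r:
        X[i, col[c]] = 1
```

For real EHR data the same loop runs over the visit file; only the toy records are hard-coded here.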

SLIDE 7

Clustering

Clustering is one of the fundamental tasks of machine learning.
Objective: partition a dataset of N samples into coherent subsets.
Dataset: a matrix X ∈ R^{N×n} with rows X^(i) = (x_1^(i), ..., x_n^(i)).

Group together similar rows.
Standard methods: k-means, k-medoids, single linkage...
Distance-based methods: poor performance on high-dimensional sparse data.

SLIDE 8

Mixture Models

Definition (Mixture Model)

Y ∈ {1, ..., k} is a latent discrete variable.
X = (x_1, ..., x_n) is observable and depends on Y.

P(X) = Σ_{i=1}^k P(Y = i) P(X | Y = i)

The x_i are called features. (Diagram: Y with children x_1, x_2, ..., x_n.)

SLIDE 9

Mixture Models

Definition (Mixture Model)

Y ∈ {1, ..., k} is a latent discrete variable.
X = (x_1, ..., x_n) is observable and depends on Y.

P(X) = Σ_{i=1}^k P(Y = i) P(X | Y = i)

The x_i are called features. (Diagram: Y with children x_1, x_2, ..., x_n.)

Generative process for one sample:

1. Draw Y, obtaining Y = i ∈ {1, ..., k}.
2. Draw X ∈ R^n ~ P(X | Y = i).

SLIDE 10

Mixture Model Clustering

Clustering

From an observed outcome of X, infer the (unknown) outcome of Y → k clusters.

SLIDE 11

Mixture Model Clustering

Clustering

From an observed outcome of X, infer the (unknown) outcome of Y → k clusters.

Parameters characterizing a mixture model:

ω_h := P(Y = h),   ω := (ω_1, ..., ω_k)⊤,   Ω := diag(ω)
µ_{i,j} := E(x_i | Y = j),   M := (µ_{i,j})_{i,j} = [µ_1 | ... | µ_k] ∈ R^{n×k}

If the conditional distributions and the model parameters are known:

P(Y = j | X, M, ω) ∝ P(X | Y = j, M) ω_j
Cluster(X) = argmax_{j=1,...,k} P(Y = j | X, M, ω)

It is crucial to know the parameters of the model (M, ω).

SLIDE 12

Mixture of Independent Bernoulli

Observables are binary and conditionally independent: x_i ∈ {0, 1}.
The expectations coincide with the probability of a positive outcome:
µ_{i,j} = P(x_i = 1 | Y = j).

P(Y = j | X) ∝ ω_j Π_{i=1}^n µ_{i,j}^{x_i} (1 − µ_{i,j})^{1−x_i}

Clustering rule:

Cluster(X) = argmax_{j=1,...,k} ω_j Π_{i=1}^n µ_{i,j}^{x_i} (1 − µ_{i,j})^{1−x_i}
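A minimal sketch of this clustering rule, computed in the log domain to avoid underflow on high-dimensional data (the function name and the toy parameters below are illustrative, not from the deck):

```python
import numpy as np

def bernoulli_cluster(x, M, w):
    """Assign a binary sample x to the most likely mixture component.

    M[i, j] = P(x_i = 1 | Y = j), w[j] = P(Y = j).
    Works in the log domain: argmax of log w_j + sum_i log-likelihood terms.
    """
    eps = 1e-12  # guard against log(0)
    logp = (np.log(w + eps)
            + x @ np.log(M + eps)
            + (1 - x) @ np.log(1 - M + eps))
    return int(np.argmax(logp))
```

The argmax is unchanged by taking logs, so this implements exactly the rule above.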

SLIDE 13

Mixture Model Clustering: sum up

Advantages:
- Robust to irrelevant features (those with P(x_i) = P(x_i | Y = j)).
- Algorithms with provable guarantees of optimality.

SLIDE 14

Mixture Model Clustering: sum up

Advantages:
- Robust to irrelevant features (those with P(x_i) = P(x_i | Y = j)).
- Algorithms with provable guarantees of optimality.

Disadvantages:
- Imposes model assumptions on reality.

SLIDE 15

Mixture Model Clustering: sum up

Advantages:
- Robust to irrelevant features (those with P(x_i) = P(x_i | Y = j)).
- Algorithms with provable guarantees of optimality.

Disadvantages:
- Imposes model assumptions on reality.

To sum up, two steps:

1. Estimate the parameters of the mixture.
2. Group similar elements together using Bayes' theorem.

SLIDE 16

Learning mixture parameters

SLIDE 17

Maximum Likelihood Estimate

Standard method: Maximum Likelihood. Find the parameters Θ = (M, ω) maximizing the likelihood of X ∈ R^{N×n}:

max_Θ P(X; Θ) = max_Θ Π_{i=1}^N Σ_{j=1}^k P(X^(i) | Y = j, M) ω_j

Maximizing this is hard: in general there are no closed-form solutions.

SLIDE 18

Expectation Maximization (EM)

Iterative algorithm from [Dempster et al.(1977)]

1 Randomly initialize (M, ω) 2 Cluster the samples. 3 Use the clusters to recalculate (M, ω). 4 Iterate over steps 2 and 3 until convergence. Matteo Ruffini (UPC) Tensor Decomposition for Healthcare November 5, 2017 12 / 36

SLIDE 19

Expectation Maximization (EM)

Iterative algorithm from [Dempster et al.(1977)]

1. Randomly initialize (M, ω).
2. Cluster the samples.
3. Use the clusters to recalculate (M, ω).
4. Iterate steps 2 and 3 until convergence.

Pros and cons:
- Iteratively increases the likelihood.
- No guarantee of reaching the global optimum.
- EM is slow.
- The quality of the results depends on the initialization: good starting points → good outputs.
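The four steps above can be sketched for the Bernoulli mixture in the usual soft-assignment (responsibility) form of EM; the helper name, iteration count, and initialization range are assumptions for illustration:

```python
import numpy as np

def em_bernoulli(X, k, iters=50, seed=0):
    """Minimal EM sketch for a mixture of independent Bernoulli."""
    rng = np.random.default_rng(seed)
    N, n = X.shape
    M = rng.uniform(0.25, 0.75, size=(n, k))  # M[i, j] = P(x_i = 1 | Y = j)
    w = np.full(k, 1.0 / k)                   # uniform initial weights
    eps = 1e-12
    for _ in range(iters):
        # E-step: responsibilities via Bayes' theorem, in the log domain.
        logp = (np.log(w + eps)
                + X @ np.log(M + eps)
                + (1 - X) @ np.log(1 - M + eps))  # shape (N, k)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: re-estimate (M, w) from the soft assignments.
        Nk = R.sum(axis=0)
        w = Nk / N
        M = (X.T @ R) / (Nk + eps)
    return M, w
```

The deck's step 2 ("cluster the samples") corresponds to the E-step, and step 3 to the M-step; hard assignments would give the classification-EM variant instead.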

SLIDE 20

Alternative Approach: Tensor Decomposition

A general approach, outlined in [Anandkumar et al., 2014].

SLIDE 21

Alternative Approach: Tensor Decomposition

A general approach, outlined in [Anandkumar et al., 2014].

1. Estimate (recall: M = [µ_1 | ... | µ_k], µ_i = E[X | Y = i] ∈ R^n):

   M1 := M ω ∈ R^n
   M2 := M diag(ω) M⊤ ∈ R^{n×n}
   M3 := Σ_{i=1}^k ω_i µ_i ⊗ µ_i ⊗ µ_i ∈ R^{n×n×n}

SLIDE 22

Alternative Approach: Tensor Decomposition

A general approach, outlined in [Anandkumar et al., 2014].

1. Estimate (recall: M = [µ_1 | ... | µ_k], µ_i = E[X | Y = i] ∈ R^n):

   M1 := M ω ∈ R^n
   M2 := M diag(ω) M⊤ ∈ R^{n×n}
   M3 := Σ_{i=1}^k ω_i µ_i ⊗ µ_i ⊗ µ_i ∈ R^{n×n×n}

2. Retrieve (M, ω) with a tensor decomposition algorithm A:

   A(M1, M2, M3) → (M, ω)

SLIDE 23

Alternative Approach: Tensor Decomposition

A general approach, outlined in [Anandkumar et al., 2014].

1. Estimate (recall: M = [µ_1 | ... | µ_k], µ_i = E[X | Y = i] ∈ R^n):

   M1 := M ω ∈ R^n
   M2 := M diag(ω) M⊤ ∈ R^{n×n}
   M3 := Σ_{i=1}^k ω_i µ_i ⊗ µ_i ⊗ µ_i ∈ R^{n×n×n}

2. Retrieve (M, ω) with a tensor decomposition algorithm A:

   A(M1, M2, M3) → (M, ω)

Step 1 depends on the specific properties of the mixture.
Step 2 is general (but needs assumptions on M).
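The three moment tensors can be built directly from (M, ω); a sketch of the definitions above, where the einsum spells out the sum over the k components:

```python
import numpy as np

def exact_moments(M, w):
    """Build M1, M2, M3 from mixture parameters M (n x k) and w (k,)."""
    M1 = M @ w                                       # shape (n,)
    M2 = M @ np.diag(w) @ M.T                        # shape (n, n)
    # M3[a, b, c] = sum_j w_j * M[a, j] * M[b, j] * M[c, j]
    M3 = np.einsum('j,aj,bj,cj->abc', w, M, M, M)    # shape (n, n, n)
    return M1, M2, M3
```

A useful sanity check is the slice identity used later in the deck: M3[r] = M diag(ω ∘ µ_{r,·}) M⊤.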

SLIDE 24

Example: Mixture of Independent Gaussians

Dataset X ∈ R^{N×n} with i.i.d. rows X^(i) = (x_1^(i), ..., x_n^(i)).

Model settings:
- x_h^(i) and x_l^(i) are conditionally independent for all h ≠ l.
- x_h^(i) conditioned on Y is Gaussian with known stdev σ: P(x_h | Y = i) ~ N(µ_{h,i}, σ).
SLIDE 25

Example: Mixture of Independent Gaussians

Dataset X ∈ R^{N×n} with i.i.d. rows X^(i) = (x_1^(i), ..., x_n^(i)).

Model settings:
- x_h^(i) and x_l^(i) are conditionally independent for all h ≠ l.
- x_h^(i) conditioned on Y is Gaussian with known stdev σ: P(x_h | Y = i) ~ N(µ_{h,i}, σ).

Theorem ([Hsu et al. 2013])

Define the following three quantities:

M̃1 = (1/N) Σ_{i=1}^N X^(i)
M̃2 = X⊤X / N − σ² I_n
M̃3 = (1/N) Σ_{i=1}^N X^(i) ⊗ X^(i) ⊗ X^(i) − σ² Σ_{i=1}^n ( M̃1 ⊗ e_i ⊗ e_i + e_i ⊗ M̃1 ⊗ e_i + e_i ⊗ e_i ⊗ M̃1 )

Then lim_{N→∞} M̃_i = M_i for all i ∈ {1, 2, 3}.
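A sketch of the three estimators in the theorem, assuming a data matrix X with i.i.d. rows and known σ (the function name is illustrative):

```python
import numpy as np

def gaussian_moment_estimates(X, sigma):
    """Empirical M1, M2, M3 with the sigma^2 corrections from the theorem."""
    N, n = X.shape
    M1 = X.mean(axis=0)                              # (1/N) sum_i X^(i)
    M2 = X.T @ X / N - sigma**2 * np.eye(n)          # second moment, corrected
    # Raw third moment: (1/N) sum_i X^(i) (x) X^(i) (x) X^(i)
    M3 = np.einsum('ia,ib,ic->abc', X, X, X) / N
    # Subtract the sigma^2 correction, one basis vector e_i at a time.
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        M3 -= sigma**2 * (np.einsum('a,b,c->abc', M1, e, e)
                          + np.einsum('a,b,c->abc', e, M1, e)
                          + np.einsum('a,b,c->abc', e, e, M1))
    return M1, M2, M3
```

The dense einsum is O(N n³) and is only meant to mirror the formulas; for real n these tensors are handled implicitly.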

SLIDE 26

Other estimation procedures

Similar (but more technical) procedures exist for many mixture models:
- Mixture of multinomial distributions (Single Topic Model) [Ruffini, Casanellas, Gavaldà (2017)]
- Naive Bayes models [Anandkumar et al. 2012]

The estimation procedure can be generalized to other latent variable models (Hidden Markov Models, Latent Dirichlet Allocation, ...).

Given the estimates M̃1, M̃2 and M̃3, we feed an algorithm A to recover the estimates (M̃, ω̃).

SLIDE 27

A Tensor Decomposition Algorithm: SVTD

An algorithm A: A(M1, M2, M3, k) → (M, ω)

Assumptions:
- The centers are linearly independent (M has rank k ≤ n).
- At least one feature i has distinct conditional expectations: E(x_i | Y = j) ≠ E(x_i | Y = h) for all j ≠ h.

SLIDE 28

A Tensor Decomposition Algorithm: SVTD

Observations (recall: Ω = diag(ω)):

1. M2 = M Ω M⊤ by definition.

SLIDE 29

A Tensor Decomposition Algorithm: SVTD

Observations (recall: Ω = diag(ω)):

1. M2 = M Ω M⊤ by definition.

2. Also M2 = U_k S_k U_k⊤ = E_k E_k⊤ via an SVD (E_k := U_k S_k^{1/2}). Then

   M Ω^{1/2} = E_k O,   for some O with O O⊤ = I_k.

SLIDE 30

A Tensor Decomposition Algorithm: SVTD

Observations (recall: Ω = diag(ω)):

1. M2 = M Ω M⊤ by definition.

2. Also M2 = U_k S_k U_k⊤ = E_k E_k⊤ via an SVD (E_k := U_k S_k^{1/2}). Then

   M Ω^{1/2} = E_k O,   for some O with O O⊤ = I_k.

3. M3 := Σ_{i=1}^k ω_i µ_i ⊗ µ_i ⊗ µ_i, so its r-th slice is

   M3,r = M Ω^{1/2} diag(µ_{r,1}, ..., µ_{r,k}) Ω^{1/2} M⊤.

SLIDE 31

A Tensor Decomposition Algorithm: SVTD

Observations (recall: Ω = diag(ω)):

1. M2 = M Ω M⊤ by definition.

2. Also M2 = U_k S_k U_k⊤ = E_k E_k⊤ via an SVD (E_k := U_k S_k^{1/2}). Then

   M Ω^{1/2} = E_k O,   for some O with O O⊤ = I_k.

3. M3 := Σ_{i=1}^k ω_i µ_i ⊗ µ_i ⊗ µ_i, so its r-th slice is

   M3,r = M Ω^{1/2} diag(µ_{r,1}, ..., µ_{r,k}) Ω^{1/2} M⊤.

4. For each r:

   H_r = E_k† M3,r (E_k⊤)† = O diag(µ_{r,1}, ..., µ_{r,k}) O⊤.

SLIDE 32

A Tensor Decomposition Algorithm: SVTD

Observations (recall: Ω = diag(ω)):

1. M2 = M Ω M⊤ by definition.

2. Also M2 = U_k S_k U_k⊤ = E_k E_k⊤ via an SVD (E_k := U_k S_k^{1/2}). Then

   M Ω^{1/2} = E_k O,   for some O with O O⊤ = I_k.

3. M3 := Σ_{i=1}^k ω_i µ_i ⊗ µ_i ⊗ µ_i, so its r-th slice is

   M3,r = M Ω^{1/2} diag(µ_{r,1}, ..., µ_{r,k}) Ω^{1/2} M⊤.

4. For each r:

   H_r = E_k† M3,r (E_k⊤)† = O diag(µ_{r,1}, ..., µ_{r,k}) O⊤.

5. The singular values of H_r are the r-th row of M.

SLIDE 33

A Tensor Decomposition Algorithm: SVTD

Algorithm

1. Take as input (M1, M2, M3, k).
2. Decompose M2 as M2 = E_k E_k⊤.
3. Calculate H_r = E_k† M3,r (E_k⊤)†.
4. Recover the r-th row of M as the singular values of H_r.
5. Recover ω by solving M ω = M1.

Reference: A New Spectral Method for Latent Variable Models [Ruffini, Casanellas, Gavaldà (2017)]
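A simplified sketch of the algorithm on exact moments. One deviation from the listed steps, to keep the rows consistently paired: instead of taking the singular values of each H_r independently, it recovers the shared orthogonal basis O from the first slice (which has distinct eigenvalues under the assumption on the distinguishing feature) and reads row r of M off the diagonal of O⊤ H_r O; the published SVTD handles this pairing more carefully.

```python
import numpy as np

def svtd(M1, M2, M3, k):
    """Simplified SVTD sketch: recover (M, w) from exact moment tensors.

    Assumes M has rank k and that feature 0 has distinct conditional
    expectations, so the slice H_0 has k distinct eigenvalues.
    """
    n = M2.shape[0]
    # Step 2: decompose M2 = Ek Ek^T via a truncated SVD (Ek = Uk Sk^{1/2}).
    U, s, _ = np.linalg.svd(M2)
    Ek = U[:, :k] * np.sqrt(s[:k])
    Ek_pinv = np.linalg.pinv(Ek)
    # Recover the shared orthogonal basis O from the first slice H_0.
    H0 = Ek_pinv @ M3[0] @ Ek_pinv.T
    _, O = np.linalg.eigh(H0)
    # Steps 3-4: row r of M is the diagonal of O^T H_r O.
    M = np.empty((n, k))
    for r in range(n):
        Hr = Ek_pinv @ M3[r] @ Ek_pinv.T
        M[r] = np.diag(O.T @ Hr @ O)
    # Step 5: recover w by solving M w = M1 in least squares.
    w, *_ = np.linalg.lstsq(M, M1, rcond=None)
    return M, w
```

On exact moments this recovers (M, ω) up to a consistent permutation of the k components, which is all any mixture-learning algorithm can promise.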

SLIDE 34

SVTD - Perturbation Theorem

SVTD(M1, M2, M3, k) → (M, ω)

Small perturbations of the input → small perturbations of the output:

SVTD(M̃1, M̃2, M̃3, k) → (M̃, ω̃)

SLIDE 35

SVTD - Perturbation Theorem

SVTD(M1, M2, M3, k) → (M, ω)

Small perturbations of the input → small perturbations of the output:

SVTD(M̃1, M̃2, M̃3, k) → (M̃, ω̃)

Theorem ([Ruffini, Casanellas, Gavaldà (2017)])

If ||M̃2 − M2||_F < ε and ||M̃3 − M3||_F < ε, then, for sufficiently small ε,

||M − M̃||_F ≤ C1 ε + O(C2 ε²)

for some C1 and C2 depending on the model parameters.

SLIDE 36

Putting it all together

1. Take a dataset X with rows sampled from a given mixture model.
2. Estimate the moment tensors M̃1, M̃2, M̃3.
3. Retrieve the estimates (M̃, ω̃).
4. Optionally, use EM to improve the obtained (M̃, ω̃).
5. Use Bayes' theorem to cluster the rows of X.

SLIDE 37

Case Study: mixture of independent Bernoulli

In some cases we don't know how to directly estimate M̃1, M̃2, M̃3.
X: a dataset with rows ~ a mixture of independent Bernoulli.
Open problem: how to efficiently estimate M̃2, M̃3?

X → ? → M̃2, M̃3

(The issue is the diagonal entries: for binary data x_i² = x_i, so the empirical diagonal of X⊤X/N estimates E[x_i] rather than the needed Σ_h ω_h µ_{i,h}².)

SLIDE 38

A work-around: the three views trick (Idea)

Split the observables into three views:

X^(i) = ( x_1^(i), ..., x_{da}^(i) | x_{da+1}^(i), ..., x_{da+db}^(i) | x_{da+db+1}^(i), ..., x_{da+db+dc}^(i) )
      = ( X_a^(i) | X_b^(i) | X_c^(i) )

and stack M accordingly: M = [Ma; Mb; Mc].

SLIDE 39

A work-around: the three views trick (Idea)

Split the observables into three views:

X^(i) = ( X_a^(i) | X_b^(i) | X_c^(i) ),   M = [Ma; Mb; Mc]

Estimate subtensors of M̃2, M̃3 [Anandkumar et al., 2014]. For j = a, b, c:

M_{j,2} := Mj diag(ω) Mj⊤
M_{j,3} := Σ_{i=1}^k ω_i µ_i^(j) ⊗ µ_i^(j) ⊗ µ_i^(j)

Decompose them to get each Mj; recover M by concatenation.

SLIDE 40

Experiments - Mixture of independent Bernoulli

Experiment: generate a synthetic dataset and cluster its rows with SVTD, k-means, spectral clustering, and PCA clustering. Accuracy metric: Adjusted Rand Index (ARI). It is 1 if the clustering is perfect and close to 0 if it is bad (like random labeling).
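The Adjusted Rand Index can be computed from the contingency table of the two labelings; a self-contained sketch of the standard formula (scikit-learn's `adjusted_rand_score` computes the same quantity):

```python
from math import comb

import numpy as np

def adjusted_rand_index(labels_a, labels_b):
    """ARI between two labelings: 1 for identical partitions, ~0 for random."""
    a_vals, a_inv = np.unique(labels_a, return_inverse=True)
    b_vals, b_inv = np.unique(labels_b, return_inverse=True)
    # Contingency table: C[i, j] = number of samples in cluster i of A and j of B.
    C = np.zeros((len(a_vals), len(b_vals)), dtype=int)
    for i, j in zip(a_inv, b_inv):
        C[i, j] += 1
    # Pair counts within cells, rows, and columns.
    sum_comb = sum(comb(int(nij), 2) for nij in C.ravel())
    sum_a = sum(comb(int(ni), 2) for ni in C.sum(axis=1))
    sum_b = sum(comb(int(nj), 2) for nj in C.sum(axis=0))
    n = len(labels_a)
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_comb - expected) / (max_index - expected)
</n>```

The "adjusted" part subtracts the expected agreement of a random labeling, which is why a random clustering scores near 0 rather than at some positive baseline.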

SLIDE 41

Clustering Patients with Tensor Decomposition

[Ruffini, Gavaldà, Limón (2017)]

SLIDE 42

The data

Source: Servei Català de la Salut.
Dataset: diagnostics of patients admitted to hospitals in 2016.
Data format: each row is a visit, with up to 10 diagnostics in ICD-9 format.

Two subset datasets:

1. Patients having diagnostic 428 (Heart Failure) in the ICD-9 code.
2. Patients with serious diseases to be treated in top hospitals.

Objectives:

1. Create meaningful clusters of patients in each dataset.
2. Visualize the key characteristics of each cluster.

SLIDE 43

Modeling strategy

Convert each dataset into a patient-disease matrix:

            Disease 1   Disease 2   Disease 3   ...
Patient 1       1           1          ...
Patient 2       1           1          ...
Patient 3       1           1          ...
...            ...         ...         ...

SLIDE 44

Modeling strategy

Convert each dataset into a patient-disease matrix (as in the previous slide).

Data are modeled as a mixture of independent Bernoulli variables:
- Latent state → medical status of a patient.
- Observed diseases depend on the patient's status.
- Once in a status, diagnostics are independent.

SLIDE 45

Modeling strategy

We have a mixture of independent Bernoulli:

1. Recover (M̃, ω̃).
2. Improve the estimated (M̃, ω̃) with EM.
3. Cluster the rows of X into k clusters.
4. Plot the results.

The number of clusters k is manually set as an external parameter (from experts' considerations).

SLIDE 46

Details of the Datasets

Heart Failure dataset:
- N = 23082 individual patient records.
- All patients in the dataset have Heart Failure as a diagnostic.
- k = 5 clusters.

"Tertiary" dataset:
- N = 16311 individual patient records.
- k = 6 clusters.

In both cases n = 696 registered diagnostics (columns of the datasets).

SLIDE 47

Visual patterns: Heat-maps

Heat-maps of the two datasets:
- Black dots: diagnostics.
- Background color: clusters.

SLIDE 48

Relevance

Heat-maps → patterns in the clusters. What is inside the patterns?
Find the relevant diagnostics for each cluster.

Relevance:

relevance(i, j) = λ log(µ_{i,j}) + (1 − λ) log( µ_{i,j} / Σ_{h=1}^k µ_{i,h} ω_h )

where µ_{i,j} = P(x_i = 1 | Y = j).

High relevance: more frequent in a cluster than in the full dataset.
Low relevance: uniformly low or uniformly high frequency everywhere.
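The score can be computed for all (i, j) pairs at once; a sketch assuming M holds µ_{i,j} with one column per cluster, as in the rest of the deck:

```python
import numpy as np

def relevance(M, w, lam=0.5):
    """relevance(i, j) = lam*log(mu_ij) + (1-lam)*log(mu_ij / sum_h mu_ih w_h)."""
    eps = 1e-12  # guard against log(0) for diagnostics absent from a cluster
    base = M @ w  # overall frequency of each diagnostic in the full dataset
    return (lam * np.log(M + eps)
            + (1 - lam) * np.log((M + eps) / (base[:, None] + eps)))
```

The second term is a log-lift against the dataset-wide frequency, so a diagnostic that is common in one cluster but rare overall scores higher than one that is common everywhere.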

SLIDE 49

Heart Failure Dataset - Content of the clusters

Cluster ID:   1      2      3      4      5
Size:         7290   2915   4408   2936   5533

SLIDE 50

Heart Failure Dataset – disease-frequency chart

SLIDE 51

“Tertiary” Dataset - Content of the clusters

Cluster ID:   1      2      3      4      5      6
Size:         4892   3982   1043   3133   819    2442

SLIDE 52

“Tertiary” Dataset – disease-frequency chart

SLIDE 53

Tensor Decomposition for Healthcare Analytics

Matteo Ruffini

Laboratory for Relational Algorithmics, Complexity and Learning
matteo.ruffini@estudiant.upc.edu

November 5, 2017
