SLIDE 1

The impact of high dimension on clustering

Gilles Celeux

Inria Saclay-Île-de-France, Université Paris-Sud

SLIDE 2

Cluster Analysis

Cluster analysis aims to discover homogeneous clusters in a data set.

Data sets

◮ (Dis)Similarity table: matrix D with dimension (n, n)
◮ Objects-variables table: matrix X with dimension (n, d), d variables measured on n objects
  ◮ quantitative variables: n points x1, . . . , xn in Rd
  ◮ qualitative variables

Large dimensions

◮ We are concerned with objects-variables tables with large n and d.
◮ We restrict attention to partitions.

SLIDE 3

Outline of the talk

First, three families of methods are discussed

◮ Standard geometrical k-means-like methods (data analysis community)
◮ Model-based clustering methods (statistics community)
◮ Spectral clustering (machine learning community)

Second, the Latent Block Model, a specific model for summarizing large tables, will be considered.

SLIDE 4

Partitions: k-means type algorithm

◮ Within-cluster inertia criterion:

W(C, L) = ∑k ∑i∈Ck ||xi − λk||²

where L = (λ1, . . . , λg) with λk ∈ Rd (in the standard situation).

◮ Algorithm: alternating minimisation of W.
◮ It leads to a stationary sequence of partitions decreasing W(C, L).
◮ L can take many forms (points, axes, points and distances, densities, . . . ), leading to many algorithms.
◮ For the standard k-means algorithm, λk is the center of cluster Ck.
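The alternating minimisation above can be sketched in a few lines of numpy. This is a minimal illustration, not the slides' implementation: the deterministic farthest-point initialisation is a choice made here, and empty clusters are not handled.

```python
import numpy as np

def kmeans(X, g, n_iter=100):
    """Alternating minimisation of W(C, L) = sum_k sum_{i in C_k} ||x_i - lambda_k||^2."""
    # Deterministic farthest-point initialisation of the g centers lambda_k.
    centers = [X[0]]
    for _ in range(g - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d2.argmax())])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        # Assignment step: each observation joins its nearest center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Update step: lambda_k becomes the mean of cluster C_k
        # (no empty-cluster handling in this sketch).
        new_centers = np.array([X[labels == k].mean(axis=0) for k in range(g)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Each pass cannot increase W(C, L), which is why the sequence of partitions becomes stationary after finitely many iterations.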

SLIDE 5

Features of the k-means algorithm

k-means is simple

◮ The k-means algorithm converges (rapidly) in a finite number of iterations.
◮ The cluster summary is parsimonious.
◮ It is the most popular clustering method.

k-means is not versatile

◮ The standard k-means algorithm has a tendency to provide spherical clusters with equal sizes and volumes.
◮ There are many locally optimal solutions.
◮ Variable selection procedures for k-means are unrealistic and poor: a variable has to be either relevant or independent of the clustering.

SLIDE 6

Model-based clustering

Finite Mixture Model

The general form of a mixture model with g components is

f(x) = ∑k πk fk(x)

◮ πk: mixing proportions
◮ fk(.): densities of the components

Each mixture component is associated with a cluster.

◮ The parametrisation of the cluster densities depends on the nature of the data. Typically:
  ◮ quantitative data: multivariate Gaussian mixture,
  ◮ qualitative data: multinomial latent class model.

SLIDE 7

Quantitative data: multivariate Gaussian Mixture (MGM)

Multidimensional observations x = (x1, . . . , xn) in Rd are assumed to be a sample from a probability distribution with density

f(xi | θ) = ∑k πk φ(xi | µk, Σk)

where

◮ πk: mixing proportions
◮ φ(. | µk, Σk): Gaussian density with mean µk and variance matrix Σk.

This is the most popular model for clustering quantitative data.
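Evaluating the mixture density f(xi | θ) = ∑k πk φ(xi | µk, Σk) is direct; a small numpy sketch (the function names are illustrative, not from the slides):

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    """Multivariate Gaussian density phi(x | mu, Sigma)."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)          # (x-mu)' Sigma^-1 (x-mu)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

def mixture_density(x, pis, mus, Sigmas):
    """f(x | theta) = sum_k pi_k phi(x | mu_k, Sigma_k)."""
    return sum(pi * gaussian_density(x, mu, S)
               for pi, mu, S in zip(pis, mus, Sigmas))
```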

SLIDE 8

Qualitative Data: latent class model (LCM)

◮ Observations to be classified are described with d qualitative variables.
◮ Each variable j has mj response levels.

Data x = (x1, . . . , xn) are defined by

xi = (xi^jh; j = 1, . . . , d; h = 1, . . . , mj)

with xi^jh = 1 if i has response level h for variable j, and xi^jh = 0 otherwise.

SLIDE 9

The standard latent class model (LCM)

Data are supposed to arise from a mixture of g multivariate multinomial distributions with pdf

f(xi; θ) = ∑k πk mk(xi; αk) = ∑k πk ∏j,h (αk^jh)^(xi^jh)

where θ = (π1, . . . , πg, α1^11, . . . , αg^dmd) is the parameter of the latent class model to be estimated:

◮ αk^jh: probability that variable j has level h in cluster k,
◮ πk: mixing proportions.

The latent class model assumes that the variables are conditionally independent given the latent clusters.
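Under conditional independence, the density is a sum over clusters of products of per-variable probabilities. A minimal sketch, assuming each observation is encoded by the integer level code of each variable (that encoding, and the names below, are choices of this sketch):

```python
def lcm_density(xi, pis, alpha):
    """Latent class density f(x_i) = sum_k pi_k prod_j alpha_k^{j, h_j}.

    xi[j]          : observed level h of variable j (integer code)
    alpha[k][j][h] : probability of level h for variable j in cluster k
    """
    total = 0.0
    for k, pi_k in enumerate(pis):
        prod = pi_k
        for j, h in enumerate(xi):
            # Conditional independence: multiply per-variable probabilities.
            prod *= alpha[k][j][h]
        total += prod
    return total
```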

SLIDE 10

The advantages of model-based clustering

Model-based clustering provides solid ground for answering cluster analysis problems.

◮ Many efficient algorithms exist to estimate the model parameters.
◮ Choosing the number of clusters can be achieved with relevant penalized information criteria (BIC, ICL).
◮ Those criteria are also helpful to choose a relevant model with a fixed number of clusters.
◮ Defining the possible roles of the variables can be achieved properly (relevant, redundant and independent variables).
◮ Efficient software: http://www.mixmod.org
◮ Specific situations can be dealt with efficiently. Examples:
  ◮ taking missing data into account
  ◮ robust analysis
  ◮ hidden Markov models for dependent data

SLIDE 11

EM algorithm (maximum likelihood estimation)

Algorithm

◮ Initial step: initial solution θ0.
◮ E step: compute the conditional probabilities tik that observation i arises from the kth component, for the current value of the mixture parameters:

tik(m) = πk(m) φk(xi; αk(m)) / ∑ℓ πℓ(m) φℓ(xi; αℓ(m))

◮ M step: update the mixture parameter estimates by maximising the expected value of the completed likelihood. This amounts to weighting observation i for group k with the conditional probability tik:

  ◮ πk(m+1) = (1/n) ∑i tik(m)
  ◮ αk(m+1): solve the likelihood equations.
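The two steps can be written compactly for a one-dimensional Gaussian mixture. A minimal numpy sketch; the quantile-based initialisation and the fixed iteration count are choices made here, not part of the slides:

```python
import numpy as np

def em_gmm_1d(x, g, n_iter=200):
    """EM for a one-dimensional Gaussian mixture (minimal sketch)."""
    n = len(x)
    pis = np.full(g, 1.0 / g)
    mus = np.quantile(x, (np.arange(g) + 0.5) / g)   # spread-out initial means
    sig2 = np.full(g, x.var())
    for _ in range(n_iter):
        # E step: t_ik proportional to pi_k phi(x_i; mu_k, sigma_k^2).
        dens = np.exp(-0.5 * (x[:, None] - mus) ** 2 / sig2) / np.sqrt(2 * np.pi * sig2)
        t = pis * dens
        t /= t.sum(axis=1, keepdims=True)
        # M step: weighted updates, observation i weighted by t_ik.
        nk = t.sum(axis=0)
        pis = nk / n
        mus = (t * x[:, None]).sum(axis=0) / nk
        sig2 = (t * (x[:, None] - mus) ** 2).sum(axis=0) / nk
    return pis, mus, sig2, t
```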

SLIDE 12

Features of EM

◮ EM increases the likelihood at each iteration.
◮ Under regularity conditions, it converges towards the unique consistent solution of the likelihood equations.
◮ Easy to program.
◮ Good practical behaviour.
◮ Slow convergence in some situations (especially for mixtures with overlapping components).
◮ Many local maxima, or even saddle points.
◮ Quite popular: see the McLachlan and Krishnan book (1997).

SLIDE 13

Classification EM

The CEM algorithm, the clustering version of EM, estimates both the mixture parameters and the labels by maximising the completed likelihood

L(θ; x, z) = ∑k,i zik log πk f(xi; αk)

Algorithm

◮ E step: compute the conditional probabilities tik that observation i arises from the kth component for the current value of the mixture parameters.
◮ C step: assign each observation i to the component maximising the conditional probability tik (MAP principle).
◮ M step: update the mixture parameter estimates by maximising the completed likelihood.

SLIDE 14

Features of CEM

◮ CEM aims at maximising the completed likelihood, where the component label of each sample point is included in the data set.
◮ Contrary to EM, CEM converges in a finite number of iterations.
◮ CEM provides biased estimates of the mixture parameters.
◮ CEM is a k-means-like algorithm.

SLIDE 15

Model-based clustering via EM

A relevant clustering can be deduced from EM by:

◮ estimating the mixture parameters with EM;
◮ computing tik, the conditional probability that observation xi comes from cluster k, using the estimated parameters;
◮ assigning each observation to the cluster maximising tik (MAP: maximum a posteriori).

This strategy could be preferred since CEM provides biased estimates of the mixture parameters. But CEM does the job for well-separated mixture components.

SLIDE 16

Penalized Likelihood Selection Criteria

The BIC criterion

BIC(m) = log p(x | m, θ̂m) − (νm/2) log(n)

where νm is the number of free parameters of model m. BIC works well for choosing a model in a density estimation context.

The ICL criterion

ICL(m) = BIC(m) − Ent(m), with Ent(m) = − ∑k,i tik log tik

ICL focuses on the clustering purpose and favours mixtures with well-separated components.
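Both criteria are cheap to compute from a fitted mixture: BIC needs the maximised loglikelihood, νm and n; ICL additionally needs the conditional probabilities tik. A sketch with illustrative helper names:

```python
import numpy as np

def bic(loglik, nu, n):
    """BIC(m) = log p(x | m, theta_hat) - (nu_m / 2) log n."""
    return loglik - 0.5 * nu * np.log(n)

def icl(loglik, nu, n, t):
    """ICL(m) = BIC(m) - Ent(m), with Ent(m) = -sum_{i,k} t_ik log t_ik."""
    t = np.asarray(t)
    # Clip so that t_ik = 0 contributes 0 rather than nan.
    ent = -np.sum(t * np.log(np.clip(t, 1e-300, None)))
    return bic(loglik, nu, n) - ent
```

With hard (0/1) probabilities tik the entropy vanishes and ICL equals BIC; the fuzzier the classification, the more ICL penalises the model.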

SLIDE 17

Drawbacks of Model-based Cluster Analysis

Model-based clustering is not tailored to deal with large data sets.

◮ MBC makes use of versatile models which are too complex for large dimensions.
◮ Algorithmic difficulties increase dramatically with the dimension.
◮ Since all models are wrong, penalised likelihood criteria such as BIC become inefficient for large sample sizes.
◮ Choosing a model cannot be independent of the modelling purpose.

Solutions exist to attenuate these problems:

◮ restrict attention to parsimonious models
◮ prefer CEM to the EM algorithm
◮ prefer ICL to BIC to select a model

SLIDE 18

An antinomic approach: Spectral Clustering

Spectral clustering is based on an undirected similarity graph G = (V, E) with similarities (sij) such that

◮ the vertices V are the objects;
◮ there is an edge between two objects i and j if sij > 0;
◮ a weighted adjacency matrix W = (wij) is associated with the (sij).

We define

◮ the degree of vertex i as di = ∑j wij,
◮ D as the diagonal matrix (di, i = 1, . . . , n),
◮ for A ⊂ V, |A| = card A and vol(A) = ∑i∈A di.

The connected components of G define a partition of V.

SLIDE 19

Which similarities ?

◮ all points whose pairwise distances are smaller than a threshold ε are connected, with wij = 1;
◮ the connected points are the k nearest neighbors (symmetrised), with wij = sij;
◮ the connected points are mutual k nearest neighbors, with wij = sij;
◮ a Gaussian similarity

sij = exp(−||xi − xj||² / (2σ²))

is chosen.

The tuning parameters ε, k or σ are sensitive. . . as is the choice of g.
SLIDE 20

Graph Laplacians

◮ Unnormalised graph Laplacian:

L = D − W

◮ Symmetrised graph Laplacian:

Ls = D^(−1/2) L D^(−1/2) = I − D^(−1/2) W D^(−1/2)

◮ Random walk graph Laplacian:

Lr = D^(−1) L = I − D^(−1) W
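The three Laplacians are one line each from the weighted adjacency matrix; a small numpy sketch, assuming all degrees di are positive:

```python
import numpy as np

def graph_laplacians(W):
    """Unnormalised, symmetrised and random walk Laplacians of W."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)                  # degrees d_i = sum_j w_ij
    D = np.diag(d)
    L = D - W                          # L = D - W
    Dm12 = np.diag(1.0 / np.sqrt(d))
    Ls = Dm12 @ L @ Dm12               # Ls = I - D^{-1/2} W D^{-1/2}
    Lr = np.diag(1.0 / d) @ L          # Lr = I - D^{-1} W
    return L, Ls, Lr
```

Both L and Lr annihilate the constant vector, which is why indicator vectors of connected components show up in their null spaces.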

SLIDE 21

Spectral clustering algorithm

Input: the similarity matrix S and the number of clusters g.

◮ Construct the similarity graph and the related matrix W.
◮ Compute the chosen Laplacian matrix L.
◮ Compute the first g eigenvectors of L: Lu = λu.
◮ Let U be the matrix of the g first eigenvectors.
◮ For i = 1, . . . , n, let yi ∈ Rg be the vector corresponding to row i of U.
◮ Cluster the yi's with k-means into g clusters C1, . . . , Cg.

Output: the g clusters C1, . . . , Cg with Cℓ = {i / yi ∈ Cℓ}.

Spectral clustering is attractive for large tables because efficient programs are available to find the eigenvectors of large sparse matrices.
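The steps above can be sketched end to end with the symmetrised Laplacian, taking W = S directly and using a small deterministic k-means. This sketch uses a dense eigendecomposition; in practice sparse eigensolvers are what make the method scale.

```python
import numpy as np

def spectral_clustering(S, g, n_iter=100):
    """Minimal spectral clustering sketch (Ls = I - D^{-1/2} W D^{-1/2})."""
    W = np.asarray(S, dtype=float)
    d = W.sum(axis=1)                              # degrees (assumed positive)
    Dm12 = np.diag(1.0 / np.sqrt(d))
    Ls = np.eye(len(W)) - Dm12 @ W @ Dm12
    # First g eigenvectors (smallest eigenvalues); rows are the y_i in R^g.
    _, vecs = np.linalg.eigh(Ls)
    U = vecs[:, :g]
    # k-means on the rows y_i, with deterministic farthest-point init
    # (no empty-cluster handling in this sketch).
    centers = [U[0]]
    for _ in range(g - 1):
        d2 = np.min([((U - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(U[int(d2.argmax())])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = ((U[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([U[labels == k].mean(axis=0) for k in range(g)])
    return labels
```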

SLIDE 22

An example: the data

[Figure: scatter plot of the data]

SLIDE 23

Representation before k-means with σ = 300

[figure]

SLIDE 24

Representation before k-means with σ = 3

[figure]

SLIDE 25

Representation before k-means with σ = 3000

[figure]

SLIDE 26

Block clustering setting

Block clustering of data

◮ Let x = {(xij); i ∈ I, j ∈ J} be a matrix of dimension n × d, where I is a set of n objects and J a set of d variables.
◮ Block clustering of x aims to find a clustering structure on I × J.

A dramatically parsimonious data representation

◮ Block clustering summarizes a data set of nd numbers with gm numbers, with g << n and m << d.
◮ This representation is appealing for dealing with huge data sets arising in recommendation systems, genomic data analysis, text mining, . . .

SLIDE 27

An illustration with binary data

[Figure: initial binary data matrix, the same matrix with rows and columns reorganized, and the reorganized and averaged summary]

SLIDE 28

Model-based clustering framework

◮ Assume that the data arise from a finite mixture of parametrised densities.
◮ A cluster is made of observations arising from the same density.
◮ In a block clustering model, clusters are defined on blocks of I × J.
◮ In a block clustering model, the data of a block are modelled by the same unidimensional density.

SLIDE 29

Latent block mixture model

The density of the observed data is supposed to be

f(x | g, m, φ, α) = ∑u∈U p(u | g, m, φ) f(x | g, m, u, α)

where u is the indicator block vector. It is assumed that uij,kℓ = zik wjℓ, z (resp. w) being the row (resp. column) cluster indicator vector. Assuming that the n × d variables Yij are conditionally independent given z and w leads to the model

f(x | g, m, π, ρ, α) = ∑(z,w)∈Z×W ∏i,k (πk)^zik ∏j,ℓ (ρℓ)^wjℓ ∏i,j,k,ℓ φ(yij | g, m, αkℓ)^(zik wjℓ)

SLIDE 30

An example: Bernoulli latent block model

Mixing proportions

For fixed g, the mixing proportions for the rows are π1, . . . , πg. For fixed m, the mixing proportions for the columns are ρ1, . . . , ρm.

The Bernoulli density per block

φ(yij; αkℓ) = (αkℓ)^yij (1 − αkℓ)^(1−yij), where αkℓ ∈ (0, 1).

The mixture density is

f(x | g, m, π, ρ, α) = ∑(z,w)∈Z×W ∏i,k (πk)^zik ∏j,ℓ (ρℓ)^wjℓ ∏i,j,k,ℓ [(αkℓ)^yij (1 − αkℓ)^(1−yij)]^(zik wjℓ)

The g + m + gm parameters to be estimated are the πs, the ρs and the αs.
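A quick way to build intuition for this model is to simulate from it: draw row labels from π, column labels from ρ, then Bernoulli entries with the block parameters αkℓ. A sketch (the function name is illustrative):

```python
import numpy as np

def simulate_lbm(n, d, pis, rhos, alpha, seed=0):
    """Simulate a binary n x d table from the Bernoulli latent block model:
    z_i ~ pi, w_j ~ rho, then x_ij ~ Bernoulli(alpha[z_i, w_j])."""
    rng = np.random.default_rng(seed)
    z = rng.choice(len(pis), size=n, p=pis)       # row labels
    w = rng.choice(len(rhos), size=d, p=rhos)     # column labels
    x = rng.random((n, d)) < np.asarray(alpha)[z[:, None], w[None, :]]
    return x.astype(int), z, w
```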

SLIDE 31

Maximum likelihood estimation

The EM algorithm makes use of the conditional expectation of the complete loglikelihood

Q(θ | θ(c)) = ∑i,k sik(c) log πk + ∑j,ℓ tjℓ(c) log ρℓ + ∑i,j,k,ℓ eijkℓ(c) log φ(xij; αkℓ)

where sik(c) = P(Zik = 1 | θ(c), x), tjℓ(c) = P(Wjℓ = 1 | θ(c), x) and eijkℓ(c) = P(Zik Wjℓ = 1 | θ(c), x).

→ The eijkℓ(c) are difficult to compute. . . Approximations are needed.

SLIDE 32

Variational approximation of EM (VEM)

It is assumed that eijkℓ(c) = sik(c) tjℓ(c).

Thus the VEM algorithm is (Govaert and Nadif, 2008):

1. E step:
  1.1 compute the sik with the tjℓ(c) and θ(c) fixed;
  1.2 compute the tjℓ with the sik(c+1) and θ(c) fixed;
  → s(c+1) and t(c+1)
2. M step: update θ(c+1).
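One VEM iteration for the Bernoulli latent block model can be sketched as follows. The clipping of α away from 0 and 1 is a numerical safeguard added in this sketch, not part of the slides:

```python
import numpy as np

def vem_step(x, s, t, pis, rhos, alpha):
    """One VEM iteration under the factorisation e_ijkl = s_ik t_jl."""
    la, l1a = np.log(alpha), np.log(1.0 - alpha)
    # 1.1 E step: update the row probabilities s_ik with t and theta fixed.
    log_s = np.log(pis) + x @ t @ la.T + (1.0 - x) @ t @ l1a.T
    s = np.exp(log_s - log_s.max(axis=1, keepdims=True))
    s /= s.sum(axis=1, keepdims=True)
    # 1.2 E step: update the column probabilities t_jl with the new s fixed.
    log_t = np.log(rhos) + x.T @ s @ la + (1.0 - x).T @ s @ l1a
    t = np.exp(log_t - log_t.max(axis=1, keepdims=True))
    t /= t.sum(axis=1, keepdims=True)
    # 2. M step: update pi, rho and alpha; clip alpha to keep logs finite.
    pis = s.mean(axis=0)
    rhos = t.mean(axis=0)
    alpha = (s.T @ x @ t) / np.outer(s.sum(axis=0), t.sum(axis=0))
    alpha = np.clip(alpha, 1e-10, 1.0 - 1e-10)
    return s, t, pis, rhos, alpha
```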
SLIDE 33

Some features of VEM

◮ The optimised free energy F(qzw, θ) is a lower bound of the observed loglikelihood.
◮ The parameter maximising the free energy could be expected to be a good, if not consistent, approximation of the maximum likelihood estimator.
◮ Since VEM minimises KL(qzw || p(z, w | x; θ)) rather than KL(p(z, w | x; θ) || qzw), it is expected to be sensitive to starting values.

SLIDE 34

The SEM-Gibbs algorithm

SEM

The SEM algorithm: after the E step, an S step is introduced to simulate the missing data according to the distribution p(z, w | x; θ(c)). A difficulty for the latent block model is to simulate p(z, w | x; θ).

Gibbs sampling

The distribution p(z, w | x; θ(c)) is simulated using a Gibbs sampler. Repeat:

◮ simulate z(c+1) according to p(z | x, w(c); θ(c));
◮ simulate w(c+1) according to p(w | x, z(c+1); θ(c)).

→ The stationary distribution of the Markov chain is p(z, w | x; θ(c)).

SLIDE 35

SEM features

◮ SEM does not increase the loglikelihood at each iteration.
◮ SEM generates an irreducible Markov chain with a unique stationary distribution.
◮ The parameter estimates fluctuate around the maximum likelihood estimate.

→ A natural estimator of (θ, z, w) is the mean of (θ(c), z(c), w(c); c = B, . . . , B + C) obtained after a burn-in period.

SLIDE 36

Discussion: VEM vs. SEM

Numerical comparisons lead to the following conclusions:

◮ VEM leads rapidly to reasonable parameter estimates when its initial position is near enough to the ml estimate.
◮ VEM is quite sensitive to starting values.
◮ SEM-Gibbs is (essentially) insensitive to starting values.

→ Coupling SEM and VEM is beneficial to derive sensible ml estimates for the latent block model.

SLIDE 37

Model selection

Choosing the relevant numbers of clusters in a latent block model is of crucial importance.

Two pieces of good news

◮ The integrated completed likelihood ICL is closed form.
◮ For n and d large enough, the entropy of the couple of partitions (z, w) is close to 0. Thus BIC ≈ ICL.

SLIDE 38

Choosing the missing labels

Let θ̂ be the ml estimate of the LBM parameter derived from the SEM-VEM algorithm.

◮ The missing labels are replaced by

(ẑ, ŵ) = arg max(z,w) p(z, w | x; g, m, θ̂).

◮ An alternating optimisation algorithm is used to compute (ẑ, ŵ). Repeat until convergence:

  ◮ z(c) = arg maxz p(z | x; g, m, θ̂, w(c−1))
  ◮ w(c) = arg maxw p(w | x; g, m, θ̂, z(c)).

SLIDE 39

Discussion on the Latent Block Model

Interest of the LBM

◮ The LBM is a parsimonious but crude model.
◮ To be efficient for large tables, g and m have to be large.
◮ The point is to summarize large tables into small tables.

Difficulties with the LBM

◮ In this perspective, empty clusters are a severe issue.
◮ Bayesian regularisation through variational Bayes inference could be relevant.
◮ In a full Bayesian analysis, dealing with the label switching problem for large g and m remains challenging.