  1. Revision (Part II)
Ke Chen, COMP24111 Machine Learning

These revision slides summarise all you have learnt in Part II. Together with the non-assessed exercises, also available from the teaching page, they should help you prepare for the exam in January.

  2. Generative Models and Naïve Bayes
• Probabilistic classifiers
  – Discriminative vs. generative classifiers
  – Bayes' rule converts a generative model into a discriminative one; the MAP rule is used for decision making
• Naïve Bayes assumption
  – Input attributes are assumed conditionally independent given the class
• Naïve Bayes classification algorithm: discrete vs. continuous features
  – Training phase: estimate the conditional probability of each attribute given a class label, and the prior probability of each class label
  – Test phase: apply the MAP rule for decision making
• Relevant issues (see the sketch below)
  – Zero conditional probabilities caused by a shortage of training examples
  – Applicability to problems that violate the naïve Bayes assumption
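A minimal sketch of a discrete-feature naïve Bayes classifier may help tie these points together. It shows the training phase (estimating priors and per-attribute conditional probabilities), the MAP decision rule, and Laplace smoothing as one standard remedy for the zero-conditional-probability issue. The function names and toy data are illustrative, not from the slides.

```python
# Discrete-feature naive Bayes with Laplace smoothing and MAP decision.
from collections import Counter, defaultdict

def train_nb(X, y, alpha=1.0):
    """Estimate P(c) and P(x_i = v | c) with Laplace smoothing alpha."""
    n = len(y)
    classes = Counter(y)                       # class counts for the priors
    priors = {c: classes[c] / n for c in classes}
    cond = defaultdict(int)                    # counts of (attribute, value, class)
    values = defaultdict(set)                  # observed values per attribute
    for x, c in zip(X, y):
        for i, v in enumerate(x):
            cond[(i, v, c)] += 1
            values[i].add(v)
    def p(i, v, c):
        # Laplace smoothing keeps unseen (value, class) pairs above zero
        return (cond[(i, v, c)] + alpha) / (classes[c] + alpha * len(values[i]))
    return priors, p

def predict_nb(x, priors, p):
    """MAP rule: argmax_c P(c) * prod_i P(x_i | c)."""
    def score(c):
        s = priors[c]
        for i, v in enumerate(x):
            s *= p(i, v, c)
        return s
    return max(priors, key=score)

# Toy example: two binary attributes, two classes
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = ['a', 'a', 'b', 'b']
priors, p = train_nb(X, y)
print(predict_nb((1, 0), priors, p))   # -> 'a'
```

With alpha = 0 the smoothing disappears and a single unseen attribute value would zero out the whole product, which is exactly the failure mode the slide flags.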

  3. Clustering Analysis Basics
• Clustering analysis task
  – Discover the "natural" number of clusters
  – Group objects properly into "sensible" clusters
• Data type and representation
  – Data types: continuous vs. discrete (binary, ranking, …)
  – Data matrix and distance matrix
• Distance metrics (sketched below)
  – Minkowski distance (Manhattan, Euclidean, …) for continuous data
  – Cosine measure for nonmetric data
  – Distances for binary data: contingency table, symmetric vs. asymmetric
• Major clustering approaches
  – Partitioning, hierarchical, density-based, spectral, ensemble, …
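As a quick reference, here is a hedged sketch of the two measures named above: the Minkowski family (Manhattan at p = 1, Euclidean at p = 2) for continuous data, and the cosine measure for nonmetric data. Function names are illustrative.

```python
import math

def minkowski(x, y, p=2):
    """d(x, y) = (sum_i |x_i - y_i|^p)^(1/p); p=1 Manhattan, p=2 Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def cosine_similarity(x, y):
    """cos(x, y) = x.y / (|x| |y|); 1 means identical direction."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

x, y = (1.0, 2.0, 3.0), (2.0, 0.0, 1.0)
print(minkowski(x, y, p=1))      # Manhattan: 1 + 2 + 2 = 5.0
print(minkowski(x, y, p=2))      # Euclidean: sqrt(1 + 4 + 4) = 3.0
print(cosine_similarity(x, y))   # 5 / sqrt(14 * 5) ~ 0.598
```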

  4. K-Means Clustering
• Principle
  – A typical partitioning clustering approach: an iterative process that minimises the squared distance within each cluster
• K-means algorithm (see the sketch below)
  1) Initialisation: choose K centroids (seed points)
  2) Assign each data object to the cluster whose centroid is nearest
  3) Re-calculate the mean of each cluster to get an updated centroid
  4) Repeat 2) and 3) until no assignment changes
• Application: K-means based image segmentation
• Relevant issues
  – Efficiency: O(tkn), where t, k ≪ n
  – Sensitive to initialisation; converges to a local optimum
  – Other weaknesses and limitations
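A minimal sketch following the four steps above, assuming 2-D points as tuples; the random seeding and toy data are illustrative.

```python
import random

def dist2(p, q):
    """Squared Euclidean distance (the quantity minimised within clusters)."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # 1) initialisation: K seed points
    assignment = None
    while True:
        # 2) assign each object to the cluster with the nearest centroid
        new_assignment = [
            min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points
        ]
        if new_assignment == assignment:       # 4) stop when no assignment changes
            return centroids, assignment
        assignment = new_assignment
        # 3) re-calculate the mean of each cluster as its updated centroid
        for j in range(k):
            members = [p for p, a in zip(points, assignment) if a == j]
            if members:                        # keep the old centroid if a cluster empties
                centroids[j] = tuple(sum(c) / len(members) for c in zip(*members))

points = [(0, 0), (0, 1), (5, 5), (6, 5)]
print(kmeans(points, k=2))
```

Each pass over the data costs O(kn), so t iterations give the O(tkn) figure quoted on the slide; different seeds can converge to different local optima, which is the sensitivity the slide warns about.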

  5. Hierarchical and Ensemble Clustering
• Hierarchical clustering
  – Principle: partition the data set sequentially
  – Strategy: divisive (top-down) vs. agglomerative (bottom-up)
• Cluster distance
  – Single-link, complete-link and average-link
• Agglomerative algorithm (sketched below)
  1) Convert object attributes to a distance matrix
  2) Repeat until the number of clusters is one:
     o Merge the two closest clusters
     o Update the distance matrix with the cluster distance
• Key concepts and techniques in hierarchical clustering
  – Dendrogram tree, life-time of clusters, K life-time
  – Inferring the number of clusters from the maximum K life-time
• Clustering ensemble based on evidence accumulation
  – Run K-means multiple times with different initialisations, producing different partitions
  – Accumulate the "evidence" from all partitions to form a "collective distance" matrix
  – Apply the agglomerative algorithm to the "collective distances" and decide K using the maximum K life-time
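The agglomerative loop can be sketched directly on a distance matrix; single-link is chosen here purely for illustration, and the same loop works with complete- or average-link. Recording the merge distances is what the dendrogram visualises: large gaps between successive merge distances correspond to long cluster life-times.

```python
def single_link(d, a, b):
    """Single-link cluster distance: minimum pairwise distance between clusters."""
    return min(d[i][j] for i in a for j in b)

def agglomerative(d):
    clusters = [[i] for i in range(len(d))]    # 1) each object starts as its own cluster
    merges = []                                # (merge distance, merged cluster) history
    while len(clusters) > 1:                   # 2) repeat until one cluster remains
        # find the two closest clusters under the chosen cluster distance
        pairs = [(single_link(d, a, b), ia, ib)
                 for ia, a in enumerate(clusters)
                 for ib, b in enumerate(clusters) if ia < ib]
        dist, ia, ib = min(pairs)
        merged = clusters[ia] + clusters[ib]   # merge the closest pair
        clusters = [c for i, c in enumerate(clusters) if i not in (ia, ib)]
        clusters.append(merged)
        merges.append((dist, sorted(merged)))
    return merges

# Toy distance matrix over 4 objects (symmetric, zero diagonal)
d = [[0, 1, 6, 7],
     [1, 0, 5, 6],
     [6, 5, 0, 2],
     [7, 6, 2, 0]]
for dist, cluster in agglomerative(d):
    print(dist, cluster)   # merges at distances 1, 2, then 5
```

The jump from merge distance 2 to 5 in the toy run is the kind of long life-time that suggests stopping at K = 2 clusters.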

  6. Cluster Validation
• Cluster validation
  – Evaluate clustering results in a quantitative, objective fashion
  – Uses: performance evaluation, clustering comparison, finding the number of clusters
• Two types of cluster validation methods
  – Internal indexes
    o No ground truth available; sometimes called "relative indexes"
    o Defined from "common sense" or a priori knowledge
    o Scatter-based validity indexes
    o Application: finding the "proper" number of clusters, …
  – External indexes
    o Ground truth known or a reference partition given
    o Rand index: understand how it addresses the label permutation/inconsistency issue (sketched below)
    o Application: performance evaluation of clustering, clustering comparison, …
• Weighted clustering ensemble
  – Key idea: use multiple "meaningful" validity indexes to weight the partitions before evidence accumulation, diminishing the effect of trivial partitions
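A short sketch of the Rand index as an external index. Because it compares two labellings over all object pairs rather than label by label, it is unaffected by the permutation of cluster labels mentioned above: a pair counts as an agreement when both labellings put it in the same cluster, or both put it in different ones.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """RI = (# agreeing pairs) / (# all pairs), in [0, 1]."""
    agree, total = 0, 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)            # agreement on this pair
        total += 1
    return agree / total

# Identical partitions up to a label permutation still score 1.0
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))   # -> 1.0
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))   # -> 0.333...
```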

  7. Examination Information
• Three sections (60 marks in total)
  – Section A (30 marks)
    o 30 multiple-choice questions in total (online)
    o Q1–15 cover Part I; Q16–30 cover Part II
  – Section B (15 marks)
    o Compulsory questions on Part I
  – Section C (15 marks)
    o Compulsory questions on Part II
• Length: two hours
• A calculator (without memory) is allowed

