Classification and Prediction


  1. Classification

  2. Classification and Prediction
  • Classification: predict categorical class labels
    – Build a model for a set of classes/concepts
    – Classify loan applications (approve/decline)
  • Prediction: model continuous-valued functions
    – Predict the economic growth in 2015

  3. Classification: A 2-step Process
  • Model construction: describe a set of predetermined classes
    – Training dataset: tuples for model construction
      • Each tuple/sample belongs to a predefined class
    – The model is expressed as classification rules, decision trees, or mathematical formulae
  • Model application: classify unseen objects
    – Estimate the accuracy of the model using an independent test set
    – If the accuracy is acceptable, apply the model to classify tuples with unknown class labels
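As a concrete illustration of the two steps, here is a minimal sketch in Python; scikit-learn and the iris dataset are my own choices for illustration, not something the slides prescribe.

    # Minimal sketch of the two-step process; library and dataset are illustrative.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Step 1: model construction on the training set
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Step 2: estimate accuracy on an independent test set, then apply the model
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
    print("unseen object:", model.predict(X_test[:1]))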

  4. Model Construction
  A classification algorithm takes the training data and produces the classifier (model).
  Training data (Name, Rank, Years, Tenured):
    Mike   Ass. Prof   3   No
    Mary   Ass. Prof   7   Yes
    Bill   Prof        2   Yes
    Jim    Asso. Prof  7   Yes
    Dave   Ass. Prof   6   No
    Anne   Asso. Prof  3   No
  Example of a learned model:
    IF rank = 'professor' OR years > 6 THEN tenured = 'yes'

  5. Model Application
  The classifier is evaluated on testing data and then applied to unseen data such as (Jeff, Professor, 4).
  Testing data (Name, Rank, Years, Tenured):
    Tom      Ass. Prof   2   No
    Merlisa  Asso. Prof  7   No
    George   Prof        5   Yes
    Joseph   Ass. Prof   7   Yes
  Unseen data: (Jeff, Professor, 4) → Tenured?
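A small sketch of applying the rule learned on slide 4 (the function name and the way the rank strings are matched are my own reading of the slide): on the testing data the rule misclassifies Merlisa, so its estimated accuracy is 3/4; the unseen tuple Jeff is classified 'yes'.

    # Applying the rule IF rank = 'professor' OR years > 6 THEN tenured = 'yes'.
    def tenured(rank, years):
        return "yes" if rank in ("Prof", "Professor") or years > 6 else "no"

    testing = [("Tom", "Ass. Prof", 2), ("Merlisa", "Asso. Prof", 7),
               ("George", "Prof", 5), ("Joseph", "Ass. Prof", 7)]
    for name, rank, years in testing:
        print(name, tenured(rank, years))   # Merlisa is misclassified: 3/4 accuracy
    print("Jeff", tenured("Professor", 4))  # the unseen tuple -> 'yes'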

  6. Supervised/Unsupervised Learning
  • Supervised learning (classification)
    – Supervision: objects in the training data set have labels
    – New data is classified based on the training set
  • Unsupervised learning (clustering)
    – The class labels of the training data are unknown
    – Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data

  7. Data Preparation
  • Data cleaning
    – Preprocess data in order to reduce noise and handle missing values
  • Relevance analysis (feature selection)
    – Remove irrelevant or redundant attributes
  • Data transformation
    – Generalize and/or normalize data
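A minimal sketch of two of these steps, assuming scikit-learn as the toolkit (the toolkit and the numeric values are illustrative, not from the slides): impute missing values, then normalize each attribute.

    # Tiny data-preparation sketch; toolkit and values are illustrative.
    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0, 200.0],
                  [np.nan, 180.0],
                  [3.0, np.nan]])
    X_clean = SimpleImputer(strategy="mean").fit_transform(X)  # handle missing values
    X_norm = StandardScaler().fit_transform(X_clean)           # normalize each attribute
    print(X_norm)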

  8. Measurements of Quality
  • Prediction accuracy
  • Speed and scalability
    – Construction speed and application speed
  • Robustness: handle noise and missing values
  • Scalability: build the model for large training data sets
  • Interpretability: understandability of models

  9. Decision Tree Induction
  • Decision tree representation
  • Construction of a decision tree
  • Inductive bias and overfitting
  • Scalable enhancements for large databases

  10. Decision Tree
  • A node in the tree: a test of some attribute
  • A branch: a possible value of the attribute
  • Classification
    – Start at the root
    – Test the attribute
    – Move down the tree branch
  Example tree (PlayTennis): the root tests Outlook; Sunny leads to a Humidity test (High → No, Normal → Yes), Overcast leads directly to Yes, and Rain leads to a Wind test (Strong → No, Weak → Yes).
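A minimal sketch of this tree as nested Python dictionaries (the representation and function name are my own): internal nodes test one attribute, branches carry attribute values, and strings at the leaves are class labels.

    # The PlayTennis tree from this slide, encoded as nested dicts.
    tree = {"Outlook": {
        "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
    }}

    def classify(node, example):
        # Start at the root, test the attribute, move down the matching branch.
        while isinstance(node, dict):
            attribute = next(iter(node))
            node = node[attribute][example[attribute]]
        return node

    print(classify(tree, {"Outlook": "Sunny", "Humidity": "Normal", "Wind": "Weak"}))  # Yes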

  11. Training Dataset
    Outlook   Temp  Humid   Wind    PlayTennis
    Sunny     Hot   High    Weak    No
    Sunny     Hot   High    Strong  No
    Overcast  Hot   High    Weak    Yes
    Rain      Mild  High    Weak    Yes
    Rain      Cool  Normal  Weak    Yes
    Rain      Cool  Normal  Strong  No
    Overcast  Cool  Normal  Strong  Yes
    Sunny     Mild  High    Weak    No
    Sunny     Cool  Normal  Weak    Yes
    Rain      Mild  Normal  Weak    Yes
    Sunny     Mild  Normal  Strong  Yes
    Overcast  Mild  High    Strong  Yes
    Overcast  Hot   Normal  Weak    Yes
    Rain      Mild  High    Strong  No

  12. Appropriate Problems
  • Instances are represented by attribute-value pairs
    – Extensions of decision trees can handle real-valued attributes
  • Disjunctive descriptions may be required
  • The training data may contain errors or missing values

  13. Basic Algorithm (ID3)
  • Construct a tree in a top-down, recursive, divide-and-conquer manner
    – Which attribute is the best at the current node?
    – Create a branch for each possible value of that attribute
    – Partition the training data into the descendant nodes
  • Conditions for stopping the recursion
    – All samples at a given node belong to the same class
    – No attribute remains for further partitioning (majority voting is used to label the leaf)
    – There are no samples at the node
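A compact sketch of this recursion in Python (function and variable names are mine; information gain is defined on the following slides): each call picks the best attribute, partitions the data on its values, and recurses until a stopping condition holds.

    from collections import Counter
    from math import log2

    def entropy(labels):
        total = len(labels)
        return -sum(c / total * log2(c / total) for c in Counter(labels).values())

    def id3(data, attributes):
        # data: list of (attribute-dict, class-label) pairs
        labels = [y for _, y in data]
        # Stop: all samples in one class, or no attribute left (majority vote labels the leaf).
        if len(set(labels)) == 1 or not attributes:
            return Counter(labels).most_common(1)[0][0]

        def gain(a):
            groups = {}
            for x, y in data:
                groups.setdefault(x[a], []).append(y)
            return entropy(labels) - sum(len(g) / len(labels) * entropy(g) for g in groups.values())

        best = max(attributes, key=gain)          # best attribute at the current node
        node = {best: {}}
        for value in {x[best] for x, _ in data}:  # one branch per observed value
            subset = [(x, y) for x, y in data if x[best] == value]
            node[best][value] = id3(subset, [a for a in attributes if a != best])
        return node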

  14. Which Attribute Is the Best?
  • The attribute most useful for classifying the examples
  • Information gain and the Gini index
    – Statistical properties
    – Measure how well an attribute separates the training examples

  15. Entropy
  • Measures the homogeneity of a set of examples:
    $Entropy(S) \equiv -\sum_{i=1}^{c} p_i \log_2 p_i$
    – S is the training data set, and p_i is the proportion of S belonging to class i
  • The smaller the entropy, the purer the data set
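A quick numeric check of the definition (a throwaway Python sketch; base-2 logarithms as on the slide):

    from math import log2

    def entropy(proportions):
        return -sum(p * log2(p) for p in proportions if p > 0)

    print(entropy([0.5, 0.5]))               # 1.0: maximally impure two-class set
    print(entropy([1.0]))                    # -0.0: a pure set has zero entropy
    print(round(entropy([9/14, 5/14]), 2))   # 0.94: the PlayTennis data used on slide 17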

  16. Information Gain
  • The expected reduction in entropy caused by partitioning the examples according to an attribute:
    $Gain(S, A) \equiv Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} Entropy(S_v)$
  • Values(A) is the set of all possible values of attribute A, and S_v is the subset of S for which attribute A has value v

  17. Example
  Using the PlayTennis training data from slide 11 (9 Yes, 5 No):
    $Entropy(S) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.94$
    $Gain(S, Wind) = Entropy(S) - \sum_{v \in \{Weak, Strong\}} \frac{|S_v|}{|S|} Entropy(S_v)$
    $= Entropy(S) - \frac{8}{14} Entropy(S_{Weak}) - \frac{6}{14} Entropy(S_{Strong})$
    $= 0.94 - \frac{8}{14} \times 0.811 - \frac{6}{14} \times 1.00 = 0.048$
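A numeric check of this computation (a throwaway Python sketch; the per-value class counts are read off the table on slide 11):

    from math import log2

    def entropy(counts):
        total = sum(counts)
        return -sum(c / total * log2(c / total) for c in counts if c)

    # Labels grouped by Wind: Weak has 6 Yes / 2 No, Strong has 3 Yes / 3 No.
    e_s = entropy([9, 5])
    gain_wind = e_s - 8/14 * entropy([6, 2]) - 6/14 * entropy([3, 3])
    print(round(e_s, 2), round(gain_wind, 3))   # 0.94 0.048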

  18. Hypothesis Space Search in Decision Tree Building
  • Hypothesis space: the set of possible decision trees
  • ID3: simple-to-complex, hill-climbing search
    – Evaluation function: information gain

  19. Capabilities and Limitations
  • The hypothesis space is complete
  • Maintains only a single current hypothesis
  • No backtracking
    – May converge to a locally optimal solution
  • Uses all training examples at each step
    – Makes statistics-based decisions
    – Not sensitive to errors in individual examples

  20. Natural Bias
  • The information gain measure favors attributes with many values
  • An extreme example
    – The attribute "date" may have the highest information gain
    – It produces a very broad decision tree of depth one
    – The tree is inapplicable to any future data

  21. Alternative Measures
  • Gain ratio: penalizes attributes like "date" by incorporating split information:
    $SplitInformation(S, A) \equiv -\sum_{i=1}^{c} \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|}$
    – Split information is sensitive to how broadly and uniformly the attribute splits the data
    $GainRatio(S, A) \equiv \frac{Gain(S, A)}{SplitInformation(S, A)}$
  • The gain ratio can be undefined or very large
    – Only test attributes with above-average gain
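A small sketch of these two measures in Python (function names are mine; the example counts show why a many-valued attribute such as "date" is penalized):

    from math import log2

    def split_information(subset_sizes):
        total = sum(subset_sizes)
        return -sum(s / total * log2(s / total) for s in subset_sizes if s)

    def gain_ratio(gain, subset_sizes):
        return gain / split_information(subset_sizes)

    # 'date' splits 14 examples into 14 singletons: very large split information.
    print(round(split_information([1] * 14), 3))   # 3.807
    # 'Wind' splits the same 14 examples into subsets of size 8 and 6.
    print(round(split_information([8, 6]), 3))     # 0.985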

  22. Measuring Inequality: The Lorenz Curve
  • X-axis: population quintiles
  • Y-axis: cumulative share of income earned by the plotted quintiles
  • The gap between the actual curve and the line of perfect equality shows the degree of inequality
  • Gini index
    – Gini = 0: perfectly even distribution
    – Gini = 1: perfectly unequal distribution
    – The greater the distance, the more unequal the distribution

  23. Gini Index (Adjusted)
  • If a data set T contains examples from n classes:
    $gini(T) = 1 - \sum_{j=1}^{n} p_j^2$
    – p_j is the relative frequency of class j in T
  • If T is split into two subsets T_1 and T_2 with sizes N_1 and N_2 respectively (N = N_1 + N_2):
    $gini_{split}(T) = \frac{N_1}{N} gini(T_1) + \frac{N_2}{N} gini(T_2)$
  • The attribute providing the smallest gini_split(T) is chosen to split the node
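A quick sketch of both formulas in Python (function names are mine; the example reuses the Wind split of the PlayTennis data from slide 11):

    def gini(class_counts):
        total = sum(class_counts)
        return 1 - sum((c / total) ** 2 for c in class_counts)

    def gini_split(counts_1, counts_2):
        n1, n2 = sum(counts_1), sum(counts_2)
        return n1 / (n1 + n2) * gini(counts_1) + n2 / (n1 + n2) * gini(counts_2)

    # PlayTennis split on Wind: Weak -> 6 Yes / 2 No, Strong -> 3 Yes / 3 No.
    print(round(gini([9, 5]), 3))                # 0.459 for the whole data set
    print(round(gini_split([6, 2], [3, 3]), 3))  # 0.429 for the Wind split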

  24. Extracting Classification Rules
  • Classification rules can be extracted from a decision tree
  • Each path from the root to a leaf becomes an IF-THEN rule
    – All attribute-value pairs along a path form a conjunctive condition
    – The leaf node holds the class prediction
    – Example: IF age = "<=30" AND student = "no" THEN buys_computer = "no"
  • Rules are easy to understand
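A minimal sketch of this extraction for a nested-dict tree like the PlayTennis one on slide 10 (the representation and function name are my own): every root-to-leaf path becomes one IF-THEN rule.

    def extract_rules(node, conditions=()):
        if not isinstance(node, dict):                 # leaf: emit one rule for this path
            body = " AND ".join(f"{a} = '{v}'" for a, v in conditions) or "TRUE"
            return [f"IF {body} THEN class = '{node}'"]
        attribute = next(iter(node))
        rules = []
        for value, child in node[attribute].items():
            rules.extend(extract_rules(child, conditions + ((attribute, value),)))
        return rules

    tree = {"Outlook": {"Sunny": {"Humidity": {"High": "No", "Normal": "Yes"}},
                        "Overcast": "Yes",
                        "Rain": {"Wind": {"Strong": "No", "Weak": "Yes"}}}}
    for rule in extract_rules(tree):
        print(rule)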

  25. Inductive Bias
  • The set of assumptions that, together with the training data, deductively justifies the classifications assigned to future instances
    – Preferences of the classifier construction
  • Shorter trees are preferred over longer trees
  • Trees that place high-information-gain attributes close to the root are preferred
