
7: Catchup Session & very short intro to other classifiers - PowerPoint PPT Presentation



  1. 7: Catchup Session & very short intro to other classifiers. Machine Learning and Real-world Data (MLRD), Paula Buttery, Lent 2018.

  2. What happens in a catchup session? Lecture and practical session as normal. New material is non-examinable. Time for you to catch up or attempt some starred ticks. Demonstrators help as usual.

  3. Naive Bayes is a probabilistic classifier. Given a set of input features, a probabilistic classifier provides a distribution over classes. That is, for a set of observed features $O$ and classes $c_1, \ldots, c_n \in C$, it gives $P(c_i \mid O)$ for every $c_i \in C$. For us, $O$ was the set of all the words in a review, $\{w_1, w_2, \ldots, w_n\}$, where $w_i$ is the $i$-th word in the review, and $C = \{POS, NEG\}$. We decided on a single class by choosing the one with the highest probability given the features: $\hat{c} = \operatorname{argmax}_{c \in C} P(c \mid O)$.
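As a minimal sketch of this decision rule (not the course's tick code), the snippet below assumes Naive Bayes parameters that have already been estimated from training data and picks the class with the highest posterior by summing log-probabilities. The word lists and probability values are made up purely for illustration.

```python
import math

# Hypothetical parameters, assumed already estimated from training data:
# log priors log P(c) and per-class word log-likelihoods log P(w | c).
log_prior = {"POS": math.log(0.5), "NEG": math.log(0.5)}
log_likelihood = {
    "POS": {"great": math.log(0.02), "boring": math.log(0.001)},
    "NEG": {"great": math.log(0.005), "boring": math.log(0.01)},
}
UNSEEN = math.log(1e-6)  # crude stand-in for smoothing of unseen words

def classify(words):
    """Return argmax_c [ log P(c) + sum_i log P(w_i | c) ]."""
    scores = {}
    for c in log_prior:
        score = log_prior[c]
        for w in words:
            score += log_likelihood[c].get(w, UNSEEN)
        scores[c] = score
    return max(scores, key=scores.get)

print(classify(["great", "great", "boring"]))  # -> POS with these toy numbers
```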

  4. An SVM is a popular non-probabilistic classifier. A Support Vector Machine (SVM) is a non-probabilistic binary linear classifier. SVMs assign new examples to one category or the other. SVMs can reduce the amount of labelled data required to gain good accuracy. A linear SVM can be considered a baseline for non-probabilistic approaches. SVMs can be efficiently adapted to perform non-linear classification.

  5. SVMs find hyperplanes that separate classes. Our classes exist in a multidimensional feature space. A linear classifier will separate the points with a hyperplane.

  6. SVMs find a maximum-margin hyperplane in noisy data. There are many possible hyperplanes. SVMs find the best hyperplane, such that the distance from it to the nearest data point from each class is maximised, i.e. the hyperplane that passes through the widest possible gap (which hopefully helps to avoid over-fitting).
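To make the last two slides concrete, here is an illustrative sketch (not part of the course practicals) that fits a separating hyperplane with scikit-learn's LinearSVC on made-up 2-D data. The weight vector w and bias b define the plane w·x + b = 0, and 2/||w|| gives the margin width in the canonical (soft-margin) formulation; the toy data and the use of scikit-learn are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy 2-D data: two roughly separable clusters (illustrative only).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2],   # class +1 cluster
               rng.randn(20, 2) - [2, 2]])  # class -1 cluster
y = np.array([1] * 20 + [-1] * 20)

clf = LinearSVC(C=1.0).fit(X, y)

w = clf.coef_[0]        # normal vector of the separating hyperplane
b = clf.intercept_[0]   # bias term: the plane is w . x + b = 0
margin = 2 / np.linalg.norm(w)   # width of the (soft) margin

print("hyperplane normal:", w, "bias:", b)
print("margin width:", margin)
print("prediction for (1.5, 1.5):", clf.predict([[1.5, 1.5]]))
```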

  7. SVMs can be very efficient and effective. Efficient when learning from a large number of features (good for text). Effective even with relatively small amounts of labelled data (we only need the points close to the plane to calculate it). We can choose how many points to involve (size of margin) when calculating the plane (tuning vs. over-fitting). Can separate non-linear boundaries by increasing the feature space (using a kernel function).
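A minimal sketch of the kernel idea, again using scikit-learn as an assumed tool rather than anything from the course: an RBF kernel lets an SVM separate two interleaved half-moons, which no straight line can split in the original 2-D space.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC, LinearSVC

# Two interleaved half-moons: not linearly separable in 2-D.
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

linear = LinearSVC(C=1.0).fit(X, y)
kernel = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

print("linear SVM training accuracy:", linear.score(X, y))     # noticeably below 1.0
print("RBF-kernel SVM training accuracy:", kernel.score(X, y))  # close to 1.0
```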

  8. Choice of classifier will depend on the task. Comparison of an SVM and Naive Bayes on the same task: 2000 IMDb movie reviews, 400 kept for testing; preprocessed with the improved tokeniser (lowercased, removed uninformative words, dealt with punctuation, lemmatised words).

                         SVM    Naive Bayes
      Accuracy on train  0.98   0.96
      Accuracy on test   0.84   0.80

  But from Naive Bayes I know that character, good, story, great, ... are informative features; SVMs are more difficult to interpret.
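The figures above come from the lecture. Purely as an assumed illustration of how such a comparison could be set up, the sketch below trains scikit-learn's MultinomialNB and LinearSVC on the same bag-of-words features; the `reviews` and `labels` variables are hypothetical stand-ins for the preprocessed review data and split sizes used in the course.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# Hypothetical stand-ins for the preprocessed review strings and POS/NEG tags.
reviews = ["great story and characters", "boring waste of time"] * 100
labels = ["POS", "NEG"] * 100

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.2, random_state=0)

vec = CountVectorizer()
Xtr = vec.fit_transform(X_train)   # bag-of-words counts
Xte = vec.transform(X_test)

for name, clf in [("Naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
    clf.fit(Xtr, y_train)
    print(name,
          "train accuracy:", round(clf.score(Xtr, y_train), 2),
          "test accuracy:", round(clf.score(Xte, y_test), 2))
```

On interpretability: MultinomialNB exposes per-class feature log-probabilities (`feature_log_prob_`), which map directly onto informative words, whereas a linear SVM's `coef_` weights are harder to read off in the same way.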

  9. A decision tree can be used to visually represent classifications. [Figure: a decision tree learned from the movie-review data; each node splits on the weight of a single word feature (e.g. bad, waste, boring, job) and records its entropy, sample count, per-class counts, and majority class (pos/neg).] Simple to interpret. Can mix numerical and categorical data. You specify the parameters of the tree (maximum depth, number of items at leaf nodes; both change accuracy). But finding the optimal decision tree is NP-complete.
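A sketch of how a tree like the one in the figure might be grown and inspected with scikit-learn; the stand-in review data, the tf-idf features, and the parameter values below are illustrative assumptions, not the settings used to produce the slide's tree.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical stand-in for the preprocessed review data.
reviews = ["bad waste of time", "great story and characters",
           "boring and poorly made", "excellent performance overall"] * 50
labels = ["neg", "pos", "neg", "pos"] * 50

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)

# Tree capacity is controlled by parameters such as max_depth and
# min_samples_leaf; both affect accuracy and over-fitting.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4,
                              min_samples_leaf=5, random_state=0)
tree.fit(X, labels)

# Print the learned splits as readable rules of the form "word <= threshold".
print(export_text(tree, feature_names=list(vec.get_feature_names_out())))
```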

  10. Information gain can be used to decide how to split. Information gain is defined in terms of entropy $H$. Entropy of a tree node: $H(n) = -\sum_i p_i \log_2 p_i$, where the $p_i$ are the fractions of each class at node $n$. Information gain $I$ is used to decide which feature to split on at each step in building the tree. Information gain: $I(n, D) = H(n) - H(n \mid D)$, where $H(n \mid D)$ is the weighted entropy of the daughter nodes.
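To make the definitions concrete, here is a small sketch (the function names and the example split are my own, not from the course) that computes node entropy and the information gain of a proposed split.

```python
import math
from collections import Counter

def entropy(labels):
    """H(n) = -sum_i p_i log2 p_i over the class fractions at a node."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(parent_labels, daughter_label_lists):
    """I(n, D) = H(n) - H(n | D), where H(n | D) is the entropy of the
    daughter nodes weighted by the fraction of samples each receives."""
    total = len(parent_labels)
    weighted = sum(len(d) / total * entropy(d) for d in daughter_label_lists)
    return entropy(parent_labels) - weighted

# Example: splitting an 800 pos / 800 neg node into two daughters.
parent = ["pos"] * 800 + ["neg"] * 800
left = ["pos"] * 640 + ["neg"] * 360   # e.g. reviews with a low weight for "bad"
right = ["pos"] * 160 + ["neg"] * 440
print(round(entropy(parent), 3))                      # 1.0
print(round(information_gain(parent, [left, right]), 3))  # roughly 0.1 for this split
```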
