Keep the Decision Tree and Estimate the Class Probabilities Using its Decision Boundary

Isabelle Alvarez (1,2), Stephan Bernard (2), Guillaume Deffuant (2)
(1) LIP6, Paris VI University, 4 place Jussieu, 75005 Paris, France
(2) Cemagref, LISC, F-63172 Aubiere, France
isabelle.alvarez@lip6.fr, stephan.bernard@cemagref.fr, guillaume.deffuant@cemagref.fr

Abstract

This paper proposes a new method to estimate the class membership probability of the cases classified by a decision tree. The method provides smooth class probability estimates, without any modification of the tree, when the data are numerical. It applies a posteriori and does not use additional training cases. It relies on the distance to the decision boundary induced by the decision tree. This distance is computed on the training sample and is then used as the input of a very simple one-dimensional kernel-based density estimator, which provides the estimate of the class membership probability. This geometric method gives good results even with pruned trees, so the intelligibility of the tree is fully preserved.

1 Introduction

Decision tree (DT) algorithms are very popular and widely used for classification purposes since, contrary to other learning methods, they readily provide an intelligible model of the data. Intelligibility is a very desirable property in artificial intelligence, considering the interactions with the end-user, all the more when the end-user is an expert. On the other hand, the end-user of a classification system needs additional information beyond the output class in order to assess the result. This information generally consists of the confusion matrix, accuracy, and specific error rates (such as specificity, sensitivity, and likelihood ratios, possibly including costs, which are commonly used in diagnosis applications). In the context of a decision aid system, the most valuable information is the class membership probability. Unfortunately, a DT can only provide piecewise constant estimates of the class posterior probabilities, since all the cases classified by the same leaf share the same posterior probabilities. Moreover, as a consequence of its main objective, which is to separate the different classes, the raw estimate at the leaf is highly biased. Conversely, methods that are well suited to probability estimation generally produce less intelligible models. A lot of work aims at improving the class probability estimate at the leaf: smoothing methods, specialized trees, combined methods (a decision tree combined with other algorithms), fuzzy methods, and ensemble methods (see Section 2). However, most of these methods (except smoothing) induce a drastic change in the fundamental properties of the tree: either the structure of the tree as a model is modified, or its main objective, or its intelligibility.

The method we propose here aims at improving the class probability estimate without modifying the tree itself, in order to preserve its intelligibility and other uses. Besides the attributes of the cases, we consider a new feature: the distance from the decision boundary induced by the DT (the boundary of the inverse image of the different class labels). We propose to use this new feature (which can be seen as the margin of the DT) to estimate the posterior probabilities, as we expect the class membership probability to be closely related to the distance from the decision boundary. This is the case for other geometric methods, such as Support Vector Machines (SVM). An SVM defines a unique hyperplane in the feature space to classify the data (in the original input space the corresponding decision boundary can be very complex). The distance from this hyperplane can be used to estimate the posterior probabilities; see [Platt, 2000] for the details in the two-class problem. In the case of a DT, the decision boundary consists of several pieces of hyperplanes instead of a unique hyperplane. We propose to compute the distance to this decision boundary for the training cases. Adapting an idea from [Smyth et al., 1995], we then train a kernel-based density estimator (KDE), not on the attributes of the cases but on this single new feature.
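To make the pipeline concrete, here is a minimal sketch (not the authors' implementation) of both steps for an axis-parallel tree, assuming scikit-learn's DecisionTreeClassifier. It exploits the fact that the leaves of such a tree partition the input space into hyper-rectangles, so the distance from a case to the decision boundary is the distance to the nearest leaf cell carrying a different class label. The helper names leaf_boxes, margin and kde_posterior, and the fixed Gaussian bandwidth h, are our own illustrative choices.

```python
import numpy as np

def leaf_boxes(clf, n_features):
    """Collect (lower, upper, label) hyper-rectangles for every leaf
    of a fitted axis-parallel scikit-learn decision tree."""
    t = clf.tree_
    boxes = []

    def recurse(node, lo, hi):
        if t.children_left[node] == -1:  # leaf node
            label = clf.classes_[np.argmax(t.value[node])]
            boxes.append((lo.copy(), hi.copy(), label))
            return
        f, thr = t.feature[node], t.threshold[node]
        old = hi[f]
        hi[f] = min(hi[f], thr)          # left child: x[f] <= thr
        recurse(t.children_left[node], lo, hi)
        hi[f] = old
        old = lo[f]
        lo[f] = max(lo[f], thr)          # right child: x[f] > thr
        recurse(t.children_right[node], lo, hi)
        lo[f] = old

    recurse(0, np.full(n_features, -np.inf), np.full(n_features, np.inf))
    return boxes

def margin(x, boxes, predicted_label):
    """Distance from x to the decision boundary: the distance to the
    nearest leaf cell whose label differs from the predicted one."""
    best = np.inf
    for lo, hi, label in boxes:
        if label == predicted_label:
            continue
        # Per-coordinate gap between x and the box, zero inside it.
        gap = np.maximum(0.0, np.maximum(lo - x, x - hi))
        best = min(best, float(np.sqrt(np.sum(gap ** 2))))
    return best

def kde_posterior(d, d_train, correct_train, h=0.1):
    """One-dimensional Gaussian-kernel estimate of the probability
    that the tree's decision is correct at distance d from the
    boundary, given training distances and 0/1 correctness flags."""
    w = np.exp(-0.5 * ((d - d_train) / h) ** 2)
    return float(np.sum(w * correct_train) / np.sum(w))
```

Training then amounts to computing margin for every training case together with a flag recording whether the tree classified it correctly; the class membership probability of a new case is kde_posterior evaluated at that case's own margin. Note that [Platt, 2000] instead fits a parametric sigmoid to the SVM margin, whereas the kernel estimate above is non-parametric.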
The paper is organized as follows: Section 2 discusses related work on probability estimates for DTs. Section 3 presents in detail the distance-based estimate of the posterior probabilities. Section 4 reports the experiments performed on the numerical databases of the UCI repository, comparing the distance-based method with smoothing methods. Section 5 discusses the use of geometrically defined subsets of the training set to enhance the probability estimate. We make further comments on the use of the distance in the concluding section.

2 Estimating Class Probabilities with a DT

The posterior probabilities given by a decision tree are piecewise constant over the leaves. They are also inaccurate. They are therefore of limited use, whether for ranking examples or for evaluating the risk attached to a decision. This is why a lot of work has been done to improve the accuracy of the posterior probabilities and to build trees that are better in this respect.
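As a point of reference, the simplest such correction (an illustrative baseline, not the method of this paper) is the Laplace estimate: at a leaf containing n training cases, k of which belong to the predicted class, the raw estimate k/n is replaced by (k + 1)/(n + C) for a C-class problem, which pulls small-leaf estimates toward the uniform value 1/C.

```python
def laplace_leaf_estimate(k, n, n_classes):
    """Laplace-corrected class probability at a leaf: k cases of the
    predicted class among n, smoothed toward 1/n_classes."""
    return (k + 1) / (n + n_classes)

# A pure two-class leaf with 3 cases: the raw estimate 3/3 = 1.0 is
# overconfident, while the Laplace estimate gives (3+1)/(3+2) = 0.8.
```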
