Mathematical Models of Supervised Learning and their Application to Medical Diagnosis


  1. Mathematical Models of Supervised Learning and their Application to Medical Diagnosis
     Mario Rosario Guarracino
     Genomic, Proteomic and Transcriptomic Lab
     High Performance Computing and Networking Institute, National Research Council, Italy
     January 9, 2007

  2. Acknowledgements
     • Prof. Franco Giannessi, University of Pisa
     • Prof. Panos Pardalos, CAO, University of Florida
     • Onur Seref, CAO, University of Florida
     • Claudio Cifarelli, HP

  3. Agenda
     • Mathematical models of supervised learning
     • Purpose of incremental learning
     • Subset selection algorithm
     • Initial points selection
     • Accuracy results
     • Conclusion and future work

  4. Introduction
     • Supervised learning refers to the capability of a system to learn from examples (the training set).
     • The trained system is able to provide an answer (output) for each new question (input).
     • Supervised means that the desired output for the training set is provided by an external teacher.
     • Binary classification is among the most successful methods for supervised learning; the train/predict cycle is sketched below.
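A minimal sketch of this train/predict cycle in Python with scikit-learn; the data, labels, and model choice are illustrative, not from the talk:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training set: each example pairs an input (two measurements) with the
# desired output provided by the "external teacher".
X_train = np.array([[0.2, 0.1], [0.4, 0.3], [1.8, 2.0], [2.1, 1.7]])
y_train = np.array([0, 0, 1, 1])  # binary labels: class A = 0, class B = 1

model = LogisticRegression().fit(X_train, y_train)

# The trained system answers for a new, unseen input.
X_new = np.array([[1.9, 1.8]])
print(model.predict(X_new))  # -> [1]
```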

  5. Applications
     Many applications in biology and medicine:
     • Tissues that are prone to cancer can be detected with high accuracy.
     • Identification of new genes or isoforms of gene expression in large datasets.
     • New DNA sequences or proteins can be traced back to their origins.
     • Analysis and reduction of data dimensionality and of principal characteristics for drug design.

  6. Problem characteristics
     • The volume of data produced in biomedical applications will grow exponentially in the coming years.
     • Gene expression data contain tens of thousands of features.
     • In genomic and proteomic applications, data are frequently updated, which complicates the training step.
     • Current classification methods can over-fit the problem, producing models that do not generalize well.

  7. Linear discriminant planes
     • Consider a binary classification task with points in two linearly separable sets.
       – There exists a plane that classifies all points in the two sets correctly.
     [Figure: point sets A and B separated by a plane]
     • There are infinitely many planes that correctly classify the training data; a simple check is sketched below.
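A small numpy check of what "correctly classifies" means: the plane x'w = γ must leave all of A on one side and all of B on the other. The points, w, and γ below are made-up illustrations:

```python
import numpy as np

A = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])   # class A points
B = np.array([[3.0, 3.0], [4.0, 3.5], [3.5, 4.0]])   # class B points

def separates(w, gamma, A, B):
    """True if the plane x'w = gamma leaves A and B on opposite sides."""
    return (A @ w < gamma).all() and (B @ w > gamma).all()

w, gamma = np.array([1.0, 1.0]), 4.0   # one of infinitely many valid planes
print(separates(w, gamma, A, B))       # -> True
```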

  8. SVM classification
     • A different approach, yielding the same solution, is to maximize the margin between support planes.
       – Support planes leave all points of a class on one side.
     [Figure: point sets A and B with parallel support planes and the margin between them]
     • Support planes are pushed apart until they "bump" into a small set of data points (the support vectors); see the sketch below.
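This margin view can be reproduced with a linear SVM: a large C approximates the hard-margin problem, and the fitted model exposes the points the support planes bump into. scikit-learn's SVC and its attributes are standard; the toy data is an assumption:

```python
import numpy as np
from sklearn.svm import SVC

A = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
B = np.array([[3.0, 3.0], [4.0, 3.5], [3.5, 4.0]])
X = np.vstack([A, B])
y = np.array([-1, -1, -1, 1, 1, 1])

# Large C ~ hard margin: no training point may violate the support planes.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print(clf.support_vectors_)           # the small set of points defining the margin
print(clf.coef_, clf.intercept_)      # parameters of the separating plane
```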

  9. SVM classification
     • Support Vector Machines are the state of the art among existing classification methods.
     • Their robustness is due to the strong foundations of statistical learning theory.
     • Training relies on the optimization of a convex quadratic cost function, for which many methods are available.
       – Available software includes SVMlight and LIBSVM.
     • These techniques extend to nonlinear discrimination by embedding the data in a higher-dimensional space through kernel functions; an example follows below.
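To illustrate the kernel extension (scikit-learn's SVC wraps LIBSVM, one of the packages cited above): switching from a linear to a Gaussian (RBF) kernel separates a dataset that no plane can. The dataset and the gamma value are illustrative choices:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the input space.
X, y = make_circles(n_samples=200, noise=0.1, factor=0.3, random_state=0)

linear = SVC(kernel="linear")
rbf = SVC(kernel="rbf", gamma=2.0)    # Gaussian kernel embeds data nonlinearly

print(cross_val_score(linear, X, y, cv=5).mean())  # near chance (~0.5)
print(cross_val_score(rbf, X, y, cv=5).mean())     # close to 1.0
```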

  10. A different religion
     • The binary classification problem can be formulated as a generalized eigenvalue problem (GEPSVM).
     • Find the plane x'w_1 = γ_1 that is closest to the points of A and farthest from those of B.
     [Figure: point sets A and B with the proximal plane for A]
     O. Mangasarian et al. (2006), IEEE Trans. PAMI.

  11. ReGEC technique
     • Let [w_1, γ_1] and [w_m, γ_m] be the eigenvectors associated with the minimum and maximum eigenvalues of Gx = λHx. Then:
       – a ∈ A ⇔ a is closer to x'w_1 − γ_1 = 0 than to x'w_m − γ_m = 0,
       – b ∈ B ⇔ b is closer to x'w_m − γ_m = 0 than to x'w_1 − γ_1 = 0.
     A sketch of this construction follows below.
     M.R. Guarracino et al. (2007), OMS.
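A sketch of the construction on these two slides, assuming the usual GEPSVM matrices G = [A −e]'[A −e] and H = [B −e]'[B −e]; the regularization form and the delta value are my assumptions and may differ from the exact ReGEC regularizer in the cited paper:

```python
import numpy as np
from scipy.linalg import eig

def regec_planes(A, B, delta=1e-3):
    """Two proximal planes x'w = gamma from the eigenproblem G z = lam H z."""
    eA = np.ones((A.shape[0], 1))
    eB = np.ones((B.shape[0], 1))
    GA = np.hstack([A, -eA])            # rows [a' -1], so GA @ z = A w - gamma
    GB = np.hstack([B, -eB])
    I = np.eye(A.shape[1] + 1)
    G = GA.T @ GA + delta * I           # Tikhonov-style regularization keeps the
    H = GB.T @ GB + delta * I           # pencil well conditioned (an assumption)
    lam, V = eig(G, H)                  # generalized eigenpairs of G z = lam H z
    lam = lam.real
    z_min = V[:, lam.argmin()].real     # plane closest to A, farthest from B
    z_max = V[:, lam.argmax()].real     # plane closest to B, farthest from A
    return z_min, z_max                 # each z stacks [w; gamma]

def assign(X, z_min, z_max):
    """Label each row of X by the nearer of the two planes."""
    def dist(z):
        w, gamma = z[:-1], z[-1]
        return np.abs(X @ w - gamma) / np.linalg.norm(w)
    return np.where(dist(z_min) <= dist(z_max), "A", "B")
```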

  12. Nonlinear classification
     • When classes cannot be linearly separated, nonlinear discrimination is needed.
     [Figure: a highly tangled nonlinear classification surface on a 2D dataset]
     • Classification surfaces can become very tangled.
     • Such a model accurately describes the original data, but does not generalize to new data (over-fitting).

  13. How to solve the problem?

  14. Incremental classification
     • A possible solution is to find a small and robust subset of the training set that provides comparable accuracy.
     • A smaller set of points:
       – reduces the probability of over-fitting the problem,
       – is computationally more efficient when predicting new points.
     • As new points become available, the cost of retraining decreases if the influence of each new point is evaluated only with respect to the small subset.

  15. I-ReGEC: incremental learning algorithm
      1:  Γ_0 = C \ C_0
      2:  {M_0, Acc_0} = Classify(C; C_0)
      3:  k = 1
      4:  while |Γ_{k-1}| > 0 do
      5:      x_k = arg max { dist(x, P_class(x)) : x ∈ M_{k-1} ∩ Γ_{k-1} }
      6:      {M_k, Acc_k} = Classify(C; C_{k-1} ∪ {x_k})
      7:      if Acc_k > Acc_{k-1} then
      8:          C_k = C_{k-1} ∪ {x_k}
      9:          k = k + 1
      10:     end if
      11:     Γ_k = Γ_{k-1} \ {x_k}
      12: end while
      A Python transcription follows below.
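A Python transcription of the loop above; `classify_fn` and `dist_to_own_plane` are hypothetical stand-ins for the ReGEC training/evaluation machinery and for dist(x, P_class(x)), since the slide does not define their interfaces:

```python
def i_regec(C, C0, classify_fn, dist_to_own_plane):
    """Incremental subset selection following the pseudocode above.

    C, C0: sets of (hashable) training points; C0 is the initial seed set.
    classify_fn(C, S): trains on S, evaluates on C, returns
    (set of misclassified points, accuracy). Both helpers are hypothetical.
    """
    Gamma = set(C) - set(C0)        # candidate points not yet examined
    Ck = set(C0)
    M, acc = classify_fn(C, Ck)
    while Gamma:
        candidates = M & Gamma      # misclassified points still available
        if not candidates:
            break
        # Farthest misclassified point from its own class's plane.
        x = max(candidates, key=dist_to_own_plane)
        M_new, acc_new = classify_fn(C, Ck | {x})
        if acc_new > acc:           # keep x only if accuracy improves
            Ck, M, acc = Ck | {x}, M_new, acc_new
        Gamma.discard(x)            # x is never considered again
    return Ck, acc
```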

  16. I-ReGEC and over-fitting
     [Figure: classification surfaces on a synthetic 2D dataset. Left: ReGEC trained on all points, accuracy = 84.44. Right: I-ReGEC, accuracy = 85.49]
     • When the ReGEC algorithm is trained on all points, the surfaces are affected by noisy points (left).
     • I-ReGEC achieves clearly defined boundaries while preserving accuracy (right).
     • Less than 5% of the points are needed for training!

  17. Initial points selection
     • Unsupervised clustering techniques can be adapted to select the initial points.
     • We compare the classification obtained with k randomly selected starting points per class against k points determined by the k-means method.
     • Results show higher classification accuracy and a more consistent representation of the training set when k-means is used instead of random selection; a possible realization is sketched below.
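One plausible realization of the k-means seeding (the talk does not give implementation details): cluster each class separately and take the training point nearest each centroid, so the seeds are actual training points:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_seeds(X, y, k):
    """Select k initial points per class: the training point nearest each
    k-means centroid computed within that class."""
    X0, y0 = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xc)
        for c in km.cluster_centers_:
            X0.append(Xc[np.linalg.norm(Xc - c, axis=1).argmin()])
            y0.append(label)
    return np.array(X0), np.array(y0)
```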

  18. Initial points selection
     • Starting points C_i are chosen randomly (top) or by k-means (bottom).
     • For each kernel produced by C_i, a set of evenly distributed points x is classified.
     • The procedure is repeated 100 times.
     • Let y_i ∈ {1, −1} be the classification based on C_i.
     • y = |∑ y_i| estimates how consistently x is assigned to one class.
     • Random: acc = 84.5, std = 0.05; k-means: acc = 85.5, std = 0.01.

  19. Initial points selection
     • Starting points C_i are chosen randomly (top) or by k-means (bottom).
     • For each kernel produced by C_i, a set of evenly distributed points x is classified.
     • The procedure is repeated 100 times.
     • Let y_i ∈ {1, −1} be the classification based on C_i.
     • y = |∑ y_i| estimates how consistently x is assigned to one class.
     • Random: acc = 72.1, std = 1.45; k-means: acc = 97.6, std = 0.04.
     A sketch of this experiment follows below.
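A rough reconstruction of the experiment on these two slides; `draw_seeds` and `train_and_predict` are hypothetical helpers standing in for seed selection and ReGEC training, and `grid` plays the role of the evenly distributed points x:

```python
import numpy as np

def consistency_map(draw_seeds, train_and_predict, grid, runs=100):
    """Estimate y(x) = |sum_i y_i(x)| / runs over repeated seed draws.

    draw_seeds(): returns one seed set C_i (random or k-means based).
    train_and_predict(C_i): returns (predict, acc), where predict(grid)
    gives labels in {+1, -1} for the evenly distributed points.
    """
    total = np.zeros(len(grid))
    accs = []
    for _ in range(runs):
        predict, acc = train_and_predict(draw_seeds())
        total += predict(grid)      # y_i(x) in {+1, -1}
        accs.append(acc)
    # Values near 1: x is labeled the same way in (almost) every run.
    return np.abs(total) / runs, float(np.mean(accs)), float(np.std(accs))
```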

  20. Initial point selection
     • Effect of increasing the number k of initial points, chosen with k-means, on the Chessboard dataset.
     [Figure: classification accuracy (0.5 to 1) versus the total number of initial points 2k from both classes (10 to 100)]
     • The graph shows the classification accuracy versus the total number of initial points 2k from both classes.
     • This result empirically shows that there is a minimum k for which maximum accuracy is reached; the sweep is sketched below.
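The k sweep behind this curve could look like the following; `accuracy_for_k` is a hypothetical wrapper that runs I-ReGEC with k k-means seeds per class and returns test accuracy, and the 1% plateau tolerance is my assumption:

```python
def sweep_k(ks, accuracy_for_k):
    """Sweep the number of initial points per class and locate the smallest k
    whose accuracy is within 1% of the best observed (the empirical plateau)."""
    results = {k: accuracy_for_k(k) for k in ks}
    best = max(results.values())
    k_min = min(k for k, a in results.items() if a >= 0.99 * best)
    return results, k_min

# e.g. results, k_min = sweep_k(range(1, 51), accuracy_for_k)  # Chessboard dataset
```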

  21. Initial point selection
     • The bottom figure shows k versus the number of additional points included in the incremental dataset.
     [Figure: top, classification accuracy (0.5 to 1) vs. 2k (10 to 100); bottom, number of additional incremental points (0 to 12) vs. 2k (10 to 100)]

  22. Dataset reduction
     • Experiments on real and synthetic datasets confirm the training data reduction achieved by I-ReGEC (chunk: number of training points retained).

     Dataset       chunk    % of train
     Banana        15.70     3.92
     German        29.09     4.15
     Diabetis      16.63     3.55
     Haberman       7.59     2.76
     Bupa          15.28     4.92
     Votes         25.90     6.62
     WPBC           4.215    4.25
     Thyroid       12.40     8.85
     Flare-solar    9.67     1.45

  23. Accuracy results
     • Classification accuracy with incremental techniques compares well with standard methods.

     Dataset      train   ReGEC acc   chunk    k   I-ReGEC acc   SVM acc
     Banana        400    84.44       15.70    5   85.49         89.15
     German        700    70.26       29.09    8   73.50         75.66
     Diabetis      468    74.56       16.63    5   74.13         76.21
     Haberman      275    73.26        7.59    2   73.45         71.70
     Bupa          310    59.03       15.28    4   63.94         69.90
     Votes         391    95.09       25.90   10   93.41         95.60
     WPBC           99    58.36       42.15    2   60.27         63.60
     Thyroid       140    92.76       12.40    5   94.01         95.20
     Flare-solar   666    58.23        9.67    3   65.11         65.80
