Projection based transfer learning
  1. Projection based transfer learning. Christian Poelitz, Dortmund Technical University.

  2. Transfer Learning. We want to reuse a trained model or information from different data sources to classify a new data set. We assume we have labelled data from a data source S and want to learn a classifier for an unlabelled data source T. We use kernel methods in order to leverage different high-dimensional features for the classification task.
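To make this setting concrete, here is a minimal sketch (not from the slides) of training a kernel classifier on the labelled source S and applying it to the unlabelled target T through a precomputed cross kernel. The RBF kernel, the SVM classifier, and the synthetic data are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(A, B, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

# Placeholder data: labelled source (X_s, y_s) and unlabelled, shifted target X_t.
X_s, y_s = np.random.randn(100, 5), np.random.randint(0, 2, 100)
X_t = np.random.randn(80, 5) + 0.5

clf = SVC(kernel="precomputed")
clf.fit(rbf_kernel(X_s, X_s), y_s)             # train on the source kernel matrix
y_t_pred = clf.predict(rbf_kernel(X_t, X_s))   # cross kernel: target vs. source
```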

  3. Transfer Learning on Subspaces. We assume that the different data sources share similarities in low-dimensional subspaces. These subspaces are invariant across the data sources and contain the information that is characteristic of both sources. Using only this information, a classifier trained on source S might also perform well on source T.

  4. Distances in Hilbert Spaces. We want to project onto a subspace such that the maximum mean discrepancy (MMD) measure (Gretton et al. [GBR+08]) is minimized:

MMD(F, S, T) = \sup_{f \in F} \left( \frac{1}{|S|} \sum_{x \in S} f(x) - \frac{1}{|T|} \sum_{x \in T} f(x) \right)

MMD_P(F, S, T) = \sup_{f \in P \circ F} \left( \frac{1}{|S|} \sum_{x \in S} f(x) - \frac{1}{|T|} \sum_{x \in T} f(x) \right)
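As an illustration of the empirical quantity behind this criterion, here is a minimal sketch (not part of the original slides) of the biased squared-MMD estimate computed entirely from kernel evaluations. The RBF kernel and the `gamma` parameter are assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(S, T, gamma=1.0):
    # Biased empirical estimate of the squared MMD between samples S and T:
    # mean(K_SS) + mean(K_TT) - 2 * mean(K_ST)
    K_ss = rbf_kernel(S, S, gamma)
    K_tt = rbf_kernel(T, T, gamma)
    K_st = rbf_kernel(S, T, gamma)
    return K_ss.mean() + K_tt.mean() - 2.0 * K_st.mean()
```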

  5. Subspace Methods. Kernel PCA: K = n \cdot C = \sum_i \phi(x_i) \phi(x_i)^T for \{x_i \in T \cup S\}. An eigenvalue decomposition of C yields eigenvalues \{\lambda_i\} and eigenvectors \{v_i\} such that \lambda_i \cdot v_i = C \cdot v_i. The projection onto the first k eigenvectors is

P_U(\phi(x)) = \left( \sum_j \alpha_{j,1} \langle \phi(x_j), \phi(x) \rangle, \; \cdots, \; \sum_j \alpha_{j,k} \langle \phi(x_j), \phi(x) \rangle \right) \quad \text{with } \alpha_{i,j} = \left( \tfrac{1}{\sqrt{\lambda_i}} \, v_i \right)_j.
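A minimal sketch of this projection, assuming a precomputed kernel matrix over the pooled sample and a cross kernel for the points to project. Kernel centring (common in kernel PCA implementations) is omitted to stay close to the slide's formula, and the function name and interface are illustrative.

```python
import numpy as np

def kernel_pca_projection(K, K_new, k):
    # K:     (n, n) kernel matrix over the pooled sample {x_i in T ∪ S}
    # K_new: (m, n) cross kernel between the points to project and the pooled sample
    # k:     number of leading components to keep
    lam, V = np.linalg.eigh(K)                  # eigenvalues in ascending order
    lam, V = lam[::-1][:k], V[:, ::-1][:, :k]   # keep the k largest
    lam = np.clip(lam, 1e-12, None)             # guard against numerically tiny eigenvalues
    alpha = V / np.sqrt(lam)                    # alpha_{i,j} = ((1/sqrt(lambda_i)) v_i)_j
    return K_new @ alpha                        # rows are the k-dimensional projections P_U(phi(x))
```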

  6. Other Subspace Methods. Subspace Alignment as proposed by Fernando et al. [FHST13] cannot be used, since in kernel methods the projections must lie in the sample (kernel-defined sub)space. Hence, our projections must be expansions of the data samples. The cross kernel must be used to project all examples from both sources into the same Hilbert space. The approach by Zhang et al. [ZZW+13] via surrogate kernels might be applicable and will be investigated in the future.

  7. Efficiency. Kernel methods scale quadratically or even cubically in the number of examples. We want to select only those examples that are close to the invariant subspace. This reduces the size of the kernel matrix.

  8. Greedy Selection. Distance based (Shawe-Taylor et al. [STC04]):

x_{t+1} = \arg\min_{x \in S \setminus \{x_1, \cdots, x_t\}} \| P_{U_T}(\phi(x)) \|^2

Herding based (Chen et al. [CWS12]):

x_{t+1} = \arg\max_{x \in S \setminus \{x_1, \cdots, x_t\}} \langle w_t, \phi(x) \rangle, \qquad w_{t+1} = w_t + E_{p_T}[\phi(x)] - \phi(x_{t+1})

Iteratively add examples and project all data onto the spanned subspace. If the MMD between the different sources increases rapidly, stop. This will be further investigated in the future.
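A minimal sketch of the herding-based selection, evaluating \langle w_t, \phi(x) \rangle purely through kernel values and approximating E_{p_T}[\phi(x)] by the empirical mean embedding of the target sample. The interface (precomputed cross kernel K_ts and source kernel K_ss) is an assumption made for the example.

```python
import numpy as np

def herding_selection(K_ts, K_ss, m):
    # K_ts: (|T|, |S|) cross kernel between target and source examples
    # K_ss: (|S|, |S|) kernel matrix over the source examples
    # m:    number of source examples to select; returns their indices
    mu = K_ts.mean(axis=0)            # <mu_T, phi(x_j)> for every source point x_j
    score = mu.copy()                 # <w_0, phi(x_j)> with w_0 = mu_T
    selected = []
    for _ in range(m):
        score[selected] = -np.inf     # never pick the same example twice
        j = int(np.argmax(score))
        selected.append(j)
        # w_{t+1} = w_t + mu_T - phi(x_{t+1})  =>  update all scores accordingly
        score = score + mu - K_ss[j]
    return selected
```

The selected indices can then be used to build a smaller kernel matrix over the chosen source examples together with the target data, which is the size reduction motivated on the efficiency slide.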

  9. Experiments.

| Method | E→D | E→B | E→K | D→E | D→B | D→K |
|--------|------|------|------|------|------|------|
| kPCA   | 75.9 | 73.9 | 81.3 | 74   | 77.7 | 75   |
| KMM    | 68.7 | 70.7 | 81.8 | 70.7 | 74.3 | 74.1 |
| TCA    | 64.7 | 65.2 | 80.3 | 73.7 | 69.5 | 77.2 |
| kPCA+  | 74.2 | 72.1 | 80.6 | 73.2 | 76   | 74.4 |
| kPCAµ  | 74.9 | 68.4 | 81.2 | 70.6 | 76.2 | 72.5 |

| Method | B→E | B→D | B→K | K→E | K→D | K→B |
|--------|------|------|------|------|------|------|
| kPCA   | 71.9 | 77.5 | 72.7 | 84.4 | 79.8 | 76   |
| KMM    | 68   | 71.2 | 69.6 | 83.9 | 73.5 | 74.6 |
| TCA    | 73   | 69   | 73.8 | 76.7 | 67.8 | 63.7 |
| kPCA+  | 71.7 | 75.1 | 70.2 | 82.9 | 79   | 76.5 |
| kPCAµ  | 67.5 | 76.1 | 70.6 | 82.1 | 78   | 77.3 |

Table: Accuracies on the target domains using training data from different source domains (Source → Target). Methods: kernel PCA (kPCA), Kernel Mean Matching (KMM), TCA, distance-based selection (kPCA+), and kernel-herding-based selection (kPCAµ).

  10. Experiments. Figure: Results on the target data domain for the different categories. We compare random samples with our greedy selection strategy for sampling.

  11. Issues to be tackled in the future:
  - Choose U_T and U_{T \cup S'} w.r.t. the distribution of the eigenvalues of K_T and K_{T \cup S'}, respectively
  - Investigate which kernels to use
  - There are kernels for which E_{p_T}[\phi(x)] cannot be efficiently computed
  - Comparison to other (non-greedy) approaches (for instance Gong et al. [GGS13])
  - Investigation of stopping criteria
  - Further experiments including significance tests
  - Convergence bounds

  12. (Far) Future Work. Extension to multi-kernel settings.

  13. Questions?

  14. Questions? Thanks for your attention!

  15. References.

  Yutian Chen, Max Welling, and Alex J. Smola. Super-samples from kernel herding. CoRR, abs/1203.3472, 2012.

  Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, 2013.

  Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel method for the two-sample problem. CoRR, abs/0805.2368, 2008.

  Boqing Gong, Kristen Grauman, and Fei Sha. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In ICML, volume 28 of JMLR Proceedings, pages 222–230. JMLR.org, 2013.

  John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.

  Kai Zhang, Vincent Zheng, Qiaojun Wang, James Kwok, Qiang Yang, and Ivan Marsic. Covariate shift in Hilbert space: A solution via surrogate kernels. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 388–395. JMLR Workshop and Conference Proceedings, May 2013.
