  1. Codebook-free Single Gaussian for Image Classification Qilong Wang ( 王旗龙 ) Dalian University of Technology http://ice.dlut.edu.cn/PeihuaLi/

  2. Image Model for Classification: Scene, Object, Fine-grained, Texture, Face, … → Image Model

  3. Outline
  • Modeling Methods in Image Classification
  • Towards Effective Codebook-free Model
  • Robust Approximate Infinite Dimensional Gaussian
  • Future Work and Conclusion

  6. Modeling Methods in Image Classification
  Image Representation: extracting a set of (raw) features from a dense grid, then collecting the set of features to form the final representation.
  • Histogram (codebook)-based modeling methods
  • Codebook-free modeling methods

  7. Histogram-based Modeling Methods
  Pipeline: Image → (Local) Feature → Histogram-based Image Modeling → Representation
  • Color Histogram [IJCV 1991]: [R, G, B], [L, a, b], …
  • Gradient Histogram: gradient [I_x, I_y], GIST [IJCV 2001]
  • More effective, higher dimension: SIFT [IJCV 2004], HoG [CVPR 2006] → BoW-VQ methods
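As a concrete illustration of the histogram idea on slide 7, here is a minimal joint color-histogram sketch. The image array and bin count are stand-ins for illustration, not values from the slides:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and count joint
    occurrences, yielding a bins**3-dimensional global descriptor."""
    pixels = image.reshape(-1, 3)                                # (H*W, 3)
    levels = np.clip((pixels * bins / 256).astype(int), 0, bins - 1)
    idx = levels[:, 0] * bins * bins + levels[:, 1] * bins + levels[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()                                     # L1-normalize

# A random 32x32 "image" just to exercise the function.
img = np.random.randint(0, 256, (32, 32, 3))
h = color_histogram(img)
```

Whatever the image size, the descriptor length is fixed (here 8³ = 512 bins), which is exactly what makes histogram representations comparable across images.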

  8. Histogram of HD Local Feature – BoW
  Codebook matching: images with different-size sets of local features are mapped to fixed-length representations.
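The BoW step on this slide — matching local features against a codebook and counting assignments — can be sketched as follows. The codebook and feature matrices are random stand-ins; in practice the codebook would come from, e.g., k-means on SIFT descriptors:

```python
import numpy as np

def bow_histogram(features, codebook):
    """Hard vector quantization: assign each local feature to its nearest
    codeword, then count assignments into a fixed-length histogram."""
    # Squared Euclidean distances, shape (n_features, n_words).
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = d.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()          # fixed length, regardless of image size

rng = np.random.default_rng(0)
codebook = rng.normal(size=(100, 128))   # e.g. 100 k-means centers on SIFT
feats_a = rng.normal(size=(500, 128))    # image A: 500 local descriptors
feats_b = rng.normal(size=(900, 128))    # image B: 900 local descriptors
ha, hb = bow_histogram(feats_a, codebook), bow_histogram(feats_b, codebook)
# Both histograms have length 100, although A and B differ in size.
```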

  9. Limitations of BoW
  • The codebook brings quantization error. [Boiman et al. CVPR08]
  • Soft-assignment coding methods
    – Visual Word Ambiguity [PAMI10], SC [CVPR09], LLC [CVPR10], LSAC [ICCV11]
  • Dictionary enhancement
    – Huge dictionaries [PAMI15], GMM [IJCV13], affine subspaces [CVPR15], and DL
  • Use of first order and second order information
    – VLAD [CVPR10], SV [ECCV10], FV [IJCV13], E-VLAD [ECCV14], LASC [CVPR15]
  • An all-purpose codebook is unavailable.
    – It is difficult to handle online problems, e.g., an increasing number of classes.

  10. Usage of Codebook-free Model
  Without codebook matching: images with different-size sets of local features are mapped directly to fixed-length representations.

  12. Codebook-free Models
  • Single Model: Mean; Signature [IJCV 2000, ECCV 2006]; Covariance Matrix [ECCV 2012, PAMI 2015]; Single Gaussian [ICCV 2003, CVPR 2010]
  • Mixture Model: Gaussian Mixture Model [ICCV 2011, ICCV 2013]
  The above models underperformed the BoW model for image classification. Why? What can we do?

  13. Selection of Codebook-free Model
  • First order: Mean; Signature [IJCV 2000, ECCV 2006]
  • Second order: Covariance Matrix [ECCV 2012, PAMI 2015]
  • First order + second order: Single Gaussian [ICCV 2003, CVPR 2010]; Gaussian Mixture Model [ICCV 2011, ICCV 2013]
  Combining first and second order information brings better performance.

  14. Selection of Codebook-free Model
  • First order: Mean; Signature [IJCV 2000, ECCV 2006]
  • Second order: Covariance Matrix [ECCV 2012, PAMI 2015]
  • First order + second order: Single Gaussian [ICCV 2003, CVPR 2010]; Gaussian Mixture Model [ICCV 2011, ICCV 2013]
  However: 1. A cross-bin metric is needed. 2. These models have difficulty with high dimensional features.

  15. Codebook-free Single Gaussian for Image Modelling
  Image → Features → Gaussian
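The modeling step on this slide reduces to fitting one Gaussian per image from its local features; a minimal sketch (the feature matrix is random stand-in data):

```python
import numpy as np

def fit_gaussian(features):
    """Model an image's set of local features X (n x d) by a single
    Gaussian N(mu, Sigma) -- no codebook is needed."""
    mu = features.mean(axis=0)
    centered = features - mu
    sigma = centered.T @ centered / len(features)   # MLE covariance
    return mu, sigma

X = np.random.default_rng(1).normal(size=(200, 8))  # 200 local features, dim 8
mu, sigma = fit_gaussian(X)
```

The pair (mu, sigma) is the image's representation; its size depends only on the feature dimension d, not on how many features the image produced.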

  16. Outline
  • Modeling Methods in Image Classification
  • Towards Effective Codebook-free Model
  • Robust Approximate Infinite Dimensional Gaussian
  • Future Work and Conclusion

  17. Metric between Gaussians
  How can we compute the distance between Gaussians efficiently and effectively?
  • Ad-linear [CVPR 2010]: efficient but not effective
  • Ct-linear [CVPR 2010]: efficient but not effective
  • KL-divergence: effective but not efficient
  Our choice: map the manifold of Gaussians into the space of SPD matrices and use the Log-Euclidean metric on SPD matrices.
  Peihua Li, Qilong Wang, Lei Zhang. A Novel Earth Mover's Distance Methodology for Image Matching with Gaussian Mixture Models. ICCV, 2013.
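The Log-Euclidean metric named on this slide compares SPD matrices by the Frobenius distance between their matrix logarithms, which makes the SPD manifold behave like a flat vector space. A sketch via eigendecomposition (the toy matrices are illustrative):

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix,
    computed via eigendecomposition: log(S) = V diag(log w) V^T."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(S1, S2):
    """d(S1, S2) = || log(S1) - log(S2) ||_F -- after the log map,
    distances are ordinary Euclidean and therefore cheap."""
    return np.linalg.norm(logm_spd(S1) - logm_spd(S2))

A = np.diag([1.0, 2.0, 4.0])
B = np.eye(3)
d = log_euclidean_distance(A, B)
```

This gives the "efficient and effective" middle ground the slide is after: the expensive part (the matrix log) is computed once per image, and all pairwise comparisons afterwards are linear operations.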

  18. Pipeline of Proposed Method
  Image → Feature extraction → Image modeling (Gaussian) → Embedding → Compacting CLM → Classifier (e.g., SVM)
  1. Local (hand-crafted) feature extraction.
  2. Computing a Gaussian (mean $\mu$, covariance $\Sigma$) per image and matching Gaussians via the SPD embedding
     $\hat{G} = \begin{bmatrix} \Sigma + \beta^{2}\mu\mu^{T} & \beta\mu \\ \beta\mu^{T} & 1 \end{bmatrix}$,
     which yields the Codebookless Model (CLM).
  3. Compacting the CLM: joint learning of a low-rank transformation and the SVM classifier.
  Qilong Wang, Peihua Li, Wangmeng Zuo, and Lei Zhang. Towards effective codebookless model for image classification. Pattern Recognition, 2016 (in press).
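A sketch of the embedding step, assuming a Gaussian N(μ, Σ) is mapped to the (d+1)×(d+1) SPD matrix [[Σ + β²μμᵀ, βμ], [βμᵀ, 1]]; the balancing weight `beta` is an assumption of this sketch and not necessarily the exact parameterization of the cited paper:

```python
import numpy as np

def embed_gaussian(mu, sigma, beta=1.0):
    """Map N(mu, Sigma) to a (d+1) x (d+1) SPD matrix
        [[Sigma + beta^2 mu mu^T, beta mu],
         [beta mu^T,              1      ]]
    so Gaussians can be compared with SPD-matrix metrics."""
    d = len(mu)
    G = np.empty((d + 1, d + 1))
    G[:d, :d] = sigma + (beta ** 2) * np.outer(mu, mu)
    G[:d, d] = beta * mu
    G[d, :d] = beta * mu
    G[d, d] = 1.0
    return G

mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
G = embed_gaussian(mu, sigma)
# G is symmetric and positive definite whenever sigma is.
```

Positive definiteness follows from the congruence G = Lᵀ diag(Σ, 1) L with a unit-triangular L, which is why the Log-Euclidean machinery from the previous slide applies directly to G.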

  19. Comparison with the FV [IJCV13] Qilong Wang, Peihua Li, Wangmeng Zuo, and Lei Zhang. Towards effective codebookless model for image classification. Pattern Recognition, 2016 (in press).

  20. Effect of Local Features (classification accuracy, %)

  Method        | Caltech101 | Caltech256 | VOC2007 | CUB200-2011 | FMD      | KTH-TIPS-2b | Scene15  | Sports8
  FV + SIFT     | 80.8±0.3   | 47.4±0.1   | 61.8    | 25.8        | 58.3±1.0 | 69.3±1.0    | 88.1±0.2 | 91.3±1.3
  FV + eSIFT    | 83.7±0.3   | 50.1±0.3   | 60.8    | 27.3        | 58.9±1.7 | 71.3±3.1    | 89.4±0.2 | 90.4±1.2
  CLM + SIFT    | 84.9±0.1   | 48.9±0.2   | 55.8    | 18.6        | 51.6±1.2 | 71.8±3.1    | 88.1±0.4 | 88.8±1.0
  CLM + eSIFT   | 86.3±0.3   | 53.6±0.2   | 60.4    | 28.1        | 57.7±1.6 | 75.2±2.6    | 89.4±0.4 | 91.5±1.2
  CLM + L²ECM   | 82.5±0.3   | 48.6±0.3   | 56.6    | 19.1        | 62.4±1.5 | 72.2±3.3    | 88.3±0.6 | 88.3±1.3
  CLM + eL²ECM  | 84.7±0.2   | 53.2±0.1   | 61.7    | 28.6        | 64.2±1.0 | 73.6±2.6    | 89.2±0.5 | 90.7±0.7

  Peihua Li, Qilong Wang. Local log-Euclidean covariance matrix (L²ECM) for image representation and its applications. ECCV, 2012.
  Qilong Wang, Peihua Li, Wangmeng Zuo, and Lei Zhang. Towards effective codebookless model for image classification. Pattern Recognition, 2016 (in press).

  21. Comparison with counterparts (classification accuracy, %)

  Method                     | Scene15 | Sports8
  GG (ad-linear) [CVPR2010]  | 79.8    | 80.2
  GG (ct-linear) [CVPR2010]  | 82.3    | 82.9
  GG (KL-kernel) [CVPR2010]  | 86.1    | 84.4
  CLM (SIFT)                 | 88.1    | 88.8

  The metric between Gaussian models is very important.
  Qilong Wang, Peihua Li, Wangmeng Zuo, and Lei Zhang. Towards effective codebookless model for image classification. Pattern Recognition, 2016 (in press).

  22. Some key findings
  • Our work clearly shows that a single Gaussian is a very competitive alternative to the mainstream BoW model.
  • Compared with the BoW model, our method is more efficient and requires no dictionary; it also avoids the aforementioned limitations of BoW.
  • Our method is better suited to texture and material images.
  • More powerful local descriptors bring larger improvements for our method than for the BoW model.
  Qilong Wang, Peihua Li, Wangmeng Zuo, and Lei Zhang. Towards effective codebookless model for image classification. Pattern Recognition, 2016 (in press).

  23. Outline
  • Modeling Methods in Image Classification
  • Towards Effective Codebook-free Model
  • Robust Approximate Infinite Dimensional Gaussian
  • Future Work and Conclusion

  24. More Powerful Local Features
  • Features from deep Convolutional Neural Networks.
    – Fully-connected layer: MOP-CNN [ECCV 2014], SCFVC [NIPS 2014], …
    – Convolutional layer: SPP-Net [ECCV 2014], FV-CNN [CVPR 2015], …
  • Infinite dimensional descriptors can provide richer and more discriminative information than their low dimensional counterparts.
    – Mapping local features into an (approximated) RKHS: [CVPR 2014], [NIPS 2014], [ICASSP 2015]
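One standard way to realize an explicit mapping into an approximated RKHS is random Fourier features for the Gaussian (RBF) kernel; the sketch below is an illustrative choice, not necessarily the mapping used in the papers cited on this slide:

```python
import numpy as np

def random_fourier_features(X, out_dim=512, gamma=0.5, seed=0):
    """Explicit map z(x) such that z(x) . z(y) approximates the RBF
    kernel exp(-gamma * ||x - y||^2) (Rahimi & Recht, 2007)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, out_dim))
    b = rng.uniform(0, 2 * np.pi, size=out_dim)
    return np.sqrt(2.0 / out_dim) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(100, 64))  # stand-in CNN activations
Z = random_fourier_features(X)                       # finite-dim RKHS surrogate
# A Gaussian fitted on Z approximates the infinite dimensional Gaussian.
```

Because the map is explicit, a mean and covariance computed on Z stand in for the (intractable) infinite dimensional Gaussian in feature space, which is the role this step plays in the following slides.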

  25. Approximate Infinite Dimensional Gaussian
  Goal: computing an infinite dimensional Gaussian with features from a deep Convolutional Neural Network.

  27. Approximate Infinite Dimensional Gaussian
  Goal: computing an infinite dimensional Gaussian with features from a deep Convolutional Neural Network.
  Our solution: two explicit feature mappings: (1) (2)

  29. Robust Estimation of Approximate Infinite Dimensional Gaussian
  Problem: we face the estimation of a covariance matrix in high dimensions from a small number of samples. It is well known that conventional Maximum Likelihood Estimation (MLE) is not robust in this setting.
  Classical MLE: $\hat{\Sigma} = \frac{1}{N}\sum_{i=1}^{N}(\mathbf{x}_i - \boldsymbol{\mu})(\mathbf{x}_i - \boldsymbol{\mu})^{T}$, where $\boldsymbol{\mu} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i$.
  Qilong Wang, Peihua Li, Wangmeng Zuo, and Lei Zhang. RAID-G: Robust Estimation of Approximate Infinite Dimensional Gaussian with Application to Material Recognition. CVPR, 2016.
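The classical MLE breaks down when the feature dimension exceeds the number of samples: the estimated covariance becomes singular. A common robust fix is shrinkage toward a scaled identity, sketched here purely as an illustrative alternative — RAID-G itself proposes different regularized estimators:

```python
import numpy as np

def mle_covariance(X):
    """Classical MLE: Sigma_hat = (1/N) sum_i (x_i - mu)(x_i - mu)^T."""
    mu = X.mean(axis=0)
    C = X - mu
    return C.T @ C / len(X)

def shrinkage_covariance(X, alpha=0.1):
    """Blend the MLE with a scaled identity; the result is always
    positive definite, even when N < d (small-sample regime)."""
    S = mle_covariance(X)
    d = S.shape[0]
    return (1 - alpha) * S + alpha * np.trace(S) / d * np.eye(d)

X = np.random.default_rng(0).normal(size=(20, 50))  # N=20 samples, d=50 dims
S_mle = mle_covariance(X)        # rank at most 19: singular, non-invertible
S_rob = shrinkage_covariance(X)  # full rank, safe to invert or log-map
```

This illustrates the failure mode the slide describes: with N < d the MLE cannot even support the SPD operations (inverse, matrix log) the earlier pipeline relies on, which is what motivates robust estimation.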
