  1. Deep Visual Learning on Hypersphere. Weiyang Liu*, Zhen Liu*. College of Computing, Georgia Institute of Technology.

  2. Outline • Why Learning on Hypersphere • Loss Design - Large-Margin Learning on Hypersphere • Convolution Operator - Deep Hyperspherical Learning and Decoupled Networks • Weight Regularization - Minimum Hyperspherical Energy for Regularizing Neural Networks • Conclusion

  3. Outline • Why Learning on Hypersphere • Loss Design - Large-Margin Learning on Hypersphere • Convolution Operator - Deep Hyperspherical Learning and Decoupled Networks • Weight Regularization - Minimum Hyperspherical Energy for Regularizing Neural Networks • Conclusion

  4. Why Learning on Hypersphere • An empirical observation • Set the output feature dimension to 2 in a CNN • Directly visualize the features without using t-SNE. Deep features are naturally distributed over a sphere!

  5. Why Learning on Hypersphere • Euclidean distance is not well suited to high-dimensional data. More specifically, in high-dimensional space vectors tend to be nearly orthogonal to each other, so the cross term vanishes and the squared distance reduces to the sum of the two squared norms.
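
The identity the slide truncates can be filled in with standard linear algebra (a reconstruction, not copied from the deck):

```latex
% Squared Euclidean distance split into norms and an inner product:
\|x - y\|_2^2 = \|x\|_2^2 + \|y\|_2^2 - 2\langle x, y \rangle
% In high dimensions, x and y tend to be nearly orthogonal, so the cross term
% is negligible and the distance is dominated by the magnitudes alone:
\|x - y\|_2^2 \approx \|x\|_2^2 + \|y\|_2^2 \quad \text{when } \langle x, y \rangle \approx 0.
```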

  6. Why Learning on Hypersphere • Learning features on the hypersphere effectively regularizes the feature space. In deep metric learning, features are typically normalized before entering the loss function. Schroff et al., FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR 2015

  7. Outline • Why Learning on Hypersphere • Loss Design - Large-Margin Learning on Hypersphere • Convolution Operator - Deep Hyperspherical Learning and Decoupled Networks • Weight Regularization - Minimum Hyperspherical Energy for Regularizing Neural Networks • Conclusion

  8. Large-Margin Learning on Hypersphere • A standard CNN usually uses the softmax loss as its learning objective. How can a margin be incorporated on the hypersphere?
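
For reference, a sketch of the softmax (cross-entropy) loss this slide refers to, with the logit rewritten in its angular form; the notation is standard rather than copied from the deck:

```latex
% Softmax cross-entropy loss for a sample x_i with label y_i:
L_i = -\log \frac{e^{W_{y_i}^\top x_i + b_{y_i}}}{\sum_j e^{W_j^\top x_i + b_j}},
\qquad
W_j^\top x_i = \|W_j\| \, \|x_i\| \cos\theta_{j,i}.
% The decision therefore depends on the weight norms and on the angle
% \theta_{j,i} between the feature x_i and the classifier weight W_j,
% which is what makes an angular (hyperspherical) margin natural.
```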

  9. Large-Margin Learning on Hypersphere • The intuition (from binary classification): if x belongs to class 1, the original softmax only requires the class-1 score to exceed the class-2 score. We want to make the classification more rigorous in order to produce a decision margin; the two conditions are sketched below.
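
A sketch of the two conditions in the simplified setting SphereFace uses (classifier weights normalized to unit norm, biases set to zero); the general large-margin softmax keeps the weight norms, but the idea is the same:

```latex
% Original softmax decision rule for class 1 (binary case):
\cos\theta_1 > \cos\theta_2
% Margin version: class 1 must still win after its angle is multiplied by
% an integer m > 1, which carves an angular margin between the classes:
\cos(m\,\theta_1) > \cos\theta_2, \qquad \theta_1 \in \big[0, \tfrac{\pi}{m}\big].
```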

  10. Large-Margin Learning on Hypersphere • Original softmax loss → (imposing a large margin) → large-margin softmax loss → (normalizing the classifier weights) → angular softmax loss
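
For completeness, a sketch of the angular softmax (A-Softmax) loss as defined in the SphereFace paper, i.e. the last step of the diagram, with ‖W_j‖ = 1 and b_j = 0 after weight normalization:

```latex
L_{\text{ang}} = \frac{1}{N}\sum_i
  -\log \frac{e^{\|x_i\|\,\psi(\theta_{y_i,i})}}
             {e^{\|x_i\|\,\psi(\theta_{y_i,i})} + \sum_{j \neq y_i} e^{\|x_i\|\cos\theta_{j,i}}},
% where \psi extends \cos(m\theta) monotonically over [0, \pi]:
\psi(\theta) = (-1)^k \cos(m\theta) - 2k,
\quad \theta \in \Big[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\Big],\; k \in \{0, \dots, m-1\}.
```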

  11. Learned Feature Visualization • 2D feature visualization on MNIST • 3D feature visualization on the CASIA face dataset (figure panels: m = 1, m = 2, m = 3, m = 4)

  12. Experimental Results • Face verification on the LFW and YTF datasets. SphereFace uses the angular large-margin softmax loss and achieves state-of-the-art performance with only 0.5M training images.

  13. Experimental Results • Million-scale face recognition: the MegaFace Challenge. SphereFace ranked No. 1 from December 2016 to April 2017, and the current No. 1 entry is also developed based on SphereFace.

  14. Demo

  15. Outline • Why Learning on Hypersphere • Loss Design - Large-Margin Learning on Hypersphere • Convolution Operator - Deep Hyperspherical Learning and Decoupled Networks • Weight Regularization - Minimum Hyperspherical Energy for Regularizing Neural Networks • Conclusion

  16. SphereNet • Traditional convolution • Hyperspherical convolution (SphereConv): SphereConv normalizes each local patch of a feature map and each weight vector, so the response depends only on the angle between them.
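
A minimal sketch of what a cosine SphereConv could look like in PyTorch. The module name `SphereConv2d` and the choice of the cosine angular function are assumptions for illustration; this is not the authors' released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphereConv2d(nn.Module):
    """Cosine SphereConv sketch: the response is the cosine of the angle between
    each input patch and each filter, ignoring both of their magnitudes."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.stride, self.padding, self.eps = stride, padding, eps

    def forward(self, x):
        # Normalize each filter to unit norm.
        w = F.normalize(self.weight.flatten(1), dim=1).view_as(self.weight)
        # <w_hat, patch> for every sliding window.
        dot = F.conv2d(x, w, stride=self.stride, padding=self.padding)
        # ||patch|| for every sliding window, via a convolution with a ones kernel.
        ones = torch.ones(1, *self.weight.shape[1:], device=x.device, dtype=x.dtype)
        patch_norm = F.conv2d(x * x, ones, stride=self.stride,
                              padding=self.padding).clamp_min(self.eps).sqrt()
        # cos(theta) between the normalized filter and the normalized patch.
        return dot / patch_norm
```

Such a layer drops in where `nn.Conv2d` would go, e.g. `SphereConv2d(64, 128, 3, padding=1)`; other angular functions can be obtained by transforming `torch.acos(out.clamp(-1, 1))`.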

  17. SphereNet - Intuition from the Fourier Transform • Semantic information is mostly preserved when the magnitude spectrum is corrupted, but is largely destroyed when the phase (the angular information) is corrupted.
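
A small NumPy sketch of the classic experiment behind this intuition: reconstructing an image from its phase alone keeps the structure recognizable, while reconstructing from its magnitude alone does not. The random array stands in for an image and is only a placeholder:

```python
import numpy as np

def phase_vs_magnitude(img):
    """Reconstruct an image keeping only the phase or only the magnitude of its FFT."""
    spectrum = np.fft.fft2(img)
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Keep the phase, flatten the magnitude to 1: structure survives.
    phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))
    # Keep the magnitude, discard the phase: structure is destroyed.
    magnitude_only = np.real(np.fft.ifft2(magnitude))
    return phase_only, magnitude_only

phase_img, mag_img = phase_vs_magnitude(np.random.rand(64, 64))
```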

  18. Decoupled Convolution • Observation: the final feature is naturally decoupled, where the magnitude represents the intra-class variation.

  19. Decoupled Convolution • General framework - decoupled convolution: magnitude (intra-class variation) and angle (semantic difference) are handled separately • Decoupling the angle and magnitude of feature vectors • Allowing different designs of convolution operators for different tasks
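
The general form of the decoupled operator from the Decoupled Networks paper, i.e. the inner product rewritten as a magnitude term times an angular term that can then be chosen independently:

```latex
% Ordinary convolution / inner product:
\langle w, x \rangle = \|w\| \, \|x\| \cos\theta_{(w,x)}
% Decoupled operator: pick the magnitude function h and the angular
% function g separately instead of being locked to the product above.
f_d(w, x) = h\big(\|w\|, \|x\|\big) \cdot g\big(\theta_{(w,x)}\big)
```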

  20. Example Choices of Magnitude • SphereConv • BallConv • TanhConv • LinearConv

  21. Example Choices of Angle • Linear • Cosine • Squared Cosine (see the sketch below)
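
A sketch of how these magnitude and angular functions might look as plain Python functions, following the forms given in the Decoupled Networks paper; `alpha` is a scale hyperparameter and `rho` an operator radius, and the exact parameterizations in the released code may differ:

```python
import math

# Magnitude functions h(||x||): alpha is a scale, rho an operator radius.
def sphere_conv(x_norm, alpha=1.0, rho=1.0):
    return alpha                             # magnitude ignored entirely

def ball_conv(x_norm, alpha=1.0, rho=1.0):
    return alpha * min(x_norm, rho) / rho    # saturates outside the ball

def tanh_conv(x_norm, alpha=1.0, rho=1.0):
    return alpha * math.tanh(x_norm / rho)   # smooth, bounded variant

def linear_conv(x_norm, alpha=1.0, rho=1.0):
    return alpha * x_norm                    # keeps the usual linear dependence

# Angular functions g(theta), theta in [0, pi].
def g_linear(theta):
    return -2.0 * theta / math.pi + 1.0

def g_cosine(theta):
    return math.cos(theta)

def g_squared_cosine(theta):
    return math.copysign(math.cos(theta) ** 2, math.cos(theta))

# Decoupled response: f(w, x) = h(||x||) * g(theta).
def decoupled_response(x_norm, theta, h=tanh_conv, g=g_cosine):
    return h(x_norm) * g(theta)
```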

  22. Generalization • With SphereConv, the top-1 accuracy of CNNs on ImageNet can be improved by ~1%. Top-1 accuracy (center crop) of the baseline and SphereNet on ImageNet:
                 Plain-CNN-9   Plain-CNN-12   ResNet-27
      Baseline      58.31         61.42         65.54
      SphereNet     59.23         62.27         66.49
  * Differences from the original NeurIPS paper: 1) In ResNet, we use a fully connected layer instead of average pooling to obtain the final feature; we found this to be crucial for SphereNet. 2) We add L2 weight decay, which slows down the optimization but results in better performance.

  23. Adversarial Robustness and Optimization * Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Mądry.

  24. Optimization Without BatchNorm • Without BatchNorm, decoupled convolutions outperform the baseline. • The bounded TanhConv can still be optimized, while unbounded operators fail. Accuracies of different convolution operators on Plain-CNN-9 without BatchNorm; N/C indicates 'not converged'.

  25. Adversarial Robustness • Bounded convolution operators have better robustness against both the fast gradient sign method (FGSM) attack and the multi-step version of FGSM, under both natural training and adversarial training.

  26. Adversarial Robustness • Attacking decoupled convolutions with bounded magnitude requires perturbations of larger norm. L2 and L∞ norms needed to attack the models on test-set samples.

  27. Outline • Why Learning on Hypersphere • Loss Design - Large-Margin Learning on Hypersphere • Convolution Operator - Deep Hyperspherical Learning and Decoupled Networks • Weight Regularization - Minimum Hyperspherical Energy for Regularizing Neural Networks • Conclusion

  28. Minimum Hyperspherical Energy • Intuition: less redundancy → more diversity of neurons → better generalization. Paper [1] shows that, in a one-hidden-layer network, maximizing diversity can eliminate spurious local minima. If two weight vectors in one layer are close to each other, there is probably more redundancy. [1] Bo Xie, Yingyu Liang, and Le Song. Diverse neural network learns true target functions. arXiv preprint arXiv:1611.03131, 2016.

  29. Minimum Hyperspherical Energy • Proposed regularization: add repulsion forces between every pair of weight vectors (within one layer). It connects to the Thomson problem: finding the minimum-energy configuration of electrons on a sphere.

  30. Minimum Hyperspherical Energy • Loss function: the hyperspherical energy of the normalized weight vectors, summed over all pairs (sketched below). This optimization problem is generally non-trivial; with s = 2, the problem is actually NP-hard.
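
A minimal PyTorch sketch of the hyperspherical energy used as a layer-wise regularizer, following the form E_s = Σ_{i<j} ‖ŵ_i - ŵ_j‖^{-s} over unit-normalized weight vectors (with a logarithmic kernel at s = 0); the function name and the loss weighting in the usage comment are placeholders, not the released code:

```python
import torch
import torch.nn.functional as F

def hyperspherical_energy(weight, s=2, eps=1e-6):
    """MHE regularizer for one layer.

    weight: (num_neurons, dim) tensor, one weight vector per row.
    s:      power of the repulsion kernel (s > 0), or 0 for the log kernel.
    """
    w = F.normalize(weight, dim=1)                  # project neurons onto the unit sphere
    dist = torch.cdist(w, w) + eps                  # pairwise Euclidean distances
    i, j = torch.triu_indices(len(w), len(w), offset=1)
    pair_dist = dist[i, j]                          # each unordered pair counted once
    if s == 0:
        return (-pair_dist.log()).sum()             # logarithmic-kernel energy
    return pair_dist.pow(-s).sum()                  # power-kernel energy, e.g. s = 2

# Example: regularize a conv layer's filters (the 1e-2 weighting is a placeholder).
# conv = torch.nn.Conv2d(64, 128, 3)
# loss = task_loss + 1e-2 * hyperspherical_energy(conv.weight.flatten(1))
```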

  31. Minimum Hyperspherical Energy • Although the orthonormality loss looks similar, it does not yield an ideal configuration of the weights even in the 3D case.

  32. Minimum Hyperspherical Energy • The MHE loss is compatible with weight decay: - MHE regularizes the angles of the weights - Weight decay regularizes the magnitudes of the weights

  33. Minimum Hyperspherical Energy • Collinearity issue: in this toy example, optimizing the original MHE results in collinear weight vectors (antipodal vectors minimize the energy yet act as redundant filters). • Half-space MHE: optimize the pairwise angles between lines instead of vectors, as sketched below.
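
A small variant of the energy sketch above, following this slide's description of half-space MHE as repulsion between lines rather than vectors: the absolute cosine identifies w with -w, so antipodal filters no longer look well separated. This follows the slide's wording; the paper's exact formulation may differ in details:

```python
import torch
import torch.nn.functional as F

def half_space_energy(weight, s=2, eps=1e-6):
    """Half-space MHE sketch: repel pairwise *lines*, so w and -w coincide."""
    w = F.normalize(weight, dim=1)
    cos = (w @ w.t()).clamp(-1 + eps, 1 - eps).abs()   # |cos| identifies w with -w
    theta = torch.acos(cos)                            # angle between lines, in [0, pi/2]
    i, j = torch.triu_indices(len(w), len(w), offset=1)
    return theta[i, j].clamp_min(eps).pow(-s).sum()    # angular repulsion energy
```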

  34. MHE - Ablation Study • MHE on a 9-layer plain CNN on the CIFAR-10/100 datasets.

  35. MHE - Ablation Study • MHE consistently improves the performance of the networks. • In cases where the network is hard to optimize due to neuron redundancy (small width / large depth), MHE helps more. MHE with different network depths on CIFAR-100.

  36. MHE - Ablation Study • MHE consistently improves the performance of the networks. • In cases where the network is hard to optimize due to neuron redundancy (small width / large depth), MHE helps more. MHE with different network widths on CIFAR-100.

  37. MHE Application - Image Recognition • MHE can improve the performance of networks on ImageNet. Top-1 error (center crop) of models on ImageNet.

  38. MHE Application - Face Recognition • We add the MHE loss to the angular softmax loss in SphereFace and call the resulting model SphereFace+. Synergy: • Angular softmax loss - intra-class compactness • MHE loss - inter-class separability.

  39. MHE Application - Face Recognition • Comparison between SphereFace and SphereFace+. • Comparison with state-of-the-art results.

  40. MHE Application - Class Imbalanced Recognition • Applying MHE to the final classifier enforces the prior that all categories have the same importance and thus improves performance. Results on class-imbalanced recognition on CIFAR-10. * Single - reduce the number of samples in only one category by 90%. Multiple - reduce the number of samples in multiple categories with different weights; details are given in the paper.

  41. MHE Application - Class Imbalanced Recognition • The category with fewer samples tends to be ignored. Visualization of the final CNN features.

  42. MHE Application - GAN • With MHE added to the discriminator, the Inception score of the spectral GAN can be improved from 7.42 to 7.68.

  43. Outline • Why Learning on Hypersphere • Loss Design - Large-Margin Learning on Hypersphere • Convolution Operator - Deep Hyperspherical Learning and Decoupled Networks • Weight Regularization - Minimum Hyperspherical Energy for Regularizing Neural Networks • Conclusion
