

  1. Do et al. (ECCV 2016), “Learning to Hash with Binary Deep Neural Network” 20183385 Huisu Yun 27 November 2018 CS688 Fall 2018 Student Presentation

  2. Review: Song et al. (BMVC 2016) • Fine-grained sketch-based image retrieval (SBIR) – Retrieval by fine-grained ranking (main task): triplet ranking – Attribute prediction (auxiliary task): predict semantic attributes that belong to sketches and images – Attribute-level ranking (another auxiliary task): compare attributes. Image reproduced from Song et al. 2016, “Deep multi-task attribute-driven ranking for fine-grained sketch-based image retrieval”

  3. Motivation • Raw feature vectors are very long (cf. PA2) – ...which is why we want to use specialized binary codes • Binary codes for image search (cf. lecture slides) – ...should be of reasonable length – ...and provide a faithful representation • Important criteria – Independence: bits should be independent of each other – Balance: each bit should divide the dataset into equal halves
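A minimal numpy sketch of how these two criteria can be checked on a code matrix B with entries in {−1, +1}; the measures below are illustrative, not taken from any of the cited papers:

```python
import numpy as np

# Illustrative check of the two criteria on a code matrix B in {-1, +1}^(m x L).
def balance_and_independence(B):
    m, L = B.shape
    # Balance: each bit should split the dataset into halves, so column means should be near 0.
    balance_gap = np.abs(B.mean(axis=0))                 # one value per bit; 0 = perfectly balanced
    # Independence: bits should be uncorrelated, so (1/m) * B^T B should be near the identity.
    independence_gap = np.linalg.norm(B.T @ B / m - np.eye(L))
    return balance_gap, independence_gap

B = np.sign(np.random.default_rng(0).standard_normal((1000, 16)))
print(balance_and_independence(B))
```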

  4. Background: Supervised codes (1/3) • Liu et al. (CVPR 2016): pairwise supervision – Pairwise loss function: similar images get similar codes, dissimilar images get different codes (Hamming distance approximated using Euclidean distance) – Regularization pushes outputs toward +1 or −1. Image reproduced from Liu et al. 2016, “Deep supervised hashing for fast image retrieval”
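A hedged sketch of a pairwise loss in this spirit; the margin and regularization weight are made-up values, and Euclidean distance stands in for Hamming distance as on the slide:

```python
import numpy as np

# Pairwise hashing loss in the spirit of Liu et al. (CVPR 2016): similar pairs are pulled
# together, dissimilar pairs are pushed beyond a margin, and a regularizer pulls the
# real-valued outputs toward {-1, +1}. Margin and weight values are illustrative.
def pairwise_loss(h1, h2, similar, margin=4.0, reg_weight=0.01):
    d2 = np.sum((h1 - h2) ** 2)                      # Euclidean stand-in for Hamming distance
    if similar:
        fit = 0.5 * d2                               # similar images -> similar codes
    else:
        fit = 0.5 * max(margin - d2, 0.0)            # dissimilar images -> codes at least `margin` apart
    reg = np.sum(np.abs(np.abs(h1) - 1)) + np.sum(np.abs(np.abs(h2) - 1))  # push entries toward +/-1
    return fit + reg_weight * reg

h1 = np.array([0.9, -0.8, 0.7, -1.1])
h2 = np.array([0.8, -0.9, 0.6, -0.9])
print(pairwise_loss(h1, h2, similar=True))
```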

  5. Background: Supervised codes (2/3) • Lai et al. (CVPR 2015): triplet supervision – Triplet ranking loss. Image reproduced from Lai et al. 2015, “Simultaneous feature learning and hash coding with deep neural networks”
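A minimal sketch of a triplet ranking loss of this general form (the margin value is chosen for illustration):

```python
import numpy as np

# Triplet ranking loss: the anchor code should be closer to the positive than to the
# negative by at least a margin. The squared Euclidean distance is used as a relaxation.
def triplet_ranking_loss(h_anchor, h_pos, h_neg, margin=1.0):
    d_pos = np.sum((h_anchor - h_pos) ** 2)
    d_neg = np.sum((h_anchor - h_neg) ** 2)
    return max(0.0, margin + d_pos - d_neg)          # zero once the positive is sufficiently closer

print(triplet_ranking_loss(np.array([1.0, -1.0]), np.array([0.9, -0.8]), np.array([-1.0, 1.0])))
```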

  6. Background: Supervised codes (3/3) • Jain et al. (ICCV 2017): point-wise supervision, quantized output. Image reproduced from Jain et al. 2017, “SuBiC: A supervised, structured binary code for image search”
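A rough sketch of the structured-quantization idea: the descriptor is split into blocks and each block is reduced to a one-hot vector at test time; the training-time softmax and entropy-based losses of SuBiC are omitted here, so this is only the quantization step:

```python
import numpy as np

# Block-wise one-hot quantization: split the feature into equal blocks and keep only the
# strongest entry per block, yielding a sparse, structured binary code.
def block_one_hot(features, num_blocks):
    blocks = np.split(features, num_blocks)          # requires len(features) % num_blocks == 0
    code = np.zeros_like(features)
    for i, blk in enumerate(blocks):
        code[i * len(blk) + np.argmax(blk)] = 1.0    # one-hot within each block
    return code

print(block_one_hot(np.array([0.2, 0.9, 0.1, 0.4, 0.3, 0.8]), num_blocks=2))
```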

  7. Background: Deep Hashing • Liong et al. (CVPR 2015) – Fully connected layers – The binary hash code B is constructed from the output of the last layer, H^(n), as B = sgn(H^(n)) – Note that “binary” means ±1 here – Loss terms: quantization loss, balance loss, independence loss, regularization loss
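A small sketch computing the three data-dependent loss terms listed above from a real-valued last-layer output H; the term weights and the weight-decay regularizer are omitted, and the exact normalizations are illustrative:

```python
import numpy as np

# Data-dependent losses for a last-layer output H of shape (m, L).
def deep_hashing_losses(H):
    m, L = H.shape
    B = np.sign(H)                                   # binary code, entries in {-1, +1}
    quantization = np.sum((B - H) ** 2)              # output should already be nearly binary
    balance = np.sum(H.mean(axis=0) ** 2)            # each bit should split the data in half
    independence = np.linalg.norm(H.T @ H / m - np.eye(L)) ** 2  # bits should be decorrelated
    return quantization, balance, independence

rng = np.random.default_rng(0)
print(deep_hashing_losses(rng.standard_normal((100, 8))))
```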

  8. Introduction • Binary Deep Neural Network (BDNN) – Real binary codes (how?) – Real independence loss (not relaxed/approximated) – Real balance loss (again, not relaxed/approximated) – Reconstruction loss (like autoencoders!) • Unsupervised (UH-) and supervised (SH-) variants

  9. Overview • “Unsupervised Hashing with BDNN (UH-BDNN)” – Sigmoid activation for layers 1 through n−2 – Identity activation for layers n−1 and n. Image reproduced from Do et al. 2016, “Learning to Hash with Binary Deep Neural Network”
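A toy forward pass following the activation pattern on this slide; the layer sizes and initialization are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass with sigmoid activations for the early layers and identity activations for
# the last two layers (the code layer and the reconstruction layer).
def uh_bdnn_forward(x, weights, biases):
    h, n = x, len(weights)
    code_layer = None
    for i, (W, c) in enumerate(zip(weights, biases)):
        z = W @ h + c
        h = z if i >= n - 2 else sigmoid(z)          # identity only for the last two layers
        if i == n - 2:
            code_layer = h                           # H^(n-1): real-valued proxy for the code B
    return code_layer, h                             # (code-layer output, reconstruction)

rng = np.random.default_rng(0)
sizes = [64, 32, 8, 64]                              # input -> hidden -> code (L = 8) -> reconstruction
weights = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.1 for i in range(3)]
biases = [np.zeros(sizes[i + 1]) for i in range(3)]
code, recon = uh_bdnn_forward(rng.standard_normal(64), weights, biases)
```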

  10. Optimization • Alternating optimization with respect to (W, c) and B – Network parameters (weights W(·), biases c(·)) updated with L-BFGS – Binary codes B updated with discrete cyclic coordinate descent • Note that, ideally, H^(n−1) should be equal to B • Loss terms: reconstruction loss, regularization loss, equality loss, independence loss, balance loss
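A skeleton of the alternating scheme; here a single linear map stands in for the network, plain gradient steps stand in for L-BFGS, and a simple sign update stands in for the discrete cyclic coordinate descent on B, so only the alternation structure matches the slide:

```python
import numpy as np

# Alternate between (1) fitting the "network" so its output matches the current binary codes
# (the equality loss) and (2) re-assigning the binary codes given the current network output.
def alternating_hashing(X, code_len, iters=20, lr=1e-3):
    m, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, code_len)) * 0.01     # stand-in for the network parameters (W, c)
    B = np.sign(rng.standard_normal((m, code_len)))   # binary codes in {-1, +1}
    for _ in range(iters):
        # Step 1: fix B, update the network (gradient steps stand in for L-BFGS).
        for _ in range(10):
            H = X @ W
            W -= lr * (X.T @ (H - B)) / m
        # Step 2: fix the network, update B (simplified stand-in for the discrete solver).
        B = np.sign(X @ W)
        B[B == 0] = 1
    return W, B

X = np.random.default_rng(1).standard_normal((200, 32))
W, B = alternating_hashing(X, code_len=8)
```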

  11. Deep Hashing vs. UH-BDNN – Deep Hashing: quantization loss, balance loss, independence loss, regularization loss – UH-BDNN: reconstruction loss, regularization loss, equality loss, independence loss, balance loss

  12. Using class labels • “Supervised Hashing with BDNN (SH-BDNN)” – No reconstruction layer – Uses a pairwise label matrix S • Hamming distance between binary codes should correlate with the pairwise label matrix S • Loss terms: classification loss, regularization loss, equality loss, independence loss, balance loss
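A sketch of the supervised signal: S is assumed here to be +1 for same-class pairs and −1 otherwise, and the code similarities (1/L)·H·Hᵀ are pushed toward S so that Hamming distances follow the labels; the exact normalization is illustrative:

```python
import numpy as np

# Build the pairwise label matrix from class labels (+1 same class, -1 otherwise).
def pairwise_label_matrix(labels):
    labels = np.asarray(labels)
    return np.where(labels[:, None] == labels[None, :], 1.0, -1.0)

# Ask code similarities to match the label matrix; small similarity = large Hamming distance.
def classification_loss(H, S):
    m, L = H.shape
    return np.sum((H @ H.T / L - S) ** 2) / (2 * m * m)

labels = [0, 0, 1, 1, 2]
H = np.sign(np.random.default_rng(0).standard_normal((5, 16)))
print(classification_loss(H, pairwise_label_matrix(labels)))
```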

  13. Results (1/2) Image reproduced from Do et al. 2016, “Learning to Hash with Binary Deep Neural Network”

  14. Results (2/2) Image reproduced from Do et al. 2016, “Learning to Hash with Binary Deep Neural Network”

  15. Discussion • The ability to produce both unsupervised and supervised binary codes with nearly identical architectures would make the framework useful for many applications • The optimization algorithms used in BDNN (L-BFGS in particular) do not fully exploit the parallelism available on modern machines, which may leave computing resources underutilized
