

  1. ICML 2020. On Learning Sets of Symmetric Elements. Haggai Maron [1], Or Litany [2], Gal Chechik [1,3], Ethan Fetaya [3]. [1] NVIDIA Research, [2] Stanford University, [3] Bar-Ilan University

  2. Motivation and Overview

  3. Set symmetry. Previous work (DeepSets, PointNet) targeted training a deep network over sets. (figure: an input set {x_1, x_2, …, x_m} fed into a Deep Net)

  4. Set + element symmetry. Both the set and its elements have symmetries. (figure: an input set of images fed into a Deep Net) Main challenge: what architecture is optimal when the elements of the set have their own symmetries?

  5. Deep symmetric sets. (figure: an input image set mapped to an output)

  6-8. Set symmetry: order invariance/equivariance. (figure build: two orderings of the same input set are equal as sets, so the output should be unchanged, or consistently reordered, under permutation)

  9-11. Element symmetry: translation invariance/equivariance. (figure build: translating each element, e.g. circularly shifting an image, should leave the output unchanged or translate it accordingly)

  12. Applications. Modalities: 1D signals, 2D images, 3D point clouds, graphs.

  13. This paper. A principled approach for learning sets of complex elements (graphs, point clouds, images). We characterize the maximally expressive linear layers that respect the symmetries (DSS layers), prove universality results, and experimentally demonstrate that DSS networks outperform baselines.

  14. Previous work

  15-18. Deep Sets [Zaheer et al. 2017]. (figure build: a Siamese CNN is applied to each set element, and the per-element features are then aggregated by a Deep Sets block)
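
A minimal PyTorch sketch of this pattern, i.e. a shared CNN encoder followed by permutation-invariant sum pooling. The module name SiameseDeepSets and all sizes are illustrative assumptions, not taken from the paper:

    import torch
    import torch.nn as nn

    class SiameseDeepSets(nn.Module):
        # Siamese CNN per element + permutation-invariant Deep Sets pooling.
        def __init__(self, in_ch=3, feat_dim=64, out_dim=10):
            super().__init__()
            self.cnn = nn.Sequential(                      # shared across all elements
                nn.Conv2d(in_ch, feat_dim, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(feat_dim, out_dim)       # applied after pooling

        def forward(self, x):                  # x: (batch, n, C, H, W)
            b, n = x.shape[:2]
            feats = self.cnn(x.flatten(0, 1))  # encode every image with the same CNN
            return self.head(feats.view(b, n, -1).sum(dim=1))  # order-invariant sum

    # usage: a batch of 2 sets, each with n = 5 RGB 32x32 images
    out = SiameseDeepSets()(torch.randn(2, 5, 3, 32, 32))  # -> shape (2, 10)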

  19. Previous work: information sharing. Information-sharing layers between the Siamese CNN branches: Aittala and Durand, ECCV 2018; Sridhar et al., NeurIPS 2019; Liu et al., ICCV 2019.

  20. Our approach

  21. Invariance. Many learning tasks are invariant to natural transformations (symmetries). More formally: let H ≤ S_n be a subgroup. f : ℝ^n → ℝ is invariant if f(τ · x) = f(x) for all τ ∈ H. E.g. image classification: f outputs "Cat" for both the original and the transformed image.

  22. Equivariance. Let H ≤ S_n be a subgroup. f : ℝ^n → ℝ^n is equivariant if f(τ · x) = τ · f(x) for all τ ∈ H. E.g. edge detection: shifting the image shifts the edge map.
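
As a concrete sanity check of the two definitions (an illustrative snippet, not from the slides), with H = S_5 acting by permutation: summation is invariant, while elementwise ReLU is equivariant:

    import torch

    x = torch.randn(5)
    tau = torch.randperm(5)                       # a random tau in S_5
    # Invariance: f(tau . x) = f(x), for f = sum
    assert torch.allclose(x[tau].sum(), x.sum())
    # Equivariance: f(tau . x) = tau . f(x), for f = elementwise ReLU
    assert torch.allclose(torch.relu(x[tau]), torch.relu(x)[tau])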

  23. Invariant neural networks. Invariant by construction: a composition of equivariant layers, followed by an invariant layer and fully connected (FC) layers.

  24. Deep symmetric sets. Elements x_1, …, x_n ∈ ℝ^d with symmetry group G ≤ S_d. We want to be invariant/equivariant both to G and to the ordering. Formally, the symmetry group is H = S_n × G ≤ S_{nd}.
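
An illustrative snippet of this product action on ℝ^{n×d}; here G is taken, purely as an assumption for illustration, to be circular shifts implemented with torch.roll. A pair (τ, g) reorders the set and applies the same g to every element:

    import torch

    x = torch.randn(4, 8)                        # a set of n = 4 elements in R^8
    tau = torch.randperm(4)                      # tau in S_4: reorders the set
    g = 3                                        # g in G: a circular shift of each element
    y = torch.roll(x[tau], shifts=g, dims=1)     # the action of (tau, g) on x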

  25-28. Main challenges. • What is the space of linear equivariant layers for a given H = S_n × G? • Can we compute these operators efficiently? • Do we lose expressive power? Is there a gap between what H-invariant networks can express and the H-invariant continuous functions?

  29. H-equivariant layers. Theorem: any linear S_n × G-equivariant layer L : ℝ^{n×d} → ℝ^{n×d} is of the form L(X)_i = L_G^1(x_i) + Σ_{j≠i} L_G^2(x_j), where L_G^1 and L_G^2 are linear G-equivariant functions. We call these Deep Sets for Symmetric elements (DSS) layers.
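
A minimal sketch of this layer in PyTorch, assuming the two G-equivariant maps are supplied as modules (L1 and L2 are placeholder names). The sum over j ≠ i is computed as the total sum minus each element's own term:

    import torch
    import torch.nn as nn

    class DSSLayer(nn.Module):
        # L(X)_i = L1(x_i) + sum_{j != i} L2(x_j), with L1, L2 linear G-equivariant maps.
        def __init__(self, L1, L2):
            super().__init__()
            self.L1, self.L2 = L1, L2

        def forward(self, x):                        # x: (batch, n, d)
            shared = self.L2(x)                      # L2(x_j) for every element j
            pooled = shared.sum(dim=1, keepdim=True) # sum over the whole set
            return self.L1(x) + pooled - shared      # subtracting shared leaves j != i

    # e.g. for trivial G, plain linear maps give a Deep Sets style equivariant layer:
    layer = DSSLayer(nn.Linear(8, 8), nn.Linear(8, 8))
    y = layer(torch.randn(2, 4, 8))                  # -> shape (2, 4, 8)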

  30-34. DSS for images. A single DSS layer where x_1, …, x_n are images and G is the group of circular 2D translations, so the G-equivariant layers are convolutions. (figure build: the layer is the sum of a Siamese part, a CONV applied to each image separately, and an information-sharing part, a CONV applied to the aggregated images)
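
A sketch of this image layer in PyTorch under the slide's setup, assuming aggregation by summation over the set; circular padding keeps each convolution equivariant to circular translations. The name DSSImageLayer is illustrative:

    import torch
    import torch.nn as nn

    class DSSImageLayer(nn.Module):
        # Siamese conv per image + conv over the set-aggregated image, summed.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # circular padding matches the circular-translation group
            self.siamese = nn.Conv2d(in_ch, out_ch, 3, padding=1, padding_mode='circular')
            self.sharing = nn.Conv2d(in_ch, out_ch, 3, padding=1, padding_mode='circular')

        def forward(self, x):                        # x: (batch, n, C, H, W)
            b, n, c, h, w = x.shape
            per_image = self.siamese(x.flatten(0, 1)).view(b, n, -1, h, w)
            shared = self.sharing(x.sum(dim=1)).unsqueeze(1)  # information sharing
            return per_image + shared                # shared term broadcasts over n

    layer = DSSImageLayer(3, 16)
    y = layer(torch.randn(2, 5, 3, 32, 32))          # -> shape (2, 5, 16, 32, 32)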

  35-36. Expressive power. Theorem: if G-equivariant networks are universal approximators for G-equivariant functions, then so are DSS networks for S_n × G-equivariant functions. • Main tool: Noether's theorem (invariant theory): for any finite group H, the ring of invariant polynomials ℝ[x_1, …, x_n]^H is finitely generated. • The generators can be used to create continuous unique encodings for elements of ℝ^{n×d}/H.
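
In symbols, the finite-generation statement used here reads as follows (a standard phrasing from invariant theory, written out here rather than quoted from the slides):

    \[
    \mathbb{R}[x_1,\dots,x_n]^{H}
      = \{\, p \in \mathbb{R}[x_1,\dots,x_n] : p(\tau \cdot x) = p(x) \ \text{for all } \tau \in H \,\}
      = \mathbb{R}[p_1,\dots,p_k]
    \]

for finitely many generating invariants p_1, …, p_k; over ℝ (characteristic zero) the generators can be chosen of degree at most |H| (Noether's bound).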

  37. Results

  38. Signal classification

  39. Image selection

  40. Shape selection

  41. Conclusions. A general framework for learning sets of complex elements. Generalizes many previous works. Expressivity results. Works well on many tasks and data types.
