Harmonic Analysis of Deep Convolutional Neural Networks

Harmonic Analysis of Deep Convolutional Neural Networks
Helmut Bölcskei, Department of Information Technology and Electrical Engineering, October 2017
Joint work with Thomas Wiatowski and Philipp Grohs
[Figure: ImageNet example images labeled ski, rock, plant]


1. Vertical translation invariance
Theorem (Wiatowski and HB, 2015): Assume that the filters, non-linearities, and poolings satisfy $B_n \le \min\{1, L_n^{-2} R_n^{-2}\}$, $\forall n \in \mathbb{N}$, and let the pooling factors be $S_n \ge 1$, $n \in \mathbb{N}$. Then,
$|||\Phi^n(T_t f) - \Phi^n(f)||| = O\left(\frac{\|t\|}{S_1 \cdots S_n}\right)$,
for all $f \in L^2(\mathbb{R}^d)$, $t \in \mathbb{R}^d$, $n \in \mathbb{N}$.
⇒ applies to general filters, non-linearities, and poolings

2. Philosophy behind invariance results
Mallat’s “horizontal” translation invariance [Mallat, 2012]:
$\lim_{J \to \infty} |||\Phi_W(T_t f) - \Phi_W(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$
“Vertical” translation invariance:
$\lim_{n \to \infty} |||\Phi^n(T_t f) - \Phi^n(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$

3. Philosophy behind invariance results
Mallat’s “horizontal” translation invariance [Mallat, 2012]:
$\lim_{J \to \infty} |||\Phi_W(T_t f) - \Phi_W(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$
- features become invariant in every network layer, but needs $J \to \infty$
“Vertical” translation invariance:
$\lim_{n \to \infty} |||\Phi^n(T_t f) - \Phi^n(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$
- features become more invariant with increasing network depth

4. Philosophy behind invariance results
Mallat’s “horizontal” translation invariance [Mallat, 2012]:
$\lim_{J \to \infty} |||\Phi_W(T_t f) - \Phi_W(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$
- features become invariant in every network layer, but needs $J \to \infty$
- applies to wavelet transform and modulus non-linearity without pooling
“Vertical” translation invariance:
$\lim_{n \to \infty} |||\Phi^n(T_t f) - \Phi^n(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$
- features become more invariant with increasing network depth
- applies to general filters, general non-linearities, and general poolings

5. Non-linear deformations
Non-linear deformation $(F_\tau f)(x) = f(x - \tau(x))$, where $\tau: \mathbb{R}^d \to \mathbb{R}^d$
For “small” $\tau$: [figure]

6. Non-linear deformations
Non-linear deformation $(F_\tau f)(x) = f(x - \tau(x))$, where $\tau: \mathbb{R}^d \to \mathbb{R}^d$
For “large” $\tau$: [figure]

7. Deformation sensitivity for signal classes
Consider $(F_\tau f)(x) = f(x - \tau(x)) = f(x - e^{-x^2})$
[Figure: $f_1(x)$ and $(F_\tau f_1)(x)$; $f_2(x)$ and $(F_\tau f_2)(x)$]
For given $\tau$ the amount of deformation induced can depend drastically on $f \in L^2(\mathbb{R}^d)$
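This effect is easy to reproduce numerically. Below is a minimal sketch, assuming 1-d signals on a uniform grid and linear interpolation to evaluate $f(x - \tau(x))$; the two test signals are illustrative choices, not the ones plotted in the talk.

```python
# Minimal sketch (assumptions: 1-d signals on a uniform grid, linear interpolation
# to evaluate f(x - tau(x)); the signal choices f1, f2 are illustrative).
import numpy as np

x = np.linspace(-5, 5, 4096)
tau = np.exp(-x**2)                      # deformation tau(x) = exp(-x^2)

def deform(f_vals, x, tau):
    """Evaluate (F_tau f)(x) = f(x - tau(x)) by linear interpolation."""
    return np.interp(x - tau, x, f_vals)

f1 = np.exp(-x**2 / 4)                   # slowly varying signal
f2 = np.cos(20 * x) * np.exp(-x**2 / 4)  # rapidly oscillating signal

dx = x[1] - x[0]
for name, f in [("f1 (smooth)", f1), ("f2 (oscillatory)", f2)]:
    err = np.sqrt(np.sum((deform(f, x, tau) - f)**2) * dx)   # L2 deformation error
    print(f"{name}: ||F_tau f - f||_2 = {err:.3f}")
# The same tau induces a much larger L2 perturbation on the oscillatory signal,
# illustrating why deformation sensitivity must be stated per signal class.
```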

8. Philosophy behind deformation stability/sensitivity bounds
Mallat’s deformation stability bound [Mallat, 2012]:
$|||\Phi_W(F_\tau f) - \Phi_W(f)||| \le C \left( 2^{-J} \|\tau\|_\infty + J \|D\tau\|_\infty + \|D^2\tau\|_\infty \right) \|f\|_W$, for all $f \in H_W \subseteq L^2(\mathbb{R}^d)$
- The signal class $H_W$ and the corresponding norm $\|\cdot\|_W$ depend on the mother wavelet (and hence the network)
Our deformation sensitivity bound:
$|||\Phi(F_\tau f) - \Phi(f)||| \le C_{\mathcal{C}} \|\tau\|_\infty^{\alpha}$, $\forall f \in \mathcal{C} \subseteq L^2(\mathbb{R}^d)$
- The signal class $\mathcal{C}$ (band-limited functions, cartoon functions, or Lipschitz functions) is independent of the network

9. Philosophy behind deformation stability/sensitivity bounds
Mallat’s deformation stability bound [Mallat, 2012]:
$|||\Phi_W(F_\tau f) - \Phi_W(f)||| \le C \left( 2^{-J} \|\tau\|_\infty + J \|D\tau\|_\infty + \|D^2\tau\|_\infty \right) \|f\|_W$, for all $f \in H_W \subseteq L^2(\mathbb{R}^d)$
- Signal class description complexity implicit via the norm $\|\cdot\|_W$
Our deformation sensitivity bound:
$|||\Phi(F_\tau f) - \Phi(f)||| \le C_{\mathcal{C}} \|\tau\|_\infty^{\alpha}$, $\forall f \in \mathcal{C} \subseteq L^2(\mathbb{R}^d)$
- Signal class description complexity explicit via $C_{\mathcal{C}}$:
  - $L$-band-limited functions: $C_{\mathcal{C}} = O(L)$
  - cartoon functions of size $K$: $C_{\mathcal{C}} = O(K^{3/2})$
  - $M$-Lipschitz functions: $C_{\mathcal{C}} = O(M)$

10. Philosophy behind deformation stability/sensitivity bounds
Mallat’s deformation stability bound [Mallat, 2012]:
$|||\Phi_W(F_\tau f) - \Phi_W(f)||| \le C \left( 2^{-J} \|\tau\|_\infty + J \|D\tau\|_\infty + \|D^2\tau\|_\infty \right) \|f\|_W$, for all $f \in H_W \subseteq L^2(\mathbb{R}^d)$
Our deformation sensitivity bound:
$|||\Phi(F_\tau f) - \Phi(f)||| \le C_{\mathcal{C}} \|\tau\|_\infty^{\alpha}$, $\forall f \in \mathcal{C} \subseteq L^2(\mathbb{R}^d)$
- Decay rate $\alpha > 0$ of the deformation error is signal-class-specific (band-limited functions: $\alpha = 1$, cartoon functions: $\alpha = \frac{1}{2}$, Lipschitz functions: $\alpha = 1$)

11. Philosophy behind deformation stability/sensitivity bounds
Mallat’s deformation stability bound [Mallat, 2012]:
$|||\Phi_W(F_\tau f) - \Phi_W(f)||| \le C \left( 2^{-J} \|\tau\|_\infty + J \|D\tau\|_\infty + \|D^2\tau\|_\infty \right) \|f\|_W$, for all $f \in H_W \subseteq L^2(\mathbb{R}^d)$
- The bound depends explicitly on higher-order derivatives of $\tau$
Our deformation sensitivity bound:
$|||\Phi(F_\tau f) - \Phi(f)||| \le C_{\mathcal{C}} \|\tau\|_\infty^{\alpha}$, $\forall f \in \mathcal{C} \subseteq L^2(\mathbb{R}^d)$
- The bound implicitly depends on the derivative of $\tau$ via the condition $\|D\tau\|_\infty \le \frac{1}{2d}$

12. Philosophy behind deformation stability/sensitivity bounds
Mallat’s deformation stability bound [Mallat, 2012]:
$|||\Phi_W(F_\tau f) - \Phi_W(f)||| \le C \left( 2^{-J} \|\tau\|_\infty + J \|D\tau\|_\infty + \|D^2\tau\|_\infty \right) \|f\|_W$, for all $f \in H_W \subseteq L^2(\mathbb{R}^d)$
- The bound is coupled to horizontal translation invariance:
$\lim_{J \to \infty} |||\Phi_W(T_t f) - \Phi_W(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$
Our deformation sensitivity bound:
$|||\Phi(F_\tau f) - \Phi(f)||| \le C_{\mathcal{C}} \|\tau\|_\infty^{\alpha}$, $\forall f \in \mathcal{C} \subseteq L^2(\mathbb{R}^d)$
- The bound is decoupled from vertical translation invariance:
$\lim_{n \to \infty} |||\Phi^n(T_t f) - \Phi^n(f)||| = 0$, $\forall f \in L^2(\mathbb{R}^d)$, $\forall t \in \mathbb{R}^d$

13. CNNs in a nutshell
CNNs used in practice employ potentially hundreds of layers and 10,000s of nodes!

14. CNNs in a nutshell
CNNs used in practice employ potentially hundreds of layers and 10,000s of nodes!
e.g.: Winner of the ImageNet 2015 challenge [He et al., 2015]
- network depth: 152 layers
- average # of nodes per layer: 472
- # of FLOPs for a single forward pass: 11.3 billion

15. CNNs in a nutshell
CNNs used in practice employ potentially hundreds of layers and 10,000s of nodes!
e.g.: Winner of the ImageNet 2015 challenge [He et al., 2015]
- network depth: 152 layers
- average # of nodes per layer: 472
- # of FLOPs for a single forward pass: 11.3 billion
Such depths (and breadths) pose formidable computational challenges in training and operating the network!

  16. Topology reduction Determine how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers

  17. Topology reduction Determine how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers Guarantee trivial null-space for feature extractor Φ

  18. Topology reduction Determine how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers Guarantee trivial null-space for feature extractor Φ Specify the number of layers needed to have “most” of the input signal energy be contained in the feature vector

  19. Topology reduction Determine how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers Guarantee trivial null-space for feature extractor Φ Specify the number of layers needed to have “most” of the input signal energy be contained in the feature vector For a fixed (possibly small) depth, design CNNs that capture “most” of the input signal energy

20. Building blocks
Basic operations in the $n$-th network layer
[Figure: input $f$ fed into parallel branches; each branch convolves with a filter $g_{\lambda^{(k)}}, \ldots, g_{\lambda^{(r)}}$, applies the modulus $|\cdot|$, and sub-samples by $\downarrow S_n$]
Filters: semi-discrete frame $\Psi_n := \{\chi_n\} \cup \{g_{\lambda_n}\}_{\lambda_n \in \Lambda_n}$
Non-linearity: modulus $|\cdot|$
Pooling: sub-sampling with pooling factor $S \ge 1$
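The following is a minimal numerical sketch of one such layer, assuming 1-d signals, circular convolution via the FFT, random stand-in filters for $\{g_{\lambda_n}\}$, and a moving average standing in for the low-pass output filter $\chi_n$; these placeholders are not the semi-discrete frames used in the talk.

```python
# Minimal sketch of one network layer: filter, modulus, sub-sample, then generate
# output via a low-pass filter (all filters here are illustrative stand-ins).
import numpy as np

def layer(f, filters, S):
    """Convolve with each filter g_lambda, take the modulus, sub-sample by S."""
    return [np.abs(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g, len(f))))[::S]
            for g in filters]

def output(u, chi):
    """Feature-vector component: convolve a propagated signal with the low-pass chi."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(chi, len(u))))

rng = np.random.default_rng(0)
f = rng.standard_normal(1024)
filters = [rng.standard_normal(16) for _ in range(4)]   # stand-ins for {g_lambda}
chi = np.ones(8) / 8                                     # stand-in for chi_n (low-pass)

feature_maps = layer(f, filters, S=2)               # propagated signals (next layer's inputs)
features = [output(u, chi) for u in feature_maps]   # feature-vector components from this layer
print(len(features), features[0].shape)
```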

21. Demodulation effect of modulus non-linearity
Components of the feature vector are given by $|f * g_{\lambda_n}| * \chi_{n+1}$
[Figure: spectra $\hat{f}(\omega)$, $\hat{g}_{\lambda_n}(\omega)$, and the low-pass $\hat{\chi}_{n+1}(\omega)$ along the frequency axis]

22. Demodulation effect of modulus non-linearity
Components of the feature vector are given by $|f * g_{\lambda_n}| * \chi_{n+1}$
[Figure: spectra $\hat{f}(\omega)$, $\hat{g}_{\lambda_n}(\omega)$, and the low-pass $\hat{\chi}_{n+1}(\omega)$]
[Figure: band-pass spectrum $\hat{f}(\omega)\,\hat{g}_{\lambda_n}(\omega)$]

23. Demodulation effect of modulus non-linearity
Components of the feature vector are given by $|f * g_{\lambda_n}| * \chi_{n+1}$
[Figure: spectra $\hat{f}(\omega)$, $\hat{g}_{\lambda_n}(\omega)$, and the low-pass $\hat{\chi}_{n+1}(\omega)$]
[Figure: band-pass spectrum $\hat{f}(\omega)\,\hat{g}_{\lambda_n}(\omega)$]
Modulus squared: $|f * g_{\lambda_n}(x)|^2$
[Figure: spectrum of $|f * g_{\lambda_n}|^2$, concentrated around $\omega = 0$]

24. Demodulation effect of modulus non-linearity
Components of the feature vector are given by $|f * g_{\lambda_n}| * \chi_{n+1}$
[Figure: spectra $\hat{f}(\omega)$, $\hat{g}_{\lambda_n}(\omega)$, and the low-pass $\hat{\chi}_{n+1}(\omega)$]
[Figure: band-pass spectrum $\hat{f}(\omega)\,\hat{g}_{\lambda_n}(\omega)$]
$\Phi(f)$ obtained via $\hat{\chi}_{n+1}(\omega)\,\widehat{|f * g_{\lambda_n}|}(\omega)$
[Figure: low-pass $\hat{\chi}_{n+1}$ applied to the demodulated spectrum $\widehat{|f * g_{\lambda_n}|}$]

25. Do all non-linearities demodulate?
High-pass filtered signal: $\mathcal{F}(f * g_\lambda)$
[Figure: spectrum supported in high-frequency bands of width $2R$]

26. Do all non-linearities demodulate?
High-pass filtered signal: $\mathcal{F}(f * g_\lambda)$
[Figure: spectrum supported in high-frequency bands of width $2R$]
Modulus: Yes! $|\mathcal{F}(|f * g_\lambda|)|$
[Figure: spectrum concentrated in $[-2R, 2R]$]
... but (small) tails!

27. Do all non-linearities demodulate?
High-pass filtered signal: $\mathcal{F}(f * g_\lambda)$
[Figure: spectrum supported in high-frequency bands of width $2R$]
Modulus squared: Yes, and sharply so! $|\mathcal{F}(|f * g_\lambda|^2)|$
[Figure: spectrum contained in $[-2R, 2R]$]
... but not Lipschitz-continuous!

28. Do all non-linearities demodulate?
High-pass filtered signal: $\mathcal{F}(f * g_\lambda)$
[Figure: spectrum supported in high-frequency bands of width $2R$]
Rectified linear unit: No! $|\mathcal{F}(\mathrm{ReLU}(f * g_\lambda))|$
[Figure: spectral content remains in the high-frequency bands]
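A small experiment along these lines, assuming a 1-d signal and an illustrative analytic (one-sided) band-pass filter in the spirit of the analyticity assumption introduced later; the cutoffs and the choice to apply the ReLU to the real part are ad-hoc, for illustration only.

```python
# Minimal sketch: does the non-linearity move the spectrum of a high-pass filtered
# signal down to base-band? (Filter and cutoffs are illustrative, not the talk's.)
import numpy as np

n = 1 << 14
rng = np.random.default_rng(0)
f = rng.standard_normal(n)

nu = np.fft.fftfreq(n)                              # normalized frequencies
g_hat = ((nu > 0.30) & (nu < 0.35)).astype(float)   # one-sided (analytic) band-pass
u = np.fft.ifft(np.fft.fft(f) * g_hat)              # complex-valued f * g_lambda

def highband_share(v, cutoff=0.25):
    """Fraction of spectral energy remaining at |nu| >= cutoff."""
    V = np.abs(np.fft.fft(v)) ** 2
    return V[np.abs(nu) >= cutoff].sum() / V.sum()

candidates = {
    "modulus |.|": np.abs(u),
    "modulus squared |.|^2": np.abs(u) ** 2,
    "ReLU (on the real part)": np.maximum(u.real, 0.0),
}
for name, v in candidates.items():
    print(f"{name:25s} high-frequency energy share: {highband_share(v):.3f}")
# Expected outcome: modulus and modulus squared leave (almost) no energy in the
# original band (demodulation), whereas ReLU keeps a large share of its energy there.
```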

29. First goal: Quantify feature map energy decay
[Figure: network tree showing the propagated signals $|f * g_{\lambda_1^{(k)}}|$, $|f * g_{\lambda_1^{(p)}}|$ in layer 1 and $||f * g_{\lambda_1^{(k)}}| * g_{\lambda_2^{(l)}}|$, $||f * g_{\lambda_1^{(p)}}| * g_{\lambda_2^{(r)}}|$ in layer 2, outputs generated by $\cdot * \chi_1$, $\cdot * \chi_2$, $\cdot * \chi_3$, and the layer energies $W_1(f)$, $W_2(f)$]

30. Assumptions (on the filters)
i) Analyticity: For every filter $g_{\lambda_n}$ there exists a (not necessarily canonical) orthant $H_{\lambda_n} \subseteq \mathbb{R}^d$ such that $\mathrm{supp}(\hat{g}_{\lambda_n}) \subseteq H_{\lambda_n}$.
ii) High-pass: There exists $\delta > 0$ such that $\sum_{\lambda_n \in \Lambda_n} |\hat{g}_{\lambda_n}(\omega)|^2 = 0$, a.e. $\omega \in B_\delta(0)$.

31. Assumptions (on the filters)
i) Analyticity: For every filter $g_{\lambda_n}$ there exists a (not necessarily canonical) orthant $H_{\lambda_n} \subseteq \mathbb{R}^d$ such that $\mathrm{supp}(\hat{g}_{\lambda_n}) \subseteq H_{\lambda_n}$.
ii) High-pass: There exists $\delta > 0$ such that $\sum_{\lambda_n \in \Lambda_n} |\hat{g}_{\lambda_n}(\omega)|^2 = 0$, a.e. $\omega \in B_\delta(0)$.
⇒ Comprises various constructions of WH filters, wavelets, ridgelets, ($\alpha$)-curvelets, shearlets
e.g.: analytic band-limited curvelets
[Figure: curvelet tiling of the $(\omega_1, \omega_2)$ frequency plane]

32. Input signal classes
Sobolev functions of order $s \ge 0$:
$H^s(\mathbb{R}^d) = \left\{ f \in L^2(\mathbb{R}^d) \,\middle|\, \int_{\mathbb{R}^d} (1 + |\omega|^2)^s |\hat{f}(\omega)|^2 \, d\omega < \infty \right\}$

33. Input signal classes
Sobolev functions of order $s \ge 0$:
$H^s(\mathbb{R}^d) = \left\{ f \in L^2(\mathbb{R}^d) \,\middle|\, \int_{\mathbb{R}^d} (1 + |\omega|^2)^s |\hat{f}(\omega)|^2 \, d\omega < \infty \right\}$
$H^s(\mathbb{R}^d)$ contains a wide range of practically relevant signal classes

34. Input signal classes
Sobolev functions of order $s \ge 0$:
$H^s(\mathbb{R}^d) = \left\{ f \in L^2(\mathbb{R}^d) \,\middle|\, \int_{\mathbb{R}^d} (1 + |\omega|^2)^s |\hat{f}(\omega)|^2 \, d\omega < \infty \right\}$
$H^s(\mathbb{R}^d)$ contains a wide range of practically relevant signal classes
- square-integrable functions $L^2(\mathbb{R}^d) = H^0(\mathbb{R}^d)$

35. Input signal classes
Sobolev functions of order $s \ge 0$:
$H^s(\mathbb{R}^d) = \left\{ f \in L^2(\mathbb{R}^d) \,\middle|\, \int_{\mathbb{R}^d} (1 + |\omega|^2)^s |\hat{f}(\omega)|^2 \, d\omega < \infty \right\}$
$H^s(\mathbb{R}^d)$ contains a wide range of practically relevant signal classes
- square-integrable functions $L^2(\mathbb{R}^d) = H^0(\mathbb{R}^d)$
- $L$-band-limited functions $L^2_L(\mathbb{R}^d) \subseteq H^s(\mathbb{R}^d)$, $\forall L > 0$, $\forall s \ge 0$

36. Input signal classes
Sobolev functions of order $s \ge 0$:
$H^s(\mathbb{R}^d) = \left\{ f \in L^2(\mathbb{R}^d) \,\middle|\, \int_{\mathbb{R}^d} (1 + |\omega|^2)^s |\hat{f}(\omega)|^2 \, d\omega < \infty \right\}$
$H^s(\mathbb{R}^d)$ contains a wide range of practically relevant signal classes
- square-integrable functions $L^2(\mathbb{R}^d) = H^0(\mathbb{R}^d)$
- $L$-band-limited functions $L^2_L(\mathbb{R}^d) \subseteq H^s(\mathbb{R}^d)$, $\forall L > 0$, $\forall s \ge 0$
- cartoon functions [Donoho, 2001] $\mathcal{C}_{\mathrm{CART}} \subseteq H^s(\mathbb{R}^d)$, $\forall s \in [0, \frac{1}{2})$
[Figure: handwritten digits from the MNIST database [LeCun & Cortes, 1998]]
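As a small illustration of the definition, one can evaluate the Sobolev-type weight on the DFT of a discrete signal; this is a crude stand-in for the continuous integral, with purely illustrative test signals.

```python
# Minimal sketch: approximate int (1 + |omega|^2)^s |f_hat(omega)|^2 d omega via the DFT
# (the DFT is only a stand-in for the continuous Fourier transform).
import numpy as np

def sobolev_energy(f, s, dx=1.0):
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)     # angular frequency grid
    f_hat = np.fft.fft(f) * dx                       # Riemann-sum approximation of f_hat
    return np.sum((1 + omega**2) ** s * np.abs(f_hat) ** 2) * (2 * np.pi / (n * dx))

x = np.linspace(-10, 10, 2048)
dx = x[1] - x[0]
smooth = np.exp(-x**2)                                # smooth, rapidly decaying spectrum
rough = np.sign(np.sin(3 * x)) * np.exp(-x**2 / 8)    # discontinuous, slowly decaying spectrum

for s in (0.0, 0.5, 1.0):
    print(f"s = {s}: smooth {sobolev_energy(smooth, s, dx):9.3f}   "
          f"rough {sobolev_energy(rough, s, dx):9.3f}")
# The rough signal's Sobolev energy grows much faster with s, reflecting slower decay
# of |f_hat(omega)|, which is exactly what drives the energy-decay rates below.
```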

37. Exponential energy decay
Theorem: Let the filters be wavelets with mother wavelet $\mathrm{supp}(\hat{\psi}) \subseteq [r^{-1}, r]$, $r > 1$, or Weyl-Heisenberg (WH) filters with prototype function $\mathrm{supp}(\hat{g}) \subseteq [-R, R]$, $R > 0$. Then, for every $f \in H^s(\mathbb{R}^d)$, there exists $\beta > 0$ such that
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$,
where $a = \frac{r^2+1}{r^2-1}$ in the wavelet case, and $a = \frac{1}{2} + \frac{1}{R}$ in the WH case.

38. Exponential energy decay
Theorem: Let the filters be wavelets with mother wavelet $\mathrm{supp}(\hat{\psi}) \subseteq [r^{-1}, r]$, $r > 1$, or Weyl-Heisenberg (WH) filters with prototype function $\mathrm{supp}(\hat{g}) \subseteq [-R, R]$, $R > 0$. Then, for every $f \in H^s(\mathbb{R}^d)$, there exists $\beta > 0$ such that
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$,
where $a = \frac{r^2+1}{r^2-1}$ in the wavelet case, and $a = \frac{1}{2} + \frac{1}{R}$ in the WH case.
⇒ decay factor $a$ is explicit and can be tuned via $r, R$

39. Exponential energy decay
Exponential energy decay:
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$

40. Exponential energy decay
Exponential energy decay:
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$
- $\beta > 0$ determines the decay of $\hat{f}(\omega)$ (as $|\omega| \to \infty$) according to
$|\hat{f}(\omega)| \le \mu (1 + |\omega|^2)^{-\left(\frac{s}{2} + \frac{1}{4} + \frac{\beta}{4}\right)}$, $\forall |\omega| \ge L$,
for some $\mu > 0$, and $L$ acts as an “effective bandwidth”

41. Exponential energy decay
Exponential energy decay:
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$
- $\beta > 0$ determines the decay of $\hat{f}(\omega)$ (as $|\omega| \to \infty$) according to
$|\hat{f}(\omega)| \le \mu (1 + |\omega|^2)^{-\left(\frac{s}{2} + \frac{1}{4} + \frac{\beta}{4}\right)}$, $\forall |\omega| \ge L$,
for some $\mu > 0$, and $L$ acts as an “effective bandwidth”
- smoother input signals (i.e., $s \uparrow$) lead to faster energy decay

42. Exponential energy decay
Exponential energy decay:
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$
- $\beta > 0$ determines the decay of $\hat{f}(\omega)$ (as $|\omega| \to \infty$) according to
$|\hat{f}(\omega)| \le \mu (1 + |\omega|^2)^{-\left(\frac{s}{2} + \frac{1}{4} + \frac{\beta}{4}\right)}$, $\forall |\omega| \ge L$,
for some $\mu > 0$, and $L$ acts as an “effective bandwidth”
- smoother input signals (i.e., $s \uparrow$) lead to faster energy decay
- pooling through sub-sampling $f \mapsto S^{1/2} f(S\,\cdot)$ leads to decay factor $aS$

43. Exponential energy decay
Exponential energy decay:
$W_n(f) = O\left( a^{-n \frac{2s+\beta}{2s+\beta+1}} \right)$
- $\beta > 0$ determines the decay of $\hat{f}(\omega)$ (as $|\omega| \to \infty$) according to
$|\hat{f}(\omega)| \le \mu (1 + |\omega|^2)^{-\left(\frac{s}{2} + \frac{1}{4} + \frac{\beta}{4}\right)}$, $\forall |\omega| \ge L$,
for some $\mu > 0$, and $L$ acts as an “effective bandwidth”
- smoother input signals (i.e., $s \uparrow$) lead to faster energy decay
- pooling through sub-sampling $f \mapsto S^{1/2} f(S\,\cdot)$ leads to decay factor $aS$
What about general filters? ⇒ polynomial energy decay!
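A minimal sketch evaluating this bound for the two filter families, assuming the constant hidden in the $O(\cdot)$ is one and picking illustrative values for $s$ and $\beta$:

```python
# Minimal sketch: shape of the exponential decay bound a^(-n (2s+beta)/(2s+beta+1))
# (constants and the choice of s, beta are illustrative only).
def decay_factor_wavelet(r):          # a = (r^2 + 1) / (r^2 - 1)
    return (r**2 + 1) / (r**2 - 1)

def decay_factor_wh(R):               # a = 1/2 + 1/R
    return 0.5 + 1.0 / R

def energy_bound(a, n, s, beta):
    """Upper-bound shape of W_n(f) up to constants."""
    return a ** (-n * (2 * s + beta) / (2 * s + beta + 1))

a_wav, a_wh = decay_factor_wavelet(r=2.0), decay_factor_wh(R=1.0)
s, beta = 1.0, 1.0                    # illustrative smoothness and spectral-decay parameters
for n in range(1, 6):
    print(f"n={n}:  wavelet {energy_bound(a_wav, n, s, beta):.4f}   "
          f"WH {energy_bound(a_wh, n, s, beta):.4f}")
# Larger s or beta pushes the exponent (2s+beta)/(2s+beta+1) toward 1, i.e. toward the
# fastest decay the factor a allows; sub-sampling by S replaces a with a*S.
```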

44. ... our second goal ... trivial null-space for $\Phi$
Why trivial null-space?
[Figure: feature space separated by a hyperplane with normal $w$; $\langle w, \Phi(f) \rangle > 0$ on one side, $\langle w, \Phi(f) \rangle < 0$ on the other]

45. ... our second goal ... trivial null-space for $\Phi$
Why trivial null-space?
[Figure: feature space separated by a hyperplane with normal $w$; $\langle w, \Phi(f) \rangle > 0$ on one side, $\langle w, \Phi(f) \rangle < 0$ on the other, and $\Phi(f^*)$ at the origin]
Non-trivial null-space: $\exists f^* \ne 0$ such that $\Phi(f^*) = 0$
⇒ $\langle w, \Phi(f^*) \rangle = 0$ for all $w$!
⇒ these $f^*$ become unclassifiable!

46. ... our second goal ...
Trivial null-space for feature extractor:
$\{ f \in L^2(\mathbb{R}^d) \mid \Phi(f) = 0 \} = \{ 0 \}$
Feature extractor $\Phi(\cdot) = \bigcup_{n=0}^{\infty} \Phi^n(\cdot)$ shall satisfy
$A \|f\|_2^2 \le |||\Phi(f)|||^2 \le B \|f\|_2^2$, $\forall f \in L^2(\mathbb{R}^d)$,
for some $A, B > 0$.

47. “Energy conservation”
Theorem: For the frame upper bounds $\{B_n\}_{n \in \mathbb{N}}$ and frame lower bounds $\{A_n\}_{n \in \mathbb{N}}$, define $B := \prod_{n=1}^{\infty} \max\{1, B_n\}$ and $A := \prod_{n=1}^{\infty} \min\{1, A_n\}$. If $0 < A \le B < \infty$, then
$A \|f\|_2^2 \le |||\Phi(f)|||^2 \le B \|f\|_2^2$, $\forall f \in L^2(\mathbb{R}^d)$.

48. “Energy conservation”
Theorem: For the frame upper bounds $\{B_n\}_{n \in \mathbb{N}}$ and frame lower bounds $\{A_n\}_{n \in \mathbb{N}}$, define $B := \prod_{n=1}^{\infty} \max\{1, B_n\}$ and $A := \prod_{n=1}^{\infty} \min\{1, A_n\}$. If $0 < A \le B < \infty$, then
$A \|f\|_2^2 \le |||\Phi(f)|||^2 \le B \|f\|_2^2$, $\forall f \in L^2(\mathbb{R}^d)$.
- For Parseval frames (i.e., $A_n = B_n = 1$, $n \in \mathbb{N}$), this yields $|||\Phi(f)|||^2 = \|f\|_2^2$

49. “Energy conservation”
Theorem: For the frame upper bounds $\{B_n\}_{n \in \mathbb{N}}$ and frame lower bounds $\{A_n\}_{n \in \mathbb{N}}$, define $B := \prod_{n=1}^{\infty} \max\{1, B_n\}$ and $A := \prod_{n=1}^{\infty} \min\{1, A_n\}$. If $0 < A \le B < \infty$, then
$A \|f\|_2^2 \le |||\Phi(f)|||^2 \le B \|f\|_2^2$, $\forall f \in L^2(\mathbb{R}^d)$.
- For Parseval frames (i.e., $A_n = B_n = 1$, $n \in \mathbb{N}$), this yields $|||\Phi(f)|||^2 = \|f\|_2^2$
- Connection to energy decay: $\|f\|_2^2 = \sum_{k=0}^{n-1} |||\Phi^k(f)|||^2 + \underbrace{W_n(f)}_{\to\, 0}$
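A quick numerical sanity check of the Parseval case for a single filter-bank layer, assuming ideal band-limited indicator filters whose squared frequency responses sum to one; this is an illustrative Parseval frame, not the talk's construction.

```python
# Minimal sketch: energy conservation for a one-layer Parseval filter bank.
import numpy as np

n = 2048
rng = np.random.default_rng(2)
f = rng.standard_normal(n)

nu = np.abs(np.fft.fftfreq(n))
edges = [0.0, 0.05, 0.15, 0.30, 0.50001]                 # partition of the frequency axis
masks = [((nu >= lo) & (nu < hi)).astype(float) for lo, hi in zip(edges, edges[1:])]
# masks[0] plays the role of the low-pass chi, the rest play the role of the g_lambda;
# disjoint indicator masks covering all frequencies, so their squares sum to 1 (A = B = 1)
assert np.allclose(sum(m**2 for m in masks), 1.0)

F = np.fft.fft(f)
outputs = [np.fft.ifft(F * m) for m in masks]

energy_in = np.sum(np.abs(f) ** 2)
energy_out = sum(np.sum(np.abs(y) ** 2) for y in outputs)
print(energy_in, energy_out)     # equal up to floating-point error (Parseval)
```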

  50. ... and our third goal ... For a given CNN, specify the number of layers needed to capture “most” of the input signal energy

51. ... and our third goal ...
For a given CNN, specify the number of layers needed to capture “most” of the input signal energy
How many layers $n$ are needed to have at least $((1-\varepsilon) \cdot 100)\%$ of the input signal energy be contained in the feature vector, i.e.,
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{n} |||\Phi^k(f)|||^2 \le \|f\|_2^2$, $\forall f \in L^2(\mathbb{R}^d)$.

52. Number of layers needed
Theorem: Let the frame bounds satisfy $A_n = B_n = 1$, $n \in \mathbb{N}$. Let the input signal $f$ be $L$-band-limited, and let $\varepsilon \in (0, 1)$. If
$n \ge \log_a\left( \frac{L}{1 - \sqrt{1 - \varepsilon}} \right)$,
then
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{n} |||\Phi^k(f)|||^2 \le \|f\|_2^2$.

53. Number of layers needed
Theorem: Let the frame bounds satisfy $A_n = B_n = 1$, $n \in \mathbb{N}$. Let the input signal $f$ be $L$-band-limited, and let $\varepsilon \in (0, 1)$. If
$n \ge \log_a\left( \frac{L}{1 - \sqrt{1 - \varepsilon}} \right)$,
then
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{n} |||\Phi^k(f)|||^2 \le \|f\|_2^2$.
⇒ also guarantees trivial null-space for $\bigcup_{k=0}^{n} \Phi^k(f)$

54. Number of layers needed
Theorem: Let the frame bounds satisfy $A_n = B_n = 1$, $n \in \mathbb{N}$. Let the input signal $f$ be $L$-band-limited, and let $\varepsilon \in (0, 1)$. If
$n \ge \log_a\left( \frac{L}{1 - \sqrt{1 - \varepsilon}} \right)$,
then
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{n} |||\Phi^k(f)|||^2 \le \|f\|_2^2$.
- lower bound depends on
  - description complexity of input signals (i.e., bandwidth $L$)
  - decay factor (wavelets: $a = \frac{r^2+1}{r^2-1}$, WH filters: $a = \frac{1}{2} + \frac{1}{R}$)

55. Number of layers needed
Theorem: Let the frame bounds satisfy $A_n = B_n = 1$, $n \in \mathbb{N}$. Let the input signal $f$ be $L$-band-limited, and let $\varepsilon \in (0, 1)$. If
$n \ge \log_a\left( \frac{L}{1 - \sqrt{1 - \varepsilon}} \right)$,
then
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{n} |||\Phi^k(f)|||^2 \le \|f\|_2^2$.
- lower bound depends on
  - description complexity of input signals (i.e., bandwidth $L$)
  - decay factor (wavelets: $a = \frac{r^2+1}{r^2-1}$, WH filters: $a = \frac{1}{2} + \frac{1}{R}$)
- similar estimates for Sobolev input signals and for general filters (polynomial decay!)

56. Number of layers needed
Numerical example for bandwidth $L = 1$:
(1 − ε)              0.25   0.5   0.75   0.9   0.95   0.99
wavelets (r = 2)        2     3      4     6      8     11
WH filters (R = 1)      2     4      5     8     10     14
general filters         2     3      7    19     39    199


58. Number of layers needed
Numerical example for bandwidth $L = 1$:
(1 − ε)              0.25   0.5   0.75   0.9   0.95   0.99
wavelets (r = 2)        2     3      4     6      8     11
WH filters (R = 1)      2     4      5     8     10     14
general filters         2     3      7    19     39    199
Recall: Winner of the ImageNet 2015 challenge [He et al., 2015]
- network depth: 152 layers
- average # of nodes per layer: 472
- # of FLOPs for a single forward pass: 11.3 billion
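The wavelet and WH rows of the table can be reproduced from the depth estimate $n \ge \log_a\!\big(L / (1 - \sqrt{1-\varepsilon})\big)$; a minimal sketch follows (the general-filter row rests on a different, polynomial-decay estimate and is not reproduced here).

```python
# Minimal sketch: smallest integer depth n with n >= log_a(L / (1 - sqrt(1 - eps))).
import math

def layers_needed(a, L, one_minus_eps):
    eps = 1.0 - one_minus_eps
    kappa = L / (1.0 - math.sqrt(1.0 - eps))
    return math.ceil(math.log(kappa, a))

L = 1.0
a_wavelet = (2.0**2 + 1) / (2.0**2 - 1)   # r = 2  ->  a = 5/3
a_wh = 0.5 + 1.0 / 1.0                    # R = 1  ->  a = 3/2

for frac in (0.25, 0.5, 0.75, 0.9, 0.95, 0.99):
    print(f"(1-eps)={frac:4}:  wavelets {layers_needed(a_wavelet, L, frac):3d}   "
          f"WH {layers_needed(a_wh, L, frac):3d}")
# Matches the table: 2, 3, 4, 6, 8, 11 layers (wavelets) and 2, 4, 5, 8, 10, 14 (WH).
```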

  59. ... our fourth and last goal ... For a fixed (possibly small) depth N , design scattering networks that capture “most” of the input signal energy

60. ... our fourth and last goal ...
For a fixed (possibly small) depth $N$, design scattering networks that capture “most” of the input signal energy
Recall: Let the filters be wavelets with mother wavelet $\mathrm{supp}(\hat{\psi}) \subseteq [r^{-1}, r]$, $r > 1$, or Weyl-Heisenberg filters with prototype function $\mathrm{supp}(\hat{g}) \subseteq [-R, R]$, $R > 0$.

61. ... our fourth and last goal ...
For a fixed (possibly small) depth $N$, design scattering networks that capture “most” of the input signal energy
For fixed depth $N$, we want to choose $r$ in the wavelet case and $R$ in the WH case so that
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{N} |||\Phi^k(f)|||^2 \le \|f\|_2^2$, $\forall f \in L^2(\mathbb{R}^d)$.

62. Depth-constrained networks
Theorem: Let the frame bounds satisfy $A_n = B_n = 1$, $n \in \mathbb{N}$. Let the input signal $f$ be $L$-band-limited, and fix $\varepsilon \in (0, 1)$ and $N \in \mathbb{N}$. If, in the wavelet case,
$1 < r \le \sqrt{\frac{\kappa + 1}{\kappa - 1}}$,
or, in the WH case,
$0 < R \le \frac{1}{\kappa - \frac{1}{2}}$,
where $\kappa := \left( \frac{L}{1 - \sqrt{1 - \varepsilon}} \right)^{1/N}$, then
$(1-\varepsilon) \|f\|_2^2 \le \sum_{k=0}^{N} |||\Phi^k(f)|||^2 \le \|f\|_2^2$.
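A minimal sketch turning the theorem into a design rule: given $N$, $L$, and $\varepsilon$, compute $\kappa$ and the largest admissible $r$ and $R$ (the parameter values below are illustrative).

```python
# Minimal sketch: maximal admissible mother-wavelet "bandwidth" r and WH support R
# for a prescribed depth N, bandwidth L, and energy fraction 1 - eps.
import math

def design(L, eps, N):
    kappa = (L / (1.0 - math.sqrt(1.0 - eps))) ** (1.0 / N)
    r_max = math.sqrt((kappa + 1.0) / (kappa - 1.0))   # wavelet case: 1 < r <= r_max
    R_max = 1.0 / (kappa - 0.5)                        # WH case:      0 < R <= R_max
    return kappa, r_max, R_max

for N in (2, 3, 5, 10):
    kappa, r_max, R_max = design(L=1.0, eps=0.01, N=N)
    print(f"N={N:2d}: kappa={kappa:6.3f}  r_max={r_max:6.3f}  R_max={R_max:6.3f}")
# Smaller admissible depth N forces a larger kappa, hence a smaller r_max and R_max:
# the depth-width tradeoff discussed on the next slides.
```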

63. Depth-width tradeoff
Spectral supports of wavelet filters:
[Figure: supports of $\hat{\psi}, \hat{g}_1, \hat{g}_2, \hat{g}_3$ tiling the frequency axis at $\frac{1}{r}, 1, r, r^2, r^3, \ldots$ up to $L$]

64. Depth-width tradeoff
Spectral supports of wavelet filters:
[Figure: supports of $\hat{\psi}, \hat{g}_1, \hat{g}_2, \hat{g}_3$ tiling the frequency axis at $\frac{1}{r}, 1, r, r^2, r^3, \ldots$ up to $L$]
Smaller depth $N$ ⇒ smaller “bandwidth” $r$ of mother wavelet

65. Depth-width tradeoff
Spectral supports of wavelet filters:
[Figure: supports of $\hat{\psi}, \hat{g}_1, \hat{g}_2, \hat{g}_3$ tiling the frequency axis at $\frac{1}{r}, 1, r, r^2, r^3, \ldots$ up to $L$]
Smaller depth $N$ ⇒ smaller “bandwidth” $r$ of mother wavelet
⇒ larger number of wavelets ($O(\log_r(L))$) to cover the spectral support $[-L, L]$ of the input signal
