  1. Constructive universal high-dimensional distribution generation through deep ReLU networks. Dmytro Perekrestenko, July 2020. Joint work with Stephan Müller and Helmut Bölcskei.

  2. Motivation. Deep neural networks are widely used as generative models for complex data such as images and natural language. Many generative network architectures are based on transforming a low-dimensional distribution into a high-dimensional one, e.g., the Variational Autoencoder and the Wasserstein Autoencoder. This talk answers the question of whether there is a fundamental limitation in going from a low dimension to a higher one.

  3. Our contribution. This talk will show that there is no such limitation.

  4. Generation of multi-dimensional distributions from $U[0,1]$. Classical approaches transform distributions of the same dimension, e.g., the Box-Muller method [Box and Muller, 1958]. [Bailey and Telgarsky, 2018] show that deep ReLU networks can transport $U[0,1]$ to $U[0,1]^d$.
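
As a concrete same-dimension example, here is a minimal NumPy sketch of the Box-Muller method (the code is ours, not from the slides): two independent $U[0,1]$ samples are mapped to two independent standard normals.

```python
import numpy as np

rng = np.random.default_rng(0)
u1, u2 = rng.random(100_000), rng.random(100_000)

# Box-Muller: radius from the first uniform, angle from the second
r = np.sqrt(-2.0 * np.log(u1))
theta = 2.0 * np.pi * u2
z1, z2 = r * np.cos(theta), r * np.sin(theta)

print(z1.mean(), z1.std())  # approximately 0 and 1
```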

  5. Neural networks. A map $\Phi : \mathbb{R}^{N_0} \to \mathbb{R}^{N_L}$ given by $\Phi := W_L \circ \rho \circ W_{L-1} \circ \rho \circ \cdots \circ \rho \circ W_1$ is called a neural network (NN). Affine maps: $W_\ell(x) = A_\ell x + b_\ell : \mathbb{R}^{N_{\ell-1}} \to \mathbb{R}^{N_\ell}$, $\ell \in \{1, 2, \dots, L\}$. Non-linearity or activation function: $\rho$ acts component-wise. Network connectivity: $M(\Phi)$ is the total number of non-zero parameters in the $W_\ell$. Depth of the network, or number of layers: $L(\Phi) := L$. We denote by $\mathcal{N}_{d,d'}$ the set of all ReLU networks with input dimension $N_0 = d$ and output dimension $N_L = d'$.
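
A small Python sketch of these definitions (class layout and names are ours): a ReLU network as the composition of affine maps and component-wise $\rho$, with its connectivity $M(\Phi)$ and depth $L(\Phi)$.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ReLUNetwork:
    """Phi = W_L o rho o W_{L-1} o ... o rho o W_1, with W_l(x) = A_l x + b_l."""

    def __init__(self, As, bs):
        self.As, self.bs = As, bs

    def __call__(self, x):
        for A, b in zip(self.As[:-1], self.bs[:-1]):
            x = relu(A @ x + b)                     # rho acts component-wise
        return self.As[-1] @ x + self.bs[-1]        # no rho after the last layer

    def connectivity(self):
        # M(Phi): total number of non-zero parameters in the affine maps
        return sum(np.count_nonzero(A) + np.count_nonzero(b)
                   for A, b in zip(self.As, self.bs))

    def depth(self):
        return len(self.As)                         # L(Phi)

rng = np.random.default_rng(0)
phi = ReLUNetwork([rng.standard_normal((8, 1)), rng.standard_normal((2, 8))],
                  [rng.standard_normal(8), rng.standard_normal(2)])
print(phi(np.array([0.3])), phi.connectivity(), phi.depth())  # a network in N_{1,2}
```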

  6. Histogram distributions. Figure: a histogram distribution in $E[0,1]^1_n$ with $d = 1$, $n = 5$ (left), and one in $E[0,1]^2_n$ with $d = 2$, $n = 4$ (right).

  7. Our goal. Transport $U[0,1]$ to an approximation of any given distribution supported on $[0,1]^d$. For illustration purposes we look at $d = 2$.

  8. ReLU networks and histograms. Takeaway message: for any histogram distribution there exists a ReLU net that generates it from a uniform input. This net realizes an inverse cumulative distribution function ($\mathrm{cdf}^{-1}$).
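
In 1-D this is easy to see numerically: the inverse CDF of a histogram distribution is piecewise linear, and ReLU networks realize exactly the piecewise-linear functions. A hedged sketch (the bin weights are hypothetical; we evaluate the piecewise-linear map with np.interp instead of spelling out network weights):

```python
import numpy as np

def histogram_icdf(weights):
    """Inverse CDF of a histogram distribution on [0,1] with equal-width bins.

    The result is piecewise linear, hence exactly realizable by a ReLU net.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    knots = np.concatenate([[0.0], np.cumsum(w)])    # CDF values at the bin edges
    edges = np.linspace(0.0, 1.0, len(w) + 1)        # the bin edges themselves
    return lambda u: np.interp(u, knots, edges)

f = histogram_icdf([0.1, 0.4, 0.3, 0.2])             # hypothetical bin weights
u = np.random.default_rng(1).random(200_000)         # uniform input
samples = f(u)                                       # pushforward of U[0,1]
print(np.histogram(samples, bins=4, range=(0, 1))[0] / 200_000)  # ~ the weights
```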

  9. The key ingredient to dimension increase. Sawtooth function $g : [0,1] \to [0,1]$,
$$g(x) = \begin{cases} 2x, & \text{if } x < \tfrac{1}{2}, \\ 2(1-x), & \text{if } x \geq \tfrac{1}{2}, \end{cases}$$
let $g_1(x) = g(x)$, and define the sawtooth function of order $s$ as the $s$-fold composition of $g$ with itself, $g_s := \underbrace{g \circ g \circ \cdots \circ g}_{s}$, $s \geq 2$. NNs realize the sawtooth as $g(x) = 2\rho(x) - 4\rho(x - 1/2) + 2\rho(x - 1)$.
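
Both definitions of $g$, and the $s$-fold composition, can be checked directly; a minimal sketch (ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def g(x):
    # ReLU realization: g(x) = 2*rho(x) - 4*rho(x - 1/2) + 2*rho(x - 1)
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def g_s(x, s):
    # sawtooth of order s: the s-fold composition of g with itself
    for _ in range(s):
        x = g(x)
    return x

x = np.linspace(0.0, 1.0, 1001)
tent = np.where(x < 0.5, 2 * x, 2 * (1 - x))   # the piecewise definition of g
print(np.allclose(g(x), tent))                 # True: the ReLU form matches
print(g_s(x, 3).min(), g_s(x, 3).max())        # order-3 sawtooth stays in [0,1]
```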

  10. Related work. Theorem ([Bailey and Telgarsky, 2018, Th. 2.1], case $d = 2$). There exists a ReLU network $\Phi : x \mapsto (x, g_s(x))$, $\Phi \in \mathcal{N}_{1,2}$, with connectivity $M(\Phi) \leq C s$ for some constant $C > 0$, and of depth $L(\Phi) \leq s + 1$, such that
$$W(\Phi \# U[0,1], U[0,1]^2) \leq \frac{\sqrt{2}}{2^s}.$$
Main proof idea: the space-filling property of the sawtooth function.
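
A quick Monte Carlo illustration of the space-filling property (ours, not a proof): pushing $U[0,1]$ through $x \mapsto (x, g_s(x))$ fills the unit square ever more evenly as $s$ grows, here measured as the worst deviation of cell masses on an $8 \times 8$ grid.

```python
import numpy as np

def g_s(x, s):
    # s-fold composition of the ReLU sawtooth from the previous slide
    for _ in range(s):
        x = 2 * np.maximum(x, 0) - 4 * np.maximum(x - 0.5, 0) + 2 * np.maximum(x - 1, 0)
    return x

u = np.random.default_rng(2).random(1_000_000)
for s in (2, 4, 8):
    counts, _, _ = np.histogram2d(u, g_s(u, s), bins=8, range=[[0, 1], [0, 1]])
    dev = np.abs(counts / u.size - 1 / 64).max()   # worst deviation from uniform mass
    print(f"s={s}: max cell deviation {dev:.4f}")  # shrinks as the depth s grows
```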

  11. Generalization of the space-filling property

  12. Approximating 2-D distributions. $M : x \mapsto (x, f(g_s(x)))$. Figure: generating a histogram distribution via the transport map $(x, f(g_s(x)))$. Left: the function $f(x)$; center: $f(g_4(x))$; right: a heatmap of the resulting histogram distribution.
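
A sketch of this transport map under assumed data (ours): $f$ is the piecewise-linear inverse CDF of a hypothetical 4-bin histogram, and the sawtooth $g_s$ makes the $y$-coordinate approximately independent of $x$ at coarse resolution, so every $x$-column carries (roughly) the same $y$-histogram.

```python
import numpy as np

def g_s(x, s):
    for _ in range(s):
        x = 2 * np.maximum(x, 0) - 4 * np.maximum(x - 0.5, 0) + 2 * np.maximum(x - 1, 0)
    return x

def histogram_icdf(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return lambda u: np.interp(u, np.concatenate([[0.0], np.cumsum(w)]),
                               np.linspace(0.0, 1.0, len(w) + 1))

f = histogram_icdf([0.1, 0.4, 0.3, 0.2])      # hypothetical y-bin weights
u = np.random.default_rng(3).random(500_000)
x, y = u, f(g_s(u, 6))                        # the map M : x -> (x, f(g_s(x)))
counts, _, _ = np.histogram2d(x, y, bins=4, range=[[0, 1], [0, 1]])
print(counts / u.size)                        # each row ~ [0.1, 0.4, 0.3, 0.2] / 4
```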

  13. Approximating 2-D distributions, cont'd.
$$M : x \mapsto \Big( f_{\mathrm{marg}}(x), \ \sum_{i=0}^{n-1} f_i\big(g_s(n f_{\mathrm{marg}}(x) - i)\big) \Big)$$
Figure: generating a general 2-D histogram distribution. Left: the function $f_1 = f_3$; center: $\sum_{i=0}^{3} f_i(g_3(4x - i))$; right: a heatmap of the resulting histogram distribution. The function $f_0 = f_2$ is depicted on the left in Figure 3.
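
The general map can be sketched the same way (ours; the $2 \times 2$ joint bin masses are hypothetical). Here $f_{\mathrm{marg}}$ is the inverse CDF of the $x$-marginal and $f_i$ that of the conditional of $y$ given the $i$-th $x$-bin; since the ReLU sawtooth vanishes outside $[0,1]$ and $f_i(0) = 0$, the sum selects exactly the conditional of the bin containing $f_{\mathrm{marg}}(x)$.

```python
import numpy as np

def g_s(t, s):
    # the ReLU sawtooth is 0 outside [0,1], which makes the sum below a selector
    for _ in range(s):
        t = 2 * np.maximum(t, 0) - 4 * np.maximum(t - 0.5, 0) + 2 * np.maximum(t - 1, 0)
    return t

def histogram_icdf(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return lambda u: np.interp(u, np.concatenate([[0.0], np.cumsum(w)]),
                               np.linspace(0.0, 1.0, len(w) + 1))

p = np.array([[0.10, 0.30],                     # hypothetical joint bin masses,
              [0.40, 0.20]])                    # p[i, j] = P(x bin i, y bin j)
n, s = p.shape[0], 8
f_marg = histogram_icdf(p.sum(axis=1))          # inverse CDF of the x-marginal
f_cond = [histogram_icdf(p[i]) for i in range(n)]  # y given x-bin i

u = np.random.default_rng(4).random(500_000)
x = f_marg(u)
y = sum(f_cond[i](g_s(n * x - i, s)) for i in range(n))
counts, _, _ = np.histogram2d(x, y, bins=n, range=[[0, 1], [0, 1]])
print(counts / u.size)                          # approximately p
```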

  14. Generating histogram distributions with NNs. Theorem. For every distribution $p_{X,Y}(x,y)$ in $E[0,1]^2_n$, there exists a $\Psi \in \mathcal{N}_{1,2}$ with connectivity $M(\Psi) \leq C_1 n^2 + C_2 n s$, for some constants $C_1, C_2 > 0$, and of depth $L(\Psi) \leq s + 3$, such that
$$W(\Psi \# U[0,1], p_{X,Y}) \leq \frac{2\sqrt{2}\,n}{2^s}.$$
The error decays exponentially in the depth $s$ and grows only linearly in $n$. Connectivity is in $O(n^2)$, which is of the same order as the number of parameters of $E[0,1]^2_n$ (namely $n^2 - 1$). The special case $n = 1$ coincides with [Bailey and Telgarsky, 2018, Th. 2.1].

  15. Histogram approximation. Theorem. Let $p_{X,Y}$ be a 2-dimensional $L$-Lipschitz-continuous pdf of finite differential entropy on its support $[0,1]^2$. Then, for every $n > 0$, there exists a $\tilde{p}_{X,Y} \in E[0,1]^2_n$ such that
$$W(p_{X,Y}, \tilde{p}_{X,Y}) \leq \tfrac{1}{2}\,\| p_{X,Y} - \tilde{p}_{X,Y} \|_{L^1([0,1]^2)} \leq \frac{L\sqrt{2}}{2n}.$$
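
A numerical illustration of the $O(1/n)$ rate (ours; the Lipschitz density is hypothetical): bin the density into an $n \times n$ histogram and measure the $L^1$ error, which roughly halves each time $n$ doubles.

```python
import numpy as np

# a hypothetical Lipschitz-continuous pdf on [0,1]^2 (non-negative, integrates to 1)
pdf = lambda x, y: 1.0 + 0.5 * np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)

m = 512                                          # fine grid for numerical integration
xs = (np.arange(m) + 0.5) / m
X, Y = np.meshgrid(xs, xs, indexing="ij")
P = pdf(X, Y)

for n in (2, 4, 8, 16):
    k = m // n
    cell = P.reshape(n, k, n, k).mean(axis=(1, 3))           # cell averages: the histogram
    P_hist = np.repeat(np.repeat(cell, k, axis=0), k, axis=1)
    print(f"n={n:2d}: L1 error {np.abs(P - P_hist).mean():.4f}")  # ~ halves as n doubles
```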

  16. Universal approximation. Theorem. Let $p_{X,Y}$ be an $L$-Lipschitz continuous pdf supported on $[0,1]^2$. Then, for every $n > 0$, there exists a $\Phi \in \mathcal{N}_{1,2}$ with connectivity $M(\Phi) \leq C_1 n^2 + C_2 n s$, for some constants $C_1, C_2 > 0$, and of depth $L(\Phi) \leq s + 3$, such that
$$W(\Phi \# U[0,1], p_{X,Y}) \leq \frac{L\sqrt{2}}{n} + \frac{2\sqrt{2}\,n}{2^s}.$$
Takeaway message: ReLU networks have no fundamental limitation in going from a low dimension to a higher one.

  17. References I
Bailey, B. and Telgarsky, M. J. (2018). Size-noise tradeoffs in generative networks. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31, pages 6489–6499. Curran Associates, Inc.
Box, G. E. P. and Muller, M. E. (1958). A note on the generation of random normal deviates. Ann. Math. Statist., 29(2):610–611.
