Neural Networks with Euclidean Symmetry for Physical Sciences

  1. Neural Networks with Euclidean Symmetry for Physical Sciences: 3D rotation- and translation-equivariant convolutional neural networks (for points, meshes, images, ...). Tess Smidt, 2018 Alvarez Fellow. CSA Summer Series in Computing Sciences, 2020.07.01

  2. Talk Takeaways:
     1. First, a deep learning primer!
     2. Different types of neural networks encode assumptions about specific data types.
     3. Data types in the physical sciences are geometry and geometric tensors.
     4. Neural networks with Euclidean symmetry can naturally handle these data types: (a) how they work, (b) what they can do.

  3. A brief primer on deep learning: deep learning ⊂ machine learning ⊂ artificial intelligence

  4. A brief primer on deep learning. model (“neural network”): a function with learnable parameters.

  5. A brief primer on deep learning. model (“neural network”): a function with learnable parameters. Example: a "fully-connected" network, a linear transformation (the learned parameters) followed by an element-wise nonlinear function.
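
A minimal sketch of one such fully-connected layer in NumPy (the shapes, the tanh nonlinearity, and the random parameters are illustrative assumptions, not from the talk):

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully-connected layer: a linear transformation with
    learned parameters (W, b), then an element-wise nonlinearity."""
    z = W @ x + b        # linear transformation
    return np.tanh(z)    # element-wise nonlinear function

# Example: map a 4-dimensional input to 3 outputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # learned parameters (random stand-ins here)
b = rng.normal(size=3)
x = rng.normal(size=4)
print(dense_layer(x, W, b))
```

Stacking several such layers, each with its own W and b, gives the "multiple layers" of the next slide.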

  6. A brief primer on deep learning. Neural networks with multiple layers can learn more complicated functions.

  8. A brief primer on deep learning. deep learning: add more layers.

  9. A brief primer on deep learning. data: you want lots of it; the model has many parameters, and you don't want it to overfit. https://en.wikipedia.org/wiki/Overfitting

  10. A brief primer on deep learning. cost function: a metric to assess how well the model is performing, evaluated on the output of the model. Also called the loss or error.
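
A concrete (assumed) example of a cost function, mean squared error, in NumPy:

```python
import numpy as np

def mse(predictions, targets):
    """Mean squared error: one common cost function, evaluated on
    the model's outputs against known targets."""
    return np.mean((predictions - targets) ** 2)

print(mse(np.array([1.0, 2.0]), np.array([1.5, 1.5])))  # 0.25
```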

  11. A brief primer on deep learning. way to update parameters: construct a model that is differentiable. This is easiest with differentiable programming frameworks, e.g. Torch, TensorFlow, JAX, ... Take derivatives of the cost function (loss or error) with respect to the learnable parameters. This is called backpropagation (aka the chain rule).
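
A minimal sketch of this update loop in PyTorch (the two-layer model, batch size, and step size are illustrative assumptions):

```python
import torch

# A small differentiable model: linear -> nonlinearity -> linear.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.Tanh(),
    torch.nn.Linear(8, 1),
)

x = torch.randn(16, 4)       # a batch of inputs
target = torch.randn(16, 1)  # corresponding targets

loss = torch.mean((model(x) - target) ** 2)  # cost function
loss.backward()              # backpropagation: d(loss)/d(parameters)

with torch.no_grad():        # one gradient-descent update
    for p in model.parameters():
        p -= 0.01 * p.grad
        p.grad.zero_()
```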

  12. A brief primer on deep learning. convolutional neural networks: used for images. In each layer, scan over the image with learned filters. http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution

  13. A brief primer on deep learning. convolutional neural networks: used for images. In each layer, scan over the image with learned filters. http://cs.nyu.edu/~fergus/tutorials/deep_learning_cvpr12/
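
A minimal sketch of what "scanning with a filter" means, in NumPy (the 3x3 vertical-edge filter is an illustrative choice; in a CNN its weights are learned):

```python
import numpy as np

def scan_filter(image, kernel):
    """Slide a filter over an image, taking a weighted sum at each
    position (stride 1, no padding); CNN layers do this with many
    learned filters at once."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])  # responds to vertical edges
print(scan_filter(image, kernel).shape)  # (6, 6)
```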

  14. Neural networks are specially designed for different data types. Assumptions about the data type are built into how the network operates.

  15. Neural networks are specially designed for different data types. Assumptions about the data type are built into how the network operates.
      Arrays ⇨ Dense NN: components are independent.
      2D images ⇨ Convolutional NN: the same features can be found anywhere in an image; locality.
      Text ⇨ Recurrent NN: sequential data; the next input/output depends on the input/output that has come before.

  16. Neural networks are specially designed for different data types. Assumptions about the data type are built into how the network operates.
      Arrays ⇨ Dense NN: components are independent.
      2D images ⇨ Convolutional NN: the same features can be found anywhere in an image; locality.
      Text ⇨ Recurrent NN: sequential data; the next input/output depends on the input/output that has come before.
      What are our data types in the physical sciences? How do we build neural networks for these data types?

  17. Given a molecule and a rotated copy, we want the predicted forces to be the same up to rotation (predicted forces are equivariant to rotation). Additionally, we should be able to generalize to molecules with similar motifs.

  18. Primitive unit cells, conventional unit cells, and supercells of the same crystal should produce the same output (assuming periodic boundary conditions).

  19. We want the networks to be able to predict molecular Hamiltonians in any orientation from seeing a single example. [Figure: Hamiltonian of water, with blocks labeled by atom (H, H, O) and orbital (1s, 2s, 2p, 3d).]

  20. What are our data types? 3D geometry and geometric tensors, which transform predictably under 3D rotation, translation, and inversion. These data types assume Euclidean symmetry. ⇨ Thus, we need neural networks that preserve Euclidean symmetry.

  21. Analogous to... the laws of (non-relativistic) physics have Euclidean symmetry, even if systems do not. The network is our model of “physics”. The input to the network is our system. [Figure: point charges q and a magnetic field B.]

  22. A Euclidean-symmetry-preserving network produces outputs that preserve the subset of symmetries induced by the input. Examples: 3D rotations and inversions, O(3); 2D rotations and mirrors along the cone axis, SO(2) + mirrors (C∞v); discrete rotations and mirrors, O_h; discrete rotations, mirrors, and translations, Pm-3m (221).

  23. Geometric tensors take many forms. They are a general data type beyond materials.

  24. Geometric tensors take many forms. They are a general data type beyond materials.
      Scalars: energy, mass, isotropic*
      Vectors: force, velocity, acceleration, polarization
      Pseudovectors: angular momentum, magnetic fields
      Matrices, tensors, ...: moment of inertia, polarizability, interaction of multipoles, elasticity tensor (rank 4)
      Also: atomic orbitals, output of angular Fourier transforms, vector fields on spheres (e.g. B-modes of the Cosmic Microwave Background)

  25. Geometric tensors only permit specific operations: scalar operations, direct sums, and direct products (more about these later). Neural networks that only use these operations are equivariant to 3D translations, rotations, and inversion. Equivariant vs. invariant? Examples for a vector: the magnitude of a vector is invariant to rotation and translation; the direction of a vector is invariant to translation and equivariant to rotation; the location of a vector in space is equivariant to translation and equivariant to rotation.
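
These statements can be checked numerically; a small sketch (the rotation and the vector are arbitrary illustrative choices):

```python
import numpy as np

theta = np.pi / 2  # rotate 90 degrees about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0.,             0.,            1.]])
v = np.array([1., 2., 3.])

# Magnitude is invariant to rotation: |R v| == |v|.
print(np.allclose(np.linalg.norm(R @ v), np.linalg.norm(v)))   # True

# Direction is equivariant to rotation: direction(R v) == R direction(v).
d = v / np.linalg.norm(v)
print(np.allclose((R @ v) / np.linalg.norm(R @ v), R @ d))     # True
```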

  26. Why limit yourself to equivariant functions? You can substantially shrink the space of functions you need to optimize over, which means you need less data to constrain your function. [Diagram: within all learnable functions, the equivariant ones form a much smaller set that still contains the functions you actually wanted to learn, once constrained by your data.]

  27. Why not limit yourself to invariant functions? You have to guarantee that your input features already contain any necessary equivariant interactions (e.g. cross-products). [Diagram: all learnable invariant functions, constrained by your data, may miss the functions you actually wanted to learn.]

  28. Building Euclidean Neural Networks

  29. The input to our network is geometry and features on that geometry.

  30. The input to our network is geometry and features on that geometry. We categorize our features by how they transform under rotation: features have “angular frequency” L, where L is a non-negative integer.
      Scalars (L = 0): don't change with rotation.
      Vectors (L = 1): change with the same frequency as the rotation.
      3x3 matrices: decompose into components of frequency L = 0, 1, and 2.

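A sketch of how these three feature types transform under one rotation (NumPy; the specific rotation is an illustrative choice):

```python
import numpy as np
from scipy.spatial.transform import Rotation

R = Rotation.from_euler("z", 30, degrees=True).as_matrix()

s = 5.0                     # scalar (L = 0)
v = np.array([1., 2., 3.])  # vector (L = 1)
M = np.outer(v, v)          # 3x3 matrix

s_rot = s                   # scalars don't change with rotation
v_rot = R @ v               # vectors rotate once
M_rot = R @ M @ R.T         # matrices pick up two factors of R

# The trace is one of the matrix's scalar (L = 0) components: invariant.
print(np.allclose(np.trace(M_rot), np.trace(M)))  # True
```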

  32. Euclidean neural networks are similar to convolutional neural networks, EXCEPT with special filters and tensor algebra! Convolutional filters are based on learned radial functions and spherical harmonics: F(r) = R(|r|) Y_lm(r/|r|).
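
A sketch of evaluating such a filter at one point, under stated assumptions: a Gaussian stands in for the learned radial function, and scipy's spherical harmonics supply the angular part (in a real network the radial function is learned):

```python
import numpy as np
from scipy.special import sph_harm

def filter_value(xyz, l=1, m=0):
    """Equivariant filter F(r) = R(|r|) * Y_lm(r/|r|): a radial
    profile (learned, in a real network) times a spherical harmonic."""
    x, y, z = xyz
    r = np.linalg.norm(xyz)
    theta = np.arctan2(y, x)   # azimuthal angle (scipy's convention)
    phi = np.arccos(z / r)     # polar angle
    radial = np.exp(-r ** 2)   # stand-in for a learned radial function
    return radial * sph_harm(m, l, theta, phi).real

print(filter_value(np.array([0.1, 0.2, 0.9])))
```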
