

  1. Invariances in Gaussian processes, and how to learn them. ST John, PROWLER.io.

  2. Outline: 1. What are invariances? 2. Why do we want to make use of them? 3. How can we construct invariant GPs? 4. Where invariant GPs are actually crucial. 5. How can we figure out what invariances to employ?

  3. What are invariances? A function does not change under some transformation, i.e. f(g(x)) = f(x) for every transformation g of the chosen symmetry. Transformations can be discrete or continuous: translation, rotation, reflection, permutation.

  4. Invariance under discrete translation: periodic functions.

  5. Invariance under discrete translation: Density(2, 3) = Density(3, 4). [Figure: a periodic density plotted over an axis from 1 to 5]

  6. Invariance under discrete rotation: the density of water molecules as a function of the (x, y) point in the plane. One sixth of the plane already predicts the function value everywhere.

  7. Invariance under reflection: solar elevation measured as a function of azimuth (for different days). The left half already predicts the right half.

  8. Invariance under permutation: [100, 200, 1, 1, 1]

  9. Invariance under permutation: [1, 200, 1, 100, 1]

  10. Invariance under permutation: f(100, 200, 1, 1, 1) = f(1, 200, 1, 100, 1). Different inputs, but the same function value.
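As a small illustration of this property (not from the slides; the function below is purely hypothetical), any function built only from symmetric operations such as sums and maxima is automatically permutation invariant:

```python
import itertools

def f(x):
    # A symmetric function of its inputs: the value depends only on the
    # multiset of entries, not on their order.
    return sum(v * v for v in x) + max(x)

x = [100, 200, 1, 1, 1]
# Every permutation of the input gives the same output.
values = {f(list(p)) for p in itertools.permutations(x)}
assert len(values) == 1
print(values)
```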

  11. Invariance under permutation: the energy E of a molecule does not change when equivalent atoms are relabelled, e.g. E(labelling 1, 2, 3) = E(labelling 3, 2, 1). [Figure: the same molecule with two different atom labellings]

  12. Discrete symmetries.

  13. Invariance under continuous transformations: translation, rotation.

  14. Example: image classification. The class label is a function of the image pixel matrix: Label( [cat image] ) = “cat”.

  15.–17. Example: image classification, continued: further images of the cat, each with Label( ) = “cat”.

  18. Example: image classification: Label( [image of a handwritten 8] ) = “8”.

  19. Example: molecular energy: E( ) = E( ) for two equivalent configurations of the same molecule.

  20. Approximately invariant…

  21. Approximately invariant… a rotated “6” becomes a “9”, so full rotation invariance would be wrong for digit classification.

  22. 2. Why do we want to use invariances? - Incorporate prior knowledge about the behaviour of a system: physical symmetries, e.g. modelling the total energy (and gradients, i.e. forces) of a set of atoms. - Helps generalisation: improved accuracy versus the number of training points.

  23.–26. Toy example. [Figure-only slides]

  27. Constructing invariant GPs. We want a prior over functions that obey the chosen symmetry, i.e. we symmetrise the function. This can be done by (a) an appropriate mapping to an invariant space, or (b) a sum over transformations.

  28. Permutation-invariant GPs: mapping construction.
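The slide's formulas are images and are not in the extracted text. As one hedged sketch of a mapping construction for permutation invariance (the sorting map and all names below are assumptions, not necessarily the ones used in the talk), we map each input to a canonical, order-independent representative and evaluate an ordinary kernel there:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Standard squared-exponential base kernel.
    sq = np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def k_mapped(x, x_prime):
    # Mapping construction: send each input to an order-independent
    # representation (here, sorting), then evaluate the base kernel there.
    return rbf(np.sort(x), np.sort(x_prime))

# Permuting either argument does not change the kernel value.
print(np.isclose(k_mapped([1.0, 3.0], [2.0, 0.5]),
                 k_mapped([3.0, 1.0], [0.5, 2.0])))  # True
```

Any order-independent mapping (sorted values, sums, histograms) would serve the same purpose.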

  29. Permutation-invariant GPs: sum construction: f_inv(x1, x2) = f(x1, x2) + f(x2, x1), i.e. the base function summed over both orderings of the input.

  30. Invariant sum kernel: the resulting prior has kernel k_inv(x, x') = Σ_{g, g'} k(g x, g' x'), a double sum of the base kernel over the transformations applied to both arguments.
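A minimal, self-contained sketch of this double-sum kernel for the 2D permutation case (assuming an RBF base kernel; helper names are illustrative):

```python
import itertools
import numpy as np

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * np.sum((np.asarray(a) - np.asarray(b)) ** 2) / lengthscale**2)

def k_perm_sum(x, x_prime):
    # Invariant sum kernel: double sum of the base kernel over all
    # permutations of both inputs.
    return sum(rbf(a, b)
               for a in itertools.permutations(x)
               for b in itertools.permutations(x_prime))

# The value is unchanged if either input is permuted.
print(np.isclose(k_perm_sum([1.0, 3.0], [0.0, 2.0]),
                 k_perm_sum([3.0, 1.0], [2.0, 0.0])))  # True
```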

  31. Samples from the prior.

  32. How can we generalise this?

  33. Symmetry group. Transformations can be composed; the set of all compositions of transformations forms a group, which corresponds to the symmetries.

  34. Orbit of x: all points reachable from x by the transformations, i.e. the set {g x : g in the group}. Example: permutation in 2D, where the orbit of (x1, x2) is {(x1, x2), (x2, x1)}.

  35. Examples of orbits: permutation invariance. Orbit size = 2.

  36. Examples of orbits: six-fold rotation invariance. Orbit size = 6.

  37. Examples of orbits: permutation and six-fold rotation. Orbit size = 12.

  38. Examples of orbits: continuous rotation symmetry. The orbit is uncountably infinite.

  39. Orbit of a periodic function in 1D: countably infinite.

  40. Constructing invariant GPs: sum revisited. Summing the function over the orbit, f_inv(x) = Σ_{x_a ∈ A(x)} f(x_a), gives a GP whose kernel is the double sum k_inv(x, x') = Σ_{x_a ∈ A(x)} Σ_{x'_b ∈ A(x')} k(x_a, x'_b).
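A hedged sketch of this orbit-based construction, here with a six-fold rotation orbit as in the earlier examples (all helper names are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * np.sum((a - b) ** 2) / lengthscale**2)

def orbit_sixfold(x):
    # Orbit of a 2D point under rotations by multiples of 60 degrees (size 6).
    angles = np.deg2rad(60.0 * np.arange(6))
    return [np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]]) @ x for a in angles]

def k_orbit_sum(x, x_prime, orbit, base_kernel=rbf):
    # Double sum of the base kernel over both orbits.
    return sum(base_kernel(a, b) for a in orbit(x) for b in orbit(x_prime))

x, y = np.array([1.0, 0.0]), np.array([0.4, 0.7])
x_rotated = orbit_sixfold(x)[1]  # x rotated by 60 degrees
print(np.isclose(k_orbit_sum(x, y, orbit_sixfold),
                 k_orbit_sum(x_rotated, y, orbit_sixfold)))  # True
```

Because the orbit of a transformed input is the same set of points, the double sum is unchanged, which is exactly what makes the kernel invariant.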

  41. Applications

  42. Molecular modelling: the time evolution of the configuration (positions of all atoms) of a system of atoms/molecules. This requires the potential energy surface (PES); its gradients are the forces (easy to obtain with GPs).

  43. Potential Energy Surface

  44. Modelling the potential energy surface: approximate the energy as a sum over k-mers (many-body expansion); require invariance to rotation/translation of the local environment (k-mer) and invariance under permutation of equivalent atoms.

  45. Modelling the potential energy surface. Many-body expansion, a sum over k-mers (sketched below):
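The slide's formula is an image; a standard form of the many-body expansion it refers to (notation assumed) is:

```latex
E_{\text{total}}(\mathbf{r}_1, \dots, \mathbf{r}_N)
  \;\approx\; \sum_i E^{(1)}(\mathbf{r}_i)
  \;+\; \sum_{i<j} E^{(2)}(\mathbf{r}_i, \mathbf{r}_j)
  \;+\; \sum_{i<j<k} E^{(3)}(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_k)
  \;+\; \cdots
```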

  46. Modelling the potential energy surface. Invariance to rotation/translation of the local environment (k-mer): map to interatomic distances.
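A minimal sketch of such a descriptor (function name illustrative): the pairwise-distance vector of a k-mer is unchanged by rigid rotations and translations, so a kernel evaluated on this representation inherits the invariance.

```python
import itertools
import numpy as np

def interatomic_distances(positions):
    # Descriptor for a k-mer: the vector of pairwise interatomic distances.
    # Rigidly rotating or translating the k-mer leaves it unchanged.
    positions = np.asarray(positions, float)
    return np.array([np.linalg.norm(positions[i] - positions[j])
                     for i, j in itertools.combinations(range(len(positions)), 2)])

trimer = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.5, 0.0]])
shifted = trimer + np.array([5.0, -2.0, 3.0])   # rigid translation
print(np.allclose(interatomic_distances(trimer),
                  interatomic_distances(shifted)))  # True
```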

  47. Modelling the potential energy surface. Invariance under permutation of equivalent atoms: sum over them!

  48. How can we find out whether an invariance is helpful? - As usual (like any other kernel hyperparameter): via the marginal likelihood. - Unlike the “regular” likelihood (which is equivalent to the training-set RMSE), it overfits less and is related to generalisation.

  49. Marginal likelihood and generalisation. The marginal likelihood measures how well part of the training set predicts the other training points, i.e. how accurately the model generalises during inference, similar to cross-validation (but differentiable).
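The equation on the slide is not in the extracted text; the standard decomposition this refers to writes the log marginal likelihood as a chain of posterior predictions of each training point given the previous ones:

```latex
\log p(\mathbf{y} \mid X, \theta)
  = \sum_{n=1}^{N} \log p\left(y_n \mid y_1, \dots, y_{n-1}, X, \theta\right)
```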

  50. Marginal likelihood

  51. Summary: we have seen… how to constrain GPs to give invariant functions; when invariance improves a model's generalisation; when invariance increases the marginal likelihood; and that invariances exist in real-world problems.

  52. Questions? Next up: how to learn invariances…

  53. Snowflake prior

  54. Why not just data augmentation? It is used in deep learning… but invariances in the prior are better: 1. cubic scaling with the number of data points, versus linear scaling when the invariance is in the prior; 2. data augmentation results in the same predictive mean, but not the same predictive variance; 3. invariances in the GP prior give us invariant samples.

  55. Learning Invariances with the Marginal Likelihood. Mark van der Wilk, PROWLER.io.

  56. We discussed… 1. how to constrain GPs to give invariant functions; 2. when invariance improves a model's generalisation; 3. when invariance increases the marginal likelihood; 4. that invariances exist in real-world problems.

  57. From known invariances to learning them. We previously saw that known invariances were useful for modelling. ● How do we exploit invariances in a problem if we don't know them a priori? ● Can we learn a useful invariance from the data?

  58. Model selection. ● Invariances in a GP are expressed in the kernel. ● We use the marginal likelihood to select models. ● Parameterising the orbit is all that is left.

  59. Parameterising orbits is hard. Strict invariance requires f(x) = f(x_a) for every x_a in the orbit of x, which we can obtain using the sum construction above. But I don't know how to parameterise orbits!

  60. From orbits to distributions. ● We sum over an arbitrary set of points, drawn from a distribution rather than an exact orbit. ● Take the infinite limit. ● Find the kernel: the double sum becomes a double expectation of the base kernel under that distribution. ● I do know how to parameterise distributions!
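A hedged Monte Carlo sketch of such an "insensitive" kernel, here using small random rotations as the augmentation distribution p(x_a | x); the choice of distribution and all names are illustrative assumptions, not the talk's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * np.sum((a - b) ** 2) / lengthscale**2)

def sample_augmentations(x, num_samples, max_angle=0.5):
    # Illustrative p(x_a | x): rotate the 2D input by a random angle
    # drawn uniformly from [-max_angle, max_angle] radians.
    angles = rng.uniform(-max_angle, max_angle, size=num_samples)
    return [np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]]) @ x for a in angles]

def k_insensitive(x, x_prime, num_samples=500):
    # Monte Carlo estimate of E[k(x_a, x'_b)] over independent
    # augmentations of both inputs.
    xs = sample_augmentations(x, num_samples)
    xps = sample_augmentations(x_prime, num_samples)
    return np.mean([rbf(a, b) for a, b in zip(xs, xps)])

print(k_insensitive(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```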

  61.–62. Insensitivity: we lose exact invariance… but this may be a blessing!

  63. What we will do: ● parameterise a distribution that describes the insensitivity; ● use this distribution to define a kernel; ● find invariance in the kernel by optimising the hyperparameters.

  64. Obstacles to inference: 1. for large datasets, the matrix operations on K_ff become infeasible (O(N³) time complexity); 2. we may have non-Gaussian likelihoods (classification!); 3. we can't even evaluate the kernel!

  65. Variational inference addresses the first two obstacles, but we still cannot evaluate the kernel exactly, and it is still needed for K_uu and k_un.
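For reference (standard sparse variational GP material, not reproduced on the slide; the μ_n and σ_n² notation matches slide 70), the evidence lower bound being optimised has the form:

```latex
\mathcal{L} = \sum_{n=1}^{N} \mathbb{E}_{q(f_n)}\!\left[\log p(y_n \mid f_n)\right]
  - \mathrm{KL}\!\left[\,q(\mathbf{u}) \,\Vert\, p(\mathbf{u})\,\right],
\qquad q(f_n) = \mathcal{N}\!\left(f_n \mid \mu_n, \sigma_n^2\right)
```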

  66.–69. Interdomain inducing variables. ● The variational posterior is constructed by conditioning. ● Gaussian conditioning requires the covariances K_uu and k_un.

  70. Unbiased estimation of the kernel. Unbiased estimates of μ_n, μ_n², and σ_n² give an unbiased estimate of the ELBO!

  71. Unbiased estimation of the kernel

  72. Unbiased estimation of the kernel: the required terms are estimated by sampling. (We only need to sample one set from p_θ(x_a | x); see the paper for details.)
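A hedged sketch of what such a sampling-based estimator can look like (the Gaussian-jitter augmentation and all names are illustrative, not the paper's scheme): a sample average is an unbiased estimate of the kernel, and products of independent estimates remain unbiased for the squared terms the ELBO needs.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * np.sum((a - b) ** 2) / lengthscale**2)

def sample_p(x, num_samples, noise=0.1):
    # Illustrative p_theta(x_a | x): Gaussian jitter around x.
    return x + noise * rng.standard_normal((num_samples, x.shape[0]))

def kernel_estimate(x, x_prime, num_samples=64):
    # Unbiased Monte Carlo estimate of E[k(x_a, x'_b)]: average the base
    # kernel over independently sampled augmentations of both inputs.
    xa = sample_p(x, num_samples)
    xb = sample_p(x_prime, num_samples)
    return np.mean([rbf(a, b) for a, b in zip(xa, xb)])

# Multiplying two *independent* estimates gives an unbiased estimate of a
# squared kernel term, which is what appears inside the ELBO.
x, y = np.array([0.0, 1.0]), np.array([1.0, 0.0])
k1, k2 = kernel_estimate(x, y), kernel_estimate(x, y)
print(k1, k2, k1 * k2)
```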

  73. What we did: ● parameterise a distribution that describes the insensitivity; ● use this distribution to define a kernel; ● approximate the marginal likelihood using the variational evidence lower bound (ELBO); ● find an unbiased ELBO approximation, using unbiased estimates of the kernel; ● optimise the hyperparameters, using the gradients of the ELBO.

  74. Results: MNIST and rotated MNIST. ● A single model tunes itself automatically to multiple datasets. ● Fire off the optimisation and watch it go.
