  1. Neural Networks for Machine Learning, Lecture 11a: Hopfield Nets. Geoffrey Hinton, with Nitish Srivastava, Kevin Swersky, Tijmen Tieleman, and Abdel-rahman Mohamed.

  2. Hopfield Nets
     • A Hopfield net is composed of binary threshold units with recurrent connections between them.
     • Recurrent networks of non-linear units are generally very hard to analyze. They can behave in many different ways:
        – Settle to a stable state
        – Oscillate
        – Follow chaotic trajectories that cannot be predicted far into the future.
     • But John Hopfield (and others) realized that if the connections are symmetric, there is a global energy function.
        – Each binary "configuration" of the whole network has an energy.
        – The binary threshold decision rule causes the network to settle to a minimum of this energy function.

  3. The energy function
     • The global energy is the sum of many contributions. Each contribution depends on one connection weight and the binary states of two neurons:
       $E = -\sum_i s_i b_i - \sum_{i<j} s_i s_j w_{ij}$
     • This simple quadratic energy function makes it possible for each unit to compute locally how its state affects the global energy:
       Energy gap: $\Delta E_i = E(s_i = 0) - E(s_i = 1) = b_i + \sum_j s_j w_{ij}$
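A minimal sketch of how these two quantities could be computed, assuming 0/1 states, a symmetric weight matrix `W` with a zero diagonal, and a bias vector `b` (all names are illustrative, not from the lecture):

```python
import numpy as np

def global_energy(s, W, b):
    """E = -sum_i s_i b_i - sum_{i<j} s_i s_j w_ij.
    Assumes W is symmetric with a zero diagonal; the 0.5 corrects
    for the full matrix counting each pair (i, j) twice."""
    return -s @ b - 0.5 * s @ W @ s

def energy_gap(s, W, b, i):
    """Delta E_i = E(s_i = 0) - E(s_i = 1) = b_i + sum_j w_ij s_j."""
    return b[i] + W[i] @ s
```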

  4. Settling to an energy minimum
     • To find an energy minimum in this net, start from a random state and then update units one at a time in random order.
        – Update each unit to whichever of its two states gives the lowest global energy.
        – i.e. use binary threshold units.
     [Figure: a five-unit binary network with weights 3, 2, 3, 3, −1, −1 and a −4 connection between the top two units; the unit marked ? is about to be updated. −E = goodness = 3]

  5. Settling to an energy minimum (continued)
     • The same procedure continues: units are updated one at a time in random order, each taking whichever of its two states gives the lowest global energy.
     [Figure: the same net one update later; a different unit, marked ?, is now being probed. −E = goodness = 3]

  6. Settling to an energy minimum (continued)
     • Updating the probed unit turns it on, which lowers the energy.
     [Figure: the probed unit turns on, raising the goodness from 3 to 4. −E = goodness = 4]
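A sketch of the settling procedure described on slides 4 to 6, under the same assumptions as the earlier sketch (0/1 states, symmetric `W`, zero diagonal); the number of sweeps is an arbitrary choice:

```python
import numpy as np

def settle(s, W, b, rng, sweeps=10):
    """Asynchronous updates: visit units in random order and set each to
    the state with lower energy, i.e. on iff its energy gap is positive."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if b[i] + W[i] @ s > 0 else 0  # energy gap > 0: turn on
    return s
```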

  7. A deeper energy minimum
     • The net has two triangles in which the three units mostly support each other.
        – Each triangle mostly hates the other triangle.
     • The triangle on the left differs from the one on the right by having a weight of 2 where the other one has a weight of 3.
        – So turning on the units in the triangle on the right gives the deepest minimum.
     [Figure: the same net with the right-hand triangle turned on. −E = goodness = 5]

  8. Why do the decisions need to be sequential?
     • If units make simultaneous decisions the energy could go up.
     • With simultaneous parallel updating we can get oscillations.
        – They always have a period of 2.
     • If the updates occur in parallel but with random timing, the oscillations are usually destroyed.
     [Figure: two units joined by a weight of −100, each with a bias of +5, both currently off. At the next parallel step, both units will turn on. This has very high energy, so then they will both turn off again.]
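The two-unit example can be reproduced in a few lines. The wiring follows the slide (weight −100, biases +5); the rest is an illustrative sketch:

```python
import numpy as np

W = np.array([[0, -100],
              [-100, 0]])
b = np.array([5, 5])
s = np.array([0, 0])
for step in range(4):
    s = (b + W @ s > 0).astype(int)  # both units decide simultaneously
    print(step, s)                   # alternates [1 1], [0 0], [1 1], ...
```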

  9. A neat way to make use of this type of computation
     • Hopfield (1982) proposed that memories could be energy minima of a neural net.
        – The binary threshold decision rule can then be used to "clean up" incomplete or corrupted memories.
     • Using energy minima to represent memories gives a content-addressable memory:
        – An item can be accessed by just knowing part of its content. This was really amazing in the year 16 BG.
        – It is robust against hardware damage.
        – It's like reconstructing a dinosaur from a few bones.
     • The idea of memories as energy minima was proposed by I. A. Richards in 1924 in "Principles of Literary Criticism".

  10. Storing memories in a Hopfield net
     • If we use activities of 1 and −1, we can store a binary state vector by incrementing the weight between any two units by the product of their activities:
       $\Delta w_{ij} = s_i s_j$
        – We treat biases as weights from a permanently on unit.
     • This is a very simple rule that is not error-driven. That is both its strength and its weakness.
     • With states of 0 and 1 the rule is slightly more complicated:
       $\Delta w_{ij} = 4\,(s_i - \tfrac{1}{2})(s_j - \tfrac{1}{2})$
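A sketch of the one-shot storage rule for ±1 activities; the symmetric, zero-diagonal convention is an assumption carried over from the earlier sketches:

```python
import numpy as np

def store(patterns):
    """patterns: (M, N) array with entries in {-1, +1}.
    Increment each weight by the product of the two units' activities."""
    N = patterns.shape[1]
    W = np.zeros((N, N))
    for s in patterns:
        W += np.outer(s, s)   # delta_w_ij = s_i * s_j
    np.fill_diagonal(W, 0)    # no self-connections
    return W
```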

  11. Neural Networks for Machine Learning, Lecture 11b: Dealing with spurious minima in Hopfield Nets. Geoffrey Hinton, with Nitish Srivastava, Kevin Swersky, Tijmen Tieleman, and Abdel-rahman Mohamed.

  12. The storage capacity of a Hopfield net
     • Using Hopfield's storage rule, the capacity of a totally connected net with N units is only about 0.15N memories.
        – At N bits per memory this is only $0.15\,N^2$ bits.
        – This does not make efficient use of the bits required to store the weights.
     • The net has $N^2$ weights and biases.
     • After storing M memories, each connection weight has an integer value in the range [−M, M].
     • So the number of bits required to store the weights and biases is: $N^2 \log_2(2M + 1)$
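As an illustrative worked example (the numbers are assumptions, not from the slide): take $N = 100$ units, so $M \approx 0.15N = 15$ memories. Then

```latex
\text{bits stored} = 0.15\,N^2 = 1500, \qquad
\text{bits in weights} = N^2 \log_2(2M+1) = 10^4 \log_2 31 \approx 49{,}500
```

so the stored memories use only about 3% of the information capacity of the weights.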

  13. Spurious minima limit capacity
     • Each time we memorize a configuration, we hope to create a new energy minimum.
        – But what if two nearby minima merge to create a minimum at an intermediate location?
        – This limits the capacity of a Hopfield net.
     [Figure: two nearby minima merging on a 1-D energy curve. The state space is the corners of a hypercube; showing it as a 1-D continuous space is a misrepresentation.]

  14. Avoiding spurious minima by unlearning
     • Hopfield, Feinstein and Palmer suggested the following strategy:
        – Let the net settle from a random initial state and then do unlearning.
        – This will get rid of deep, spurious minima and increase memory capacity.
     • They showed that this worked.
        – But they had no analysis.
     • Crick and Mitchison proposed unlearning as a model of what dreams are for.
        – That's why you don't remember them (unless you wake up during the dream).
     • But how much unlearning should we do?
        – Can we derive unlearning as the right way to minimize some cost function?

  15. Increasing the capacity of a Hopfield net
     • Physicists love the idea that the math they already know might explain how the brain works.
        – Many papers were published in physics journals about Hopfield nets and their storage capacity.
     • Eventually, Elizabeth Gardner figured out that there was a much better storage rule that uses the full capacity of the weights.
     • Instead of trying to store vectors in one shot, cycle through the training set many times.
        – Use the perceptron convergence procedure to train each unit to have the correct state given the states of all the other units in that vector.
     • Statisticians call this technique "pseudo-likelihood".
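A sketch of this iterative storage scheme, assuming ±1 states; enforcing symmetry by mirroring each update is an implementation choice, not something the slide specifies:

```python
import numpy as np

def gardner_store(patterns, epochs=100, lr=0.1):
    """Cycle through the training set, applying the perceptron convergence
    rule to each unit so it takes its correct state given all the others."""
    M, N = patterns.shape
    W = np.zeros((N, N))
    for _ in range(epochs):
        for s in patterns:
            for i in range(N):
                if s[i] * (W[i] @ s) <= 0:    # unit i is wrong (or undecided)
                    W[i] += lr * s[i] * s     # perceptron update for unit i
                    W[:, i] += lr * s[i] * s  # mirror it to stay symmetric
                    W[i, i] = 0               # keep the diagonal at zero
    return W
```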

  16. Neural Networks for Machine Learning, Lecture 11c: Hopfield Nets with hidden units. Geoffrey Hinton, with Nitish Srivastava, Kevin Swersky, Tijmen Tieleman, and Abdel-rahman Mohamed.

  17. A different computational role for Hopfield nets
     • Instead of using the net to store memories, use it to construct interpretations of sensory input.
        – The input is represented by the visible units.
        – The interpretation is represented by the states of the hidden units.
        – The badness of the interpretation is represented by the energy.
     [Figure: a layer of hidden units connected to a layer of visible units.]

  18. What can we infer about 3-D edges from 2-D lines in an image?
     • A 2-D line in an image could have been caused by many different 3-D edges in the world.
     • If we assume it's a straight 3-D edge, the information that has been lost in the image is the 3-D depth of each end of the 2-D line.
        – So there is a family of 3-D edges that all correspond to the same 2-D line.
     [Figure caption: You can only see one of these 3-D edges at a time because they occlude one another.]

  19. An example: Interpreting a line drawing
     • Use one "2-D line" unit for each possible line in the picture.
        – Any particular picture will only activate a very small subset of the line units.
     • Use one "3-D line" unit for each possible 3-D line in the scene.
        – Each 2-D line unit could be the projection of many possible 3-D lines. Make these 3-D lines compete.
     • Make 3-D lines support each other if they join in 3-D.
     • Make them strongly support each other if they join at right angles.
     [Figure: a picture activates 2-D line units, which connect to 3-D line units; 3-D lines that join in 3-D support each other, more strongly if they join at a right angle.]
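One way such a net might be wired up, as a sketch with made-up data structures: `projects_to[k]` names the 2-D line that 3-D line unit `k` projects to, `joins` and `right_angle_joins` are sets of unit pairs, and all weight values are illustrative:

```python
import numpy as np

def build_weights(projects_to, joins, right_angle_joins,
                  compete=-2.0, support=1.0, strong=3.0):
    K = len(projects_to)
    W = np.zeros((K, K))
    for a in range(K):
        for c in range(a + 1, K):
            if projects_to[a] == projects_to[c]:
                w = compete          # rival explanations of one 2-D line
            elif (a, c) in right_angle_joins:
                w = strong           # join at a right angle: strong support
            elif (a, c) in joins:
                w = support          # join in 3-D: mild support
            else:
                continue
            W[a, c] = W[c, a] = w    # symmetric connections
    return W
```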

  20. Two difficult computational issues
     • Using the states of the hidden units to represent an interpretation of the input raises two difficult issues:
        – Search (lecture 11): How do we avoid getting trapped in poor local minima of the energy function? Poor minima represent sub-optimal interpretations.
        – Learning (lecture 12): How do we learn the weights on the connections to the hidden units and between the hidden units?

  21. Neural Networks for Machine Learning, Lecture 11d: Using stochastic units to improve search. Geoffrey Hinton, with Nitish Srivastava, Kevin Swersky, Tijmen Tieleman, and Abdel-rahman Mohamed.

  22. Noisy networks find better energy minima
     • A Hopfield net always makes decisions that reduce the energy.
        – This makes it impossible to escape from local minima.
     • We can use random noise to escape from poor minima.
        – Start with a lot of noise so it's easy to cross energy barriers.
        – Slowly reduce the noise so that the system ends up in a deep minimum. This is "simulated annealing" (Kirkpatrick et al. 1981).
     [Figure: an energy landscape with three minima labelled A, B, and C.]
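A sketch of settling with noise, using the standard logistic decision rule for stochastic binary units; the geometric temperature schedule below is an assumption, not from the lecture:

```python
import numpy as np

def anneal(s, W, b, rng, T_start=10.0, T_end=0.1, sweeps=100):
    """Turn unit i on with probability sigmoid(DeltaE_i / T); start hot
    so barriers are easy to cross, then cool toward a deep minimum."""
    s = s.copy()
    for T in np.geomspace(T_start, T_end, sweeps):
        for i in rng.permutation(len(s)):
            gap = b[i] + W[i] @ s                  # energy gap for unit i
            p_on = 1.0 / (1.0 + np.exp(-gap / T))  # logistic decision rule
            s[i] = 1 if rng.random() < p_on else 0
    return s
```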
