

SLIDE 1

RBM, DBN, and DBM

  • M. Soleymani
  • Sharif University of Technology, Fall 2017

Slides are based on Salakhutdinov's lectures (CMU, 2017) and Hugo Larochelle's class on Neural Networks: https://sites.google.com/site/deeplearningsummerschool2016/.

SLIDE 2

Energy-based models

  • Gibbs distribution: p(x) = e^{-E(x)} / Z, where Z is the partition function (normalizer)

SLIDE 3

Boltzmann Machine

SLIDE 4

How a causal model generates data

  • In a causal model we generate data in two sequential steps:
    – First pick the hidden states from p(h).
    – Then pick the visible states from p(v|h).
  • The probability of generating a visible vector v is computed by summing over all possible hidden states: p(v) = \sum_h p(h)\, p(v \mid h).

This slide has been adapted from Hinton's lectures, "Neural Networks for Machine Learning", Coursera, 2015.

SLIDE 5

How a Boltzmann Machine generates data

  • It is not a causal generative model.
  • Instead, everything is defined in terms of the energies of joint configurations of the visible and hidden units.
  • The energies of joint configurations are related to their probabilities:
    – We can simply define the probability to be p(v, h) \propto e^{-E(v, h)}, i.e. p(v, h) = e^{-E(v, h)} / Z.

SLIDE 6

Restricted Boltzmann Machines

A Restricted Boltzmann Machine (RBM) is an undirected graphical model with a hidden layer and a visible layer. The learnable parameters are the bias vectors b and c (acting linearly on the visible units v and the hidden units h, respectively) and the weight matrix W, which models the interaction between them.

E(v, h) = -v^T W h - b^T v - c^T h = -\sum_{j,k} W_{jk} v_j h_k - \sum_j b_j v_j - \sum_k c_k h_k
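For concreteness, here is a minimal NumPy sketch of this energy function (the names v, h, W, b, c mirror the notation above; this is an illustration, not code from the lecture):

import numpy as np

def rbm_energy(v, h, W, b, c):
    # E(v, h) = -v^T W h - b^T v - c^T h
    return -v @ W @ h - b @ v - c @ h

# Tiny example: 4 visible units, 3 hidden units
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 3))
b, c = np.zeros(4), np.zeros(3)
v = rng.integers(0, 2, size=4).astype(float)
h = rng.integers(0, 2, size=3).astype(float)
print(rbm_energy(v, h, W, b, c))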

SLIDE 7

Restricted Boltzmann Machines

All hidden units are conditionally independent given the visible units and vice versa.

SLIDE 8

Restricted Boltzmann Machines

RBM conditional probabilities:

p(v \mid h) = \prod_j p(v_j \mid h)

p(h \mid v) = \prod_k p(h_k \mid v)

p(v_j = 1 \mid h) = \sigma\!\left( W_{j \cdot}\, h + b_j \right)

p(h_k = 1 \mid v) = \sigma\!\left( W_{\cdot k}^T\, v + c_k \right)

where \sigma(z) = 1 / (1 + e^{-z}) is the logistic sigmoid.
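A minimal NumPy sketch of these conditionals (the helper names p_h_given_v and p_v_given_h are chosen here for illustration, not taken from the slides):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_h_given_v(v, W, c):
    # p(h_k = 1 | v) = sigmoid(W[:, k] . v + c_k), computed for all k at once
    return sigmoid(v @ W + c)

def p_v_given_h(h, W, b):
    # p(v_j = 1 | h) = sigmoid(W[j, :] . h + b_j), computed for all j at once
    return sigmoid(W @ h + b)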

SLIDE 9

Probabilistic Analog of Autoencoder

  • Autoencoder
SLIDE 10

RBM: Image input

SLIDE 11

MNIST: Learned features

Larochelle et al., JMLR 2009

SLIDE 12

Restricted Boltzmann Machines

The effect of the latent variables can be appreciated by considering the marginal distribution over the visible units:


E(v, h) = -v^T W h - b^T v - c^T h = -\sum_{j,k} W_{jk} v_j h_k - \sum_j b_j v_j - \sum_k c_k h_k
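The next few slides carry out this marginalization; the figures did not survive the transcript, but the standard result of summing the Gibbs distribution over h is:

p(v) = \frac{1}{Z} \sum_h e^{-E(v, h)} = \frac{1}{Z}\, e^{b^T v} \prod_k \left( 1 + e^{\,c_k + W_{\cdot k}^T v} \right)

equivalently p(v) = e^{-F(v)} / Z with free energy F(v) = -b^T v - \sum_k \log\left( 1 + e^{\,c_k + W_{\cdot k}^T v} \right).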

SLIDE 13

Marginal distribution

SLIDE 14

Marginal distribution

SLIDE 15

Marginal distribution

SLIDE 16

Marginal distribution

SLIDE 17

Model Learning

SLIDE 18

RBM learning: Stochastic gradient descent

\frac{\partial}{\partial \theta} \log p(v^{(t)}) = \underbrace{\frac{\partial}{\partial \theta} \log \sum_h \exp\left( v^{(t)T} W h + b^T v^{(t)} + c^T h \right)}_{\text{positive phase}} \;-\; \underbrace{\frac{\partial}{\partial \theta} \log Z}_{\text{negative phase}}

Z = \sum_{v} \sum_{h} \exp\left( v^T W h + b^T v + c^T h \right)

  • Second term: intractable due to the exponential number of configurations.
SLIDE 19

Positive phase

\frac{\partial}{\partial W_{jk}} \log \sum_h \exp\left( v^{(t)T} W h + b^T v^{(t)} + c^T h \right)
= \frac{\sum_h \frac{\partial}{\partial W_{jk}} \exp\left( v^{(t)T} W h + b^T v^{(t)} + c^T h \right)}{\sum_h \exp\left( v^{(t)T} W h + b^T v^{(t)} + c^T h \right)}
= \frac{\sum_h v_j^{(t)} h_k \exp\left( v^{(t)T} W h + b^T v^{(t)} + c^T h \right)}{\sum_h \exp\left( v^{(t)T} W h + b^T v^{(t)} + c^T h \right)}
= \mathbb{E}_{h \sim p(h \mid v^{(t)})}\left[ v_j^{(t)} h_k \right]

SLIDE 20

RBM learning: Stochastic gradient descent

Maximize the log-likelihood with respect to the model parameters W, b, c:

\frac{\partial}{\partial W_{jk}} \log p(v^{(t)}) = \mathbb{E}\left[ v_j h_k \mid v = v^{(t)} \right] - \mathbb{E}\left[ v_j h_k \right]

\frac{\partial}{\partial c_k} \log p(v^{(t)}) = \mathbb{E}\left[ h_k \mid v = v^{(t)} \right] - \mathbb{E}\left[ h_k \right]

\frac{\partial}{\partial b_j} \log p(v^{(t)}) = \mathbb{E}\left[ v_j \mid v = v^{(t)} \right] - \mathbb{E}\left[ v_j \right]

SLIDE 21

RBM learning: Stochastic gradient descent

\frac{\partial}{\partial W_{jk}} \log p(v^{(t)}) = \mathbb{E}\left[ v_j h_k \mid v = v^{(t)} \right] - \mathbb{E}\left[ v_j h_k \right]

Positive statistic:

\mathbb{E}\left[ v_j h_k \mid v = v^{(t)} \right] = \mathbb{E}\left[ h_k \mid v = v^{(t)} \right] v_j^{(t)} = \frac{v_j^{(t)}}{1 + \exp\left( -\left( \sum_j W_{jk} v_j^{(t)} + c_k \right) \right)} = v_j^{(t)}\, \sigma\!\left( W_{\cdot k}^T v^{(t)} + c_k \right)

  • Note that to compute \mathbb{E}[v_j h_k] (the negative statistic) we would ideally need to sum over all configurations, which is intractable; however, a sampler run over time can be used to get an estimate of the gradients.

SLIDE 22

Approximate Learning

  • Replace the average over all possible input configurations by samples.

\mathbb{E}_{v, h}\left[ -\frac{\partial E(v, h)}{\partial \theta} \right] = \sum_{v, h} p(v, h) \left( -\frac{\partial E(v, h)}{\partial \theta} \right), \qquad \text{e.g. } \sum_{v, h} p(v, h)\, v_j h_k \ \text{ for } \theta = W_{jk}

  • Run an MCMC chain (Gibbs sampling) starting from the observed examples.

SLIDE 23

Model Learning

SLIDE 24

Model Learning

SLIDE 25

RBM learning: Contrastive divergence

Getting an unbiased sample of the second term is very difficult. It can be done by starting at any random state of the visible units and performing Gibbs sampling for a very long time (block-Gibbs MCMC):

Initialize v^0 = v
Sample h^0 from P(h | v^0)
For t = 1 : T
    Sample v^t from P(v | h^{t-1})
    Sample h^t from P(h | v^t)
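A minimal NumPy sketch of this block-Gibbs chain (illustrative only; the sampler exploits the conditional independence within each layer):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def block_gibbs(v, W, b, c, T, rng):
    # Alternate between sampling all hidden units given the visibles
    # and all visible units given the hiddens.
    h = (rng.random(W.shape[1]) < sigmoid(v @ W + c)).astype(float)
    for _ in range(T):
        v = (rng.random(W.shape[0]) < sigmoid(W @ h + b)).astype(float)
        h = (rng.random(W.shape[1]) < sigmoid(v @ W + c)).astype(float)
    return v, h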

SLIDE 26

Negative statistic

\mathbb{E}\left[ v_j h_k \right] \approx \frac{1}{N} \sum_{n=1}^{N} v_j^{(n)} h_k^{(n)}, \qquad (v^{(n)}, h^{(n)}) \sim p(v, h)

  • Initializing N independent Markov chains, each at a data point, and running them until convergence:

\mathbb{E}\left[ v_j h_k \right] \approx \frac{1}{N} \sum_{n=1}^{N} v_j^{(n), T} h_k^{(n), T}

v^{(n), 0} = v^{(n)}, \qquad h^{(n), l} \sim p(h \mid v = v^{(n), l}) \ \text{for } l \ge 0, \qquad v^{(n), l} \sim p(v \mid h = h^{(n), l-1}) \ \text{for } l \ge 1

SLIDE 27

Contrastive Divergence

[Figure: Gibbs chain starting at a training example v^{(t)}, alternating h \sim p(h \mid v) and v \sim p(v \mid h), and ending at the negative sample \tilde{v}]

p(v \mid h) = \prod_j p(v_j \mid h)

p(h \mid v) = \prod_k p(h_k \mid v)

p(v_j = 1 \mid h) = \sigma\!\left( W_{j \cdot}\, h + b_j \right)

p(h_k = 1 \mid v) = \sigma\!\left( W_{\cdot k}^T\, v + c_k \right)

SLIDE 28

CD-k Algorithm

  • CD-k: contrastive divergence with k iterations of Gibbs sampling
  • In general, the bigger k is, the less biased the estimate of the gradient will be
  • In practice, k = 1 works well for learning good features and for pre-training

SLIDE 29

RBM inference: Block-Gibbs MCMC

SLIDE 30

CD-k Algorithm

  • Repeat until the stopping criterion is met:
    – For each training sample v^{(t)}:
      • Generate a negative sample \tilde{v} using k steps of Gibbs sampling, starting at v^{(t)}
      • Update the model parameters:

W \leftarrow W + \alpha\, \frac{\partial \log p(v^{(t)})}{\partial W} = W + \alpha \left( v^{(t)}\, \hat{h}(v^{(t)})^T - \tilde{v}\, \hat{h}(\tilde{v})^T \right)

c \leftarrow c + \alpha \left( \hat{h}(v^{(t)}) - \hat{h}(\tilde{v}) \right)

b \leftarrow b + \alpha \left( v^{(t)} - \tilde{v} \right)

where \hat{h}(v) is the vector with entries p(h_k = 1 \mid v) and \alpha is the learning rate.
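A minimal NumPy sketch of one CD-k update for a single training example (self-contained and illustrative; the learning rate and helper names are assumptions, not the lecture's code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd_k_update(v, W, b, c, k=1, lr=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    # Positive statistics from the data point
    h_data = sigmoid(v @ W + c)
    # Negative sample: k steps of block Gibbs starting at v
    v_neg = v.copy()
    for _ in range(k):
        h_samp = (rng.random(c.shape) < sigmoid(v_neg @ W + c)).astype(float)
        v_neg = (rng.random(b.shape) < sigmoid(W @ h_samp + b)).astype(float)
    h_neg = sigmoid(v_neg @ W + c)
    # Positive statistic minus negative statistic
    W += lr * (np.outer(v, h_data) - np.outer(v_neg, h_neg))
    c += lr * (h_data - h_neg)
    b += lr * (v - v_neg)
    return W, b, c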

SLIDE 31

Positive phase vs. negative phase

SLIDE 32

Contrastive Divergence

Since convergence to the equilibrium distribution takes time, good initialization can speed things up dramatically. Contrastive divergence uses a training example to initialize the visible units, then runs Gibbs sampling for only a few iterations (even k = 1), i.e. not to "equilibrium." This gives acceptable estimates of the expected values in the gradient update formula.

SLIDE 33

MNIST: Learned features

Larochelle et al., JMLR 2009

SLIDE 34

Tricks and Debugging

  • Unfortunately, it is not easy to debug the training of RBMs (e.g. using gradient checks)
  • We instead rely on approximate ''tricks'':
    – plot the average stochastic reconstruction error \| v^{(t)} - \tilde{v} \| and check that it tends to decrease
    – for inputs that correspond to images, visualize the connections coming into each hidden unit as if they were an image; this gives an idea of the type of visual feature each hidden unit detects
    – approximate the partition function Z and check whether the (approximated) NLL decreases

Salakhutdinov, Murray, ICML 2008.
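A small sketch of the first two checks (assumes 28×28 image inputs, NumPy, and matplotlib; the names are illustrative):

import numpy as np
import matplotlib.pyplot as plt

def reconstruction_error(V, W, b, c):
    # Mean-field reconstruction: v -> p(h|v) -> p(v|h), averaged over a batch V
    H = 1.0 / (1.0 + np.exp(-(V @ W + c)))
    V_rec = 1.0 / (1.0 + np.exp(-(H @ W.T + b)))
    return np.mean(np.sum((V - V_rec) ** 2, axis=1))

def show_filters(W, n=16, img_shape=(28, 28)):
    # Each column of W is the incoming weight vector ("filter") of one hidden unit
    for i in range(n):
        plt.subplot(4, 4, i + 1)
        plt.imshow(W[:, i].reshape(img_shape), cmap="gray")
        plt.axis("off")
    plt.show()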

SLIDE 35

RBM inference

  • Block-Gibbs MCMC

SLIDE 36

Gaussian Bernoulli RBMs

SLIDE 37

Gaussian Bernoulli RBMs

  • Let v represent a real-valued (unbounded) input; add a quadratic term to the energy function:

E(v, h) = \frac{1}{2} v^T v - v^T W h - b^T v - c^T h

  • In this case p(v \mid h) becomes a Gaussian distribution with mean \mu = b + W h and identity covariance matrix
  • It is recommended to normalize the training set by:
    – subtracting the mean of each input
    – dividing each input by the training set standard deviation
  • A smaller learning rate should be used than in the regular RBM
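A minimal sketch of the changed visible conditional (illustrative; assumes unit variance after the normalization described above):

import numpy as np

def sample_v_given_h_gaussian(h, W, b, rng):
    # p(v | h) = N(mean = b + W h, covariance = I)
    mean = b + W @ h
    return mean + rng.standard_normal(mean.shape)

def p_h_given_v(v, W, c):
    # Hidden units stay Bernoulli: p(h_k = 1 | v) = sigmoid(W[:, k] . v + c_k)
    return 1.0 / (1.0 + np.exp(-(v @ W + c)))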
SLIDE 38

Gaussian Bernoulli RBMs

SLIDE 39

Gaussian Bernoulli RBMs

SLIDE 40

Deep Belief Networks

  • one of the first non-convolutional models to successfully admit training of deep architectures (2007)

SLIDE 41

Pre-training

  • We will use a greedy, layer-wise procedure
  • Train one layer at a time with unsupervised criterion
  • Fix the parameters of previous hidden layers
  • Previous layers viewed as feature extraction
SLIDE 42

Pre-training

  • Unsupervised pre-training
    – first layer: find hidden unit features that are more common in training inputs than in random inputs
    – second layer: find combinations of hidden unit features that are more common than random hidden unit features
    – third layer: find combinations of combinations of ...
  • Pre-training initializes the parameters in a region from which we can reach better parameters

SLIDE 43

Deep Belief Networks

  • First construct a standard RBM with visible layer v and hidden layer h
  • Train this RBM

Tutorial on Deep Learning and Applications, Honglak Lee, NIPS 2010

SLIDE 44

Deep Belief Networks

  • Stack another hidden layer on top of the RBM to form a new RBM
  • Fix W1, sample h1 from p(h1|v) as input.
  • Train W2 as an RBM.

Tutorial on Deep Learning and Applications, Honglak Lee, NIPS 2010

p(h1|v)

SLIDE 45

Deep Belief Networks

  • Stack a third hidden layer on top of the RBM to form a new RBM
  • Fix W1, W2; sample h2 from p(h2|h1) as input.
  • Train W3 as an RBM.
  • Continue…

p(h1|v)   p(h2|h1)

Tutorial on Deep Learning and Applications, Honglak Lee, NIPS 2010
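A self-contained NumPy sketch of this greedy layer-wise procedure (CD-1 inside train_rbm, as in the earlier sketches; all names and hyperparameters are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=5, lr=0.05, rng=None):
    # Minimal CD-1 training loop for a single RBM layer
    rng = rng or np.random.default_rng(0)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b, c = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        for v in data:
            h_data = sigmoid(v @ W + c)
            h_samp = (rng.random(n_hidden) < h_data).astype(float)
            v_neg = (rng.random(n_visible) < sigmoid(W @ h_samp + b)).astype(float)
            h_neg = sigmoid(v_neg @ W + c)
            W += lr * (np.outer(v, h_data) - np.outer(v_neg, h_neg))
            b += lr * (v - v_neg)
            c += lr * (h_data - h_neg)
    return W, b, c

def greedy_pretrain(data, layer_sizes):
    # Train one RBM per layer; feed p(h|v) of the trained layer to the next layer
    params, layer_input = [], data
    for n_hidden in layer_sizes:
        W, b, c = train_rbm(layer_input, n_hidden)
        params.append((W, b, c))
        layer_input = sigmoid(layer_input @ W + c)  # p(h = 1 | v) for every example
    return params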

SLIDE 46

Learning Deep Autoencoders

Important in the history of deep learning

The global fine-tuning uses backpropagation. Initially encoder and decoder networks use the same weights.

SLIDE 47

Fine-tuning

  • Once all layers are pre-trained:
    – add an output layer
    – train the whole network using supervised learning
  • Supervised learning is performed as in a regular network:
    – forward propagation, backpropagation and update
    – we call this last phase fine-tuning
  • all parameters are ''tuned'' for the supervised task at hand
  • the representation is adjusted to be more discriminative
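A small sketch of how the pre-trained weights might seed a classifier before supervised fine-tuning (the output-layer handling is an illustrative assumption, not the slides' exact recipe; params is a list of (W, b, c) triples as produced by a greedy pre-training loop like the one sketched earlier):

import numpy as np

def init_classifier_from_pretraining(params, n_classes, rng=None):
    # Reuse each pre-trained (W, c) as a hidden layer of an MLP,
    # then add a randomly initialized output layer on top.
    rng = rng or np.random.default_rng(0)
    layers = [(W.copy(), c.copy()) for (W, b, c) in params]
    n_top = params[-1][0].shape[1]
    layers.append((0.01 * rng.standard_normal((n_top, n_classes)), np.zeros(n_classes)))
    return layers  # all of these weights are then updated by backpropagation on labeled data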
SLIDE 48

Fine-tuning

  • Discriminative fine-tuning:
    – Use the DBN to initialize a multi-layer neural network.
    – Maximize the conditional distribution of the label given the input.

SLIDE 49

Learning Deep Autoencoders

Important in the history of deep learning

  • Learn a probabilistic model on unlabeled data
  • Use the learned parameters to initialize a discriminative model for a specific task
  • Slightly adjust the discriminative model for the specific task using a labeled set.

SLIDE 50

Stacking RBMs, Autoencoders

  • Stacked Restricted Boltzmann Machines:
    – Hinton, Teh and Osindero suggested this procedure with RBMs: "A fast learning algorithm for deep belief nets."
    – To recognize shapes, first learn to generate images (Hinton, 2006).
  • Stacked autoencoders, sparse-coding models, etc.:
    – Bengio, Lamblin, Popovici and Larochelle (stacked autoencoders)
    – Ranzato, Poultney, Chopra and LeCun (stacked sparse-coding models)
  • Lots of others started stacking models together.
SLIDE 51

Impact of Initialization

SLIDE 52

Stacked denoising autoencoder

Vincent et al., Extracting and Composing Robust Features with Denoising Autoencoders, 2008.

SLIDE 53

Deep Belief Network

Hinton et al., Neural Computation, 2006.

SLIDE 54

DBN Training

  • This process of adding layers can be repeated recursively:
    – we obtain the greedy layer-wise pre-training procedure for neural networks
  • This procedure corresponds to maximizing a bound on the likelihood of the data in a DBN:
    – in theory, if our approximation q(h^(1)|v) is very far from the true posterior, the bound might be very loose
    – this only means we might not be improving the true likelihood
    – we might still be extracting better features!
  • Fine-tuning is done by the up-down algorithm:
    – "A fast learning algorithm for deep belief nets" (Hinton, Teh, Osindero, 2006).

SLIDE 55

Learning Part-based Representation

Lee et al., ICML 2009

SLIDE 56

Learning Part-based Representation

Lee et al., ICML 2009

SLIDE 57

Deep Autoencoders

SLIDE 58

Deep Belief Networks: Examples

  • (top) input image
  • (middle) reconstruction from the second-layer units after a single bottom-up pass, obtained by projecting the second-layer activations into the image space
  • (bottom) reconstruction from the second layer after 20 iterations of block Gibbs sampling

Lee et al., Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations, ICML 2009.

SLIDE 59

Sampling from DBN

SLIDE 60

Deep Belief Networks

Tutorial on Deep Learning and Applications, Honglak Lee, NIPS 2010

SLIDE 61

Deep Belief Networks Examples

You can also run a DBN generator unsupervised:

https://www.youtube.com/watch?v=RSF5PbwKU3I

SLIDE 62

Deep Boltzmann Machines (DBM)

There are limitations on the types of structure that can be represented efficiently by a single layer of hidden variables. A DBM uses a similar idea, but with more hidden layers. Training is more complicated…

SLIDE 63

Deep architectures

DBM

SLIDE 64

Shallow and Deep architectures

  • Modeling high-order and long-range interactions

RBM DBM

SLIDE 65

Deep Boltzmann Machines

SLIDE 66

Deep Boltzmann Machines

Conditional distributions remain factorized due to layering.

SLIDE 67

Deep Boltzmann Machines

All connections are undirected. Each hidden layer receives both bottom-up input (from the layer below) and top-down input (from the layer above).

Unlike many existing feed-forward models: ConvNet (LeCun), HMAX (Poggio et al.), Deep Belief Nets (Hinton et al.)

SLIDE 68

Deep Boltzmann Machines

Conditional distributions:
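The equations themselves did not survive the transcript; for a DBM with two hidden layers, the standard conditionals (reconstructed from the usual DBM definition, with W^{(1)}, W^{(2)} the two weight matrices) are:

p(h_k^{(1)} = 1 \mid v, h^{(2)}) = \sigma\!\left( \sum_j W_{jk}^{(1)} v_j + \sum_m W_{km}^{(2)} h_m^{(2)} + c_k^{(1)} \right)

p(h_m^{(2)} = 1 \mid h^{(1)}) = \sigma\!\left( \sum_k W_{km}^{(2)} h_k^{(1)} + c_m^{(2)} \right)

p(v_j = 1 \mid h^{(1)}) = \sigma\!\left( \sum_k W_{jk}^{(1)} h_k^{(1)} + b_j \right)

Note that the middle layer receives both bottom-up input (from v) and top-down input (from h^{(2)}).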

SLIDE 69

Deep Boltzmann Machines

  • Probabilistic
  • Generative
  • Powerful

Typically trained with many examples.


DBM

DBMs have the potential to learn internal representations that become increasingly complex at higher layers.

SLIDE 70

DBM: Learning

SLIDE 71

DBM: Learning

SLIDE 72

DBM: Learning

SLIDE 73

Experiments

60,000 training and 10,000 test examples
0.9 million parameters
Gibbs sampler run for 100,000 steps

Tutorial on Deep Learning and Applications, Honglak Lee, NIPS 2010

SLIDE 74

Sampling from DBMs

SLIDE 75

Handwriting Recognition

SLIDE 76

Generative Model of 3-D Objects

SLIDE 77

Generative Model of 3-D Objects

SLIDE 78

Experiments

Running a generator open-loop: https://www.youtube.com/watch?v=-l1QTbgLTyQ

SLIDE 79

Resources

  • Deep Learning Book, Chapter 20 (and Chapter 18).