SLIDE 18 Gibbs Sampling in General
- Given 1 site in each of the $n$ sequences, each configuration is a vector of positions $(i_1, i_2, \ldots, i_n)$ at which the sites occur.
- The Gibbs sampler samples the joint probability distribution $P(i_1, i_2, \ldots, i_n)$.
- At each step a sequence $k$ is chosen and the conditional distribution

  $$P(i_k \mid i_1, \ldots, i_{k-1}, i_{k+1}, \ldots, i_n) = \frac{P(i_1, \ldots, i_{k-1}, i_k, i_{k+1}, \ldots, i_n)}{\sum_j P(i_1, \ldots, i_{k-1}, j, i_{k+1}, \ldots, i_n)}$$

  is computed; then a position $i_k$ is sampled from this conditional distribution.
- One can show that, in the limit of many iterations, the number of times $N(i_1, i_2, \ldots, i_n)$ that the system will be in configuration $(i_1, i_2, \ldots, i_n)$ is proportional to $P(i_1, i_2, \ldots, i_n)$.
- This is one example of what are generally called Markov chain Monte Carlo (MCMC) techniques for sampling a complex probability distribution on a computer.
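The sampling step above can be sketched in a few lines of Python. The joint distribution here is a hypothetical toy over two position variables with 3 candidate positions each (chosen only so the stationary frequencies are easy to check); in the motif-finding setting $P$ would come from the sequence model instead:

```python
import random
from itertools import product

random.seed(0)

N_POSITIONS = 3  # candidate site positions per sequence (toy value)

def joint_p(config):
    """Unnormalized toy joint P(i_1, i_2); hypothetical, for illustration."""
    i1, i2 = config
    return (i1 + 1) * (3 if i1 == i2 else 1)

def gibbs_step(config):
    """One Gibbs update: choose a sequence k, then resample its position i_k
    from the conditional P(i_k | all other positions)."""
    config = list(config)
    k = random.randrange(len(config))
    # Conditional weights P(..., j, ...) for each candidate position j;
    # the normalization sum_j P(..., j, ...) is handled by random.choices.
    weights = []
    for j in range(N_POSITIONS):
        config[k] = j
        weights.append(joint_p(tuple(config)))
    config[k] = random.choices(range(N_POSITIONS), weights=weights)[0]
    return tuple(config)

def run_gibbs(n_iter=200_000):
    """Run the chain and count visits to each configuration."""
    counts = {c: 0 for c in product(range(N_POSITIONS), repeat=2)}
    config = (0, 0)
    for _ in range(n_iter):
        config = gibbs_step(config)
        counts[config] += 1
    return counts
```

After many iterations the visit counts `N(i_1, i_2)` become proportional to `joint_p`, illustrating the convergence claim on the slide.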
Finding the maximum: Simulated Annealing
- To sample from the distribution $P(i_1, i_2, \ldots, i_n)$ we pick a $k$ and sample $i_k$ from the conditional distribution:

  $$P(i_k \mid i_1, \ldots, i_{k-1}, i_{k+1}, \ldots, i_n) = \frac{P(i_1, \ldots, i_{k-1}, i_k, i_{k+1}, \ldots, i_n)}{\sum_j P(i_1, \ldots, i_{k-1}, j, i_{k+1}, \ldots, i_n)}$$

- In simulated annealing one introduces a parameter $\beta$ and samples instead from:

  $$P_\beta(i_k \mid i_1, \ldots, i_{k-1}, i_{k+1}, \ldots, i_n) = \frac{P(i_1, \ldots, i_{k-1}, i_k, i_{k+1}, \ldots, i_n)^\beta}{\sum_j P(i_1, \ldots, i_{k-1}, j, i_{k+1}, \ldots, i_n)^\beta}$$

- One then slowly increases $\beta$ with time.
- One can show that, in the limit in which $\beta$ is increased infinitely slowly, one is guaranteed to find the global optimum of $P(i_1, i_2, \ldots, i_n)$.
- One hopes that if one increases $\beta$ slowly, one is still likely to find the global optimum.
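A minimal sketch of the annealed Gibbs updates, again on a hypothetical toy joint distribution, this one with a local optimum around (2, 2) and the global optimum at (7, 7); the cooling schedule `beta = 0.02 * (t + 1)` is an arbitrary choice for illustration:

```python
import random

random.seed(1)

N = 10  # candidate positions per variable (toy value)

def joint_p(i1, i2):
    """Unnormalized toy joint; peaks at 2 and 7, global maximum at (7, 7)."""
    return {2: 5, 7: 8}.get(i1, 1) * {2: 5, 7: 8}.get(i2, 1)

def annealed_gibbs(n_iter=2000):
    """Gibbs sampling with conditional weights raised to a slowly
    increasing power beta, so late updates concentrate on the optimum."""
    i1, i2 = 0, 0
    for t in range(n_iter):
        beta = 0.02 * (t + 1)  # slowly increasing beta ("cooling")
        # Resample i1 from the beta-sharpened conditional given i2 ...
        w = [joint_p(j, i2) ** beta for j in range(N)]
        i1 = random.choices(range(N), weights=w)[0]
        # ... then i2 given the new i1.
        w = [joint_p(i1, j) ** beta for j in range(N)]
        i2 = random.choices(range(N), weights=w)[0]
    return i1, i2
```

At small $\beta$ the chain wanders freely; as $\beta$ grows the conditionals sharpen toward their maxima, and with a slow enough schedule the chain settles on the global optimum.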