@@ -138,7 +138,7 @@ This leaves us a practical question: Can we sample exactly from an arbitrary $p(

So what is a Markov chain? Consider a sequence of random states $\{R_i\} = R_1, R_2, ..., R_N$. The sequence is a Markov chain if the probability of choosing the next state $R_{i+1}$ depends only on the current state $R_i$, not on any earlier states. One transitions from one state to another with a certain probability, which we denote $T(R \rightarrow R^\prime)$. This transition probability is normalised: $\sum_{R^\prime}T(R \rightarrow R^\prime)=1$. This simply means: you have to go to some state.
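As a small illustrative sketch (not from the text), the two defining properties can be seen in code: a row-normalised transition matrix `T`, and a chain whose next state is drawn using only the current state:

```python
import numpy as np

# Hypothetical 3-state Markov chain. Row R of T holds the probabilities
# T(R -> R'), so each row must sum to 1 ("you have to go to some state").
rng = np.random.default_rng(0)
T = rng.random((3, 3))
T /= T.sum(axis=1, keepdims=True)  # normalise each row

# Generate a chain: the next state depends only on the current one.
state = 0
chain = [state]
for _ in range(10):
    state = rng.choice(3, p=T[state])
    chain.append(state)
print(chain)
```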

We can then write a rate equation for the probability of finding a state $R$ at step $i+1$:

But we want this Markov chain to eventually sample our probability distribution $p(R)$. This means that we want the distribution to be stationary, so that once it is reached, every subsequent step samples the same distribution:

...

...
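As a numeric illustration of stationarity (a hypothetical example, not from the text): starting from an arbitrary distribution and repeatedly applying the rate equation $p_{i+1}(R^\prime) = \sum_R p_i(R)\, T(R \rightarrow R^\prime)$, the distribution converges to one that no longer changes under $T$:

```python
import numpy as np

# Hypothetical 4-state chain with a random row-normalised transition matrix.
rng = np.random.default_rng(1)
T = rng.random((4, 4))
T /= T.sum(axis=1, keepdims=True)

# Apply the rate equation repeatedly: p_{i+1} = p_i @ T.
p = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(200):
    p = p @ T

# p is now (numerically) stationary: one more application of T changes nothing.
print(p, p @ T)
```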

@@ -155,14 +155,14 @@ probability distribution? The Metropolis algorithm solves this problem!

The Metropolis algorithm uses the following ansatz: $T(R \rightarrow R^\prime) = \omega_{RR^\prime} \cdot A_{RR^\prime}$. Here, the generation of the

new state in the Markov chain is split into two phases. First, starting from the previous state $R = R_i$, we generate a candidate state $R^\prime$

with probability $\omega_{RR^\prime}$. This is the so-called "trial move". We then accept this trial move with probability $A_{RR^\prime}$,

i.e. set $R_{i+1} = R'$. If we don't accept it, we keep the old state, $R_{i+1} = R$. Altogether, the probability of going to a new state ($T(R \rightarrow R^\prime)$)

is the product of proposing it ($\omega_{RR^\prime}$) and accepting it ($A_{RR^\prime})$.

The problem can be further simplified by demanding that $\omega_{RR^\prime}=\omega_{R^\prime R}$ - the trial move should have the same probability of going from $R$ to $R^\prime$

and vice versa. The detailed balance equation then reduces to:

The key to understanding why Eq. (5) solves Eq. (4) is to observe that $A_{RR'}$ and $A_{R'R}$ are necessarily given by different cases of Eq. (5).
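A quick numeric check of this (an illustration, assuming the standard Metropolis acceptance $A_{RR'} = \min(1, p(R')/p(R))$ and symmetric $\omega$, which cancels): for any pair of probabilities, one acceptance equals 1 while the other equals the ratio, and detailed balance $p(R)\,A_{RR'} = p(R')\,A_{R'R}$ holds:

```python
# Metropolis acceptance probability A(R -> R') = min(1, p(R') / p(R)).
def A(p_from, p_to):
    return min(1.0, p_to / p_from)

# Check detailed balance p(R) * A(R->R') == p(R') * A(R'->R)
# for a few arbitrary (unnormalised) probability pairs.
for pR, pRp in [(0.2, 0.7), (0.9, 0.1), (0.5, 0.5)]:
    assert abs(pR * A(pR, pRp) - pRp * A(pRp, pR)) < 1e-12
print("detailed balance holds")
```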

### Summary of Metropolis algorithm

...

...

@@ -170,7 +170,7 @@ The key to understanding why Eq. (5) solves (4) is by observing that $A_{RR'}$ a

1. Start with a state $R_i$

2. Generate a state $R'$ from $R_i$ (such that $\omega_{R_i,R'}=\omega_{R',R_i}$)

3. If $p(R') > p(R_i)$, set $R_{i+1}=R'$ <br>

   If $p(R') < p(R_i)$, set $R_{i+1}=R'$ with probability $\frac{p(R')}{p(R_i)}$ <br>
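The steps above can be sketched in code. This is a minimal illustration under assumed choices (a hypothetical discrete unnormalised target `p` over 5 states, and a symmetric trial move that hops to a random neighbour with periodic boundaries):

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalised target distribution p(R) over 5 discrete states (assumed example).
p = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
n = len(p)

R = 0                       # 1. start with a state R_i
samples = []
for _ in range(100_000):
    # 2. symmetric trial move: step to a random neighbour.
    Rp = (R + rng.choice([-1, 1])) % n
    # 3. accept with probability min(1, p(R')/p(R_i)); otherwise keep R_i.
    if rng.random() < min(1.0, p[Rp] / p[R]):
        R = Rp
    samples.append(R)

# The histogram of visited states approaches the normalised target p / sum(p).
hist = np.bincount(samples, minlength=n) / len(samples)
print(hist, p / p.sum())
```

Note that the normalisation of $p$ is never needed: only ratios $p(R')/p(R_i)$ enter, which is what makes the Metropolis algorithm practical.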