Contents
  - Markov Processes
  - Limiting distribution
  - Deriving transition probabilities
  - The Metropolis Algorithm
  - Markov chains and importance sampling

Markov Processes

Let us summarize our discussion of Monte Carlo integration. We have learned that it is possible to evaluate integrals by a process in which the integration space is sampled randomly. Values of the integrand evaluated at these randomly chosen points can be summed and normalized, just as we do with methodical schemes based on equally spaced abscissas, to give a good estimate of the integral. These Monte Carlo schemes gain a significant advantage over methodical approaches when applied to high-dimensional integrals. In these situations, and in particular when applied to statistical-mechanics integrals, the integral may have significant contributions from only a very small portion of the entire domain of integration. If these regions can be sampled preferentially, with a well-characterized bias, then it is possible to correct for the biased sampling when summing the contributions to the integral, and thereby obtain a higher-quality estimate of the integral. This idea is known as importance sampling. The basic equation of importance-sampling Monte Carlo integration can be written in the compact form

$$I = \langle f \rangle = \left\langle \frac{f}{\pi} \right\rangle_{\pi}$$

This formula states that the integral, defined here as the unweighted average of a function $f$, can be expressed as the weighted average of $f/\pi$, where $\pi$ is the weight applied to the sampling. We are now ready to address the question of how to sample a space according to some weight $\pi$.

A stochastic process is a procedure by which a system moves through a series of well-defined states in a way that exhibits some element of randomness. A Markov process is a stochastic process that has no memory: the probability that the system moves into a particular state depends only upon the state it is currently in, and not on the history of past visitations of states. Thus, a Markov process can be fully specified via a set of transition probabilities $\pi_{ij}$ that describe the likelihood that the system moves into state $j$ given that it is presently in state $i$. The full set of transition probabilities can be viewed as a matrix $\Pi$.

As a simple example, we can consider a system that can occupy any of three states. The probability of moving from one state to another in a Markov process is given via the transition-probability matrix (TPM)

$$\Pi = \begin{pmatrix} \pi_{11} & \pi_{12} & \pi_{13} \\ \pi_{21} & \pi_{22} & \pi_{23} \\ \pi_{31} & \pi_{32} & \pi_{33} \end{pmatrix} = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.9 & 0.1 & 0.0 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}$$

where for the sake of example we have filled in the matrix with specific values for the transition probabilities. Now consider what happens in a process that moves from one state to another, each time selecting the new state according to the transition probabilities given here (for example, say the system presently is in state 1; generate a random number uniformly on (0,1); if the value is less than 0.1, stay in state 1; if it is between 0.1 and 0.6, move to state 2; otherwise move to state 3). One could construct a histogram recording the number of visits to each of the three states during the process. After a long period of sampling, a steady state is reached and the histogram no longer changes with continued sampling. The histogram so obtained is called the limiting distribution of the Markov process. Examine the applet in Illustration 1 to see Markov sampling in action.
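The state-selection rule just described translates directly into code. Below is a minimal sketch in plain Python (the names are my own, not from the notes) that walks the three-state chain using the example TPM and accumulates the visit histogram; states are labeled 0, 1, 2 here rather than the text's 1, 2, 3.

```python
import random

# Transition-probability matrix from the example above
# (row = current state, column = next state).
tpm = [[0.1, 0.5, 0.4],
       [0.9, 0.1, 0.0],
       [0.3, 0.3, 0.4]]

def step(state):
    """Pick the next state as described in the text: draw u uniformly on
    (0,1) and walk through the cumulative sums of the current row."""
    u = random.random()
    cumulative = 0.0
    for next_state, p in enumerate(tpm[state]):
        cumulative += p
        if u < cumulative:
            return next_state
    return next_state  # fallback for floating-point roundoff in the row sum

n_steps = 100_000
counts = [0, 0, 0]
state = 0
for _ in range(n_steps):
    state = step(state)
    counts[state] += 1

print([c / n_steps for c in counts])  # normalized histogram of visits
```

Run long enough, the normalized counts stop changing with continued sampling; that stationary histogram is the limiting distribution that the applet illustrates.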
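Looping back to the importance-sampling relation $\langle f \rangle = \langle f/\pi \rangle_{\pi}$ from the opening paragraph, a one-dimensional example makes it concrete before we connect it to Markov chains. The integrand $f(x) = e^{-10x}$ and weight $\pi(x) = 2(1-x)$ used below are hypothetical choices for illustration only; both are concentrated near $x = 0$, where the integrand contributes most.

```python
import math
import random

# Integrand for I = integral of f(x) over (0,1); sharply peaked near x = 0.
def f(x):
    return math.exp(-10.0 * x)

# Sampling weight pi(x) = 2(1 - x): a normalized density on (0,1) that
# also favors small x.  Samples come from inverting its CDF, u = 1 - (1-x)^2.
def pi(x):
    return 2.0 * (1.0 - x)

def sample_pi():
    return 1.0 - math.sqrt(1.0 - random.random())

n = 100_000
estimate = sum(f(x) / pi(x) for x in (sample_pi() for _ in range(n))) / n
exact = (1.0 - math.exp(-10.0)) / 10.0
print(estimate, exact)  # the weighted average of f/pi estimates the integral
```

Because $\pi$ is a normalized density on the unit interval, the average of $f/\pi$ over points drawn from $\pi$ reproduces the unweighted integral of $f$; choosing $\pi$ to mimic the shape of $f$ is what concentrates the samples where they matter.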
So what is the connection to Monte Carlo integration? The scheme is to devise a Markov process whose limiting distribution covers the important regions of our simulated system. In this manner we can do importance sampling of a complex region of integration by specifying only the transition probabilities of the Markov process. To accomplish this we need to develop the connection between the transition probabilities and the limiting distribution.

Several important features of the TPM should be noted: first, each probability is properly specified, i.e., it is nonnegative and does not exceed unity; second, each row sums to unity, indicating a unit probability of going from one state to some state in a given step; third, the diagonal elements are not necessarily zero, indicating that an acceptable outcome for a Markov step may leave the system in its present state. More on this detail later. In all of what follows it is important that we have a transition-probability matrix that corresponds to an ergodic process. Ergodicity was discussed in a previous section; in this context, it means that it is possible to get from any state to any other via a sufficiently long Markov chain. Note that it is not required that each state be accessible from every other state in a single step; it is OK (and very common) to have zero for some of the transition probabilities.

Limiting distribution

We consider now how the limiting distribution relates to the TPM. Consider the product of $\Pi$ with itself:

$$\Pi^2 = \begin{pmatrix} \pi_{11} & \pi_{12} & \pi_{13} \\ \pi_{21} & \pi_{22} & \pi_{23} \\ \pi_{31} & \pi_{32} & \pi_{33} \end{pmatrix}^{2} = \begin{pmatrix} \pi_{11}\pi_{11} + \pi_{12}\pi_{21} + \pi_{13}\pi_{31} & \pi_{11}\pi_{12} + \pi_{12}\pi_{22} + \pi_{13}\pi_{32} & \text{etc.} \\ \pi_{21}\pi_{11} + \pi_{22}\pi_{21} + \pi_{23}\pi_{31} & \pi_{21}\pi_{12} + \pi_{22}\pi_{22} + \pi_{23}\pi_{32} & \text{etc.} \\ \pi_{31}\pi_{11} + \pi_{32}\pi_{21} + \pi_{33}\pi_{31} & \pi_{31}\pi_{12} + \pi_{32}\pi_{22} + \pi_{33}\pi_{32} & \text{etc.} \end{pmatrix}$$

Look closely at the (1,1) element. It is the sum of three terms. The first term, $\pi_{11}\pi_{11}$, is the probability of staying in state 1 for two successive steps, given that the system started in state 1. Similarly, the second term in the sum, $\pi_{12}\pi_{21}$, is the probability that the system moves in successive steps from state 1 to state 2, and then back again. Finally, the third term is the probability of moving from 1 to 3 and back to 1. Thus the (1,1) term of the product matrix contains all ways of going from state 1 back to state 1 in two steps. Likewise, the (1,2) term of the product matrix is the probability that the system moves from state 1 to state 2 in exactly two steps. The same interpretation holds for all the terms in the product matrix. Thus the square of $\Pi$ is a two-step transition-probability matrix, and in general the multiple product $\Pi^n$ is the n-step TPM

$$\Pi^n = \begin{pmatrix} \pi_{11}^{(n)} & \pi_{12}^{(n)} & \pi_{13}^{(n)} \\ \pi_{21}^{(n)} & \pi_{22}^{(n)} & \pi_{23}^{(n)} \\ \pi_{31}^{(n)} & \pi_{32}^{(n)} & \pi_{33}^{(n)} \end{pmatrix}$$
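For an ergodic chain like the three-state example, this observation can be pushed further: as $n$ grows, the rows of $\Pi^n$ approach a common vector, and that shared row is the limiting distribution obtained from the histogram earlier. Below is a small sketch (plain Python again; the helper `matmul` is my own) that raises the example TPM to successive powers and also checks the row-sum property noted above.

```python
# Example TPM from the text (each row must sum to unity).
tpm = [[0.1, 0.5, 0.4],
       [0.9, 0.1, 0.0],
       [0.3, 0.3, 0.4]]

assert all(abs(sum(row) - 1.0) < 1e-12 for row in tpm)

def matmul(a, b):
    """Product of two square matrices stored as lists of rows."""
    m = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

power = tpm  # the one-step TPM
for n in range(2, 21):
    power = matmul(power, tpm)  # now the n-step TPM
    if n in (2, 5, 20):
        print(f"n = {n}:")
        for row in power:
            print("  ", [round(x, 4) for x in row])
```

By $n = 20$ the three printed rows are essentially identical, so the n-step probability of arriving in state $j$ no longer depends on the starting state $i$; this independence from the initial state is exactly what characterizes the limiting distribution.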

