Monte Carlo Simulation of Stochastic Processes

In this lecture, we discuss the Monte Carlo (MC) method used to simulate stochastic natural and artificial processes.

§1 Random Walks [1]

We consider the simplest but most fundamental stochastic process, i.e., the random walk in one dimension.

DRUNKARD'S WALK PROBLEM

Consider a drunkard on a street in front of a bar, who starts walking at time t = 0. At every time interval τ (say 1 second), the drunkard moves randomly either to the right or to the left by a step of length l (say 1 meter). The drunkard's position x along the street is a random variable. An MC simulation of the drunkard is implemented according to the following pseudocode.

• Program diffuse.c

    Initialize a random number sequence
    for walker = 1 to N_walker
      position = 0
      for step = 1 to Max_step
        if rand() [2] > RAND_MAX/2 then
          Increment position by 1
        else
          Decrement position by 1
        endif
      endfor step
    endfor walker

Figure. An MC simulation result of a walking drunkard's position for 500 steps.

[1] H. Gould and J. Tobochnik, An Introduction to Computer Simulation Methods, Part 2 (Addison-Wesley, Reading, MA, 1988).
[2] The function rand() returns a random integer with uniform distribution in the range between 0 and RAND_MAX.

PROBABILITY DISTRIBUTION

The drunkard's position, x(t), at time t is a random variable, which follows the probability density function P(x, t). By generating many drunkards (with different random-number seeds), we can obtain an MC estimate of P(x, t). The following graph shows a histogram of the drunkard's position over 1,000 samples at 100 and 500 steps. Note that the initial probability density is P(x, 0) = \delta_{x,0}, meaning that the drunkard is at the origin with probability 1. As time progresses, the probability distribution becomes broader.

Figure. A histogram of the drunkard's position for 1,000 random drunkards.

Let's analyze the probability density of the drunkard's position. First consider the probability, P_n(x), that the drunkard is at position x at time nτ. Suppose that the drunkard has walked n_→ times to the right and n_← = n − n_→ times to the left. Then the drunkard's position is x = (n_→ − n_←)l. There are many ways that the drunkard can reach the same position at the same time; the number of possible combinations is

    \frac{n!}{n_\rightarrow! \, n_\leftarrow!},

where n! is the factorial of n. (There are n! ways to arrange n distinct objects in a list. However, the n_→ right steps are indistinguishable, and therefore the number of combinations is reduced by a factor of n_→!. For a similar reason, the number must be further divided by n_←!.) Let's assume that the drunkard walks to the right with probability p and to the left with probability q = 1 − p. Then each of the above paths occurs with probability p^{n_→}(1 − p)^{n_←}. Consequently, the probability that the drunkard is at position x = (n_→ − n_←)l at time nτ is given by

    P_n\left( x = (n_\rightarrow - n_\leftarrow)l \right) = \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} (1-p)^{n_\leftarrow}.

The mean value of x at time nτ is thus

    \langle x_n \rangle = \sum_{n_\rightarrow=0}^{n} P_n\left( x = (n_\rightarrow - n_\leftarrow)l \right) (n_\rightarrow - n_\leftarrow) l
                        = \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} (1-p)^{n_\leftarrow} (n_\rightarrow - n_\leftarrow) l.

Now recall the binomial identity, which we will use as a generating function for the binomial series,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} q^{n_\leftarrow} = (p + q)^n.

By differentiating both sides of the above identity with respect to p, we obtain

    \frac{\partial}{\partial p} \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} q^{n_\leftarrow}
      = \sum_{n_\rightarrow=1}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow \, p^{n_\rightarrow - 1} q^{n_\leftarrow}
      = \frac{\partial}{\partial p} (p + q)^n = n(p + q)^{n-1}.

By multiplying both sides by p, and noting that the n_→ = 0 term is zero,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow \, p^{n_\rightarrow} q^{n_\leftarrow} = np(p + q)^{n-1}.

For q = 1 − p, we get

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow \, p^{n_\rightarrow} (1-p)^{n_\leftarrow} = np.

Similarly,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\leftarrow \, p^{n_\rightarrow} (1-p)^{n_\leftarrow} = nq.
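As a quick sanity check before using these identities, the three sums can be evaluated numerically by direct summation. The following C sketch is illustrative only: the values n = 20 and p = 0.3, the hypothetical file name check_moments.c, and the variable names nr/nl (standing for n_→/n_←) are my assumptions, not part of the lecture. It confirms the normalization (p + q)^n = 1 and the first moments np and nq; compile with the math library (e.g., cc check_moments.c -lm).

    /* check_moments.c (hypothetical): direct numerical check of the binomial
       sums derived above, using illustrative values n = 20 and p = 0.3.
       nr and nl correspond to n_right (n_->) and n_left (n_<-). */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
      const int n = 20;
      const double p = 0.3, q = 1.0 - p;
      double norm = 0.0, mean_right = 0.0, mean_left = 0.0;

      for (int nr = 0; nr <= n; nr++) {
        int nl = n - nr;
        /* binomial coefficient n!/(nr! nl!) via log-gamma to avoid overflow */
        double comb = exp(lgamma(n + 1.0) - lgamma(nr + 1.0) - lgamma(nl + 1.0));
        double w = comb * pow(p, nr) * pow(q, nl);  /* one term of P_n */
        norm       += w;        /* should approach (p + q)^n = 1 */
        mean_right += nr * w;   /* should approach n*p */
        mean_left  += nl * w;   /* should approach n*q */
      }

      printf("normalization = %g (expect 1)\n", norm);
      printf("<n_right> = %g (expect n*p = %g)\n", mean_right, n * p);
      printf("<n_left>  = %g (expect n*q = %g)\n", mean_left, n * q);
      return 0;
    }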
Using the above two equalities in the expression for ⟨x_n⟩,

    \langle x_n \rangle = \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} (1-p)^{n_\leftarrow} (n_\rightarrow - n_\leftarrow) l = n(p - q) l.

If the drunkard walks to the right and to the left with equal probability, then p = q = 1/2 and ⟨x_n⟩ = 0, as expected.

Now let's consider the variance of x_n:

    \langle x_n^2 \rangle = \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} (1-p)^{n_\leftarrow} \left( (n_\rightarrow - n_\leftarrow) l \right)^2
                          = \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} (1-p)^{n_\leftarrow} \left( n_\rightarrow^2 - 2 n_\rightarrow n_\leftarrow + n_\leftarrow^2 \right) l^2.

We can again make use of the binomial relation. Differentiating both sides twice with respect to p gives a sum over n_→(n_→ − 1) = n_→² − n_→, so that

    \sum_{n_\rightarrow=2}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow^2 \, p^{n_\rightarrow - 2} q^{n_\leftarrow} = n(n-1)(p+q)^{n-2} + \sum_{n_\rightarrow=2}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow \, p^{n_\rightarrow - 2} q^{n_\leftarrow}.

Multiplying both sides by p^2 and extending the sums to n_→ = 0 (the n_→ = 0 term vanishes, and the n_→ = 1 term, npq^{n-1}, appears on both sides),

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow^2 \, p^{n_\rightarrow} q^{n_\leftarrow} - npq^{n-1}
      = n(n-1)p^2(p+q)^{n-2} + \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow \, p^{n_\rightarrow} q^{n_\leftarrow} - npq^{n-1},

and therefore, using the first-moment identity,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow^2 \, p^{n_\rightarrow} q^{n_\leftarrow} = n(n-1)p^2(p+q)^{n-2} + np(p+q)^{n-1}.

For p + q = 1,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow^2 \, p^{n_\rightarrow} q^{n_\leftarrow} = n(n-1)p^2 + np.

Similarly,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\leftarrow^2 \, p^{n_\rightarrow} q^{n_\leftarrow} = n(n-1)q^2 + nq.

Now differentiate both sides of the binomial relation with respect to p and then with respect to q:

    \sum_{n_\rightarrow=1}^{n-1} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow n_\leftarrow \, p^{n_\rightarrow - 1} q^{n_\leftarrow - 1} = n(n-1)(p+q)^{n-2}

    \Rightarrow \sum_{n_\rightarrow=1}^{n-1} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow n_\leftarrow \, p^{n_\rightarrow} q^{n_\leftarrow} = n(n-1)pq(p+q)^{n-2}

    \Rightarrow \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow n_\leftarrow \, p^{n_\rightarrow} q^{n_\leftarrow} = n(n-1)pq(p+q)^{n-2}.

For p + q = 1,

    \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, n_\rightarrow n_\leftarrow \, p^{n_\rightarrow} q^{n_\leftarrow} = n(n-1)pq.

By combining the above results,

    \langle x_n^2 \rangle = \sum_{n_\rightarrow=0}^{n} \frac{n!}{n_\rightarrow! \, n_\leftarrow!} \, p^{n_\rightarrow} q^{n_\leftarrow} \left( n_\rightarrow^2 - 2 n_\rightarrow n_\leftarrow + n_\leftarrow^2 \right) l^2
      = \left[ n(n-1)p^2 + np - 2n(n-1)pq + n(n-1)q^2 + nq \right] l^2
      = \left[ n(n-1)(p-q)^2 + n \right] l^2.

The variance is obtained as

    \langle x_n^2 \rangle - \langle x_n \rangle^2 = \left[ n(n-1)(p-q)^2 + n \right] l^2 - \left[ n(p-q)l \right]^2
      = \left[ 1 - (p-q)^2 \right] n l^2
      = \left[ (p+q)^2 - (p-q)^2 \right] n l^2
      = 4pq \, n l^2.

For p = q = 1/2,

    Var[x_n] = n l^2.

DIFFUSION LAW

The main result of the above analysis is the linear relation between the number of steps and the variance of the random walk (the latter is also called the mean square displacement). The following graph confirms this linear relation. This relation means that a drunkard cannot go far. If he walks straight to the right, he can reach the distance nl in n steps. A randomly walking drunkard, on the other hand, reaches only Std[x_n] = √n l on average.

The time evolution of P(x, t) for the drunkard's walk problem is typical of so-called diffusion processes. Diffusion is characterized by a linear relation between the mean square displacement and time,

    \langle \Delta R(t)^2 \rangle = 2Dt.

The above drunkard follows this general relation, since

    \langle x(t = n\tau)^2 \rangle = n l^2 = 2 \left( \frac{l^2}{2\tau} \right) t.

The "diffusion constant" in this example is D = l^2/2τ.

Figure. Variance of the drunkard's position (1,000 samples).

CONTINUUM LIMIT—DIFFUSION EQUATION

Diffusion is central to many stochastic processes. The probability density function, P(x, t), is often analyzed by a partial differential equation, which is derived as follows. We start from the recursive relation

    P(x, t) = \frac{1}{2} P(x - l, t - \tau) + \frac{1}{2} P(x + l, t - \tau),

i.e., the probability density is obtained by adding the two conditional probabilities that the drunkard was: i) at x − l one time step earlier and then stepped to the right, or ii) at x + l one time step earlier and then stepped to the left.
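This recursion can also be iterated directly, without random numbers, to propagate P(x, t) forward from P(x, 0) = δ_{x,0} and watch the distribution broaden. The sketch below is a minimal illustration under assumed choices (the number of steps, array layout, and printed grid are mine, not part of the lecture):

    /* Sketch: deterministic iteration of the recursion
       P(x, t) = 0.5*P(x-l, t-tau) + 0.5*P(x+l, t-tau), starting from
       P(x, 0) = delta_{x,0}.  N_STEP and the printed grid are illustrative. */
    #include <stdio.h>
    #include <string.h>

    #define N_STEP 100                       /* number of time steps to propagate */
    #define SIZE   (2*N_STEP + 3)            /* x in [-(N_STEP+1), N_STEP+1], in units of l */

    int main(void) {
      static double P[SIZE], Pnew[SIZE];
      P[N_STEP + 1] = 1.0;                   /* all probability starts at the origin */

      for (int t = 1; t <= N_STEP; t++) {
        memset(Pnew, 0, sizeof Pnew);
        for (int i = 1; i < SIZE - 1; i++)   /* padded edges stay 0: unreachable in N_STEP steps */
          Pnew[i] = 0.5 * P[i - 1] + 0.5 * P[i + 1];
        memcpy(P, Pnew, sizeof P);
      }

      for (int i = 1; i < SIZE - 1; i += 10) /* coarse profile of P(x, t = N_STEP*tau) */
        printf("x = %4d   P = %g\n", i - (N_STEP + 1), P[i]);
      return 0;
    }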
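More broadly, the closed-form results of this section, ⟨x_n⟩ = n(p − q)l and Var[x_n] = 4pq n l², can be checked with a direct MC run in the spirit of the diffuse.c pseudocode above. The following sketch uses illustrative parameters (the walker count, step count, value of p, and seed are assumptions, and l = 1); for p = q = 1/2 the measured variance should grow linearly in n, consistent with the diffusion law ⟨x(t)²⟩ = 2Dt with D = l²/2τ.

    /* Sketch: MC check of <x_n> = n(p-q)l and Var[x_n] = 4*p*q*n*l^2 (l = 1 here).
       N_WALKER, MAX_STEP, P_RIGHT, and the seed are illustrative assumptions. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N_WALKER 10000
    #define MAX_STEP 500
    #define P_RIGHT  0.5                     /* probability p of a right step; q = 1 - p */

    int main(void) {
      static double sum_x[MAX_STEP + 1], sum_x2[MAX_STEP + 1];
      srand(2025);                           /* initialize the random number sequence */

      for (int walker = 0; walker < N_WALKER; walker++) {
        int x = 0;                           /* position in units of l */
        for (int n = 1; n <= MAX_STEP; n++) {
          if ((double)rand() / RAND_MAX < P_RIGHT)
            x++;                             /* step to the right */
          else
            x--;                             /* step to the left */
          sum_x[n]  += x;                    /* accumulate <x_n>   */
          sum_x2[n] += (double)x * x;        /* accumulate <x_n^2> */
        }
      }

      for (int n = 100; n <= MAX_STEP; n += 100) {
        double mean = sum_x[n] / N_WALKER;
        double var  = sum_x2[n] / N_WALKER - mean * mean;
        double p = P_RIGHT, q = 1.0 - P_RIGHT;
        printf("n = %3d   mean = %8.3f (theory %8.3f)   var = %9.3f (theory %9.3f)\n",
               n, mean, n * (p - q), var, 4.0 * p * q * n);
      }
      return 0;
    }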

