MIT 6.041 - Discrete-state Markov Processes

CHAPTER FIVE
Discrete-state Markov Processes

The Bernoulli and Poisson processes are defined by probabilistic descriptions of series of independent trials. The Markov process is one type of characterization of a series of dependent trials. We have emphasized the no-memory properties of the Bernoulli and Poisson processes. Markov processes do have memory (events in nonoverlapping intervals of time need not be independent), but the dependence of future events on past events is of a particularly simple nature.

5-1 Series of Dependent Trials for Discrete-state Processes

Consider a system which may be described at any time as being in one of a set of mutually exclusive, collectively exhaustive states S_1, S_2, ..., S_m. According to a set of probabilistic rules, the system may, at certain discrete instants of time, undergo changes of state (or state transitions). We number the particular instants of time at which transitions may occur, and we refer to these instants as the first trial, the second trial, etc.

Let S_i(n) be the event that the system is in state S_i immediately after the nth trial. The probability of this event may be written as P[S_i(n)]. Each trial in the general process of the type (discrete state, discrete transition) introduced in the above paragraph may be described by transition probabilities of the form

    P[S_j(n) \mid S_a(n-1)\, S_b(n-2)\, S_c(n-3) \cdots]

These transition probabilities specify the probabilities associated with each trial, and they are conditional on the entire past history of the process. The above quantity, for instance, is the conditional probability that the system will be in state S_j immediately after the nth trial, given that the previous state history of the process is specified by the event S_a(n-1) S_b(n-2) S_c(n-3) ...

We note some examples of series of dependent trials in discrete-state discrete-transition processes. The states might be nonnegative integers representing the number of people on a bus, and each bus stop might be a probabilistic trial at which a change of state may occur. Another example is a process in which one of several biased coins is flipped for each trial and the selection of the coin for each trial depends in some manner on the outcomes of the previous flips. The number of items in a warehouse at the start of each day is one possible state description of an inventory. For this process, the state transition due to the total transactions on any day could be considered to be the result of one of a continuing series of dependent trials.

5-2 Discrete-state Discrete-transition Markov Processes

If the transition probabilities for a series of dependent trials satisfy the Markov condition

    P[S_j(n) \mid S_a(n-1)\, S_b(n-2)\, S_c(n-3) \cdots] = P[S_j(n) \mid S_a(n-1)]   for all n, j, a, b, c, ...

the system is said to be a discrete-state discrete-transition Markov process.

If the state of the system immediately prior to the nth trial is known, the Markov condition requires that the conditional transition probabilities describing the nth trial do not depend on any additional past history of the process. The present state of the system specifies all historical information relevant to the future behavior of a Markov process.
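The Markov condition has a direct computational reading: a simulator needs only the current state, never the earlier history, to generate the next state. The following minimal sketch in Python illustrates this (the function name and the three-state matrix are our own illustrative choices, not from the text):

```python
import random

def simulate_markov_chain(p, start, num_trials, rng=random):
    """Simulate a discrete-state discrete-transition Markov process.

    p[i][j] is the transition probability from state i to state j
    (states indexed 0, 1, ..., m-1).  The Markov condition is visible
    in the loop body: the next state is drawn using only the current
    state's row of p, never the earlier history.
    """
    state = start
    history = [start]
    for _ in range(num_trials):
        state = rng.choices(range(len(p)), weights=p[state])[0]
        history.append(state)
    return history

# A hypothetical three-state process; each row of p must sum to 1.
p = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
print(simulate_markov_chain(p, start=0, num_trials=10))
```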
We shall not consider processes for which the conditional transition probabilities depend on the number of the trial. Thus we may define the state transition probabilities for a discrete-transition Markov process to be

    p_{ij} = P[S_j(n) \mid S_i(n-1)]   for all n

Quantity p_{ij} is the conditional probability that the system will be in state S_j immediately after the next trial, given that the present state of the process is S_i. We always have 0 \le p_{ij} \le 1, and, because the list of states must be mutually exclusive and collectively exhaustive, it must also be true that

    \sum_{j=1}^{m} p_{ij} = 1   for i = 1, 2, ..., m

It is often convenient to display these transition probabilities as members of an m \times m transition matrix [p], for which p_{ij} is the entry in the ith row and jth column.

We also define the k-step state transition probabilities p_{ij}(k),

    p_{ij}(k) = P[S_j(n+k) \mid S_i(n)]

the conditional probability that the process will be in state S_j after exactly k more trials, given that the present state of the process is S_i. Summing over all states the process may occupy immediately after the (n + k - 1)th trial, we may write

    p_{ij}(k) = \sum_{r=1}^{m} P[S_r(n+k-1)\, S_j(n+k) \mid S_i(n)]

which simply notes that the process had to be in some state immediately after the (n + k - 1)th trial. From the definition of conditional probability we have

    P[S_r(n+k-1)\, S_j(n+k) \mid S_i(n)] = P[S_r(n+k-1) \mid S_i(n)] \, P[S_j(n+k) \mid S_r(n+k-1)\, S_i(n)]

For a discrete-state discrete-transition Markov process we may use the Markov condition on the right-hand side of this equation to obtain

    P[S_j(n+k) \mid S_r(n+k-1)\, S_i(n)] = P[S_j(n+k) \mid S_r(n+k-1)] = p_{rj}

which may be substituted in the above equation for p_{ij}(k) to obtain the result

    p_{ij}(k) = \sum_{r=1}^{m} p_{ir}(k-1) \, p_{rj}   k = 2, 3, 4, ...

This relation is a simple case of the Chapman-Kolmogorov equation

    p_{ij}(k) = \sum_{r=1}^{m} p_{ir}(k-l) \, p_{rj}(l)   l = 1, 2, ..., k-1

and it may be used as an alternative definition for the discrete-state discrete-transition Markov process with constant transition probabilities. This equation need not apply to the more general process described in Sec. 5-1. Note that the above relation, with l = 1, provides a means of calculating the k-step transition probabilities which is more efficient than preparing a probability tree for k trials and then collecting the probabilities of the appropriate events (see Prob. 5.02).

We consider one example. Suppose that a series of dependent coin flips can be described by a model which assigns to any trial conditional probabilities which depend only on the outcome of the previous trial. In particular, we are told that any flip immediately following an experimental outcome of a head has probability 0.75 of also resulting in a head and that any flip immediately following a tail is a fair toss. Using the most recent outcome as the state description, we have the two-state Markov process

    S_1: heads   S_2: tails
    p_{11} = 0.75   p_{12} = 0.25   p_{21} = 0.50   p_{22} = 0.50

[State-transition diagram] In the state-transition diagram for this process, the states are drawn as circles and the trial transition probabilities are labeled on the appropriate arrowed branches. We may use the relation p_{ij}(k) = \sum_r p_{ir}(k-1) p_{rj}, first for k = 2, then for k = 3, etc., to compute the following table (in which we round off to three significant figures):

    k    p_{11}(k)   p_{12}(k)   p_{21}(k)   p_{22}(k)
    1    0.750       0.250       0.500       0.500
    2    0.688       0.313       0.625       0.375
    3    0.672       0.328       0.656       0.344
    4    0.668       0.332       0.664       0.336
    5    0.667       0.333       0.666       0.334

Our table informs us, for instance, that, given the process is in state S_1 at any time, the conditional probability that the process will be in state S_2 exactly three trials later is equal to 0.328. In this example it appears that the k-step transition probabilities p_{ij}(k) reach a limiting value as k increases and that these limiting values do not depend on i. We shall study this important property ...
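The table can be checked by iterating the recursion numerically. Below is a minimal sketch in Python (function and variable names are our own, not the text's) that applies p_{ij}(k) = \sum_r p_{ir}(k-1) p_{rj} to the two-state coin-flip process:

```python
def k_step_probs(p, k):
    """Matrix of k-step transition probabilities p_ij(k), computed by
    iterating the Chapman-Kolmogorov recursion with l = 1:
        p_ij(k) = sum_r p_ir(k-1) * p_rj
    """
    m = len(p)
    pk = [row[:] for row in p]  # p_ij(1) is just p_ij
    for _ in range(k - 1):
        pk = [[sum(pk[i][r] * p[r][j] for r in range(m))
               for j in range(m)]
              for i in range(m)]
    return pk

# Two-state coin-flip process: S1 = heads, S2 = tails.
p = [[0.75, 0.25],
     [0.50, 0.50]]
for k in range(1, 6):
    print(k, k_step_probs(p, k))
# k = 3 gives p_12(3) = 0.328125, the 0.328 entry in the table, and the
# rows visibly approach limits that do not depend on the starting state.
```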

