SCHOOL OF MECHANICAL, MANUFACTURING & MEDICAL ENGINEERING
MEN170: SYSTEMS MODELLING AND SIMULATION

6: PROBABILISTIC MODELS: MARKOV CHAINS

6.1 Introduction:

Suppose we observe some characteristic of a system, such as the level of water in a dam or the temperature of an oven (both continuously changing), or the state of a machine (discrete changes only), at discrete points in time (0, 1, 2, 3, ...), and let Xt be the value of this characteristic at time t. In most situations Xt will not be known with certainty until that time arrives, and may be viewed as a random variable. In general, Xt depends on all the previous stages X0, X1, ..., Xt-1. A discrete-time stochastic process is simply a description of the relation between these random variables X0, X1, X2, ..., Xt.

One set of characteristics of a system is the states of the system. If we know all the possible states of the system, then the behaviour of the system is completely described by its states. A system may have a finite or an infinite number of states; here, we are concerned only with finite-state systems. If the state of a system can change in some probabilistic fashion at fixed or random intervals in time, we have a stochastic process.

An important feature of a stochastic process is the way in which past behaviour influences future behaviour. Suppose X(t) describes the state of the system and has n values. That is, at a given time, X1(t), X2(t), ..., Xn(t) are the possible states of the system. Xi(t) could be the demand for a product, the number of parts waiting to be processed in a machine, the price of a share, the condition of a machine, and so on. The system moves from one state to another in a random fashion; that is, there is a probability attached to each move. For example, the condition of the machine may change from working to failed with a probability of 0.12; this is called a transition probability. The value may be constant or may change with time. For example, we know that old machines fail more frequently than new machines, so the probability of moving from the working state to the failed state would increase with time.

Let us suppose that p(t) represents the probability distribution over X(t) (NOTE: X(t) and p(t) are vectors of size n x 1); i.e. p1(t) is the probability of finding the system in state X1(t). In general, the predictive distribution for X(t) is quite complicated, with p(t) being a function of all previous state variables X(t-1), X(t-2) and so on. However, if p(t) depends only upon the preceding state, then the process is called a Markov process. A Markov process is a mathematical model that describes, in probabilistic terms, the dynamic behaviour of a certain type of system over time. The change of state occurs only at the end of the time period, and nothing happens during the time period chosen. Thus, a Markov process is a stochastic process which has the property that the probability of a transition from a given state pi(t) to a future state pj(t+1) depends only on the present state and not on the manner in which the current state was reached. This is also called the Markovian property.
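As a concrete illustration of a discrete-time stochastic process with the Markovian property, here is a minimal Python sketch of the working/failed machine mentioned above. The 0.12 working-to-failed probability comes from the text; the 0.90 failed-to-working (repair) probability is an invented placeholder, since the notes do not give one.

    import random

    WORKING, FAILED = 0, 1

    # P[i][j] = probability of moving from state i to state j in one step.
    # The 0.12 failure probability is from the notes; the 0.90 repair
    # probability is a made-up value used only to complete the example.
    P = [[0.88, 0.12],
         [0.90, 0.10]]

    def step(state):
        # Markovian property: the next state depends only on the current
        # state, not on how the process arrived there.
        return WORKING if random.random() < P[state][WORKING] else FAILED

    state, path = WORKING, [WORKING]
    for t in range(20):
        state = step(state)
        path.append(state)
    print(path)   # e.g. [0, 0, 1, 0, 0, 0, 1, 0, ...]

Each run of the loop draws the next state from the row of P indexed by the current state only, which is exactly the first-order Markov assumption described above.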
A process is called a finite-state first-order Markov chain if it meets the following conditions:

1. The system can be described by a set of finite states, and the system can be in one and only one state at a given time.

2. The transition probability Pij, the probability of a transition from state i to state j, is given for every possible combination of i and j (including i = j), and the transition probabilities are assumed to be stationary (unchanging) over the time period of interest and independent of how state i was reached.

3. Either the initial state of the system or the probability distribution of the initial state is known.

6.2 Finite State First Order Markov Chain Model Development:

As mentioned in the introduction, to completely specify the model, all we need to know are the initial state (or the probability distribution of the initial state) of the system, p(0) = [p1, p2, ..., pn], and the transition probability matrix P. Here, Pij represents the constant probability (finite-state first-order Markov chain) of a transition from state Xi(t) to state Xj(t+1) for any value of t. The Markovian property makes P time-invariant. Knowing P, we can also construct a transition diagram to represent the system.

Given the initial distribution p(0),

    p(1) = p(0) . P
    p(2) = p(1) . P = p(0) . P . P = p(0) . P^2

and thus, for any k,

    p(k) = p(0) . P^k

We also note that the elements of P must satisfy the following conditions:

    Σj Pij = 1 for all i (each row of P sums to one), and Pij >= 0 for all i and j.

AN EXAMPLE: A delicate precision instrument has a component that is subject to random failure. In fact, if the instrument is working properly at a given moment in time, then with probability 0.15 it will fail within the next 10-minute period. If the component fails, it can be replaced by a new one, an operation that also takes 10 minutes. The present supplier of replacement components does not guarantee that all replacement components are in proper working condition; the present quality standards are such that about 4% of the components supplied are defective. However, this can be discovered only after the defective component has been installed, in which case the instrument has to go through a new replacement operation. Assume that when failure occurs, it always occurs at the end of a 10-minute period.

a) Find the transition probability matrix associated with this process.
b) Given that it was working properly initially, what is the probability of finding the instrument not in proper working condition after 20 minutes? After 40 minutes? (A worked sketch for part (b) appears at the end of these notes.)

6.3 Classification of Finite Markov Chains:

Two states i and j of a system defined by the transition matrix are said to communicate if j is accessible from i and i is accessible from j. The number of transitions required is not important. Communication is a class property: if i communicates with both j and k, then j communicates with k. Consider a transition matrix P1 with the structure sketched below.

In matrix P1, states 1 and 2 can be reached from states 3 and 4, but once in state 1 or 2 the process cannot return to states 3 or 4; that is, the process has been absorbed in the set of states {1, 2}. The set of states {1, 2} is called an ergodic set (closed set), and the individual states 1 and 2 are called ergodic states. No matter where the process starts out, it will soon end up in the ergodic set. States 3 and 4 are called transient states, and the set {3, 4} is called the transient set.
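The matrices themselves did not survive in this text extract, so the following P1 is an illustrative stand-in consistent with the description above (rows are "from" states, columns are "to" states); any matrix whose first two rows place zero probability on states 3 and 4, and whose last two rows place some probability on states 1 and 2, behaves the same way:

            to 1   to 2   to 3   to 4
    from 1   0.6    0.4    0.0    0.0
    from 2   0.3    0.7    0.0    0.0
    from 3   0.2    0.2    0.4    0.2
    from 4   0.1    0.3    0.3    0.3

Rows 1 and 2 assign zero probability to states 3 and 4, so the set {1, 2} is closed; rows 3 and 4 leak probability into {1, 2} at every step, so the set {3, 4} is eventually abandoned.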

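A quick numerical check of the claim that the process always ends up in the ergodic set: raising the illustrative P1 above to a high power drives the transient columns to zero. This is a sketch using numpy; P1 is the assumed stand-in matrix, not the one from the original notes.

    import numpy as np

    # Illustrative stand-in for P1 (see above); each row sums to 1.
    P1 = np.array([[0.6, 0.4, 0.0, 0.0],
                   [0.3, 0.7, 0.0, 0.0],
                   [0.2, 0.2, 0.4, 0.2],
                   [0.1, 0.3, 0.3, 0.3]])

    # p(k) = p(0) . P1^k for large k: columns 3 and 4 vanish, so the
    # long-run probability of being in a transient state is zero.
    print(np.linalg.matrix_power(P1, 50).round(4))
    # every row approaches roughly [0.4286, 0.5714, 0.0, 0.0]

Whatever the starting state, the rows of P1^k converge to the same distribution concentrated on the ergodic set {1, 2}, which is exactly the absorption behaviour described in Section 6.3.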

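Returning to part (b) of the example in Section 6.2: under the usual two-state reading of that problem (state 0 = working properly, state 1 = not working), a working instrument stays working with probability 0.85 and fails with probability 0.15, while a replacement operation yields a working instrument with probability 0.96 and another defective one with probability 0.04. The sketch below evaluates p(k) = p(0) . P^k for k = 2 and k = 4 (20 and 40 minutes); treat the state encoding and the printed numbers as an assumption-laden sketch rather than the official solution.

    import numpy as np

    # State 0 = working properly, state 1 = not working (being replaced).
    # Row i, column j holds P(next state = j | current state = i).
    P = np.array([[0.85, 0.15],    # working: fails with prob 0.15
                  [0.96, 0.04]])   # replaced: new part defective with prob 0.04

    p0 = np.array([1.0, 0.0])      # initially in proper working condition

    for k in (2, 4):               # 20 minutes = 2 steps, 40 minutes = 4 steps
        pk = p0 @ np.linalg.matrix_power(P, k)
        print(f"after {10 * k} minutes: P(not working) = {pk[1]:.4f}")
    # prints approximately 0.1335 (20 min) and 0.1351 (40 min)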