Markov process · · ·
◮ In the definition of a Markov process we stated that the next state depends only on the current state, and not on how long we have already been in that state.
◮ This means that in a Markov process the state residence times (sojourn times) must be random variables with a memoryless distribution. We will show that the residence times in a continuous-time Markov chain must be exponentially distributed, and in a discrete-time Markov chain must be geometrically distributed.
◮ An extension of Markov processes can be imagined in which the state residence time distributions are no longer exponential or geometric.
◮ In that case it is important to know how long we have been in a
particular state and we speak of semi-Markov processes.
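As a quick numerical sketch of why exponential residence times fit the Markov property, the snippet below checks the memoryless property empirically: for exponentially distributed T, P(T > s + t | T > s) equals P(T > t). The rate λ = 1.0 and the times s, t are arbitrary illustrative choices, not from the slides.

```python
import random

random.seed(0)

# Draw exponential sojourn times with rate lambda = 1.0 (assumed for
# illustration) and compare the conditional and unconditional tails.
samples = [random.expovariate(1.0) for _ in range(200_000)]
s, t = 0.5, 1.0

# P(T > s + t | T > s): restrict to samples that already survived past s.
beyond_s = [x for x in samples if x > s]
p_cond = sum(1 for x in beyond_s if x > s + t) / len(beyond_s)

# P(T > t): the unconditional tail probability.
p_uncond = sum(1 for x in samples if x > t) / len(samples)

# Both estimates should be close to exp(-1) ~ 0.368: having spent time s
# in the state tells us nothing about how much longer we will stay.
print(p_cond, p_uncond)
```

Repeating the experiment with a non-memoryless distribution (e.g. uniform residence times) makes the two probabilities diverge, which is exactly the semi-Markov situation described above.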
R.B. Lenin (rblenin@daiict.ac.in) () Queueing Models Part - 1 Autumn 2007 13 / 49