

  1. Stochastic Processes: Markov Processes. Hamid R. Rabiee

  2. Overview
     o Markov Property
     o Markov Chains
     o Definition
     o Stationary Property
     o Paths in Markov Chains
     o Classification of States
     o Steady States in MCs

  3. Markov Property
     • A discrete process has the Markov property if, given its value at time t, its value at time t+1 is independent of the values at times before t. That is:
       $\Pr(X_{t+1} = x_{t+1} \mid X_t = x_t, X_{t-1} = x_{t-1}, \ldots, X_1 = x_1) = \Pr(X_{t+1} = x_{t+1} \mid X_t = x_t)$
       for all t and all $x_1, \ldots, x_{t+1}$.
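As a quick illustration (not from the slides), the following minimal Python sketch simulates a two-state chain with an assumed transition matrix and checks empirically that conditioning on $X_{t-1}$ in addition to $X_t$ does not change the next-step distribution; all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-state transition matrix (illustrative values).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Simulate a long trajectory of the chain.
T = 200_000
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(2, p=P[x[t - 1]])

# Estimate Pr(X_{t+1} = 1 | X_t = 0), then additionally condition on X_{t-1}.
at_zero = x[1:-1] == 0                   # positions where X_t = 0
print(x[2:][at_zero].mean())             # ~ P[0, 1] = 0.3
for prev in (0, 1):
    m = at_zero & (x[:-2] == prev)
    print(prev, x[2:][m].mean())         # both ~ 0.3: the extra past is irrelevant
```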

  4. Stationary Property
     • A Markov process is called stationary if
       $\Pr(X_{t+1} = u \mid X_t = v) = \Pr(X_1 = u \mid X_0 = v)$ for all t.
     • The evolution of a stationary process does not change over time.
     • To define the complete joint distribution of a stationary Markov process, it is sufficient to specify $\Pr(X_1 = u \mid X_0 = v)$ and $\Pr(X_0 = v)$ for all u and v.
     • We will mainly consider stationary Markov processes here.

  5. Example (Coin Tossing Game)
     • Consider a single-player game in which at every step a biased coin is tossed and, according to the result, the score is increased or decreased by one point.
     • The game ends when the score reaches either 100 (winning) or -100 (losing).
     • The score of the player at each step $t \ge 0$ is a random variable, and the sequence of scores as the game progresses forms a random process $X_0, X_1, \ldots, X_t$.

  6. Example (Coin Tossing Game)
     • It is easy to verify that X is a stationary Markov chain: at the end of each step, the score depends only on the current score $s_0$ and the result of tossing the coin (which is independent of time and of previous tosses).
     • Stating this mathematically (for $s_0 \notin \{-100, 100\}$):
       $\Pr(X_{t+1} = s \mid X_t = s_0, X_{t-1} = s_{t-1}, \ldots, X_0 = 0) = \begin{cases} p & s = s_0 + 1 \\ 1 - p & s = s_0 - 1 \\ 0 & \text{otherwise} \end{cases} = \Pr(X_{t+1} = s \mid X_t = s_0) = \Pr(X_1 = s \mid X_0 = s_0)$
       independent of t and of $s_0, \ldots, s_{t-1}$.
     • If the value of p were subject to change over time, the process would not be stationary; in the formulation we would have $p_t$ instead of p.
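A minimal simulation sketch of this game, assuming an illustrative bias p = 0.55 and a reduced boundary of 10 points (instead of 100) purely to keep the run fast:

```python
import numpy as np

rng = np.random.default_rng(1)

def play(p, limit):
    """Simulate one game: the score moves +1 with probability p and -1
    with probability 1 - p, until it hits +limit (win) or -limit (loss)."""
    score = 0
    while abs(score) < limit:
        score += 1 if rng.random() < p else -1
    return score

# Estimated winning probability for a slightly biased coin.
games = [play(p=0.55, limit=10) for _ in range(2000)]
print(sum(s > 0 for s in games) / len(games))
```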

  7. Example (Tracking)
     • Assume we want to track a target in the XY plane, so we need to model its movement.
     • Assuming that the current position and speed of the target describe everything about its future movements, we can consider the 4-dimensional state
       $S_t = (X_t, Y_t, \dot{X}_t, \dot{Y}_t)$.
     • It is common to assume a linear relation between $S_t$ and $S_{t-1}$ with additive Gaussian noise:
       $S_t = A_t S_{t-1} + w_t; \quad w_t \sim N(0, \Sigma)$
       or equivalently $f_{S_t \mid S_{t-1}}(s_t \mid s_{t-1}) = N(s_t; A_t s_{t-1}, \Sigma)$.
     • There exist efficient algorithms for working with these linear-Gaussian models.

  8. Example (Tracking)
     • Considering the kinematics of a moving object, we can define the linear relation as
       $A_t = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, independent of t (stationary).
     • This approach cannot model acceleration: the speed changes only because of the noise.
     • We can model arbitrary speed changes by appropriately setting the 3rd and 4th rows of the matrix at each time step; this gives an example of a non-stationary Markov process.
     • Another approach is to extend the state to 6 dimensions, adding $\ddot{X}_t$ and $\ddot{Y}_t$.
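A short sketch of this constant-velocity model, assuming an illustrative sampling interval dt = 0.1 and an isotropic noise covariance (both values are assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 0.1                                    # assumed sampling interval
A = np.array([[1, 0, dt, 0],                # x  <- x + dt * vx
              [0, 1, 0, dt],                # y  <- y + dt * vy
              [0, 0, 1,  0],                # vx unchanged (up to noise)
              [0, 0, 0,  1]], dtype=float)  # vy unchanged (up to noise)
Sigma = 0.01 * np.eye(4)                    # assumed noise covariance

s = np.array([0.0, 0.0, 1.0, 0.5])          # initial (x, y, vx, vy)
for _ in range(50):
    s = A @ s + rng.multivariate_normal(np.zeros(4), Sigma)
print(s)                                    # state after 50 steps
```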

  9. Example (Speech)
     • A basic model for a speech signal considers the value at time t to be a linear combination of its d previous values plus additive Gaussian noise:
       $X_t = \sum_{i=1}^{d} \alpha_i X_{t-i} + w_t; \quad w_t \sim N(0, \Sigma)$
     • $X_t$ is not a Markov process.

  10. Example (Speech)
     • We can obtain a stationary Markov process by considering the d-dimensional state $U_t = (X_t, X_{t-1}, \ldots, X_{t-d+1})^T$:
       $U_t = A U_{t-1} + w_t e_1$
       where
       $A = \begin{pmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_{d-1} & \alpha_d \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}, \quad e_1 = (1, 0, \ldots, 0)^T$
     • Equivalently:
       $f_{U_t \mid U_{t-1}}(u_t \mid u_{t-1}) = N\big((u_t)_1; (A u_{t-1})_1, \Sigma\big) \cdot \mathbb{I}\big[\forall\, 1 \le i \le d-1 : (u_t)_{i+1} = (u_{t-1})_i\big]$
       where $(x)_i$ is the i-th dimension of vector x and $\mathbb{I}$ is the indicator function (used to guarantee consistency).
     • Note that $X_t$ is repeated in d consecutive states of U, and there must be no inconsistency between those values.
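A sketch of this companion-matrix construction, assuming illustrative AR coefficients (0.5, 0.3, 0.1) and unit noise variance; these numbers are assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(3)

alpha = np.array([0.5, 0.3, 0.1])          # assumed AR coefficients (d = 3)
d = len(alpha)

# Companion matrix: the first row applies the AR coefficients,
# the subdiagonal of ones shifts the old values down one slot.
A = np.zeros((d, d))
A[0] = alpha
A[1:, :-1] = np.eye(d - 1)
e1 = np.eye(d)[0]                          # noise enters the first coordinate only

u = np.zeros(d)                            # U_t = (X_t, X_{t-1}, ..., X_{t-d+1})
for _ in range(100):
    u = A @ u + rng.normal(0.0, 1.0) * e1
print(u)                                   # last d values of the simulated signal
```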

  11. Markov Process Types
     • There are two types of Markov processes, based on the domain of the $X_t$ values:
       • Discrete
       • Continuous
     • Discrete Markov processes are called "Markov Chains" (MCs).
     • In this session we will focus on stationary MCs.

  12. Transition Matrix
     • By the Markov and stationarity properties, at each time step the process moves according to a fixed transition matrix:
       $\Pr(X_{t+1} = j \mid X_t = i) = p_{ij}$
     • Stochastic matrix: rows sum to 1.
     • Doubly stochastic matrix: rows and columns sum to 1.
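A tiny check of these two definitions on an assumed example matrix (values are illustrative only):

```python
import numpy as np

# An assumed transition matrix (not from the slides).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

print(np.allclose(P.sum(axis=1), 1))   # True: stochastic (rows sum to 1)
print(np.allclose(P.sum(axis=0), 1))   # False: not doubly stochastic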

  13. State Graph
     • It is convenient to visualize a stationary Markov chain with a transition diagram:
       • A node represents a possible value of $X_t$. At each time t, we say the process is in state s if $X_t = s$.
       • Each edge represents the probability of going from one state to another (we omit edges with zero weight).
     • We should also specify the vector of initial probabilities $\pi = (\pi_1, \ldots, \pi_n)$, where $\pi_i = \Pr(X_0 = i)$.
     • So a stationary discrete process can be described as a person walking randomly on a graph, with each step depending only on the state they are currently in. The resulting path is called a "Random Walk".
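A minimal random-walk sampler over such a state graph; the transition matrix and initial distribution below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_walk(P, pi, steps):
    """Sample a path: draw X_0 from pi, then repeatedly step using the
    row of P for the current state (the Markov property in action)."""
    path = [rng.choice(len(pi), p=pi)]
    for _ in range(steps):
        path.append(rng.choice(P.shape[1], p=P[path[-1]]))
    return path

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0])              # always start in state 0
print(random_walk(P, pi, steps=20))
```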

  14. Example
     • The transition diagram of the coin tossing game: [diagram: states -100, -99, ..., 99, 100 in a line; each non-absorbing state s has an edge to s+1 with weight p and to s-1 with weight 1-p; states -100 and 100 have self-loops with weight 1]
     • We modeled winning and losing with states that, once entered, are never left.
     • Note that if the process were not stationary, we would not be able to visualize it this way: consider, for example, the case where p changes over time.

  15. Example (Modeling Weather)
     • Example: each day is sunny or rainy. If a day is rainy, the next day is rainy with probability $\alpha$ (and sunny with probability $1-\alpha$). If a day is sunny, the next day is rainy with probability $\beta$ (and sunny with probability $1-\beta$).
       $S = \{\text{rainy}, \text{sunny}\}, \quad P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}$
       [diagram: two states R and S; R to R with weight $\alpha$, R to S with $1-\alpha$, S to R with $\beta$, S to S with $1-\beta$]
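A short sketch of this weather chain, assuming illustrative values $\alpha$ = 0.7 and $\beta$ = 0.3; raising the matrix to a power gives multi-day-ahead rain probabilities (anticipating the Paths slides below):

```python
import numpy as np

alpha, beta = 0.7, 0.3                 # assumed values for the two probabilities
P = np.array([[alpha, 1 - alpha],      # row 0: today rainy
              [beta,  1 - beta]])      # row 1: today sunny

# Probability that it rains k days ahead, given a rainy day today.
for k in (1, 2, 10, 50):
    print(k, np.linalg.matrix_power(P, k)[0, 0])
```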

  16. Paths
     • If the state space is {0, 1} we have:
       $\Pr(X_2 = 0 \mid X_0 = 0) = \Pr(X_1 = 1 \mid X_0 = 0) \cdot \Pr(X_2 = 0 \mid X_1 = 1) + \Pr(X_1 = 0 \mid X_0 = 0) \cdot \Pr(X_2 = 0 \mid X_1 = 0)$
     • Define $p_{ij}^{(n)}$ as the probability that, starting from state i, the process is at state j after n time steps. Then:
       $p_{ij}^{(2)} = \sum_{k} p_{ik} \, p_{kj}$

  17. Paths
     • Theorem:
       $p_{ij}^{(n)} = \sum_{k} p_{ik} \, p_{kj}^{(n-1)}$
     • To simplify notation, define the matrix $P^{(n)}$ whose entry in the i-th row and j-th column is $p_{ij}^{(n)}$.
     • Corollary 1: $P^{(n)}$ can be calculated as $P^{(n)} = P^n$.
     • Corollary 2: If the process starts at time 0 with distribution $\pi$ over the states, then after n steps the distribution is $\pi P^n$.
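A numeric check of both corollaries on an assumed two-state chain (the matrix and initial distribution are illustrative):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([0.2, 0.8])              # assumed initial distribution

# Corollary 1: two-step probabilities are the square of P.
P2 = np.linalg.matrix_power(P, 2)
assert np.isclose(P2[0, 0],
                  P[0, 0] * P[0, 0] + P[0, 1] * P[1, 0])

# Corollary 2: the distribution after n steps is pi @ P^n.
print(pi @ np.linalg.matrix_power(P, 10))
```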

  18. Absorbing Markov Chain
     • An absorbing state is one in which the process remains with probability 1 once it enters that state (i.e., $p_{ii} = 1$).
     • A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to reach an absorbing state (not necessarily in one step).
     • An absorbing Markov chain will eventually enter one of the absorbing states and never leave it.
     • Example: the 100 and -100 states in the coin tossing game. Note: after playing long enough, the player will either win or lose (with probability 1). [diagram: the coin tossing chain from slide 14, with absorbing states -100 and 100]
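One standard way to compute absorption probabilities exactly is to split P into its transient part Q and transient-to-absorbing part R and solve (I - Q)B = R; the sketch below applies this to a small gambler's-ruin version of the coin tossing game, with an assumed bias p = 0.6 and boundary ±3 rather than ±100 just to keep the matrices small:

```python
import numpy as np

p, limit = 0.6, 3                       # assumed bias; small boundary for brevity
scores = list(range(-limit, limit + 1)) # -3, ..., 3; the endpoints are absorbing
n = len(scores)

P = np.zeros((n, n))
for i, s in enumerate(scores):
    if abs(s) == limit:
        P[i, i] = 1.0                   # absorbing state: never leaves
    else:
        P[i, i + 1] = p                 # gain a point
        P[i, i - 1] = 1 - p             # lose a point

trans = [i for i, s in enumerate(scores) if abs(s) < limit]
absorb = [i for i, s in enumerate(scores) if abs(s) == limit]
Q = P[np.ix_(trans, trans)]             # transient to transient
R = P[np.ix_(trans, absorb)]            # transient to absorbing
B = np.linalg.solve(np.eye(len(trans)) - Q, R)

# Absorption probabilities starting from score 0: [Pr(lose), Pr(win)].
print(B[trans.index(scores.index(0))])
```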

  19. Absorption Theorem
     • In an absorbing Markov chain, the probability that the process will be absorbed is 1.
     • Proof: From each non-absorbing state $s_j$ it is possible to reach an absorbing state. Therefore there exist m and p < 1 such that the probability of not being absorbed after m steps is at most p, after 2m steps at most $p^2$, and so on. Since the probability of not being absorbed is monotonically decreasing, we have
       $\lim_{n \to \infty} \Pr(\text{not absorbed after } n \text{ steps}) = 0$
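The geometric decay in the proof can be seen numerically: the row sums of $Q^n$ (with Q the transient part of the chain, as in the sketch above) give the probability of still being unabsorbed after n steps, and they shrink toward 0. This reuses the assumed p = 0.6, boundary ±3 example:

```python
import numpy as np

# Transient part Q of the gambler's-ruin chain above (scores -2, ..., 2, p = 0.6).
p = 0.6
Q = np.diag([p] * 4, k=1) + np.diag([1 - p] * 4, k=-1)

# Each row sum of Q^n is the probability of still being unabsorbed
# after n steps from that starting score; it decays geometrically.
for n in (1, 10, 50, 200):
    print(n, np.linalg.matrix_power(Q, n).sum(axis=1).max())
```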

  20. Classification of States
     • Accessibility: State j is said to be accessible from state i if, starting in i, it is possible that the process will ever enter state j: $(P^n)_{ij} > 0$ for some n.
     • Communication: Two states i and j that are accessible from each other are said to communicate. Every state communicates with itself:
       $p_{jj}^{(0)} = \Pr(X_0 = j \mid X_0 = j) = 1$
     • Communication is an equivalence relation: it divides the state space into a number of separate classes in which every pair of states communicate. (Why?)
     • The Markov chain is said to be irreducible if it has only one class.
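A sketch of how the communicating classes can be computed mechanically: build the accessibility relation from the sparsity pattern of P (via powers of the self-plus-one-step adjacency matrix), intersect it with its transpose, and group states. The 3-state matrix below is an assumption chosen so the chain is reducible:

```python
import numpy as np

# An assumed 3-state chain: states 0 and 1 communicate, state 2 only
# leads into them, so the chain is not irreducible.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.7, 0.0],
              [0.2, 0.3, 0.5]])

n = P.shape[0]
A = (P > 0).astype(int) + np.eye(n, dtype=int)   # self + one-step accessibility
reach = np.linalg.matrix_power(A, n) > 0         # accessible in at most n steps

comm = reach & reach.T                           # mutual accessibility
classes = {frozenset(np.flatnonzero(row)) for row in comm}
print(classes)                                   # two classes -> not irreducible
```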
