Gaussian, Markov and stationary processes


1. Gaussian, Markov and stationary processes
   Gonzalo Mateos
   Dept. of ECE and Goergen Institute for Data Science, University of Rochester
   gmateosb@ece.rochester.edu
   http://www.ece.rochester.edu/~gmateosb/
   November 15, 2019

2. Introduction and roadmap
   Introduction and roadmap
   Gaussian processes
   Brownian motion and its variants
   White Gaussian noise

3. Random processes
◮ Random processes assign a function X(t) to a random event
  ⇒ Without restrictions, there is little to say about them
  ⇒ Markov property simplifies matters and is not too restrictive
◮ Also constrained ourselves to discrete state spaces
  ⇒ Further simplification, but might be too restrictive
◮ Time t and range of X(t) values continuous in general
◮ Time and/or state may be discrete as particular cases
◮ Restrict attention to (any type or a combination of types)
  ⇒ Markov processes (memoryless)
  ⇒ Gaussian processes (Gaussian probability distributions)
  ⇒ Stationary processes ("limit distribution")

4. Markov processes
◮ X(t) is a Markov process when the future is independent of the past
◮ For all t > s and arbitrary values x(t), x(s) and x(u) for all u < s

\[
P\big(X(t) \le x(t) \,\big|\, X(s) \le x(s),\, X(u) \le x(u),\ u < s\big) = P\big(X(t) \le x(t) \,\big|\, X(s) \le x(s)\big)
\]

  ⇒ Markov property defined in terms of cdfs, not pmfs
◮ Markov property useful for the same reasons as in discrete time/state
  ⇒ But not as useful as in discrete time/state
◮ More details later
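
As a sanity check on the Markov property, here is a minimal simulation sketch (mine, not from the slides): it approximates conditional probabilities for a discrete-time Gaussian random walk by conditioning on narrow bins around a "present" and a "past" value. The bin widths, probe values and seed are arbitrary choices for illustration.

```python
# Empirical check of the Markov property on a Gaussian random walk X_{k+1} = X_k + W_k.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 500_000, 3
W = rng.normal(size=(n_paths, n_steps))
X = np.cumsum(W, axis=1)            # X[:, 0], X[:, 1], X[:, 2] play the roles of X(1), X(2), X(3)

# Condition on the "present" X(2) being near 0.5, with and without also
# conditioning on the "past" X(1) being near 0.0.
present = np.abs(X[:, 1] - 0.5) < 0.05
past = np.abs(X[:, 0] - 0.0) < 0.05

p_future_given_present = np.mean(X[present, 2] <= 1.0)
p_future_given_both = np.mean(X[present & past, 2] <= 1.0)
print(p_future_given_present, p_future_given_both)   # the two estimates should be close
```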

5. Gaussian processes
◮ X(t) is a Gaussian process when all prob. distributions are Gaussian
◮ For arbitrary n > 0 and times t_1, t_2, ..., t_n it holds that
  ⇒ Values X(t_1), X(t_2), ..., X(t_n) are jointly Gaussian RVs
◮ Simplifies study because the Gaussian distribution is the simplest possible
  ⇒ Suffices to know means, variances and (cross-)covariances
  ⇒ Linear transformation of independent Gaussians is Gaussian
  ⇒ Linear transformation of jointly Gaussian RVs is Gaussian
◮ More details later

6. Markov processes + Gaussian processes
◮ Markov (memoryless) and Gaussian properties are different
  ⇒ Will study cases when both hold
◮ Brownian motion, also known as the Wiener process
  ⇒ Brownian motion with drift
  ⇒ White noise
  ⇒ Linear evolution models
◮ Geometric Brownian motion
  ⇒ Arbitrages
  ⇒ Risk-neutral measures
  ⇒ Pricing of stock options (Black-Scholes)

7. Stationary processes
◮ Process X(t) is stationary if probabilities are invariant to time shifts
◮ For arbitrary n > 0, times t_1, t_2, ..., t_n and arbitrary time shift s

\[
P\big(X(t_1+s) \le x_1,\, X(t_2+s) \le x_2,\, \ldots,\, X(t_n+s) \le x_n\big) = P\big(X(t_1) \le x_1,\, X(t_2) \le x_2,\, \ldots,\, X(t_n) \le x_n\big)
\]

  ⇒ System's behavior is independent of time origin
◮ Follows from our success studying limit probabilities
  ⇒ Study of stationary process ≈ study of limit distribution
◮ Will study
  ⇒ Spectral analysis of stationary random processes
  ⇒ Linear filtering of stationary random processes
◮ More details later
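
A small simulation sketch (my own illustration, not from the slides): a Gaussian AR(1) recursion started from its stationary distribution is a stationary process, so its marginal cdf estimated at two widely separated times should agree. The coefficient, sample sizes and probe points below are arbitrary choices.

```python
# Shift invariance of a stationary Gaussian AR(1) process X_{k+1} = a X_k + W_k.
import numpy as np

rng = np.random.default_rng(1)
a, n_paths, n_steps = 0.8, 100_000, 50
sigma_stat = 1.0 / np.sqrt(1 - a**2)             # stationary std for unit-variance noise

X = rng.normal(scale=sigma_stat, size=n_paths)   # start at the stationary distribution
marginals = []
for k in range(n_steps):
    X = a * X + rng.normal(size=n_paths)
    if k in (9, 39):                             # keep snapshots at two separated times
        marginals.append(X.copy())

# P(X(t) <= x) estimated at both times should be (approximately) the same.
for x in (-1.0, 0.0, 1.0):
    print(x, np.mean(marginals[0] <= x), np.mean(marginals[1] <= x))
```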

8. Gaussian processes
   Introduction and roadmap
   Gaussian processes
   Brownian motion and its variants
   White Gaussian noise

9. Jointly Gaussian random variables
◮ Def: Random variables X_1, ..., X_n are jointly Gaussian (normal) if any linear combination of them is Gaussian
  ⇒ Given n > 0, for any scalars a_1, ..., a_n the RV (with a = [a_1, ..., a_n]^T)

\[
Y = a_1 X_1 + a_2 X_2 + \ldots + a_n X_n = a^T X
\]

    is Gaussian distributed
  ⇒ May also say the vector RV X = [X_1, ..., X_n]^T is Gaussian
◮ Consider 2 dimensions ⇒ 2 RVs X_1 and X_2 are jointly normal
◮ To describe the joint distribution, have to specify
  ⇒ Means: µ_1 = E[X_1] and µ_2 = E[X_2]
  ⇒ Variances: σ²_11 = var[X_1] = E[(X_1 − µ_1)²] and σ²_22 = var[X_2]
  ⇒ Covariance: σ²_12 = cov(X_1, X_2) = E[(X_1 − µ_1)(X_2 − µ_2)] = σ²_21
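
A quick numerical sketch (not from the slides) of the defining property: for jointly Gaussian X, the scalar Y = a^T X should have mean a^T µ and variance a^T C a. The particular µ, C, a and seed below are arbitrary.

```python
# Linear combination of jointly Gaussian RVs: compare empirical moments of Y = a^T X
# with the theoretical mean a^T mu and variance a^T C a.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -2.0])
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
a = np.array([3.0, -1.0])

X = rng.multivariate_normal(mu, C, size=500_000)   # rows are samples of [X1, X2]
Y = X @ a

print(Y.mean(), a @ mu)          # empirical vs. theoretical mean
print(Y.var(), a @ C @ a)        # empirical vs. theoretical variance
```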

10. Pdf of jointly Gaussian RVs in 2 dimensions
◮ Define the mean vector µ = [µ_1, µ_2]^T and the covariance matrix C ∈ R^{2×2}

\[
C = \begin{pmatrix} \sigma_{11}^2 & \sigma_{12}^2 \\ \sigma_{21}^2 & \sigma_{22}^2 \end{pmatrix}
\]

  ⇒ C is symmetric, i.e., C^T = C because σ²_21 = σ²_12
◮ The joint pdf of X = [X_1, X_2]^T is given by

\[
f_X(x) = \frac{1}{2\pi \det^{1/2}(C)} \exp\left( -\frac{1}{2} (x-\mu)^T C^{-1} (x-\mu) \right)
\]

  ⇒ Assumed that C is invertible, thus det(C) ≠ 0
◮ If the pdf of X is f_X(x) above, can verify Y = a^T X is Gaussian
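
A minimal check of the pdf formula (my own sketch, not from the slides): evaluate the expression above directly and compare it with scipy's multivariate normal pdf at one point. The numbers chosen for µ, C and x are arbitrary.

```python
# Evaluate the 2-D jointly Gaussian pdf by hand and compare with scipy as a sanity check.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])
C = np.array([[1.0, 0.3],
              [0.3, 2.0]])
x = np.array([0.5, 0.5])

d = x - mu
f = np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / (2 * np.pi * np.sqrt(np.linalg.det(C)))
print(f, multivariate_normal(mean=mu, cov=C).pdf(x))   # the two values should agree
```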

11. Pdf of jointly Gaussian RVs in n dimensions
◮ For X ∈ R^n (n dimensions) define µ = E[X] and the covariance matrix

\[
C := E\big[(X-\mu)(X-\mu)^T\big] =
\begin{pmatrix}
\sigma_{11}^2 & \sigma_{12}^2 & \cdots & \sigma_{1n}^2 \\
\sigma_{21}^2 & \sigma_{22}^2 & \cdots & \sigma_{2n}^2 \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{n1}^2 & \sigma_{n2}^2 & \cdots & \sigma_{nn}^2
\end{pmatrix}
\]

  ⇒ C symmetric, its (i,j)-th element is σ²_ij = cov(X_i, X_j)
◮ Joint pdf of X defined as before (almost, spot the difference)

\[
f_X(x) = \frac{1}{(2\pi)^{n/2} \det^{1/2}(C)} \exp\left( -\frac{1}{2} (x-\mu)^T C^{-1} (x-\mu) \right)
\]

  ⇒ C invertible and det(C) ≠ 0. All linear combinations normal
◮ To fully specify the probability distribution of a Gaussian vector X
  ⇒ The mean vector µ and covariance matrix C suffice

12. Notational aside and independence
◮ With x ∈ R^n, µ ∈ R^n and C ∈ R^{n×n}, define the function N(x; µ, C) as

\[
N(x; \mu, C) := \frac{1}{(2\pi)^{n/2} \det^{1/2}(C)} \exp\left( -\frac{1}{2} (x-\mu)^T C^{-1} (x-\mu) \right)
\]

  ⇒ µ and C are parameters, x is the argument of the function
◮ Let X ∈ R^n be a Gaussian vector with mean µ and covariance C
  ⇒ Can write the pdf of X as f_X(x) = N(x; µ, C)
◮ If X_1, ..., X_n are mutually independent, then C = diag(σ²_11, ..., σ²_nn) and

\[
f_X(x) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma_{ii}^2}} \exp\left( -\frac{(x_i-\mu_i)^2}{2\sigma_{ii}^2} \right)
\]
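
A short sketch (not from the slides) of the independence statement: with a diagonal C, the joint pdf N(x; µ, C) should equal the product of univariate normal pdfs. The specific means, variances and evaluation point are arbitrary.

```python
# With diagonal covariance, the joint Gaussian pdf factors into univariate normal pdfs.
import numpy as np
from scipy.stats import multivariate_normal, norm

mu = np.array([0.0, 1.0, -1.0])
sig2 = np.array([1.0, 0.5, 2.0])          # variances sigma^2_ii
C = np.diag(sig2)
x = np.array([0.2, 0.9, -0.5])

joint = multivariate_normal(mean=mu, cov=C).pdf(x)
product = np.prod(norm.pdf(x, loc=mu, scale=np.sqrt(sig2)))
print(joint, product)                      # equal up to floating-point error
```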

13. Gaussian processes
◮ Gaussian processes (GPs) generalize Gaussian vectors to infinite dimensions
◮ Def: X(t) is a GP if any linear combination of values X(t) is Gaussian
  ⇒ For arbitrary n > 0, times t_1, ..., t_n and constants a_1, ..., a_n

\[
Y = a_1 X(t_1) + a_2 X(t_2) + \ldots + a_n X(t_n)
\]

    is Gaussian distributed
  ⇒ The time index t can be continuous or discrete
◮ More generally, any linear functional of X(t) is normally distributed
  ⇒ A functional is a function of a function
  Ex: the (random) integral Y = ∫_{t_1}^{t_2} X(t) dt is Gaussian distributed
  ⇒ The integral functional is akin to a sum of X(t_i), for all t_i ∈ [t_1, t_2]
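
A numerical sketch of the integral functional (my own illustration, not from the slides): approximate Y = ∫_0^1 X(t) dt by a Riemann sum for standard Brownian motion, whose integral is Gaussian with mean 0 and variance 1/3. Brownian motion is only previewed in the roadmap here; the grid size, number of paths and seed are arbitrary.

```python
# Riemann-sum approximation of the integral functional of Brownian motion on [0, 1].
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_grid = 10_000, 500
dt = 1.0 / n_grid

increments = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_grid))
B = np.cumsum(increments, axis=1)          # Brownian motion sampled on a grid
Y = B.sum(axis=1) * dt                     # Riemann-sum approximation of the integral

print(Y.mean(), Y.var())                   # approx. 0 and 1/3
print(np.mean(Y**3))                       # third moment approx. 0 (symmetry check)
```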

14. Joint pdfs in a Gaussian process
◮ Consider times t_1, ..., t_n. The mean value µ(t_i) at such times is µ(t_i) = E[X(t_i)]
◮ The covariance between values at times t_i and t_j is

\[
C(t_i, t_j) = E\Big[ \big(X(t_i) - \mu(t_i)\big)\big(X(t_j) - \mu(t_j)\big) \Big]
\]

◮ Covariance matrix for values X(t_1), ..., X(t_n) is then

\[
C(t_1, \ldots, t_n) =
\begin{pmatrix}
C(t_1,t_1) & C(t_1,t_2) & \cdots & C(t_1,t_n) \\
C(t_2,t_1) & C(t_2,t_2) & \cdots & C(t_2,t_n) \\
\vdots & \vdots & \ddots & \vdots \\
C(t_n,t_1) & C(t_n,t_2) & \cdots & C(t_n,t_n)
\end{pmatrix}
\]

◮ Joint pdf of X(t_1), ..., X(t_n) then given as

\[
f_{X(t_1),\ldots,X(t_n)}(x_1,\ldots,x_n) = N\big( [x_1,\ldots,x_n]^T ;\, [\mu(t_1),\ldots,\mu(t_n)]^T ,\, C(t_1,\ldots,t_n) \big)
\]
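
A minimal sketch (not from the slides) of how the covariance matrix C(t_1, ..., t_n) is used in practice: build C(t_i, t_j) on a time grid and draw joint samples of X(t_1), ..., X(t_n). The kernel C(s, t) = min(s, t) (Brownian motion, previewed in the roadmap) and µ(t) = 0 are assumptions made for this example.

```python
# Build the covariance matrix of a Gaussian process on a time grid and sample paths.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.01, 1.0, 50)                    # times t_1, ..., t_n
C = np.minimum.outer(t, t)                        # C[i, j] = min(t_i, t_j)
mu = np.zeros_like(t)                             # mean value function mu(t) = 0

paths = rng.multivariate_normal(mu, C, size=5)    # each row is one sampled path
print(paths.shape)                                # (5, 50)
```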

15. Mean value and autocorrelation functions
◮ To specify a Gaussian process, it suffices to specify:
  ⇒ Mean value function ⇒ µ(t) = E[X(t)]; and
  ⇒ Autocorrelation function ⇒ R(t_1, t_2) = E[X(t_1) X(t_2)]
◮ Autocovariance obtained as C(t_1, t_2) = R(t_1, t_2) − µ(t_1) µ(t_2)
◮ For simplicity, will mostly consider processes with µ(t) = 0
  ⇒ Otherwise, can define the process Y(t) = X(t) − µ_X(t)
  ⇒ In such case C(t_1, t_2) = R(t_1, t_2) because µ_Y(t) = 0
◮ Autocorrelation is a symmetric function of the two variables t_1 and t_2

\[
R(t_1, t_2) = R(t_2, t_1)
\]
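
A quick empirical check (my own sketch, not from the slides) of C(t_1, t_2) = R(t_1, t_2) − µ(t_1)µ(t_2): estimate R and C from sample paths of an assumed process with nonzero mean (a random slope plus small noise) and compare the two sides.

```python
# Estimate autocorrelation and autocovariance at two times and verify C = R - mu1*mu2.
import numpy as np

rng = np.random.default_rng(5)
n_paths = 200_000
t1, t2 = 0.3, 0.7

A = rng.normal(loc=1.0, scale=0.5, size=n_paths)      # random slope, so mu(t) = t
X_t1 = A * t1 + 0.1 * rng.normal(size=n_paths)
X_t2 = A * t2 + 0.1 * rng.normal(size=n_paths)

R = np.mean(X_t1 * X_t2)                               # autocorrelation estimate
C = np.mean((X_t1 - X_t1.mean()) * (X_t2 - X_t2.mean()))  # autocovariance estimate
print(C, R - X_t1.mean() * X_t2.mean())                # the two should match
```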
