  1. EE558 - Digital Communications, Lecture 3: Review of Probability and Random Processes. Dr. Duy Nguyen

  2. Outline: (1) Introduction; (2) Probability and Random Variables; (3) Random Processes

  3. Introduction: The main objective of a communication system is the transfer of information over a channel. The message signal is best modeled as a random signal. Two types of imperfections arise in a communication channel: ◮ deterministic imperfections, such as linear and nonlinear distortion and inter-symbol interference; ◮ nondeterministic imperfections, such as additive noise, interference, and multipath fading. We are concerned with the methods used to describe and characterize a random signal, generally referred to as a random process (also commonly called a stochastic process). In essence, a random process is a random variable evolving in time.

  4. Outline: (1) Introduction; (2) Probability and Random Variables; (3) Random Processes

  5. Sample Space and Probability: A random experiment is one whose outcome, for some reason, cannot be predicted with certainty. Examples: throwing a die, flipping a coin, and drawing a card from a deck. The sample space is the set of all possible outcomes, denoted by Ω. Outcomes are denoted by ω's, and each ω lies in Ω, i.e., ω ∈ Ω. A sample space can be discrete or continuous. Events are subsets of the sample space for which measures of their occurrences, called probabilities, can be defined or determined.

  6. Example of Throwing a Fair Die: Here Ω = {1, 2, 3, 4, 5, 6}. Various events can be defined, such as "the outcome is an even number of dots", "the outcome is fewer than 4 dots", or "the outcome is more than 3 dots".

  7. Three Axioms of Probability: For a discrete sample space Ω, define a probability measure P on Ω as a set function that assigns nonnegative values to all events E in Ω such that the following conditions are satisfied. Axiom 1: 0 ≤ P(E) ≤ 1 for every event E (on a % scale, probability ranges from 0 to 100%; despite popular sports lore, it is impossible to give more than 100%). Axiom 2: P(Ω) = 1 (when an experiment is conducted, there has to be an outcome). Axiom 3: For mutually exclusive events E_1, E_2, E_3, ..., we have P(⋃_{i=1}^∞ E_i) = ∑_{i=1}^∞ P(E_i). (The events E_1, E_2, E_3, ... are mutually exclusive if E_i ∩ E_j = ∅ for all i ≠ j, where ∅ is the null set.)
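As a quick sanity check, here is a minimal Python sketch, assuming a fair-die model (equally likely outcomes, an assumption for illustration and not part of the slide), that verifies the three axioms on a discrete sample space:

```python
# Fair-die model: every outcome equally likely (an illustrative assumption).
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
P = lambda E: Fraction(len(E), len(omega))  # probability of an event E ⊆ Ω

# Axiom 1: 0 <= P(E) <= 1, checked on a few representative events
for E in [set(), {2, 4, 6}, omega]:
    assert 0 <= P(E) <= 1

# Axiom 2: P(Ω) = 1 -- some outcome must occur
assert P(omega) == 1

# Axiom 3 (finite form): additivity over mutually exclusive events
E1, E2 = {1, 2}, {5, 6}
assert E1 & E2 == set()             # mutually exclusive: E1 ∩ E2 = ∅
assert P(E1 | E2) == P(E1) + P(E2)
```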

  8. Important Properties of the Probability Measure: 1. P(E^c) = 1 − P(E), where E^c denotes the complement of E. This property implies that P(E^c) + P(E) = 1, i.e., something has to happen. 2. P(∅) = 0 (again, something has to happen). 3. P(E_1 ∪ E_2) = P(E_1) + P(E_2) − P(E_1 ∩ E_2). Note that if two events E_1 and E_2 are mutually exclusive, then P(E_1 ∪ E_2) = P(E_1) + P(E_2); otherwise the nonzero common probability P(E_1 ∩ E_2) needs to be subtracted off. 4. If E_1 ⊆ E_2, then P(E_1) ≤ P(E_2). This says that if event E_1 is contained in E_2, then occurrence of E_1 means E_2 has occurred, but the converse is not true.
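A short sketch in the same spirit, again under the assumed fair-die model, illustrating all four properties:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
P = lambda E: Fraction(len(E), len(omega))

E1, E2 = {2, 4, 6}, {4, 5, 6}       # "even" and "more than 3 dots"

assert P(omega - E1) == 1 - P(E1)                # 1. complement rule
assert P(set()) == 0                             # 2. null event
assert P(E1 | E2) == P(E1) + P(E2) - P(E1 & E2)  # 3. inclusion-exclusion
assert P({4, 6}) <= P(E2)                        # 4. monotonicity: {4,6} ⊆ E2
```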

  9. Conditional Probability: We observe or are told that event E_1 has occurred but are actually interested in event E_2. Knowledge that E_1 has occurred changes the probability of E_2 occurring: if it was P(E_2) before, it now becomes P(E_2 | E_1), the probability of E_2 occurring given that event E_1 has occurred. This conditional probability is given by P(E_2 | E_1) = P(E_2 ∩ E_1) / P(E_1) if P(E_1) ≠ 0, and P(E_2 | E_1) = 0 otherwise. If P(E_2 | E_1) = P(E_2), or equivalently P(E_2 ∩ E_1) = P(E_1) P(E_2), then E_1 and E_2 are said to be statistically independent. Bayes' rule: P(E_2 | E_1) = P(E_1 | E_2) P(E_2) / P(E_1).
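To make the definition concrete, a minimal sketch (fair-die model assumed, as before) computing a conditional probability and checking Bayes' rule:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
P = lambda E: Fraction(len(E), len(omega))

def P_cond(E2, E1):
    """P(E2 | E1); zero by convention when P(E1) = 0."""
    return P(E2 & E1) / P(E1) if P(E1) != 0 else Fraction(0)

E1 = {4, 5, 6}   # "more than 3 dots" has occurred
E2 = {2, 4, 6}   # "even number of dots"

print(P_cond(E2, E1))   # 2/3: knowledge of E1 raised P(E2) from 1/2
assert P_cond(E2, E1) == P_cond(E1, E2) * P(E2) / P(E1)  # Bayes' rule
```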

  10. Total Probability Theorem: The events {E_i}_{i=1}^n partition the sample space Ω if: (i) ⋃_{i=1}^n E_i = Ω, and (ii) E_i ∩ E_j = ∅ for all 1 ≤ i, j ≤ n and i ≠ j. If for an event A we have the conditional probabilities {P(A | E_i)}_{i=1}^n, then P(A) can be obtained as P(A) = ∑_{i=1}^n P(E_i) P(A | E_i). Bayes' rule: P(E_i | A) = P(A | E_i) P(E_i) / P(A) = P(A | E_i) P(E_i) / ∑_{j=1}^n P(A | E_j) P(E_j).
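As an illustration in the communications setting, the sketch below applies the theorem to a hypothetical binary channel: the partition {E_0, E_1} is "bit 0 sent" / "bit 1 sent", A is "a 1 is received", and all the numbers are invented for the example:

```python
priors = {0: 0.6, 1: 0.4}       # P(E_i): probability that bit i is sent
P_A_given = {0: 0.1, 1: 0.9}    # P(A | E_i): "1 received" given bit i sent

# Total probability theorem: P(A) = sum_i P(E_i) P(A | E_i)
P_A = sum(priors[i] * P_A_given[i] for i in priors)

# Bayes' rule: P(E_i | A) = P(A | E_i) P(E_i) / P(A)
posteriors = {i: P_A_given[i] * priors[i] / P_A for i in priors}

print(P_A)          # 0.42
print(posteriors)   # {0: ~0.143, 1: ~0.857}: bit 1 was most likely sent
```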

  11. Random Variables: [Figure: a mapping from outcomes ω_1, ω_2, ω_3, ω_4 in Ω to real values x(ω_1), x(ω_2), x(ω_3), x(ω_4) on the real line R.] A random variable is a mapping from the sample space Ω to the set of real numbers. We shall denote random variables by boldface, i.e., x, y, etc., while individual or specific values of the mapping x are denoted by x(ω).

  12. Random Variable in the Example of Throwing a Fair Die: [Figure: the six outcomes in Ω mapped to the values 1, 2, 3, 4, 5, 6 on the real line R.] There could be many other random variables defined to describe the outcome of this random experiment!

  13. Cumulative Distribution Function (cdf): The cdf gives a complete description of the random variable. It is defined as F_x(x) = P(ω ∈ Ω : x(ω) ≤ x) = P(x ≤ x). The cdf has the following properties: 1. 0 ≤ F_x(x) ≤ 1. 2. F_x(x) is nondecreasing: F_x(x_1) ≤ F_x(x_2) if x_1 ≤ x_2. 3. F_x(−∞) = 0 and F_x(+∞) = 1. 4. P(a < x ≤ b) = F_x(b) − F_x(a).
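Property 4 is how interval probabilities are computed in practice. A minimal sketch (scipy assumed available; the standard Gaussian cdf is used purely as an example distribution):

```python
import numpy as np
from scipy.stats import norm

a, b = -1.0, 1.0                             # illustrative interval
print(norm.cdf(b) - norm.cdf(a))             # P(a < x <= b), ~0.6827
print(norm.cdf(-np.inf), norm.cdf(np.inf))   # property 3: 0.0 and 1.0
```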

  14. Typical Plots of cdf I: A random variable can be discrete, continuous, or mixed. [Figure (a): a typical cdf F_x(x), rising from 0 to 1.]

  15. Typical Plots of cdf II: [Figures (b) and (c): two more typical cdfs F_x(x), each rising from 0 to 1.]

  16. Probability Density Function (pdf): The pdf is defined as the derivative of the cdf: f_x(x) = dF_x(x)/dx. It follows that P(x_1 ≤ x ≤ x_2) = P(x ≤ x_2) − P(x ≤ x_1) = F_x(x_2) − F_x(x_1) = ∫_{x_1}^{x_2} f_x(x) dx. Basic properties of the pdf: 1. f_x(x) ≥ 0. 2. ∫_{−∞}^{∞} f_x(x) dx = 1. 3. In general, P(x ∈ A) = ∫_A f_x(x) dx. For discrete random variables, it is more common to define the probability mass function (pmf): p_i = P(x = x_i). Note that, for all i, one has p_i ≥ 0 and ∑_i p_i = 1.
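A small numerical check of these properties (scipy assumed; the standard Gaussian pdf is again just an example density):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

total, _ = quad(norm.pdf, -np.inf, np.inf)   # property 2: total mass is 1
pA, _ = quad(norm.pdf, -1.0, 2.0)            # property 3 with A = [-1, 2]

print(total)                                 # ~1.0
print(pA, norm.cdf(2.0) - norm.cdf(-1.0))    # matches F_x(2) - F_x(-1)
```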

  17. Bernoulli Random Variable: [Figure: pmf and cdf of a Bernoulli random variable, with mass 1 − p at x = 0 and mass p at x = 1.] A discrete random variable that takes the two values 1 and 0 with probabilities p and 1 − p, respectively. A good model for a binary data source whose output is 1 or 0. Can also be used to model channel errors.
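A minimal sampling sketch (numpy assumed; p = 0.3 is an arbitrary illustrative value) of such a binary source:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                        # illustrative source probability
bits = (rng.random(100_000) < p).astype(int)   # Bernoulli(p) 0/1 samples
print(bits.mean())                             # empirical frequency of 1's, ~0.3
```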

  18. Binomial Random Variable: [Figure: a typical binomial pmf, plotted for x from 0 to 6.] A discrete random variable that gives the number of 1's in a sequence of n independent Bernoulli trials: f_x(x) = ∑_{k=0}^{n} C(n, k) p^k (1 − p)^{n−k} δ(x − k), where the binomial coefficient C(n, k) = n! / (k! (n − k)!).
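The sketch below (numpy/scipy assumed; n and p are illustrative) counts the 1's in blocks of n Bernoulli trials and compares the empirical frequencies with the pmf above:

```python
import numpy as np
from scipy.stats import binom

n, p = 6, 0.5
rng = np.random.default_rng(0)
counts = (rng.random((100_000, n)) < p).sum(axis=1)  # 1's per n-trial block

for k in range(n + 1):
    # empirical frequency of k vs C(n, k) p^k (1 - p)^(n - k)
    print(k, (counts == k).mean(), binom.pmf(k, n, p))
```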

  19. Uniform Random Variable: [Figure: pdf of height 1/(b − a) on [a, b], and the corresponding cdf.] A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length. The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.
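For instance, the carrier-phase model mentioned above can be sketched as follows (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2 * np.pi, size=100_000)  # phase ~ U[0, 2π)
print(phase.mean())   # ~π, the midpoint (a + b) / 2 of the interval
```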

  20. Gaussian (or Normal) Random Variable: [Figure: the bell-shaped Gaussian pdf with peak 1/√(2πσ²) at x = µ, and the corresponding cdf.] A continuous random variable whose pdf is f_x(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)), where µ and σ² are parameters. Usually denoted as N(µ, σ²). It is the most important and most frequently encountered random variable in communications.
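A quick check (numpy assumed; µ and σ are illustrative) that a histogram of N(µ, σ²) samples follows the pdf above:

```python
import numpy as np

mu, sigma = 1.0, 2.0
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=200_000)   # samples of N(mu, sigma^2)

def gauss_pdf(t):
    # the slide's formula: exp(-(t - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

hist, edges = np.histogram(x, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.abs(hist - gauss_pdf(centers)).max())   # small deviation
```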

  21. Functions of a Random Variable: The function y = g(x) is itself a random variable. From the definition, the cdf of y can be written as F_y(y) = P(ω ∈ Ω : g(x(ω)) ≤ y). Assume that for all y, the equation g(x) = y has a countable number of solutions and that at each solution point dg(x)/dx exists and is nonzero. Then the pdf of y = g(x) is f_y(y) = ∑_i f_x(x_i) / |dg(x)/dx|_{x = x_i}, where {x_i} are the solutions of g(x) = y. A linear function of a Gaussian random variable is itself a Gaussian random variable.
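For the linear case g(x) = ax + b there is a single solution x_1 = (y − b)/a with |dg/dx| = |a|, so the formula gives f_y(y) = f_x((y − b)/a)/|a|, i.e., y ~ N(aµ + b, a²σ²) when x ~ N(µ, σ²). A Monte Carlo sketch of this claim (numpy assumed; all parameter values are illustrative):

```python
import numpy as np

mu, sigma, a, b = 0.0, 1.0, 3.0, -2.0
rng = np.random.default_rng(0)
y = a * rng.normal(mu, sigma, size=200_000) + b   # y = g(x) = a*x + b

print(y.mean(), y.std())   # ~(a*mu + b, |a|*sigma) = (-2.0, 3.0)
```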

  22. Expectation of Random Variables I: Statistical averages, or moments, play an important role in the characterization of the random variable. The expected value (also called the mean value, or first moment) of the random variable x is defined as m_x = E{x} ≡ ∫_{−∞}^{∞} x f_x(x) dx, where E denotes the statistical expectation operator. In general, the n-th moment of x is defined as E{x^n} ≡ ∫_{−∞}^{∞} x^n f_x(x) dx.
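A numerical sketch of both definitions (numpy/scipy assumed; the Gaussian parameters are illustrative), computing the first and second moments by the defining integral and by sample averaging:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 1.0, 2.0
pdf = lambda t: norm.pdf(t, loc=mu, scale=sigma)

m1, _ = quad(lambda t: t * pdf(t), -np.inf, np.inf)       # E{x} = mu
m2, _ = quad(lambda t: t ** 2 * pdf(t), -np.inf, np.inf)  # E{x^2} = mu^2 + sigma^2
print(m1, m2)   # ~1.0, ~5.0

rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=200_000)
print(x.mean(), (x ** 2).mean())   # sample estimates agree
```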
