SLIDE 1

What is an experiment?

In an experiment, a Physical Phenomenon is isolated as well as possible from the environment, and its effect on (or interaction with) a measured quantity is recorded. Using a model: if the physical interaction is well known, information can be obtained about the sample; or, if the sample is well known, information about the interaction can be obtained. What we learn in this course are ways to address different parts of the above statement.

SLIDE 2

What is an experiment?

In an experiment, the effect induces changes in the measured quantity. Because absolute measurements are either hard or impossible, most of the time we look at differences (or changes) in the measured quantity. Examples:

  • plot temperature as a function of time
  • measure current as a function of voltage
  • measure luminescence as a function of wavelength

SLIDE 3

Signals and Noise

  • In the broadest sense, a signal is
    – a time-varying quantity, or
    – a sequence of numbers
  • In information theory, a signal is a sequence of numbers carrying a message (coded with an alphabet).
  • Noise is an unintended (unwanted) and random addition to the signal that cannot be separated from the signal.

SLIDE 4

Signal and Noise example

[Figure: example trace showing the signal together with interference and noise]

SLIDE 5

Example : Sinusoidal and Noise

[Figure: oscilloscope traces vs. time — the original sinusoid, and the original sinusoid + added noise]
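The sinusoid-plus-noise example above can be sketched numerically. A minimal illustration (the amplitude, noise level, and sample count are arbitrary choices, not values from the slides):

```python
import math
import random

random.seed(0)

# One period of a unit-amplitude sinusoid sampled at 100 points.
n_points = 100
t = [k / n_points for k in range(n_points)]
clean = [math.sin(2 * math.pi * tk) for tk in t]

# Add zero-mean Gaussian noise; sigma here is an arbitrary choice.
sigma = 0.2
noisy = [s + random.gauss(0.0, sigma) for s in clean]
```

Plotting `clean` and `noisy` against `t` reproduces the two oscilloscope-style traces.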

SLIDE 6

Why can’t we separate the noise?

  • Noise is RANDOM, i.e. we don't know it a priori, and we cannot estimate it perfectly (otherwise it wouldn't be noise, but some sort of interference).

"I will give you 50 + (a random number between −50 and 50) YTL." "I cannot decide whether I should be happy! The amount can be anywhere from 0 to 100 YTL."

SLIDE 7

How strong the noise is compared to the signal makes a difference.

"I will give you 50 + (a random number between −5 and 5) YTL." "The amount can be 45 to 55 YTL. I have a much better idea of how much you will give me!"

SLIDE 8

Signal to Noise Ratio

Informal, verbal definition of the Signal-to-Noise Ratio (SNR):

SNR = (Signal Power) / (Noise Power)

Generally we want as high an SNR as possible; this makes our measurement result more accurate.
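As a sketch of this definition, the power ratio can be estimated from sampled waveforms as the ratio of mean-square values. The signal (a unit-amplitude sinusoid, mean-square power A²/2 = 0.5) and the noise level are hypothetical values chosen for illustration:

```python
import math
import random

random.seed(1)

# Hypothetical signal: unit-amplitude sinusoid; noise: zero-mean Gaussian.
amplitude, sigma, n = 1.0, 0.1, 100_000

signal = [amplitude * math.sin(2 * math.pi * k / 100) for k in range(n)]
noise = [random.gauss(0.0, sigma) for _ in range(n)]

signal_power = sum(s * s for s in signal) / n  # A^2/2 = 0.5 for a sinusoid
noise_power = sum(v * v for v in noise) / n    # ~ sigma^2 = 0.01

snr = signal_power / noise_power               # roughly 50 for these values
```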

SLIDE 9

Signal to Noise Ratio

SNR = (Signal Power) / (Noise Power)

The limit of our measurement is the noise level. A measurement with SNR ≈ 1, i.e. signal ≈ noise, is barely acceptable.

SLIDE 10

We measure Voltages (most of the time anyways)

  • For pedagogical reasons, during the discussion of noise and signals, we will use voltages and currents.
  • Other signals and noise sources can be understood easily by extending your understanding of voltage and current waveforms.

SLIDE 11

Noise and Random Numbers

Signal (we want to know this) + Noise (a random number) = Measurement Result (we know this)

We have to refresh our memory about random numbers.

SLIDE 12

Discrete Random Numbers (variables)

  • A series of numbers X
    – gives a different outcome for each trial
    – trials are INDEPENDENT of each other

Trial 1: X1 = 1, Trial 2: X2 = 3, Trial 3: X3 = 2, Trial 4: X4 = 5, ...

SLIDE 13

Discrete Random Variables

  • A series of numbers Xn
    – Results of trials take values from a set (the space of outcomes, Ω): Xn ∈ Ω = {1, 2, 3, 4, 5, 6}

Each outcome value has a probability assigned to it, pi.

SLIDE 14

Uniformly Distributed Random Var.

Example: Xn ∈ Ω = {1, 2, 3, 4, 5, 6}

p1 = p2 = p3 = p4 = p5 = p6 = 1/6

Σi pi = 1 (probabilities add up to 1)

SLIDE 15

Uniformly Distributed Random Var.

Example: Xn ∈ Ω = {0, 1}

p0 = ½, p1 = ½, and Σi pi = 1

SLIDE 16

Concept of Probability Distribution

Xn ∈ Ω = {1, 2, 3, 4, 5, 6}, with p1 = p2 = p3 = p4 = p5 = p6 = 1/6 and Σi pi = 1.

This assignment of a probability to each outcome is the probability mass function.

SLIDE 17

Expected value of a Random Variable

Different notations can be used: E(X) = μ or ⟨X⟩ = μ, read "the expected value of X is μ".

SLIDE 18

Expected value of a Random Variable

E(X) = ⟨X⟩ = μ = Σi pi Xi

This is the weighted average of the outcomes, using the probabilities as weights. The expectation shows the "center of mass" of the probability distribution. For the die outcomes, μ = 3.5.
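The weighted average for the fair die can be checked in a couple of lines:

```python
# Expected value of a fair die: weighted average of the outcomes with
# the probabilities p_i = 1/6 as weights.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mu = sum(p * x for p, x in zip(probs, outcomes))  # -> 3.5
```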

SLIDE 19

Variance of a Random Variable

σX² = E((X − μ)²) = ⟨X²⟩ − ⟨X⟩²

Variance is a measure of the width of the probability distribution.
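Both variance formulas can be verified on the fair-die example (a worked example, not from the slides):

```python
# Variance of a fair die via the two equivalent formulas:
# sigma^2 = E((X - mu)^2) and sigma^2 = <X^2> - <X>^2.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mu = sum(p * x for p, x in zip(probs, outcomes))                        # 3.5
var_centered = sum(p * (x - mu) ** 2 for p, x in zip(probs, outcomes))
var_moments = sum(p * x * x for p, x in zip(probs, outcomes)) - mu ** 2
# both equal 35/12, about 2.917
```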

SLIDE 20

Variance of a Random Variable

σX² = ⟨X²⟩ − ⟨X⟩²

σX is very closely related to the noise amplitude.

SLIDE 21

Standard Deviation

σX = √(⟨X²⟩ − ⟨X⟩²), the square root of the variance.

SLIDE 22

Binomial Distribution

p0 = p, p1 = q = 1 − p, so p + q = 1.

The binomial distribution describes a coin toss with an unbalanced coin: you toss it N times and count the number of heads. Your result X is the number of heads.


slide-23
SLIDE 23

Binomial Distribution

p0 = p, p1 = q = 1 − p. Example: 4 trials, p = q = ½.

Each of the 2⁴ = 16 outcome sequences has probability 1/16; e.g. 0010 occurs with probability q·q·p·q = 1/16, 0001 with q·q·q·p = 1/16, and 0000 with q·q·q·q = 1/16.

Grouped by the number of 1s:

  Number of 1s:  4     3     2     1     0
  Probability:   1/16  4/16  6/16  4/16  1/16

Increasing number of trials
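The 16-outcome table above can be reproduced by brute-force enumeration:

```python
from itertools import product

# Enumerate all 2^4 = 16 outcomes of 4 fair tosses (p = q = 1/2) and
# count heads, reproducing the 1/16, 4/16, 6/16, 4/16, 1/16 table.
counts = {k: 0 for k in range(5)}
for outcome in product([0, 1], repeat=4):
    counts[sum(outcome)] += 1

probs = {k: c / 16 for k, c in counts.items()}
```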

SLIDE 24

Binomial Distribution

p0 = p, p1 = q = 1 − p.

Increasing the number of trials: in the limit of a large number of trials, the shape changes between a Gaussian (p = 0.5, q = 0.5) and something like an exponential (p = 0.01, q = 0.99), depending on p and q!


slide-25
SLIDE 25

Discrete and Continuous Distributions

Discrete: outcomes are discrete numbers, described by a probability mass function, with μ = Σi pi Xi.

Continuous: outcomes are real numbers, described by a probability distribution function.

SLIDE 26

Continuous Distributions

Outcomes are real numbers, described by a probability distribution function f(x): dP = f(x) dx is the probability that the outcome is between x and x + dx.
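As a sanity check, the probabilities dP = f(x) dx should add up to 1. A quick numerical sketch with a standard Gaussian density and a simple Riemann sum:

```python
import math

# Check numerically that the probabilities dP = f(x) dx sum to 1 for a
# standard Gaussian density f(x), using a Riemann sum over [-8, 8].
def f(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

dx = 0.001
total = sum(f(-8 + k * dx) * dx for k in range(int(16 / dx)))  # ~ 1
```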

SLIDE 27

Normal Distribution and Binomial Distribution

There is a special relation between the Binomial and Gaussian distributions: the Gaussian is a limit of the binomial if p is about 0.5 and the number of trials is large.

slide-28
SLIDE 28

Poisson Distribution and Binomial Distribution

There is a special relation between the Binomial and Poisson distributions: the Poisson probability mass function is a limit of the binomial when the number of trials is large and p << 1, with N·p held fixed. It is important because electrons and photons are quantized (discrete); if we measure them at regular intervals, we are counting the success events.
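The limit can be illustrated by comparing the two probability mass functions directly (the particular N, p, and λ = N·p below are illustrative choices):

```python
from math import comb, exp, factorial

# Compare the binomial pmf with its Poisson limit: N large, p << 1,
# lam = N*p held fixed.
N, p = 1000, 0.002
lam = N * p  # = 2

def binom_pmf(k):
    return comb(N, k) * p**k * (1 - p) ** (N - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

# The two pmfs agree closely for small k.
max_diff = max(abs(binom_pmf(k) - poisson_pmf(k)) for k in range(10))
```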

SLIDE 29

Normal Distribution and Central Limit Theorem

So important that Gauss, together with the Gaussian curve, appeared on the German 10-Mark banknote.

slide-30
SLIDE 30

Normal Distribution and the Central Limit Theorem

The sum of a large number of arbitrary random variables converges to a Gaussian distribution: X, Y = X1 + X2, Z = X1 + X2 + X3, K = X1 + X2 + X3 + X4, ...

  • The convergence can be rather quick.
  • This theorem explains why Gaussian-distributed noise is commonly observed.

We skip the proof; it can be found elsewhere (Wikipedia, etc.).
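How quick the convergence is can be seen with a classic sketch: summing 12 uniform random variables already gives a distribution very close to Gaussian (the choice of 12 copies and 50,000 repetitions is arbitrary):

```python
import random
from statistics import mean, pstdev

random.seed(2)

# Sum 12 independent uniform(0,1) variables; by the central limit theorem
# the sums are approximately Gaussian with mean 6 and variance 12*(1/12) = 1.
sums = [sum(random.random() for _ in range(12)) for _ in range(50_000)]

m, s = mean(sums), pstdev(sums)
frac_1sigma = sum(abs(x - m) < s for x in sums) / len(sums)  # ~0.68 if Gaussian
```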

SLIDE 31

Mean and Variance of Random Variables upon addition

Gaussian (Normal) Random Variable

SLIDE 32

Observations

  • 1. Addition of a constant signal to the random variable does not increase the variance.
  • 2. Multiplication by a constant c multiplies the variance as well (by c²; the standard deviation is multiplied by |c|).
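Both observations can be checked numerically on Gaussian samples (sample size and constants are arbitrary choices):

```python
import random
from statistics import pvariance

random.seed(3)

# (1) adding a constant leaves the variance unchanged;
# (2) multiplying by c scales the variance by c^2.
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]
c = 3.0

var_x = pvariance(x)
var_shifted = pvariance([xi + 5.0 for xi in x])  # same as var_x
var_scaled = pvariance([c * xi for xi in x])     # c^2 * var_x
```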

SLIDE 33

Observations

  • 1. Addition of two random variables adds the means linearly.
  • 2. Standard deviations add in second order (in quadrature): the variances add.
  • 3. Addition of N random variables of the same distribution, with mean μ and standard deviation σ, results in μs = Nμ and σs = σ√N.

SLIDE 34

Observations

  • 1. Subtraction increases the standard deviation just like addition does; subtracting two noisy waveforms does not cancel the noise.

SLIDE 35

Averaging

SLIDE 36

Ensemble Averaging

Ensemble of experiments: N copies of the same system, each having its own noise (but with the same distribution).

SLIDE 37

Averaging

μs = Nμ, σs = σ√N

If SNR is defined as μ/σ, then by making multiple measurements and adding the results we get

μs / σs = Nμ / (σ√N) = (μ/σ) √N

There is an improvement in the SNR by a factor of √N!
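The √N improvement can be demonstrated by simulation: averaging N noisy measurements of a constant signal (the values of μ, σ, N, and the number of repeated trials below are arbitrary illustrative choices):

```python
import random
from statistics import mean, pstdev

random.seed(5)

# Average N noisy measurements of a constant signal mu (noise sigma).
# The average keeps mean mu but has standard deviation sigma/sqrt(N),
# so SNR = mu/sigma improves by sqrt(N).
mu, sigma, N, trials = 1.0, 0.5, 100, 20_000

averages = [mean(random.gauss(mu, sigma) for _ in range(N))
            for _ in range(trials)]

single_snr = mu / sigma                           # 2
averaged_snr = mean(averages) / pstdev(averages)  # ~ 2 * sqrt(100) = 20
```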