Probability and Statistics: Basic concepts
(from a physicist point of view)
Benoit CLEMENT – Université J. Fourier / LPSC
bclement@lpsc.in2p3.fr
References:
Kendall's Advanced Theory of Statistics, Hodder Arnold Pub.
  volume 1: Distribution Theory, A. Stuart and K. Ord
  volume 2a: Classical Inference and the Linear Model, A. Stuart, K. Ord, S. Arnold
  volume 2b: Bayesian Inference, A. O'Hagan, J. Forster
The Review of Particle Physics, K. Nakamura et al., J. Phys. G 37, 075021 (2010) (+ Booklet)
Data Analysis: A Bayesian Tutorial, D. Sivia and J. Skilling, Oxford Science Publications
Statistical Data Analysis, Glen Cowan, Oxford Science Publications
Analyse statistique des données expérimentales, K. Protassov, EDP Sciences
Probabilités, analyse des données et statistiques, G. Saporta, Technip
Analyse de données en sciences expérimentales, B.C., Dunod
SAMPLE: finite size, selected through a random process (the measurement results).
POPULATION: potentially infinite size.
SAMPLE: finite size, observables xi.
POPULATION: described by f(x; θ), with physics parameters θ.
EXPERIMENT: from population to sample; INFERENCE: from sample back to the population parameters.
Frequentist probability: repeat a random process a great number of times n, and count the number of times the outcome satisfies event A, nA. Then the ratio
P(A) = lim(n→+∞) nA/n
defines a probability.

Bayesian probability: P(A) quantifies the credibility associated to the event.
Cumulative density function: F(x) = P(X < x). Probability density function f(x):
dF = F(x+dx) − F(x) = P(X < x+dx) − P(X < x)
   = P(X < x or x ≤ X < x+dx) − P(X < x)
   = P(X < x) + P(x ≤ X < x+dx) − P(X < x)
   = P(x ≤ X < x+dx) = f(x)dx
Note: discrete variables can also be described by a probability density function using Dirac distributions: f(x) = Σi p(i) δ(x − xi).
By construction:
F(a) = P(X < a) = ∫(−∞→a) f(x)dx
P(a ≤ X < b) = F(b) − F(a)
F(−∞) = P(Ø) = 0 ; F(+∞) = P(Ω) = ∫(−∞→+∞) f(x)dx = 1
(Illustration: a pdf f(x) and its cumulative F(x), rising from 0 to 1.)
For any function g(x), the expectation of g is E[g] = ∫ g(x) f(x) dx : the mean value of g.
Moments: μk is the expectation of X^k. 0th moment: μ0 = 1 (pdf normalization); 1st moment: μ1 = μ (mean).
X' = X − μ1 is a central variable. 2nd central moment: μ'2 = σ² (variance).
Characteristic function: φX(t) = E[e^(itX)] = ∫ e^(itx) f(x) dx
From the Taylor expansion of the exponential:
φX(t) = Σk (it)^k μk / k!, hence μk = i^(−k) [d^k φX/dt^k] at t = 0
The pdf is entirely defined by its moments. The CF is a useful tool for demonstrations.
A sample is obtained from a random drawing within a population, described by a probability density function. We are going to discuss how to characterize, independently from one another:
- the population
- the sample
To this end, it is useful to consider a sample as a finite set from which one can randomly draw elements, with equiprobability 1/n. We can then associate to this process a probability density, the empirical density or sample density:
f_sample(x) = (1/n) Σi δ(x − xi)
This density will be useful to translate properties of distributions to a finite sample.
Mean value: sum (integral) of all possible values weighted by the probability of occurrence:
population: μ = ∫(−∞→+∞) x f(x) dx ; sample (size n): m = (1/n) Σ(i=1→n) xi

Standard deviation (σ) and variance (v = σ²): mean value of the squared deviation to the mean:
population: σ² = ∫ (x − μ)² f(x) dx ; sample (size n): s² = (1/n) Σ(i=1→n) (xi − m)²
Koenig's theorem: σ² = E[X²] − μ² (the mean of the squares minus the square of the mean).
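A minimal numerical sketch of these definitions (the sample values are made up, not from the slides): the 1/n sample variance agrees with Koenig's theorem.

```python
# Hypothetical measurements, used only to illustrate the formulas above.
sample = [1.2, 0.7, 2.4, 1.9, 0.8, 1.5]

n = len(sample)
m = sum(sample) / n                          # sample mean
s2 = sum((x - m) ** 2 for x in sample) / n   # sample variance (1/n convention)

# Koenig's theorem: mean of the squares minus the square of the mean
s2_koenig = sum(x * x for x in sample) / n - m ** 2

print(m, s2, abs(s2 - s2_koenig) < 1e-12)
```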
Binomial distribution: randomly choosing K objects within a finite set of n, with a fixed drawing probability of p.
Variable: K ; Parameters: n, p
Law: P(k; n, p) = n!/(k!(n−k)!) p^k (1−p)^(n−k)
Mean: np ; Variance: np(1−p)

Poisson distribution: limit of the binomial when n→+∞, p→0, np = λ. Counting events with fixed probability per time/space unit.
Variable: K ; Parameter: λ
Law: P(k; λ) = e^(−λ) λ^k / k!
Mean: λ ; Variance: λ
(Example plot: binomial with p = 0.65, n = 10, compared to Poisson with λ = 6.5.)
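The Poisson limit can be checked numerically; a sketch with illustrative numbers (λ = 6.5 as in the plot, n chosen arbitrarily):

```python
import math

def binomial_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 6.5
gaps = {}
for n in (10, 100, 10000):
    p = lam / n  # keep np = λ fixed while n grows
    gaps[n] = max(abs(binomial_pmf(k, n, p) - poisson_pmf(k, lam))
                  for k in range(10))
    print(n, gaps[n])
```

The largest pointwise gap between the two pmfs shrinks as n grows, as the limit statement predicts.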
Uniform distribution: equiprobability over a finite range [a, b].
Parameters: a, b
Law: f(x; a, b) = 1/(b−a) if a < x < b
Mean: μ = (a+b)/2 ; Variance: v = σ² = (b−a)²/12

Normal distribution (Gaussian): limit of many processes.
Parameters: μ, σ
Law: f(x; μ, σ) = 1/(σ√(2π)) e^(−(x−μ)²/2σ²)

Chi-square distribution: sum of the squares of n reduced normal variables:
C = Σ(k=1→n) ((Xk − μk)/σk)²
Variable: C ; Parameter: n
Law: f(c; n) = c^(n/2−1) e^(−c/2) / (2^(n/2) Γ(n/2))
Mean: n ; Variance: 2n
Random variables can be generalized to random vectors X = (X1, …, Xn): the probability density function becomes f(x1, …, xn), and
P(a < X1 < b, c < X2 < d) = ∫(a→b) ∫(c→d) f(x1, x2) dx1 dx2
Marginal density: probability of only one of the components:
fX(x) = ∫ f(x, y) dy

For a fixed value of Y = y0: f(x|y0)dx = « Probability of x < X < x+dx knowing that Y = y0 » is a conditional density for X. It is proportional to f(x, y), so
f(x|y0) = f(x, y0) / ∫ f(x, y0) dx = f(x, y0) / fY(y0)
The two random variables X and Y are independent if all events x < X < x+dx are independent from y < Y < y+dy:
f(x|y) = fX(x) and f(y|x) = fY(y), hence f(x, y) = fX(x) fY(y)
Translated in terms of pdf's, Bayes' theorem becomes:
f(x|y) = f(y|x) fX(x) / fY(y)
A random vector (X, Y) can be treated as 2 separate variables:
marginal densities, mean and variance for each variable: μX, μY, σX, σY.
This doesn't take into account correlations between the variables.

Generalized measure of dispersion: covariance of X and Y:
Cov(X, Y) = E[(X − μX)(Y − μY)] = E[XY] − μX μY ; sample: (1/n) Σ(i=1→n) (xi − mx)(yi − my)
Correlation: ρ = Cov(X, Y) / (σX σY), with −1 ≤ ρ ≤ 1.
Uncorrelated: ρ = 0. Independent ⇒ uncorrelated (the converse does not hold).
(Illustration: scatter plots with ρ = −0.5, ρ = 0, ρ = 0.9, and a nonlinear dependence with ρ = 0.)

Covariance matrix for n variables Xi: Σij = Cov(Xi, Xj). For uncorrelated variables, Σ is diagonal.
Σij = Cov(Xi, Xj) :
    ( σ1²        ρ12 σ1σ2   …   ρ1n σ1σn )
    ( ρ12 σ1σ2   σ2²        …   ρ2n σ2σn )
    ( …                     …   …        )
    ( ρ1n σ1σn   ρ2n σ2σn   …   σn²      )
The matrix is real and symmetric: it can be diagonalized. One can define n new uncorrelated variables Yi:
Y = BX, Σ' = B Σ B^T = diag(σ'1², …, σ'n²)
The σ'i² are the eigenvalues of Σ; B contains the orthonormal eigenvectors. The Yi are the principal components. Discarding the components with the smallest σ', they allow dimensional reduction.
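A sketch of the diagonalization for two correlated variables (σ1, σ2, ρ are made-up numbers); for a 2×2 symmetric matrix the eigenvalues, i.e. the principal-component variances σ'², follow from the trace and determinant.

```python
import math

s1, s2, rho = 2.0, 1.0, 0.8   # assumed standard deviations and correlation
cov = [[s1 * s1, rho * s1 * s2],
       [rho * s1 * s2, s2 * s2]]

tr = cov[0][0] + cov[1][1]                     # trace = total variance
det = cov[0][0] * cov[1][1] - cov[0][1] ** 2   # determinant
disc = math.sqrt(tr * tr - 4 * det)
eig_hi, eig_lo = (tr + disc) / 2, (tr - disc) / 2  # eigenvalues = σ'²

print(eig_hi, eig_lo)   # principal-component variances, largest first
```

The total variance is preserved (eig_hi + eig_lo = σ1² + σ2²); dimensional reduction keeps only the component with the larger σ'².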
Measure of location: the mean point (μX, μY).
Linear regression: minimizing the dispersion between the curve « y = ax + b » and the distribution:
w(a, b) = ∫ (y − ax − b)² f(x, y) dxdy ; sample: (1/n) Σ(i=1→n) (yi − a xi − b)²
∂w/∂a = 0 and ∂w/∂b = 0 ⇔ a = ρ σY/σX and b = μY − a μX
Fully correlated (ρ = 1) or fully anti-correlated (ρ = −1): then exactly Y = aX + b.
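A sketch with toy numbers: computing a = Cov(X,Y)/σX² (which equals ρ σY/σX) and b = μY − a μX from the sample moments.

```python
# Hypothetical sample, roughly following y ≈ 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.9, 3.1, 4.8, 7.2, 8.9]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
vx = sum((x - mx) ** 2 for x in xs) / n                       # σX²
cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n    # Cov(X, Y)

a = cxy / vx        # = ρ σY/σX
b = my - a * mx     # = μY − a μX
print(a, b)
```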
Multinomial distribution: randomly choosing K1, K2, …, Ks objects within a finite set of n, with a fixed drawing probability for each category p1, p2, …, ps, with ΣKi = n and Σpi = 1.
Parameters: n, p1, p2, …, ps
Law: P(k1, …, ks; n, p) = n!/(k1! k2! … ks!) p1^k1 p2^k2 … ps^ks
Mean: μi = n pi ; Variance: σi² = n pi(1−pi) ; Cov(Ki, Kj) = −n pi pj
Rem: the variables are not independent.

Multinormal distribution:
Parameters: μ, Σ
Law: f(x; μ, Σ) = 1/((2π)^(n/2) |Σ|^(1/2)) exp(−½ (x − μ)^T Σ⁻¹ (x − μ))
If uncorrelated, Σ is diagonal and f factorizes into a product of 1-d Gaussians f(xi; μi, σi): for the multinormal, independent ⇔ uncorrelated.
The sum of several random variables is a new random variable S = Σ(i=1→n) Xi. Assuming the mean and variance of each variable exist:
Mean value of S: μS = Σi μi. The mean is an additive quantity.
Variance of S: σS² = Σi σi² + 2 Σ(i<j) Cov(Xi, Xj). For uncorrelated variables, the variance is additive: σS² = Σi σi².

Probability density function of S: fS(s). Using the characteristic function:
φS(t) = E[e^(itS)] = E[Πi e^(itXi)]
For independent variables the characteristic function factorizes: φS(t) = Πi φXi(t). Finally the pdf is the Fourier transform of the cf, so the pdf of the sum is a convolution of the individual pdfs.
Sum of Normal variables -> Normal
Sum of Poisson variables (λ1 and λ2) -> Poisson, λ = λ1 + λ2
Sum of Khi-2 variables (n1 and n2) -> Khi-2, n = n1 + n2
Weak law of large numbers: a sample of size n is a realization of n independent variables with the same distribution (mean μ, variance σ²). The sample mean is a realization of M = S/n = (1/n) Σ(i=1→n) Xi.
Mean value of M: μM = μ. Variance of M: σM² = σ²/n.

Central-limit theorem: take n independent random variables of mean μi and variance σi², and form the sum of the reduced variables:
C = (1/√n) Σ(i=1→n) (Xi − μi)/σi
The pdf of C converges to a reduced normal distribution:
fC(c) → (1/√(2π)) e^(−c²/2) when n → +∞
The sum of many random fluctuations is normally distributed.
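A simulation sketch of the theorem (uniform variables; n, the number of trials and the seed are arbitrary choices): the reduced sum C should have mean ≈ 0, variance ≈ 1, and about 68% of realizations within ±1.

```python
import math
import random

random.seed(1)
n, trials = 12, 2000
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)   # mean and std of U(0,1)

cs = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    cs.append((s - n * mu) / (sigma * math.sqrt(n)))  # reduced sum C

mean_c = sum(cs) / trials
var_c = sum((c - mean_c) ** 2 for c in cs) / trials
frac_within_1sigma = sum(abs(c) < 1 for c in cs) / trials
print(mean_c, var_c, frac_within_1sigma)
```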
(Illustration: histograms of the reduced sums of 1, 2, 3 and 5 uniform variables — X1, (X1+X2)×√2, (X1+X2+X3)×√3, (X1+…+X5)×√5 — each compared to a Gaussian curve: the agreement improves with the number of terms.)
Any measure (or combination of measures) is a realization of a random variable.
Uncertainty = quantifying the difference between the estimate θ and the true value θ0, through a measure of dispersion. We will postulate: Δθ = α σθ. Absolute error, always positive.
Usually one differentiates:
Statistical error: due to the measurement pdf.
Systematic errors or bias: fixed but unknown deviations (equipment, assumptions, …). Systematic errors can be seen as statistical errors in a set of similar experiments.
Example with several error sources: observation error ΔO, position error ΔP, scaling error ΔS.
θ = θ0 + δO + δS + δP. Each δi is a realization of a random variable: mean 0 (negligible bias) and variance σi².
Uncorrelated error sources: ΔO = α σO, ΔS = α σS, ΔP = α σP, and
Δtot² = α² (σO² + σS² + σP²) = ΔO² + ΔS² + ΔP²
Choice of α? If there are many sources, the central-limit theorem gives a normal distribution: α = 1 gives (approximately) a 68% confidence interval, α = 2 gives 95% CL (and at least 75% from Bienaymé-Chebyshev).
Measure: x ± Δx. Compute: f(x) -> Δf? Assuming small errors, using a Taylor expansion:
f(x + Δx) = f(x) + f'(x) Δx + ½ f''(x) Δx² + …
f(x − Δx) = f(x) − f'(x) Δx + ½ f''(x) Δx² + …
⇒ Δf = ½ |f(x + Δx) − f(x − Δx)| = |f'(x)| Δx = |df/dx| Δx
Measure: x ± Δx, y ± Δy, … Compute: f(x, y, …) -> Δf? Idea: treat the effect of each variable as a separate error source.
(Illustration: surface z = f(x, y) and the curve z = f(x, ym) at fixed ym, around the measured point (xm, ym), zm = f(xm, ym).)
Δfx = |∂f/∂x| Δx, Δfy = |∂f/∂y| Δy
Then:
uncorrelated: Δf² = Δfx² + Δfy² = (∂f/∂x)² Δx² + (∂f/∂y)² Δy²
correlated: Δf² = (∂f/∂x)² Δx² + (∂f/∂y)² Δy² + 2 ρxy (∂f/∂x)(∂f/∂y) Δx Δy
fully correlated: Δf = |∂f/∂x Δx + ∂f/∂y Δy| ; anticorrelated: Δf = |∂f/∂x Δx − ∂f/∂y Δy|
In general, for several correlated variables:
Δf² = Σi (∂f/∂xi)² Δxi² + 2 Σ(i<j) ρij (∂f/∂xi)(∂f/∂xj) Δxi Δxj
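A sketch of the propagation formula for the simple case f(x, y) = x·y (all input values and the correlation ρ are made up):

```python
import math

x, dx = 3.0, 0.1
y, dy = 2.0, 0.2
dfdx, dfdy = y, x   # ∂f/∂x = y and ∂f/∂y = x for f = x·y

# uncorrelated combination
df_uncorr = math.sqrt((dfdx * dx) ** 2 + (dfdy * dy) ** 2)

# with an assumed correlation ρ between x and y
rho = 0.5
df_corr = math.sqrt((dfdx * dx) ** 2 + (dfdy * dy) ** 2
                    + 2 * rho * dfdx * dfdy * dx * dy)

print(df_uncorr, df_corr)   # the correlated combination is larger for ρ > 0
```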
From a finite sample {xi} -> estimating a parameter θ. Statistic = a function S = f({xi}). Any statistic can be considered as an estimator of θ. To be a good estimator it needs to satisfy:
Consistency: limit of the estimator for an infinite sample.
Bias: difference between the estimator and the true value.
Efficiency: speed of convergence.
Robustness: sensitivity to statistical fluctuations.
A good estimator should at least be consistent and asymptotically unbiased. Efficient / unbiased / robust often contradict each other ⇒ different choices for different applications.

As the sample is a set of realizations of random variables (or one vector variable), so is the estimator: it has a mean, a variance, … and a probability density function.
Bias: b = E[θ̂] − θ0.
Unbiased estimator: E[θ̂] = θ0 for any n ; asymptotically unbiased: E[θ̂] → θ0 when n → +∞.
(Illustration: estimator pdfs that are biased, asymptotically unbiased, unbiased.)
Consistency: formally, θ̂ converges in probability to θ0 when n → +∞; in practice, σθ̂ → 0 when n → +∞, if asymptotically unbiased.
Sample mean is a good estimator of the population mean: from the weak law of large numbers, it is convergent and unbiased; its variance σm² = σ²/n → 0.

Sample variance as an estimator of the population variance:
E[s²] = E[(1/n) Σ (xi − m)²] = ((n − 1)/n) σ² : biased, asymptotically unbiased.
Unbiased variance estimator: σ̂² = (1/(n − 1)) Σ (xi − m)².
Variance of the estimator (convergence): Var(σ̂²) = (1/n) (μ4 − ((n − 3)/(n − 1)) σ⁴) → 0.
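A simulation sketch of the bias (sample size, trial count and seed are arbitrary): averaging the 1/n variance over many Gaussian samples of size n underestimates σ² by the factor (n−1)/n, and the 1/(n−1) estimator removes the bias.

```python
import random

random.seed(2)
n, trials = 5, 20000

biased_avg = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # true σ² = 1
    m = sum(xs) / n
    biased_avg += sum((x - m) ** 2 for x in xs) / n  # 1/n convention
biased_avg /= trials

print(biased_avg)                # close to (n−1)/n · σ² = 0.8
print(biased_avg * n / (n - 1))  # bias-corrected, close to σ² = 1
```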
Uncertainty ⇔ estimator standard deviation. Use an estimator of the standard deviation: σ̂ = √σ̂² (!!! biased).
Mean: Δm = σ̂/√n. Variance: Δσ̂² = σ̂² √(2/(n − 1)) for a Gaussian population.
Central-limit theorem -> the empirical estimators of mean and variance are normally distributed for large enough samples, so these uncertainties define 68% confidence intervals.
x: random variable(s) ; θ: parameter(s). The function f(x; θ) can be read in two ways:
- fix θ = θ0 (true value): f(x; θ0) is the pdf of x ; for a Bayesian, f(x|θ) = f(x; θ)
- fix x = u (one realization of the random variable): L(θ) = f(u; θ) is the likelihood ; for a Bayesian, f(θ|x) = L(θ)/∫L(θ)dθ
For a sample of n independent realizations of the same variable X:
L(θ) = Π(i=1→n) f(xi; θ)
Maximum likelihood estimator: θ̂ such that ∂L/∂θ = 0 at θ̂.
Rem: a system of equations for several parameters. Rem: one often minimizes −ln L, which simplifies the expressions (products become sums).
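A sketch of the MLE for an exponential pdf f(x; λ) = λ e^(−λx) (the data are made up): minimizing −ln L analytically gives λ̂ = 1/(sample mean), which a crude numerical scan confirms.

```python
import math

xs = [0.3, 1.1, 0.7, 2.5, 0.9, 1.6]  # hypothetical sample

def neg_log_likelihood(lam):
    # −ln L(λ) = −Σ [ln λ − λ xi] for the exponential pdf
    return -sum(math.log(lam) - lam * x for x in xs)

lam_hat = len(xs) / sum(xs)   # analytic MLE: 1 / sample mean

# crude grid scan confirming the minimum sits at λ̂
best = min((neg_log_likelihood(0.01 * k), 0.01 * k) for k in range(1, 500))
print(lam_hat, best[1])
```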
Mostly asymptotic properties: valid for large samples, often assumed in any case for lack of better information. The MLE is asymptotically unbiased, asymptotically efficient (it reaches the Cramér-Rao bound), and asymptotically normally distributed.
Goodness of fit: for a large sample the likelihood becomes Gaussian around its maximum,
L(θ) ∝ exp(−½ (θ − θ̂)^T Σ⁻¹ (θ − θ̂))
and the value of χ² = −2 ln L(θ̂) is Khi-2 distributed, with ndf = sample size − number of parameters.
p-value = ∫(χ²→+∞) f(c; ndf) dc : probability of getting a worse agreement.
Errors on parameters -> from the covariance matrix. The estimator is asymptotically multinormal:
f(θ̂; θ, Σ) = 1/((2π)^(nθ/2) |Σ|^(1/2)) exp(−½ (θ̂ − θ)^T Σ⁻¹ (θ̂ − θ))
with (Σ⁻¹)ij = −E[∂²ln L/∂θi ∂θj], estimated by −∂²ln L/∂θi ∂θj at θ̂ (empirical mean of 1 value…).
For one parameter, the 68% interval is Δθ = σθ = (−∂²ln L/∂θ²)^(−1/2).
More generally, confidence contours are defined by the equation:
χ²(θ) = −2 ln L(θ) = −2 ln L(θ̂) + 2β
Values of β for different numbers of parameters nθ and confidence levels α:
α \ nθ :    1      2      3
68.3%  :   0.5    1.15   1.76
95.4%  :   2      3.09   4.01
99.7%  :   4.5    5.92   7.08
Set of measurements (xi, yi) with uncertainties on the yi. Theoretical law: y = f(x, θ).
Naïve approach: use regression, minimizing w(θ) = Σi (yi − f(xi, θ))².
Reweighting each term by its error: K²(θ) = Σi (yi − f(xi, θ))² / Δyi².
Maximum likelihood: assume each yi is normally distributed with a mean equal to f(xi, θ) and a standard deviation equal to Δyi. Then the likelihood is:
L(θ) = Πi 1/(Δyi √(2π)) exp(−½ (yi − f(xi, θ))² / Δyi²)
so that −2 ln L(θ) = K²(θ) + constant:
Least squares or Khi-2 fit is the MLE, for Gaussian errors.
Generic case with correlations:
K²(θ) = (y − f(x, θ))^T Σ⁻¹ (y − f(x, θ))
Analytic solution for a linear fit y = ax + b: defining
A = Σ xi yi/Δyi², B = Σ xi²/Δyi², C = Σ xi/Δyi², D = Σ yi/Δyi², E = Σ 1/Δyi²,
â = (AE − DC)/(BE − C²), b̂ = (DB − AC)/(BE − C²)
Δâ² = E/(BE − C²), Δb̂² = B/(BE − C²)
(2-dimensional error contours on a and b.)
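A sketch of the analytic weighted (Khi-2) linear fit, using the standard least-squares sums; data points and their errors are illustrative only.

```python
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
dys = [0.2, 0.2, 0.3, 0.3]   # assumed uncertainties on y

w = [1.0 / d ** 2 for d in dys]   # weights 1/Δy²
A = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
B = sum(wi * x * x for wi, x in zip(w, xs))
C = sum(wi * x for wi, x in zip(w, xs))
D = sum(wi * y for wi, y in zip(w, ys))
E = sum(w)
delta = B * E - C * C

a = (A * E - D * C) / delta
b = (D * B - A * C) / delta
da = math.sqrt(E / delta)   # error on the slope
db = math.sqrt(B / delta)   # error on the intercept
print(a, b, da, db)
```

As a check, the weighted residuals satisfy the normal equations (they are orthogonal to both the constant and the x direction).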
Histogram: the sample range is divided into classes Ck = [ck, ck+1[ of width δ, and the histogram counts the number nk of sample values falling in each class.
Example: N/Z for stable heavy nuclei:
1.321, 1.357, 1.392, 1.410, 1.428, 1.446, 1.464, 1.421, 1.438, 1.344, 1.379, 1.413, 1.448, 1.389, 1.366, 1.383, 1.400, 1.416, 1.433, 1.466, 1.500, 1.322, 1.370, 1.387, 1.403, 1.419, 1.451, 1.483, 1.396, 1.428, 1.375, 1.406, 1.421, 1.437, 1.453, 1.468, 1.500, 1.446, 1.363, 1.393, 1.424, 1.439, 1.454, 1.469, 1.484, 1.462, 1.382, 1.411, 1.441, 1.455, 1.470, 1.500, 1.449, 1.400, 1.428, 1.442, 1.457, 1.471, 1.485, 1.514, 1.464, 1.478, 1.416, 1.444, 1.458, 1.472, 1.486, 1.500, 1.465, 1.479, 1.432, 1.459, 1.472, 1.486, 1.513, 1.466, 1.493, 1.421, 1.447, 1.460, 1.473, 1.486, 1.500, 1.526, 1.480, 1.506, 1.435, 1.461, 1.487, 1.500, 1.512, 1.538, 1.493, 1.450, 1.475, 1.500, 1.512, 1.525, 1.550, 1.506, 1.530, 1.487, 1.512, 1.524, 1.536, 1.518, 1.577, 1.554, 1.586, 1.586
Statistical description: the nk are multinomial random variables, with parameters n and pk = ∫(Ck) f(x) dx:
E[nk] = n pk ; σk² = n pk (1 − pk) ≈ n pk for pk << 1
For a large sample: nk/n → pk. For small classes (width δ): pk → δ f(ck).
So finally E[nk] ≈ n δ f(ck): the histogram is an estimator of the probability density.
Each bin can be described by a Poisson density. The 1σ error on nk is then √nk.
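A sketch of the histogram as a density estimator (bin contents and width are made up): each bin gives f(ck) ≈ nk/(nδ) with a 1σ error √nk/(nδ).

```python
import math

counts = [4, 12, 25, 30, 18, 8, 3]   # hypothetical bin contents nk
n = sum(counts)
delta = 0.5                          # assumed bin width δ

density = [nk / (n * delta) for nk in counts]          # f(ck) estimates
errors = [math.sqrt(nk) / (n * delta) for nk in counts]  # Poisson 1σ errors

# sanity check: the estimated density integrates to 1 over the bins
print(sum(d * delta for d in density))
```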
For a random variable, a confidence interval with confidence level α is any interval [a, b] such that
P(X ∈ [a, b]) = ∫(a→b) fX(x)dx = α : probability of finding a realization inside the interval.
Generalization of the concept of uncertainty: an interval that contains the true value with a given probability.

Slightly different concepts:
For Bayesians: the posterior density is the probability density of the true value; it can be used directly to derive intervals.
No such thing for a frequentist: the interval itself becomes the random variable; [a, b] is a realization of [A, B], with P(θ ∈ [A, B]) = α independently of θ.

Possible choices of interval (pdf f, mean μ):
- Mean centered, symmetric: [μ − a, μ + a]
- Probability symmetric: [a, b] with P(X < a) = P(X > b) = (1 − α)/2
- Highest Probability Density (HPD): [a, b] with ∫(a→b) f(x)dx = α and f(x) > f(y) for x ∈ [a, b] and y ∉ [a, b]
To build a frequentist interval for an estimator θ̂ of θ:
- for each possible value of θ, compute the pdf of the estimator (e.g. by MC sampling of the estimator pdf);
- determine A(θ) and B(θ) such that a fraction (1 − α)/2 of the experiments gives θ̂ < A(θ), and a fraction (1 − α)/2 gives θ̂ > B(θ).
These 2 curves are the confidence belt, for a CL α; inverting the belt at the observed θ̂ gives the interval for θ.
(Illustration: confidence belt for a Poisson parameter λ estimated with the empirical mean of 3 realizations, 68% CL.)
No true frequentist way to add systematic effects. Popular method of the day: profiling. Deal with nuisance parameters ν as realizations of random variables, and extend the likelihood:
L'(θ, ν) = L(θ, ν) G(ν)
where G(ν) is the likelihood of the new parameters (identical to a prior). For each value of θ, maximize the likelihood with respect to the nuisances: the profile likelihood PL(θ). PL(θ) has the same asymptotic statistical properties as the regular likelihood.
Test for binned data: use the Poisson limit of the histogram. For each bin i, ni is (approximately) Poisson with mean and variance n pi = n ∫(Ci) f(x) dx. Compare the deviation of the observation from the expected mean to the theoretical standard deviation:
χ² = Σ(bins i) (ni − n pi)² / (n pi)     — (data − Poisson mean)² / Poisson variance
Then χ² follows (asymptotically) a Khi-2 law with k − 1 degrees of freedom (1 constraint: Σ ni = n).
p-value: probability of doing worse, p = ∫(χ²→+∞) f(c; k−1) dc.
For a "good" agreement χ²/(k − 1) ~ 1; more precisely, χ² ∈ [(k − 1) ± √(2(k − 1))] (1σ interval ~ 68% CL).
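A sketch of the binned test with toy numbers (a uniform tested law; observed counts are made up), compared to the 1σ band (k−1) ± √(2(k−1)):

```python
import math

obs = [18, 24, 16, 22, 20]   # hypothetical observed bin counts
n = sum(obs)
probs = [0.2] * 5            # tested bin probabilities p_i (uniform law)

chi2 = sum((ni - n * pi) ** 2 / (n * pi) for ni, pi in zip(obs, probs))
ndf = len(obs) - 1           # one constraint: Σ n_i = n

lo, hi = ndf - math.sqrt(2 * ndf), ndf + math.sqrt(2 * ndf)
print(chi2, (lo, hi), lo <= chi2 <= hi)   # inside the band: "good" agreement
```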
Test for unbinned data: compare the sample cumulative density function to the tested one, F(x). Sample cdf (ordered sample): Sn(x) = k/n for xk ≤ x < xk+1.
The Kolmogorov statistic is the largest deviation: Dn = sup over x of |Sn(x) − F(x)|.
The test distribution has been computed by Kolmogorov:
P(√n Dn > z) → 2 Σ(r≥1) (−1)^(r−1) e^(−2r²z²)
[0; β] defines a confidence interval for Dn:
β = 0.9584/√n for 68.3% CL
β = 1.3754/√n for 95.4% CL
Example: test the compatibility of the following sample with an exponential law:
0.008, 0.036, 0.112, 0.115, 0.133, 0.178, 0.189, 0.238, 0.274, 0.323, 0.364, 0.386, 0.406, 0.409, 0.418, 0.421, 0.423, 0.455, 0.459, 0.496, 0.519, 0.522, 0.534, 0.582, 0.606, 0.624, 0.649, 0.687, 0.689, 0.764, 0.768, 0.774, 0.825, 0.843, 0.921, 0.987, 0.992, 1.003, 1.004, 1.015, 1.034, 1.064, 1.112, 1.159, 1.163, 1.208, 1.253, 1.287, 1.317, 1.320, 1.333, 1.412, 1.421, 1.438, 1.574, 1.719, 1.769, 1.830, 1.853, 1.930, 2.041, 2.053, 2.119, 2.146, 2.167, 2.237, 2.243, 2.249, 2.318, 2.325, 2.349, 2.372, 2.465, 2.497, 2.553, 2.562, 2.616, 2.739, 2.851, 3.029, 3.327, 3.335, 3.390, 3.447, 3.473, 3.568, 3.627, 3.718, 3.720, 3.814, 3.854, 3.929, 4.038, 4.065, 4.089, 4.177, 4.357, 4.403, 4.514, 4.771, 4.809, 4.827, 5.086, 5.191, 5.928, 5.952, 5.968, 6.222, 6.556, 6.670, 7.673, 8.071, 8.165, 8.181, 8.383, 8.557, 8.606, 9.032, 10.482, 14.174
f(x; λ) = λ e^(−λx), with λ = 0.4
Dn = 0.069 ; p-value = 0.617 ; 1σ interval for Dn: [0, 0.0875]
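A sketch of the Kolmogorov statistic against an exponential cdf F(x) = 1 − e^(−λx); the short sample below is illustrative, not the slide's 120-point data set.

```python
import math

def kolmogorov_D(sample, cdf):
    """Largest deviation between the sample cdf and the tested cdf."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for k, x in enumerate(xs, start=1):
        f = cdf(x)
        # the sample cdf jumps from (k−1)/n to k/n at each ordered point
        d = max(d, abs(f - (k - 1) / n), abs(f - k / n))
    return d

lam = 0.4
data = [0.2, 0.7, 1.3, 2.1, 3.0, 4.2, 5.8, 8.5]   # toy sample
Dn = kolmogorov_D(data, lambda x: 1.0 - math.exp(-lam * x))
print(Dn, Dn < 0.9584 / math.sqrt(len(data)))   # inside the 68.3% bound?
```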