Stochastic Simulation Introduction
Bo Friis Nielsen
Applied Mathematics and Computer Science Technical University of Denmark 2800 Kgs. Lyngby – Denmark Email: bfn@imm.dtu.dk
02443 – lecture 1 2
Practicalities
⋄ Reading material available for further reading
⋄ Evaluation: report on the final project, and possibly an oral presentation of the project
⋄ Bo Friis Nielsen, e-mail bfni@dtu.dk
⋄ Clara Brimnes Gardner (s153542@student.dtu.dk), Nikolaj Overgaard Sørensen (s190191@student.dtu.dk), Edward Xu (s181238@student.dtu.dk)
Simulation techniques
⋄ To simulate: to act like, to mimic, to imitate.
⋄ Broad sense: to (have a computer) simulate a system which is affected by randomness.
⋄ Narrow sense: to generate (pseudo)random numbers from a prescribed distribution (e.g. Gaussian).
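The narrow sense can be illustrated directly. Below is a minimal sketch (standard library only; the function name `box_muller` is illustrative, not from the course material) that turns uniform pseudo-random numbers into Gaussian samples via the Box–Muller transform:

```python
import math
import random

def box_muller(n, mu=0.0, sigma=1.0):
    """Generate n samples from N(mu, sigma^2) using only uniform
    pseudo-random numbers, via the Box-Muller transform."""
    out = []
    while len(out) < n:
        u1 = 1.0 - random.random()          # in (0, 1], so log(u1) is finite
        u2 = random.random()
        r = math.sqrt(-2.0 * math.log(u1))
        # Each pair (u1, u2) yields two independent standard normals
        out.append(mu + sigma * r * math.cos(2.0 * math.pi * u2))
        if len(out) < n:
            out.append(mu + sigma * r * math.sin(2.0 * math.pi * u2))
    return out
```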
⋄ Computer science
⋄ Statistics
⋄ Operations Research
⋄ Planning and management

⋄ Variance reduction methods
⋄ Random number generation
⋄ Random variable generation
⋄ The event-by-event principle
⋄ Markov chain Monte Carlo
⋄ Bootstrap
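As a small taste of the variance reduction topic, here is a sketch (function names are illustrative) estimating the integral of eˣ over [0, 1], whose exact value is e − 1, both by crude Monte Carlo and with antithetic variates:

```python
import math
import random

def mc_plain(n):
    # Crude Monte Carlo estimate of the integral of e^x over [0, 1]
    return sum(math.exp(random.random()) for _ in range(n)) / n

def mc_antithetic(n):
    # Antithetic variates: pair each U with 1 - U; the negative correlation
    # between exp(U) and exp(1 - U) reduces the estimator's variance.
    pairs = n // 2
    total = 0.0
    for _ in range(pairs):
        u = random.random()
        total += 0.5 * (math.exp(u) + math.exp(1.0 - u))
    return total / pairs
```

Both estimators are unbiased; the antithetic version typically has a much smaller standard error for the same number of function evaluations.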
⋄ available online for DTU students
⋄ Algorithms and Analysis, Springer 2007, available online for DTU students
⋄ R, Springer, 2010
⋄ and Modelling, John Wiley & Sons 1998. First 50 pages available at DTU Inside. It is illegal to distribute these notes.
⋄ Discrete-Event System Simulation, Prentice Hall 1999
⋄ Marcel Dekker 1987
Methodology
⋄ Random number generation
⋄ Sampling from distributions
⋄ Variance reduction techniques
⋄ Statistical techniques: bootstrap/MCMC
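Sampling from distributions is often done by the inverse-transform method. A minimal sketch (the function name is illustrative) for the exponential distribution, whose CDF F(x) = 1 − e^(−λx) inverts in closed form:

```python
import math
import random

def sample_exponential(rate, n):
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    X = -ln(1 - U) / rate has the Exponential(rate) distribution,
    since F(x) = 1 - exp(-rate * x) gives F^{-1}(u) = -ln(1 - u) / rate."""
    return [-math.log(1.0 - random.random()) / rate for _ in range(n)]
```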
⋄ P(Ω) = 1, P(∅) = 0
⋄ P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
⋄ Conditional probability: P(A|B) = P(A ∩ B) / P(B)
⋄ Law of total probability: P(A) = Σ_i P(B_i) P(A|B_i)
⋄ Bayes' rule: P(B_i|A) = P(A|B_i) P(B_i) / Σ_j P(A|B_j) P(B_j)
⋄ Independence: P(A ∩ B) = P(A) P(B)
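A quick numerical check of the law of total probability and Bayes' rule, on a hypothetical two-urn setup (all numbers invented for illustration): urn B1 is chosen with probability 0.3 and yields a red ball with probability 0.2; urn B2 is chosen with probability 0.7 and yields red with probability 0.6. Let A = "a red ball is drawn".

```python
P_B = {"B1": 0.3, "B2": 0.7}           # prior probabilities P(B_i)
P_A_given_B = {"B1": 0.2, "B2": 0.6}   # likelihoods P(A | B_i)

# Law of total probability: P(A) = sum_i P(B_i) P(A | B_i)
P_A = sum(P_B[b] * P_A_given_B[b] for b in P_B)

# Bayes' rule: P(B1 | A) = P(A | B1) P(B1) / P(A)
P_B1_given_A = P_A_given_B["B1"] * P_B["B1"] / P_A
```

Here P(A) = 0.3 · 0.2 + 0.7 · 0.6 = 0.48, and P(B1|A) = 0.06 / 0.48 = 0.125.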
⋄ A random variable X assigns a number to each outcome; its distribution lets us work without reference to the underlying sample space.
⋄ Distribution: P(X = x)
⋄ Joint distribution: P(X = x, Y = y)
⋄ Marginal distribution: P_X(X = x) = Σ_y P(X = x, Y = y)
⋄ Conditional distribution: P(Y = y|X = x) = P(X = x, Y = y) / P_X(X = x)
⋄ Independence: P(Y = y, X = x) = P_X(X = x) P_Y(Y = y), ∀(x, y)
⋄ Expectation: E(g(X)) = Σ_x g(x) · P(X = x)
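The discrete definitions above can be exercised on a small hypothetical joint pmf (the table and function names are invented for illustration):

```python
# Hypothetical joint pmf p(x, y) for X, Y in {0, 1}
joint = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

# Marginal: P_X(x) = sum_y P(X = x, Y = y)
def marginal_x(x):
    return sum(p for (xi, y), p in joint.items() if xi == x)

# Conditional: P(Y = y | X = x) = P(X = x, Y = y) / P_X(x)
def cond_y_given_x(y, x):
    return joint[(x, y)] / marginal_x(x)

# Expectation: E g(X) = sum_x g(x) P_X(x)
def expect_g_of_x(g):
    return sum(g(x) * marginal_x(x) for x in {xi for (xi, _) in joint})
```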
⋄ Density: f(x) ≥ 0, P(X ∈ dx) = f(x) dx
⋄ Mean, variance (moments): E(X) = ∫ x f(x) dx, E(g(X)) = ∫ g(x) f(x) dx
⋄ Normal density: f(x) = (1/(√(2π)σ)) e^(−(1/2)((x−µ)/σ)²)
⋄ Standardisation: Z = (X − µ)/σ
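The normal density and the standardisation Z = (X − µ)/σ can be checked numerically; a small sketch (function names are illustrative):

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = 1/(sqrt(2*pi)*sigma) * exp(-0.5*((x - mu)/sigma)**2)
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (math.sqrt(2.0 * math.pi) * sigma)

def standardize(xs, mu, sigma):
    # Z = (X - mu)/sigma has mean 0 and standard deviation 1
    return [(x - mu) / sigma for x in xs]
```

For instance, standardising samples drawn from N(5, 2²) should give empirical mean ≈ 0 and variance ≈ 1.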
⋄ Joint density: f(x, y) dx dy = P(x ≤ X ≤ x + dx, y ≤ Y ≤ y + dy), f(x, y) ≥ 0
⋄ Joint distribution function: F(x, y) = P(X ≤ x, Y ≤ y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(u, v) du dv
⋄ Marginal density: f_X(x) = ∫ f(x, y) dy
⋄ P(A) = ∫∫_A f(x, y) dx dy
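The probability P(A) = ∫∫_A f(x, y) dx dy can itself be estimated by simulation. A sketch (the function name is illustrative) for independent standard normals X, Y and A the unit disc, where the exact answer is 1 − e^(−1/2) because X² + Y² is Exp(1/2)-distributed:

```python
import math
import random

def prob_in_disc(n):
    """Monte Carlo estimate of P((X, Y) in A) for independent standard
    normals X, Y and A = unit disc {x^2 + y^2 <= 1}.
    Exact value: 1 - exp(-1/2)."""
    hits = sum(1 for _ in range(n)
               if random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 <= 1.0)
    return hits / n
```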
⋄ Cov(X, Y) = E[(X − E(X))(Y − E(Y))] = E(XY) − E(X)E(Y)
⋄ Corr(X, Y) = Cov(X, Y) / (SD(X) SD(Y))
⋄ Var(Σ_{k=1}^{n} X_k) = Σ_{k=1}^{n} Var(X_k) + 2 Σ_{1≤j<k≤n} Cov(X_j, X_k)
⋄ Cov(Σ_{i=1}^{n} a_i X_i, Σ_{j=1}^{m} b_j Y_j) = Σ_{i=1}^{n} Σ_{j=1}^{m} a_i b_j Cov(X_i, Y_j)
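The variance-of-a-sum identity can be verified on data; in this sketch (variable names are invented for illustration) the sample versions of the two sides agree by construction, since the empirical covariance obeys the same algebra:

```python
import random

def cov(xs, ys):
    # Empirical covariance (1/n normalisation); cov(xs, xs) is the variance
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical correlated pair: X1 ~ N(0,1), X2 = 0.5*X1 + independent noise
n = 200000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.5 * a + random.gauss(0, 1) for a in x1]
s = [a + b for a, b in zip(x1, x2)]

lhs = cov(s, s)                                     # Var(X1 + X2)
rhs = cov(x1, x1) + cov(x2, x2) + 2 * cov(x1, x2)   # sum of Var's + 2 Cov
```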