SLIDE 1

Predictability of Coarse-Grained Models of Atomistic Systems in the Presence of Uncertainty

  • J. Tinsley Oden

Institute for Computational Engineering and Sciences, The University of Texas at Austin

Belytschko Lecture, Theoretical and Applied Mechanics Program, Departments of Mechanical Engineering and Civil and Environmental Engineering

Northwestern University, October 14, 2014

J.T. Oden Belytschko Lecture October 2014 1 / 93

SLIDE 2

Acknowledgements. Collaborators: Kathryn Farrell, Danial Faghihi. Support: DOE DE-SC009286 (MMICC); AFOSR FA9550-12-1-0484; DOE Eureka DE-SC0010518.

SLIDE 3

Outline

1. The Logic of Predictive Science: What is it and Why Now?
2. The Tyranny of Scales: Predictivity of Multiscale Models
3. Bayesian Model Calibration, Validation, and Prediction
4. The Prediction Process: Traveling up the Prediction Pyramid
5. Exploratory Examples
6. Model Inadequacy: Specified and Misspecified Models
7. Conclusions

SLIDE 4
1. The Logic of Predictive Science: What is Predictive Science?

Predictive Science: the scientific discipline concerned with assessing the predictability of mathematical and computational models of physical events. It embraces the processes of model selection, calibration, validation, verification, and their use in forecasting features of physical events with quantified uncertainty.

  • Comprehensive Nuclear Test Ban: UN 1996
  • Space Shuttle Accident, 2003
  • Climate/Weather Prediction
  • Predictive Medicine
  • . . .

SLIDE 10
1. The Logic of Predictive Science: What is Predictive Science?

Predictive Science: the scientific discipline concerned with assessing the predictability of mathematical and computational models of physical events. It embraces the processes of model selection, calibration, validation, verification, and their use in forecasting features of physical events with quantified uncertainty.

Annotations on the definition:
  • Models: mathematical constructions based on physical principles or empirical relations, generally based on inductive theories which attempt to characterize abstractions of physical reality
  • Calibration: the process of adjusting the parameters of a model to improve the agreement of model predictions with experimental measurements
  • Validation: the process of determining the accuracy with which a model can predict features of physical reality
  • The Quantities of Interest (QoIs): the goals of the simulation
  • UQ: quantifying the uncertainty in predicted QoIs

Examples: Comprehensive Nuclear Test Ban (UN 1996); Space Shuttle Accident, 2003; Climate/Weather Prediction; Predictive Medicine; . . .

Predictability requires knowledge of the physical laws that are proposed to explain realities, and requires recognizing and quantifying uncertainties.

SLIDE 11

The Nature of Science

Science: the activity concerned with the systematic acquisition of knowledge

The Pillars of Science:
  • I. Theory - inductive hypotheses
  • II. Observation - experiments
  • III. Computational Science

The Scientific Method: test statements that are logical consequences of scientific hypotheses (theories), or of related computer models and simulations, through repeatable experiments or observations

SLIDE 12

Logic: the science dealing with the formal principles of reasoning (or the study of reasoning)

Deductive Reasoning (deductive logic): the process of reasoning from one or more general statements (axioms or premises) to reach logically certain conclusions
  • "Top-down logic": premises ⇒ conclusions
  • Established as a formal discipline by Aristotle (384-322 B.C.)

Inductive Reasoning (inductive logic): the process of reasoning by generalizing or extrapolating from initial information or hypotheses
  • "Bottom-up logic": an open system including domains of epistemic uncertainty (allowing a conclusion to be false)

SLIDE 13

The Imperfect Paths to Knowledge

[Diagram: paths from THE UNIVERSE / PHYSICAL REALITIES to KNOWLEDGE and DECISION, via OBSERVATIONS (observational errors), MATHEMATICAL MODELS (modeling errors), and COMPUTATIONAL MODELS (discretization errors), linked by VALIDATION, VERIFICATION, and PREDICTION]

SLIDE 16

Cox’s Theorem

Every natural extension of Aristotelian logic with uncertainties is Bayesian.

Precisely: there exists a continuous, strictly increasing, real-valued, non-negative function p, the plausibility of a proposition conditioned on information X, such that for every proposition A and B and consistent X:

1. p(A|X) = 0 iff A is false given the information in X
2. p(A|X) = 1 iff A is true given the information in X
3. 0 ≤ p(A|X) ≤ 1
4. p(A ∧ B|X) = p(A|X) p(B|AX)   (Bayes’ Rule)
5. p(Ā|X) = 1 − p(A|X) if X is consistent

(Consistency of X: ∄ A such that (A|X) and (Ā|X) are both true.)

Richard Threlkeld Cox, Am. J. Physics, 1946
Edwin T. Jaynes, Probability Theory: The Logic of Science, 2003
Kevin S. van Horn, J. Approx. Reasoning, 2003

SLIDE 17

Post-Cox Developments

  • Halpern, Joseph Y., a counterexample to Cox’s Theorem, then a correction in an “Addendum to Cox’s Theorem” (1999), in turn refuted by van Horn (2003)
  • Arnborg, Stefan and Sjödin, Gunnar (1999, 2000)
  • Van Horn, Kevin S., “Constructing a Logic of Plausibility: A Guide to Cox’s Theorem,” J. Approx. Reasoning (2003)
  • Jaynes, Edwin T., Probability Theory: The Logic of Science (2003)
  • Dupre, Maurice J. and Tipler, Frank J., “A Trivial Proof of Cox’s Theorem” (2009)
  • McGrayne, Sharon B., The Theory That Would Not Die (2012)
  • Freedman, David (1999, 2006)
  • Kleijn, B. J. K. and van der Vaart, A. W., The Bernstein-von Mises Theorem Under Misspecification (2012)
  • Owhadi, Houman, Scovel, Clint and Sullivan, Tim, “Bayesian Brittleness: Why no Bayesian Model is Good Enough” (2013)

SLIDE 18

The Logic of Science: Bayesian Inference

Bayes’ Rule

P(A|B) = P(B|A) P(A) / P(B)

Thomas Bayes (1763): “An Essay Towards Solving a Problem in the Doctrine of Chances,” Philosophical Transactions of the Royal Society

∗ Logical probability ⊃ frequency-based theory
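Bayes’ rule above can be exercised numerically; a minimal sketch (illustrative numbers, not from the lecture) for a discrete hypothesis space, with P(B) computed by total probability:

```python
# Discrete Bayes' rule: P(A|B) = P(B|A) P(A) / P(B),
# where P(B) = sum over hypotheses of P(B|A) P(A).
def posterior(prior, likelihood):
    """prior: {hypothesis: P(A)}; likelihood: {hypothesis: P(B|A)}."""
    evidence = sum(prior[h] * likelihood[h] for h in prior)  # P(B)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

# Two coins: fair (P(heads)=0.5) and biased (P(heads)=0.9), equal priors.
# Posterior after observing one head:
post = posterior({"fair": 0.5, "biased": 0.5},
                 {"fair": 0.5, "biased": 0.9})
print(round(post["biased"], 4))  # 0.45/0.70 = 0.6429
```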

SLIDE 19

Bayesian Model Calibration, Validation, and Prediction

posterior = likelihood × prior / evidence:

π(θ|y) = π(y|θ) π(θ) / π(y)

P(Θ) = a parametric model class = {A(θ, S, u(θ, S)) = 0}, with mathematical model A, solution u, scenario S, and parameters θ.

Data and inadequacy: reality Ωi is observed with error εi, so yi = Ωi + εi; the model output di(u(θ, S)) misses reality by ηi = Ωi − di(u(θ, S)). Then

p(εi + ηi) = p(yi − di(θ)) = π(yi|θ) = the likelihood

SLIDE 20
2. The Tyranny of Scales: Predictivity of Multiscale Models

All-Atom (AA) Model ⇒ Coarse-Grained (CG) Model ⇒ Macro (Continuum) Model

The confluence of all challenges in Predictive Science: Exactly what is the model? Is it “valid”? What is the level of uncertainty in the prediction?

SLIDE 21

Nanomanufacturing

[Figures: a) Semiconductor component; b) Multiblock component; c) Manufacturing detail]

C. Grant Willson, UT Austin: National Medal of Technology, 2008; Japan Prize, 2013

SLIDE 22

Motivation for Coarse Graining

For a 30 nm cube:

  • 30 nm ≈ 600 atoms per edge ⇒ 216,000,000 atoms in the cube ⇒ 216,000,000 × 3 degrees of freedom
  • 30 nm ≈ 20 coarse-grained particles per edge ⇒ 8,000 particles in the cube ⇒ 24,000 degrees of freedom
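The bookkeeping above is simple arithmetic; a quick check (assuming the 600 and 20 counts are per edge of the cube):

```python
# Degrees-of-freedom bookkeeping for a 30 nm cube: sites per edge cubed
# gives sites in the cube, times 3 spatial DOF per site.
def dof_in_cube(per_edge, dof_per_site=3):
    n = per_edge ** 3            # sites in the cube
    return n, n * dof_per_site   # (sites, degrees of freedom)

print(dof_in_cube(600))  # (216000000, 648000000): all-atom
print(dof_in_cube(20))   # (8000, 24000): coarse-grained
```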

SLIDE 23

Coarse Graining as a Reduced Order Method

  • M.L. Huggins, Journal of Chemical Physics, 1941
  • P.J. Flory, Journal of Chemical Physics, 1942
  • S. Izvekov, M. Parrinello, C.J. Burnham, and G.A. Voth, Journal of Chemical Physics, 2004
  • S. Izvekov and G.A. Voth, Journal of Physical Chemistry B, 2005; Journal of Chemical Physics, 2005, 2006
  • W.G. Noid, J.-W. Chu, P. Liu, G.S. Ayton, V. Krishna, S. Izvekov, G.A. Voth, A. Das, and H.C. Andersen, Journal of Chemical Physics, 2008
  • J.W. Mullinax and W.G. Noid, Journal of Chemical Physics, 2009
  • S. Izvekov, P.W. Chung, and B.M. Rice, Journal of Chemical Physics, 2010
  • E. Brini, V. Marcon, and N.F.A. van der Vegt, Physical Chemistry Chemical Physics, 2011
  • A. Chaimovich and M.S. Shell, Journal of Chemical Physics, 2011
  • E. Brini and N.F.A. van der Vegt, Journal of Chemical Physics, 2012
  • S.P. Carmichael and M.S. Shell, Journal of Physical Chemistry B, 2012
  • Y. Li, B.C. Abberton, M. Kröger, and W.K. Liu, Polymers, 2013
  • W.G. Noid, Journal of Chemical Physics, 2013

SLIDE 24

Various CG Methods

  • Force-matching methods
  • Multiscale coarse-graining
  • Iterative Boltzmann inversion
  • Reverse Monte Carlo
  • Conditional Reverse Work
  • Minimum Relative Entropy

. . . While often advocated, few of these methods take into account uncertainties in data, parameters, or model inadequacy.

SLIDE 25

Parametric Model Classes Mk

Model classes M1, M2, . . . , Mk with associated coarse-graining maps G1, G2, . . . , Gk:

M = {M1, M2, . . . , Mk}

SLIDE 26

CG Model

∂U(G(ω); θ)/∂Ri − Fi = 0,  i = 1, 2, . . . , n  (+ B.C.’s)

U(G(ω); θ) = Σ_{i=1}^{Nco} (ki/2)(Ri − R0i)²   [bonds]
  + Σ_i (κi/2)(θi − θ0i)²   [angles]
  + Σ_i (κti/2)(1 + cos(iω − γ))²   [dihedrals]
  + Σ_{i=1}^{N} Σ_{j=i+1}^{N} { 4εij[(σij/Rij)¹² − (σij/Rij)⁶] + qiqj/(4πε0Rij) }   [non-bonded]

θ = CG model parameters = {ki, κi, κti, εij, γ, σij}

Macroscale Model

Div ∂W(µ; w)/∂F − f = 0,  x ∈ Ω ⊂ R³  (+ B.C.’s)

W(µ; w) = α(I1(C) − 3) + β(I2(C) − 3) − κ ln J(C),   C = FᵀF,  F = I + ∇w

µ = macromodel parameters = (α, β, κ)
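The non-bonded term of U is a Lennard-Jones-plus-Coulomb pair potential; a minimal sketch of evaluating it for one pair (the units and parameter values are illustrative assumptions, not the lecture’s):

```python
# Non-bonded pair term: 4*eps[(sigma/R)^12 - (sigma/R)^6] + k*qi*qj/R.
# COULOMB_K plays the role of 1/(4*pi*eps0); the value below (kcal·Å/(mol·e²))
# is an assumed unit convention for illustration.
COULOMB_K = 332.06

def nonbonded_energy(R, eps, sigma, qi, qj):
    sr6 = (sigma / R) ** 6            # (sigma/R)^6
    lj = 4.0 * eps * (sr6 ** 2 - sr6) # Lennard-Jones 12-6 term
    return lj + COULOMB_K * qi * qj / R

# At R = 2^(1/6)*sigma the LJ part sits at its minimum, -eps:
R_min = 2 ** (1 / 6) * 4.0
print(round(nonbonded_energy(R_min, 0.3, 4.0, 0.0, 0.0), 6))  # -0.3
```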

SLIDE 27

Parametric Model Classes Mk

Each class Mk is a set of potential models Pk1, Pk2, . . . , Pkm, each retaining a different subset of the interaction terms (non-bonded, bond, angle, dihedral):

Mi = {Pi1(θi1), Pi2(θi2), . . . , Pim(θim)},  i = 1, 2, . . . , k

For simplicity in notation:

M = {P1(θ1), P2(θ2), . . . , Pm(θm)}

SLIDE 28

What are the Models?

AA Model: n atoms with positions rn = {r1, r2, . . . , rn}, masses mα, and momenta pα.
CG Model: N beads with positions RN = {R1, R2, . . . , RN}, masses MA, and momenta PA.

Coarse-graining map: G(rn) = G(ω) = RN;  GAα rα = RA,  GαA RA = rα,   1 ≤ α ≤ n,  1 ≤ A ≤ N

Observables (phase averages = time averages):

AA:  ⟨q⟩ = ∫_{ΓAA} ρAA(rn) q(rn) drn = lim_{τ→∞} τ⁻¹ ∫₀^τ q(rn(t)) dt

CG:  Q(θ) = ∫_{ΓCG} ρCG(RN, θ) q(RN) dRN = lim_{τ→∞} τ⁻¹ ∫₀^τ q(RN(t), θ) dt

SLIDE 29

What are the Models?

AA:

mαβ r̈βi + ∂uAA(rn)/∂rαi − fαi = 0,   1 ≤ α, β ≤ n,  1 ≤ i ≤ 3

CG:

MAB R̈Bi + ∂U(RN, θ)/∂RAi − FAi = 0,   1 ≤ A, B ≤ N,  1 ≤ i ≤ 3

Adjoint:

mαβ z̈βi + Hαiβj(rn) zβj − ∂q(rn)/∂rαi = 0,   Hαiβj(rn) = ∂²uAA(rn)/∂rαi∂rβj

SLIDE 30

What are the Models?

Residual

R(RN(θ), zn) = lim_{τ→∞} τ⁻¹ ∫₀^τ [ zαi GαA MAB R̈Bi + zαi GαB ∂U(RN, θ)/∂RBi − zαi GαB FBi ] dt

Theorem

(Under suitable smoothness conditions) the error in the observables due to the CG approximation is, ∀ θ ∈ Θ,

ε(θ) = ⟨q⟩ − Q(θ) = R(RN(θ), zn) + ∆ ≈ R(RN(θ), zn)

where ∆ is a remainder of higher order in ‖rn − RN‖.

SLIDE 32

Information Entropy

Suppose

⟨Q(rn)⟩ = ∫_Γ ρ(rn) q(rn) drn,   ⟨Q(RN(θ))⟩ = ∫_Γ ρ(rn) q(G(rn(θ))) drn,   with q(rn) = log ρ(rn)

Then

⟨Q(rn)⟩ − ⟨Q(RN(θ))⟩ = DKL( ρ(rn) ‖ ρ(G(rn(θ))) ) = R(RN(θ), zn) = ∫ ρ(ω) log [ ρ(ω)/ρ(G(ω)) ] dω

SLIDE 33
3. Bayesian Model Calibration, Validation, and Prediction

“The essence of ‘honesty’ or ‘objectivity’ demands that we take into account all of the evidence we have, not just some arbitrarily chosen subset of it.”

−E.T. Jaynes, 2003

SLIDE 34

Climbing the Prediction Pyramid

(Scenarios S, observations y, and the QoI at successive levels of the pyramid.)

Prior:  π(θ)

Calibration (Sc, yc):  π(θ|yc) = π(yc|θ) π(θ) / π(yc)

Validation (Sv, yv):  π(θ|yv, yc) = π(yv|θ, yc) π(θ|yc) / π(yv|yc)

Prediction (Sp, QoI):  π(Q) = π(Q|θ, Sv, Sc)

SLIDE 35

Basic Ideas:

  • Use statistical inverse methods based on Bayes’ rule to calibrate parameters:
    π(θ|yc) = π(yc|θ) π(θ) / π(yc)
  • Design validation experiments to challenge model assumptions and inform the model of the QoIs:
    π(θ|yv, yc) = π(yv|θ, yc) π(θ|yc) / π(yv|yc)
  • Is the model “valid” (not invalid) for the validation QoI (observable) given the data and predictions π(Qvk|yvk), π(Q|yc)?
  • Solve the forward problem for the QoI (not observable) using the validated parameters:
    π(Q) = ∫ π(Q|θ, yvk, yvk−1, . . . , yc) dθ
  • Compute the quantity of uncertainty in π(Q)
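The calibration step can be sketched end-to-end on a toy problem (a hypothetical linear model y = θx with Gaussian noise; a parameter grid stands in for the MCMC machinery the lecture’s framework would use):

```python
import numpy as np

# Grid-based Bayesian calibration: posterior ~ likelihood * prior.
rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 0.1
x = np.linspace(0.1, 1.0, 10)
y_c = theta_true * x + sigma * rng.normal(size=x.size)  # calibration data

theta = np.linspace(0.0, 4.0, 401)          # parameter grid
prior = np.ones_like(theta)                 # uniform prior (unnormalized)
# Gaussian log-likelihood pi(y_c | theta), evaluated on the grid
loglik = np.array([-0.5 * np.sum((y_c - t * x) ** 2) / sigma**2
                   for t in theta])
post = prior * np.exp(loglik - loglik.max())
post /= post.sum() * (theta[1] - theta[0])  # normalize (the "evidence")

theta_map = theta[np.argmax(post)]          # posterior mode
print(abs(theta_map - theta_true) < 0.25)   # True: peak near the truth
```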

SLIDE 40

Basic Ideas (with the key questions at each step):

  • Use statistical inverse methods based on Bayes’ rule to calibrate parameters:
    π(θ|yc) = π(yc|θ) π(θ) / π(yc)
    What is the likelihood function? What are the priors? How does one compute the posterior?
  • Design validation experiments to challenge model assumptions and inform the model of the QoIs:
    π(θ|yv, yc) = π(yv|θ, yc) π(θ|yc) / π(yv|yc)
    Is the validation experiment well chosen? Has it resulted in an information gain over the calibration?
  • Is the model “valid” (not invalid) for the validation QoI (observable) given the data and predictions π(Qvk|yvk), π(Q|yc)?
    What is the criterion for “validity” of a model?
  • Solve the forward problem for the QoI (not observable) using the validated parameters:
    π(Q) = ∫ π(Q|θ, yvk, yvk−1, . . . , yc) dθ
    How do we solve the forward problem?
  • Compute the quantity of uncertainty in π(Q):
    How do we “quantify” uncertainty?

SLIDE 41

The Prior

We seek a logical measure H(p) of the amount of uncertainty in a probability distribution p = {p1, p2, . . . , pn}, pi = p(xi), satisfying:

1. H(p) ∈ R
2. H is continuous
3. “Common sense”: H(1/n, 1/n, . . . , 1/n) increases with n
4. Consistency

Shannon’s Theorem

The only function satisfying these four logical desiderata is the information entropy

H(p) = − Σ_{i=1}^{n} pi log pi     (or − ∫ p log(p/m) dx)

Moreover, the actual probability p maximizes H(p) subject to constraints imposed by available information.

Relative Entropy

Given two pdfs p and q, the relative entropy is given by the Kullback-Leibler divergence,

DKL(p ‖ q) = ∫ p(x) log [ p(x)/q(x) ] dx = −H(p) + H(p, q)
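The discrete forms of H(p) and DKL can be checked directly; a minimal sketch (natural logarithm, illustrative distributions):

```python
import math

# Discrete information entropy H(p) and KL divergence D_KL(p || q),
# matching the slide's formulas (natural log).
def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25] * 4
peaked = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))                     # log 4 ~ 1.3863 (maximal)
print(entropy(peaked) < entropy(uniform))   # True: less uncertainty
print(kl(uniform, uniform))                 # 0.0: KL vanishes when p = q
```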

SLIDE 42

The Prior

Maximize H(p) subject to prior-information constraints.

Given the mean ⟨x⟩:

L(p, λ) = H(p) − λ0( Σ_{i=1}^{n} pi − 1 ) − λ1( Σ_{i=1}^{n} pi xi − ⟨x⟩ )

⇒ π(θ) = (1/⟨x⟩) exp{ −x/⟨x⟩ }   (exponential prior)

Given the mean ⟨x⟩ and variance σx²:

L(p, λ) = H(p) − λ0( Σ pi − 1 ) − λ1( Σ pi xi − ⟨x⟩ ) − λ2( Σ pi (xi − ⟨x⟩)² − σx² )

⇒ π(θ) = (1/(√(2π) σx)) exp{ −(x − ⟨x⟩)²/(2σx²) }   (Gaussian prior)

  • E. T. Jaynes (1988)
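The maximum-entropy claim behind the Gaussian prior can be verified numerically; a sketch comparing grid-based differential entropies of three unit-variance, zero-mean densities (illustrative, not from the lecture):

```python
import numpy as np

# Among distributions with the same mean and variance, the Gaussian
# should have the largest differential entropy. Compare Gaussian,
# Laplace, and uniform, all with mean 0 and variance 1, on a grid.
x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]

def diff_entropy(p):
    p = p / (p.sum() * dx)                  # normalize on the grid
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * dx

gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
b = 1 / np.sqrt(2)                          # Laplace scale for variance 1
laplace = np.exp(-np.abs(x) / b) / (2 * b)
w = np.sqrt(12.0)                           # uniform width for variance 1
uniform = np.where(np.abs(x) <= w / 2, 1 / w, 0.0)

hs = {name: diff_entropy(p) for name, p in
      [("gauss", gauss), ("laplace", laplace), ("uniform", uniform)]}
print(max(hs, key=hs.get))  # gauss
```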

SLIDE 43

Determining Calibration Priors: Bonds

Bond equilibrium distance R0:  ⟨R0⟩ = 2.5219,  σ²R0 = 4.1097 × 10⁻³
Bond spring constant:  kR = kBT/(2σ²R0) = 72.5264

Equilibrium angle θ0:  ⟨θ0⟩ = 105.5117,  σ²θ0 = 192.8262
Angle spring constant:  kθ = kBT/(2σ²θ0) = 1.5458 × 10⁻³

SLIDE 44

The Likelihood Function

R.A. Fisher, 1922: “The likelihood that any parameter should have any assigned value is proportional to the probability that if this were true the totality of all observations should be that observed.”

Consider n i.i.d. random observables y1, y2, . . . , yn. For each sample,

π(yi|θ) = p(yi − di(θ))

For many samples,

π(y1, y2, . . . , yn|θ) = Π_{i=1}^{n} π(yi|θ)

Then the log-likelihood is

Ln(θ) = Σ_{i=1}^{n} log π(yi|θ)
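A sketch of Ln(θ) for i.i.d. Gaussian noise, with a toy constant model di(θ) = θ (an assumption for illustration, not the lecture’s model):

```python
import math

# Log-likelihood L_n(theta) = sum_i log pi(y_i | theta) for Gaussian
# noise with standard deviation sigma and toy model d_i(theta) = theta.
def log_likelihood(theta, y, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (yi - theta) ** 2 / (2 * sigma**2) for yi in y)

y = [1.9, 2.1, 2.0, 2.2]
# For a Gaussian likelihood the maximizer is the sample mean:
best = max((log_likelihood(t / 100, y), t / 100)
           for t in range(0, 401))[1]
print(best)  # 2.05 (= mean of y)
```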

SLIDE 45

The Model Evidence and Model Plausibilities: Which model is “best”?

M = set of parametric model classes = {P1, P2, . . . , Pm}. Each Pj has its own likelihood and parameters θj.

Bayes’ rule in expanded form:

π(θj|y, Pj, M) = π(y|θj, Pj, M) π(θj|Pj, M) / π(y|Pj, M),   1 ≤ j ≤ m

model evidence:  π(y|Pj, M) = ∫ π(y|θj, Pj, M) π(θj|Pj, M) dθj

Now apply Bayes’ rule to the evidence:

ρj = π(Pj|y, M) = π(y|Pj, M) π(Pj|M) / π(y|M) = the posterior model plausibility

Σ_{j=1}^{m} ρj = (1/π(y|M)) Σ_{j=1}^{m} π(y|Pj, M) π(Pj|M) = 1
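The evidence integrals and plausibilities ρj can be approximated on a grid; a toy two-model comparison (hypothetical data and models, not the lecture’s):

```python
import numpy as np

# Posterior model plausibilities from grid-based evidence integrals.
# Data from y = theta*x; P1 fits a constant, P2 fits a slope.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 1.5 * x + 0.05 * rng.normal(size=x.size)
sigma = 0.05

def evidence(predict, grid):
    """pi(y|P): likelihood averaged against a uniform prior on the grid."""
    liks = [np.exp(-0.5 * np.sum((y - predict(t, x)) ** 2) / sigma**2)
            for t in grid]
    return np.mean(liks)

grid = np.linspace(-3, 3, 601)
ev = np.array([evidence(lambda t, x: t + 0 * x, grid),   # P1: constant
               evidence(lambda t, x: t * x, grid)])      # P2: linear
rho = ev * 0.5 / np.sum(ev * 0.5)    # equal prior plausibilities pi(Pj|M)
print(rho[1] > rho[0])  # True: the linear model is far more plausible
```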

SLIDE 46
4. The Prediction Process: Traveling up the Prediction Pyramid

[Prediction pyramid: calibration scenarios Sc (molecular unit) at the base, validation scenarios Sv (polymer chains and crosslinks = RPCs) above, and the prediction scenario Sp at the top, with QoI Q = total energy per unit volume]

SLIDE 47

SFIL Coarse Graining

Constituents of Etch Barrier

Monomer 1 Monomer 2 Crosslinker Initiator

[Chemical structure diagrams of the four constituents]

Farrell and Oden, 2013

SLIDE 50

SFIL Calibration Scenarios: Sc

Sc1 Sc2 Sc3

SLIDE 51

SFIL Coarse Graining

AA System: 827 atoms, 503 parameters

CG System: 61 particles

SLIDE 52

SFIL Validation Scenario: Sv

Series of scenarios increasing in size: Sv,1, Sv,2, Sv,3

For each scenario compute the QoI:

Q = ∫_{ΓAA} ρ(rn) uAA(rn) drn;   Qv,k(θ) = ∫_{ΓCG,k} ρ(RN) UCG(RN; θ) dRN

ρ(rn) ∝ exp{ −β u(rn) }

SLIDE 53

SFIL Validation Scenario: Sv

Series of scenarios increasing in size

Sv,1 Sv,2 Sv,3

Compare the computed QoI to AA data: if

DKL( π(uAA|Sv) ‖ π(Qv) ) < γtol

the model is considered validated.

SLIDE 54

SFIL Model Classes

Model | # of Parameters
P1 | 12
P2 | 18
P3 | 30
P4 | 32
P5 | 44
P6 | 50
P7 | 62
P8 | 96
P9 | 108
P10 | 114
P11 | 126
P12 | 128
P13 | 140
P14 | 146
P15 | 158

(Each model retains a different combination of bond, angle, dihedral, and non-bonded interaction terms.)

SLIDE 55

Sensitivity Analysis

  • PIRT (Phenomena Identification and Ranking Table)
  • Importance measures (Hora and Iman, 1995)
  • Correlation ratios (McKay, 1995)
  • Sensitivity analysis (Saltelli, Chan, Scott, 2000; Saltelli et al., 2008)
  • Variance-based:  S_{i1,...,ik} = V_{i1,...,ik}(Y) / V(Y)
  • Entropy-based:  KLi(p1 ‖ p0) = ∫_{−∞}^{∞} p1(y(θ1, θ2, . . . , θ̄i, . . . , θm)) log [ p1(y(θ1, θ2, . . . , θ̄i, . . . , θm)) / p0(y(θ1, θ2, . . . , θi, . . . , θm)) ] dy, with θ̄i held fixed
  • Scatter plots, etc.

Saltelli, A., et al. (2001); Auder, B. and Iooss, B. (2009); Chen, W., et al. (2005)

SLIDE 56

Sensitivity Analysis: Variance-Based Method

Y(θ) = model output (e.g., Y(θ) = U(·; θCG))

V(Y) = output variance = E(Y²) − E²(Y)

V(Y) = Vθ∼i[ Eθi(Y|θ∼i) ] + Eθ∼i[ Vθi(Y|θ∼i) ]

STi = 1 − Vθ∼i[ Eθi(Y|θ∼i) ] / V(Y)   (total-effect sensitivity index)

Example: polyethylene chain with 24 beads

Saltelli, A., et al. (2001)
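The total-effect index STi is commonly estimated by Monte Carlo; a sketch using the Jansen estimator on a toy additive model (an illustration, not the lecture’s polyethylene example):

```python
import numpy as np

# Monte Carlo total-effect index S_T1 via the Jansen estimator:
# S_T1 = E[(f(A) - f(A with x1 from B))^2] / (2 V(Y)).
# Toy additive model Y = a*x1 + b*x2, independent uniform inputs,
# for which S_T1 = a^2 / (a^2 + b^2) exactly.
rng = np.random.default_rng(0)
a_coef, b_coef = 3.0, 1.0
f = lambda X: a_coef * X[:, 0] + b_coef * X[:, 1]

n = 200_000
A = rng.uniform(size=(n, 2))
B = rng.uniform(size=(n, 2))
AB0 = A.copy()
AB0[:, 0] = B[:, 0]            # resample only x1

fA = f(A)
ST1 = np.mean((fA - f(AB0)) ** 2) / (2 * np.var(fA))
print(round(ST1, 1))  # 0.9 = 9/(9+1) for this model
```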

SLIDE 57

Sensitivity Analysis: Monte Carlo Scatterplots

SLIDE 58

Sensitivity Analysis: Comparison

The sensitivity indices and scatterplots show that the dihedral parameters are relatively unimportant; but just how unimportant are they?

SLIDE 60

Occam’s Razor

Principle of Occam’s Razor: among competing theories that lead to the same prediction, the one that relies on the fewest assumptions is the best. When choosing among a set of models: the simplest valid model is the best choice.

  • simple ⇒ fewest parameters
  • valid ⇒ passes the Bayesian validation test

How do we choose a model that adheres to this principle?

SLIDE 61

The Occam-Plausibility Algorithm

START: Identify a set of possible models, M.

SENSITIVITY ANALYSIS: Eliminate models with parameters to which the model output is insensitive, giving M̄ = {P̄1, ..., P̄m}.

OCCAM STEP: Choose the model(s) in the lowest Occam category, M* = {P*1, ..., P*m}.

CALIBRATION STEP: Calibrate all models in M*.

PLAUSIBILITY STEP: Compute plausibilities and identify the most plausible model, P*j.

VALIDATION STEP: Submit P*j to the validation test.

Is P*j valid? If yes, use the validated parameters to predict the QoI. If no: does P*j have the most parameters in M̄? If no, take the ITERATIVE OCCAM STEP (choose the models in the next Occam category and return to calibration); if yes, identify a new set of possible models and start again.

J.T. Oden Belytschko Lecture October 2014 45 / 93
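The flowchart logic can be sketched as a small control loop. The helper callables below are hypothetical stand-ins for the sensitivity, calibration, plausibility, and validation machinery described on the surrounding slides:

```python
# Schematic sketch of the Occam-Plausibility Algorithm (OPAL) control flow.
# All helper functions are hypothetical placeholders, not real implementations.

def opal(models, occam_category, calibrate, plausibility, is_valid):
    """models: candidates already screened by sensitivity analysis.
    occam_category(m) -> int; fewer parameters means a lower category."""
    categories = sorted({occam_category(m) for m in models})
    for cat in categories:                        # OCCAM / ITERATIVE OCCAM STEP
        current = [m for m in models if occam_category(m) == cat]
        for m in current:                         # CALIBRATION STEP
            calibrate(m)
        best = max(current, key=plausibility)     # PLAUSIBILITY STEP
        if is_valid(best):                        # VALIDATION STEP
            return best                           # use validated params for QoI
    return None   # every category exhausted: identify a new set of models

# Toy run: three "models"; only the 6-parameter one passes validation.
models = [{"name": "P1", "n": 2}, {"name": "P5", "n": 4}, {"name": "P10", "n": 6}]
chosen = opal(models,
              occam_category=lambda m: m["n"] // 2,
              calibrate=lambda m: None,
              plausibility=lambda m: 1.0,
              is_valid=lambda m: m["n"] == 6)
print(chosen["name"])   # P10
```

The design point is that validation is only ever attempted on the most plausible member of the cheapest remaining category, so expensive validation runs are minimized.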

slide-62
SLIDE 62
  • 5. Exploratory Example: Polyethylene

Consider as an example polyethylene, C24H50.

J.T. Oden Belytschko Lecture October 2014 46 / 93

slide-63
SLIDE 63

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 47 / 93

slide-64
SLIDE 64

Example: Occam-Plausibility Algorithm

For polyethylene, define the coarse-grained map: 2 carbons per bead.

J.T. Oden Belytschko Lecture October 2014 48 / 93

slide-65
SLIDE 65

Example: Occam-Plausibility Algorithm (cont)

Representation of the CG potential using OPLS form

Each model Pj includes a different combination of bond, angle, dihedral, and non-bonded terms, with the following parameter counts:

Model    Params
P1–P3    2
P4–P6    4
P7       6
P8       4
P9–P11   6
P12–P14  8
P15      10

J.T. Oden Belytschko Lecture October 2014 49 / 93

slide-66
SLIDE 66

The Occam-Plausibility Algorithm

✲ ✲ ❄ ❄ ❄ ✛ ✲

no

yes

no

yes

✛ ✲

START Identify a set of possible models, M SENSITIVITY ANALYSIS Eliminate models with parameters to which the model output is insensitive

¯ M = { ¯ P1, . . . , ¯ Pm}

OCCAM STEP Choose model(s) in the lowest Occam Category

M∗ = {P∗

1, . . . , P∗ m}

ITERATIVE OCCAM STEP Choose models in next Occam category CALIBRATION STEP Calibrate all models in M∗ Identify a new set

  • f possible

models Does P∗

j have the

most parameters in ¯ M PLAUSIBILITY STEP Compute plausibilities and identify most plausible model P∗

j

Use validated params to predict QoI Is P∗

j valid?

VALIDATION STEP Submit P∗

j to validation test J.T. Oden Belytschko Lecture October 2014 50 / 93

slide-67
SLIDE 67

Example: Occam-Plausibility Algorithm (cont)

Y = U(·; θ) = potential energy

J.T. Oden Belytschko Lecture October 2014 51 / 93

slide-68
SLIDE 68

Example: Occam-Plausibility Algorithm (cont)

(Model table repeated from Slide 65.)

J.T. Oden Belytschko Lecture October 2014 52 / 93

slide-69
SLIDE 69

Example: Occam-Plausibility Algorithm (cont)

Each candidate P̄j combines bond, angle, and dihedral terms with either an LJ 12-6 or an LJ 9-6 non-bonded term:

Model      Params
P̄1–P̄4    2
P̄5–P̄9    4
P̄10–P̄11  6

J.T. Oden Belytschko Lecture October 2014 53 / 93

slide-70
SLIDE 70

Example: Occam-Plausibility Algorithm (cont)

The reduced set M̄, sorted into Occam categories by number of parameters:

Model      Params  Category
P̄1–P̄4    2       1
P̄5–P̄9    4       2
P̄10–P̄11  6       3

J.T. Oden Belytschko Lecture October 2014 54 / 93

slide-71
SLIDE 71

The Occam-Plausibility Algorithm

✲ ✲ ❄ ❄ ❄ ✛ ✲

no

yes

no

yes

✛ ✲

START Identify a set of possible models, M SENSITIVITY ANALYSIS Eliminate models with parameters to which the model output is insensitive

¯ M = { ¯ P1, . . . , ¯ Pm}

OCCAM STEP Choose model(s) in the lowest Occam Category

M∗ = {P∗

1, . . . , P∗ m}

ITERATIVE OCCAM STEP Choose models in next Occam category CALIBRATION STEP Calibrate all models in M∗ Identify a new set

  • f possible

models Does P∗

j have the

most parameters in ¯ M PLAUSIBILITY STEP Compute plausibilities and identify most plausible model P∗

j

Use validated params to predict QoI Is P∗

j valid?

VALIDATION STEP Submit P∗

j to validation test J.T. Oden Belytschko Lecture October 2014 55 / 93

slide-72
SLIDE 72

Example: Occam-Plausibility Algorithm (cont)

(Occam-category table repeated from Slide 70.)

J.T. Oden Belytschko Lecture October 2014 56 / 93

slide-73
SLIDE 73

The Occam-Plausibility Algorithm

✲ ✲ ❄ ❄ ❄ ✛ ✲

no

yes

no

yes

✛ ✲

START Identify a set of possible models, M SENSITIVITY ANALYSIS Eliminate models with parameters to which the model output is insensitive

¯ M = { ¯ P1, . . . , ¯ Pm}

OCCAM STEP Choose model(s) in the lowest Occam Category

M∗ = {P∗

1, . . . , P∗ m}

ITERATIVE OCCAM STEP Choose models in next Occam category CALIBRATION STEP Calibrate all models in M∗ Identify a new set

  • f possible

models Does P∗

j have the

most parameters in ¯ M PLAUSIBILITY STEP Compute plausibilities and identify most plausible model P∗

j

Use validated params to predict QoI Is P∗

j valid?

VALIDATION STEP Submit P∗

j to validation test J.T. Oden Belytschko Lecture October 2014 57 / 93

slide-74
SLIDE 74

Example: Occam-Plausibility Algorithm (cont)

Calibration:

π(θ*_j | y, P*_j, M*) = π(y | θ*_j, P*_j, M*) π(θ*_j | P*_j, M*) / π(y | P*_j, M*)

Here, y = potential energy of C6H14.

Plausibility:

ρ*_j = π(P*_j | y, M*) = π(y | P*_j, M*) π(P*_j | M*) / π(y | M*)

Model  Params  Plausibility
P*1    2       1
P*2    2
P*3    2
P*4    2

J.T. Oden Belytschko Lecture October 2014 58 / 93
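The plausibility ρ*_j is driven by the evidence π(y | P*_j, M*), the likelihood integrated against the parameter prior. A minimal sketch, using toy one-parameter Gaussian-mean models and grid integration (hypothetical data and priors, not the CG calibration):

```python
# Model plausibilities rho_j = pi(P_j | y, M) from grid-integrated evidences,
# with equal model priors. Data and both "models" are hypothetical examples.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.0, 0.5, size=20)            # synthetic observations

theta = np.linspace(-5.0, 5.0, 2001)
dtheta = theta[1] - theta[0]

def normal_pdf(t, s):
    return np.exp(-0.5 * (t / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def evidence(prior_pdf, sigma=0.5):
    # log-likelihood of the full data set at every grid value of theta
    ll = (-0.5 * ((y[None, :] - theta[:, None]) / sigma) ** 2
          - np.log(sigma * np.sqrt(2.0 * np.pi))).sum(axis=1)
    # evidence = integral of likelihood * prior over the parameter grid
    return (np.exp(ll) * prior_pdf(theta)).sum() * dtheta

ev1 = evidence(lambda t: normal_pdf(t, 0.1))  # P1: tight prior at 0, misses data
ev2 = evidence(lambda t: normal_pdf(t, 3.0))  # P2: broad prior covering the data

rho = np.array([ev1, ev2]) / (ev1 + ev2)      # equal model priors pi(Pj|M) = 1/2
print(rho)                                    # P2 dominates
```

Note how the evidence automatically penalizes a prior that wastes mass where the data are not: this is the Occam penalty that makes plausibility-based selection consistent with the razor.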

slide-75
SLIDE 75

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 59 / 93

slide-76
SLIDE 76

Example: Occam-Plausibility Algorithm (cont)

As a validation scenario, we consider C18H38 at T = 300 K in a canonical ensemble.

Validation:

π(θ*_1 | y_v, y_c) = π(y_v | θ*_1, y_c) π(θ*_1 | y_c) / π(y_v)

Here, y_v is the potential energy of the validation system. How well does the updated model reproduce the desired observable? Let

  • π(Q) = π(u_AA) ⇒ compare π(Q | θ*) = π(U(·; θ*)), with tolerance γ_tol,1 = 0.05 σ²_AA
  • Q = u_AA ⇒ compare E[π(Q | θ*)] = E[U(·; θ*)], with tolerance γ_tol,2 = 0.2 Q_AA

J.T. Oden Belytschko Lecture October 2014 60 / 93

slide-77
SLIDE 77

Example: Occam-Plausibility Algorithm (cont)

If we compare the distributions:

D_KL(π(Q_AA) ‖ π(Q_CG)) = 0.2435 σ²_AA > γ_tol,1,   where γ_tol,1 = 0.05 σ²_AA

If we compare the ensemble averages:

|Q_AA − E_{π_post^v}[π(Q_v | θ)]| = 0.67173 Q_AA > γ_tol,2,   where γ_tol,2 = 0.2 Q_AA

The model is invalid.

J.T. Oden Belytschko Lecture October 2014 61 / 93
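A minimal sketch of the two validation checks, assuming (purely for illustration) that the AA and CG observable densities are summarized by Gaussian fits, so D_KL has a closed form; the tolerances follow the slide's γ_tol,1 = 0.05 σ²_AA and γ_tol,2 = 0.2 Q_AA, and all summary statistics below are hypothetical:

```python
# Two-part validation check: KL divergence between observable densities and
# mismatch of ensemble averages, each against a preset tolerance. Gaussian
# summaries and the numeric values are assumptions for illustration.
import numpy as np

def kl_gauss(mu1, s1, mu0, s0):
    """Closed-form KL( N(mu1, s1^2) || N(mu0, s0^2) )."""
    return np.log(s0 / s1) + (s1**2 + (mu1 - mu0)**2) / (2 * s0**2) - 0.5

# hypothetical summary statistics of the all-atom "truth" and a CG candidate
mu_AA, s_AA = 10.0, 1.0
mu_CG, s_CG = 10.1, 1.1

gamma1 = 0.05 * s_AA**2            # tolerance on D_KL, in units of sigma_AA^2
gamma2 = 0.20 * abs(mu_AA)         # tolerance on the ensemble average

test1 = kl_gauss(mu_AA, s_AA, mu_CG, s_CG) <= gamma1   # distribution check
test2 = abs(mu_AA - mu_CG) <= gamma2                   # mean check

model_not_invalid = test1 and test2
print(model_not_invalid)
```

Both checks must pass for the model to be declared "not invalid"; failing either one sends OPAL to the next Occam category.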

slide-78
SLIDE 78

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 62 / 93

slide-79
SLIDE 79

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 63 / 93

slide-80
SLIDE 80

Example: Occam-Plausibility Algorithm (cont)

(Occam-category table repeated from Slide 70.)

J.T. Oden Belytschko Lecture October 2014 64 / 93

slide-81
SLIDE 81

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 65 / 93

slide-82
SLIDE 82

Example: Occam-Plausibility Algorithm (cont)

Calibration:

π(θ*_j | y, P*_j, M*) = π(y | θ*_j, P*_j, M*) π(θ*_j | P*_j, M*) / π(y | P*_j, M*)

Here, y = potential energy of C6H14.

Plausibility:

ρ*_j = π(P*_j | y, M*) = π(y | P*_j, M*) π(P*_j | M*) / π(y | M*)

Model  Params  Plausibility
P*1    4       3.7891 × 10⁻⁷
P*2    4       0.3420
P*3    4       0.6580

J.T. Oden Belytschko Lecture October 2014 66 / 93

slide-83
SLIDE 83

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 67 / 93

slide-84
SLIDE 84

Example: Occam-Plausibility Algorithm (cont)

As a validation scenario, we consider C18H38 at T = 300 K in a canonical ensemble.

Validation:

π(θ*_3 | y_v, y_c) = π(y_v | θ*_3, y_c) π(θ*_3 | y_c) / π(y_v)

Here, y_v is the potential energy of the validation system. How well does the updated model reproduce the desired observable? Let

  • π(Q) = π(u_AA) ⇒ compare π(Q | θ*) = π(U(·; θ*)), with tolerance γ_tol,1 = 0.05 σ²_AA
  • Q = u_AA ⇒ compare E[π(Q | θ*)] = E[U(·; θ*)], with tolerance γ_tol,2 = 0.2 Q_AA

J.T. Oden Belytschko Lecture October 2014 68 / 93

slide-85
SLIDE 85

Example: Occam-Plausibility Algorithm (cont)

If we compare the distributions:

D_KL(π(Q_AA) ‖ π(Q_CG)) = 0.2084 σ²_AA > γ_tol,1,   where γ_tol,1 = 0.05 σ²_AA

If we compare the ensemble averages:

|Q_AA − E_{π_post^v}[π(Q_v | θ)]| = 0.5731 Q_AA > γ_tol,2,   where γ_tol,2 = 0.2 Q_AA

The model is invalid.

J.T. Oden Belytschko Lecture October 2014 69 / 93

slide-86
SLIDE 86

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 70 / 93

slide-87
SLIDE 87

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 71 / 93

slide-88
SLIDE 88

Example: Occam-Plausibility Algorithm (cont)

(Occam-category table repeated from Slide 70.)

J.T. Oden Belytschko Lecture October 2014 72 / 93

slide-89
SLIDE 89

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 73 / 93

slide-90
SLIDE 90

Example: Occam-Plausibility Algorithm (cont)

Calibration:

π(θ*_j | y, P*_j, M*) = π(y | θ*_j, P*_j, M*) π(θ*_j | P*_j, M*) / π(y | P*_j, M*)

Here, y = potential energy of C6H14.

Plausibility:

ρ*_j = π(P*_j | y, M*) = π(y | P*_j, M*) π(P*_j | M*) / π(y | M*)

Model  Params  Plausibility
P̄10   6       0.5
P̄11   6       0.5

J.T. Oden Belytschko Lecture October 2014 74 / 93

slide-91
SLIDE 91

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 75 / 93

slide-92
SLIDE 92

Example: Occam-Plausibility Algorithm (cont)

If we compare the distributions:

D_KL(π(Q_AA) ‖ π(Q_CG)) = 0.0452 σ²_AA < γ_tol,1,   where γ_tol,1 = 0.05 σ²_AA

If we compare the ensemble averages:

|Q_AA − E_{π_post^v}[π(Q_v | θ)]| = 0.1721 Q_AA < γ_tol,2,   where γ_tol,2 = 0.2 Q_AA

The model is NOT invalid.

J.T. Oden Belytschko Lecture October 2014 76 / 93

slide-93
SLIDE 93

Example: Occam-Plausibility Algorithm (cont)

How do the observables change as we move through the Iterative Occam Step?

J.T. Oden Belytschko Lecture October 2014 77 / 93

slide-94
SLIDE 94

The Occam-Plausibility Algorithm

(Occam-Plausibility Algorithm flowchart, repeated; see Slide 61.)

J.T. Oden Belytschko Lecture October 2014 78 / 93

slide-95
SLIDE 95

Work in Progress: PMMA

(Structure of the PMMA monomer.) One molecule has 15 atoms and 72 parameters.

J.T. Oden Belytschko Lecture October 2014 79 / 93

slide-96
SLIDE 96

Work in Progress: PMMA

(Structure of the PMMA monomer, repeated.)

J.T. Oden Belytschko Lecture October 2014 80 / 93

slide-97
SLIDE 97

CG Calibration Scenario

Candidate CG models (combinations of bond, angle, LJ 9-6, and LJ 12-6 terms) and their parameter counts:

Model  Params
P1     9
P2     9 (rigid)
P3     13
P4     9
P5     9 (rigid)
P6     13
P7     5
P8     5 (rigid)
P9     9

J.T. Oden Belytschko Lecture October 2014 81 / 93

slide-98
SLIDE 98

The polymerization process (KMC)

10x10x10 nm

J.T. Oden Belytschko Lecture October 2014 82 / 93

slide-99
SLIDE 99

Continuum Models Calibration Scenario

Model                       # of Parameters
P1: Saint Venant-Kirchhoff  2
P2: Neo-Hookean             2
P3: Mooney-Rivlin           3

J.T. Oden Belytschko Lecture October 2014 83 / 93

slide-100
SLIDE 100

Continuum Models Calibration Scenario

Initial Configuration / Equilibrated Configuration

J.T. Oden Belytschko Lecture October 2014 84 / 93

slide-101
SLIDE 101

Continuum Models Calibration Scenario

(PMMA Lattice under biaxial deformation)

J.T. Oden Belytschko Lecture October 2014 85 / 93

slide-102
SLIDE 102

Continuum Models Calibration Scenario

Hyperelastic Model P2: Compressible Neo-Hookean model

W = C1 (I1 − 3) − 2 C1 ln √I3 + C2 (√I3 − 1)²

Macromodel parameters: θ2 = (C1, C2). Under biaxial deformation with λ1 = λ2, the strain energy reduces to W = W(λ1, θ2).

Observational data supplied by the CG model are used to calibrate the continuum models.

(Plot: strain energy W [GPa] versus stretch λ1 ∈ [1, 1.5].)

J.T. Oden Belytschko Lecture October 2014 86 / 93
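The strain-energy function can be coded directly from the principal stretches. The moduli values below are placeholders, and fixing λ3 = 1 in the usage example is an illustrative choice only (the slide leaves the out-of-plane stretch unspecified):

```python
# Compressible Neo-Hookean strain energy W(lambda1, lambda2, lambda3; C1, C2)
# from slide 102. C1, C2 values are hypothetical placeholders.
import math

def W_neo_hookean(l1, l2, l3, C1, C2):
    """W = C1 (I1 - 3) - 2 C1 ln(sqrt(I3)) + C2 (sqrt(I3) - 1)^2."""
    I1 = l1**2 + l2**2 + l3**2       # first invariant of C = F^T F
    J = l1 * l2 * l3                 # J = sqrt(I3), the volume ratio
    return C1 * (I1 - 3.0) - 2.0 * C1 * math.log(J) + C2 * (J - 1.0)**2

# Equibiaxial deformation lambda1 = lambda2, with lambda3 = 1 for illustration.
C1, C2 = 0.5, 1.0                    # hypothetical moduli [GPa]
print(W_neo_hookean(1.0, 1.0, 1.0, C1, C2))   # 0.0 at the undeformed state
print(W_neo_hookean(1.2, 1.2, 1.0, C1, C2) > 0.0)
```

Evaluating W over a grid of λ1 values is exactly what the calibration step compares against the CG-supplied strain-energy data.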

slide-103
SLIDE 103
  • 6. Model Inadequacy - Misspecified Models

Suppose µ* ∉ P(Θ). Then the density g(y) ∉ P(Θ): the model is inadequate (misspecified).

Let there exist a θ0 such that

θ0 = argmin_{θ∈Θ} D_KL(g(y) ‖ π(·|θ))

Then, under suitable smoothness conditions,

‖ π_n(y1, y2, ..., yn | ·) − N(θ̂_n, (1/n) V_{θ0}) ‖_TV → 0   as n → ∞

where θ̂_n is the MLE and

(V_{θ0})_{ij} = −E_g [ ∂² L_y(θ0) / ∂θi ∂θj ]

Kleijn and van der Vaart (2012); Freedman, D. (2006); Geyer (2003)

J.T. Oden Belytschko Lecture October 2014 87 / 93
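This convergence result (a misspecified Bernstein-von Mises theorem, per Kleijn and van der Vaart, 2012) can be illustrated numerically. In the toy setup below (not from the lecture), data come from a Laplace density g while the model class is Gaussian with unknown mean and fixed unit variance, so g ∉ P(Θ); the grid posterior nevertheless concentrates around the KL-minimizing parameter θ0, which here is the mean of g:

```python
# Posterior concentration under misspecification: Laplace data, Gaussian model.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
data = rng.laplace(loc=2.0, scale=1.0, size=n)   # true density g, not Gaussian

theta = np.linspace(0.0, 4.0, 4001)
dtheta = theta[1] - theta[0]

# misspecified model: N(theta, 1) likelihood, flat prior on the grid
loglike = np.array([-0.5 * np.sum((data - t) ** 2) for t in theta])
post = np.exp(loglike - loglike.max())
post /= post.sum() * dtheta                      # normalize the grid posterior

theta_hat = theta[np.argmax(post)]               # near theta0 = E_g[y] = 2
post_sd = np.sqrt(np.sum(post * dtheta * (theta - theta_hat) ** 2))
print(theta_hat, post_sd)                        # sd shrinks like 1/sqrt(n)
```

The posterior is "wrong" as a description of g, yet it is sharply peaked at the best-approximating parameter, which is exactly what the total-variation statement above asserts.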

slide-104
SLIDE 104

Lemma: θ0 maximizes the expected log-likelihood (it is the population-level maximum likelihood estimate).

Proof:

θ0 = argmin_{θ∈Θ} ∫_Y (g(y) log g(y) − g(y) log π(y|θ)) dy

   = argmin_{θ∈Θ} − ∫_Y g(y) log π(y|θ) dy

   = argmax_{θ∈Θ} ∫_Y g(y) log π(y|θ) dy

   = argmax_{θ∈Θ} E_g [log π(y|θ)]

J.T. Oden Belytschko Lecture October 2014 88 / 93

slide-105
SLIDE 105

Model Misspecification and Model Plausibility

Theorem 1 (Bayesian → Frequentist): If

π(y | Pi, M) = ∫_Θ π(y | θ, Pi, M) δ(θ − θ0) dθ = π(y | θ0, Pi, M)

and

ρ1 / ρ2 = [ π(y | θ0,1, P1, M) / π(y | θ0,2, P2, M) ] × O12,

then, if P1 is more plausible than P2 and O12 ≤ 1,

D_KL(g ‖ π(y | θ0,1, P1, M)) < D_KL(g ‖ π(y | θ0,2, P2, M))

(The converse, D_KL(1) < D_KL(2) ⇒ ρ1 > ρ2, holds only under special assumptions.)

J.T. Oden Belytschko Lecture October 2014 89 / 93

slide-106
SLIDE 106

Model Misspecification and Model Plausibility

Corollary

For given observational data y, let P1(θ1) be the only well-specified model in a set M of parametric models {P1(θ1), P2(θ2), ..., Pm(θm)}. Then:

a) P1(θ1) is the most plausible model in the set M:

ρ1 > ρk, k = 2, 3, ..., m,

b) there exists a θ* ∈ Θ1 such that

θ* = argmin_θ D_KL(g ‖ π(y|θ))

J.T. Oden Belytschko Lecture October 2014 90 / 93

slide-107
SLIDE 107

Conclusions

  • Bayes’ theorem provides a powerful framework for dealing with model validation and uncertainty quantification
  • The test of model validity can involve a sequence of statistical inverse problems for model parameters, each reflecting the projected influence of the QoI
  • Validation is the process of determining the level of confidence one has in the ability of the model to predict quantities of interest, based on the accuracy with which the model predicts specific observables to within preset tolerances
  • The concept of model plausibility provides a powerful tool for:
    1 determining potentials for CG models of atomistic systems
    2 choosing, among a class of models, those whose parameters are closest “in DKL” to the true distribution

J.T. Oden Belytschko Lecture October 2014 91 / 93

slide-108
SLIDE 108

Conclusions

  • Model inadequacy can be attributed to model misspecification: θ* ∉ P(Θ)
  • The calculation of model sensitivities due to variations in parameters can significantly reduce the number of relevant models for given outputs
  • Hierarchical categories of models based on numbers of parameters (the Occam categories), together with the evaluation of model plausibilities, provide a basis for an adaptive process for validation of parametric classes of coarse-grained models

J.T. Oden Belytschko Lecture October 2014 92 / 93

slide-109
SLIDE 109

Thank you!

J.T. Oden Belytschko Lecture October 2014 93 / 93