

SLIDE 1

Probabilistic modeling: a natural way to treat data

Hussein Rappel

University of Luxembourg h.rappel@gmail.com

February 12, 2019

The DRIVEN project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 811099.

SLIDE 2

[Figure: data points in the (x, y) plane with a fitted line y = ax + b; a slope triangle marks a]

Introduction to Gaussian Processes, Neil Lawrence

SLIDE 3

[Figure: data points in the (x, y) plane]

SLIDE 4

[Figure: data points in the (x, y) plane]

SLIDE 5

[Figure: data points in the (x, y) plane]

SLIDE 6

SLIDE 7

Each point can be written as the model plus a corruption:

y1 = a x1 + c + ω1
y2 = a x2 + c + ω2
y3 = a x3 + c + ω3

ω is the difference between the real world and the model, which can be represented by a probability distribution. We call ω noise!
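This data-generating view can be sketched in a few lines of Python; the slope, intercept, and noise scale below are illustrative assumptions, not values from the presentation:

```python
import random

# Hypothetical "true" model parameters and noise level (assumptions).
a, c = 2.0, 1.0
noise_scale = 0.1

random.seed(0)

def observe(x):
    """One measurement: the model value a*x + c corrupted by a noise draw omega."""
    omega = random.gauss(0.0, noise_scale)
    return a * x + c + omega

# Three observations y1, y2, y3 at three inputs.
xs = [1.0, 2.0, 3.0]
ys = [observe(x) for x in xs]
```

Each yi differs from the model line only through its own noise draw ωi, which is exactly the decomposition above.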

SLIDE 8

What if we have fewer observations than model parameters? Then the system is underdetermined.

SLIDE 9

[Figure: a single point in the (x, y) plane]

How can we fit the line y = ax + b, having only one point?

Introduction to Gaussian Processes, Neil Lawrence

SLIDE 10

[Figure: a single point in the (x, y) plane]

If b is fixed ⟹ a = (y − b)/x

SLIDE 11

[Figure: candidate lines through a single point in the (x, y) plane]

SLIDE 12

[Figure: a single point in the (x, y) plane]

b ∼ π1 ⟹ a ∼ π2
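A sketch of this implication: with one observed point (x, y), every prior draw of b fixes a = (y − b)/x, so a prior π1 on b induces a distribution π2 on a. The observed point and the prior below are hypothetical:

```python
import random

# One hypothetical observed point (x, y).
x_obs, y_obs = 2.0, 3.0

random.seed(1)

# Prior pi_1 on the intercept b: a standard normal (an assumption).
b_samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Each draw of b induces a slope a = (y - b) / x, giving samples from pi_2.
a_samples = [(y_obs - b) / x_obs for b in b_samples]

mean_a = sum(a_samples) / len(a_samples)  # close to (y_obs - 0) / x_obs = 1.5
```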

SLIDE 13

◮ This is called the Bayesian treatment.
◮ The model parameters are treated as random variables.

SLIDE 14

Bayesian perspective

Original belief → Observations → New belief

SLIDE 15

Bayesian formula (inverse probability)

π(x|y) = π(x) × π(y|x) / π(y)

posterior = prior × likelihood / evidence

y := observation
x := parameter
π(x) := original belief
π(y|x) := given by the mathematical model that relates y to x
π(y) := a normalizing constant

SLIDE 16

Bayesian formula (inverse probability)

π(x|y) ∝ π(x) × π(y|x)
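A minimal numerical reading of this proportionality: evaluate prior × likelihood on a parameter grid and normalize by the grid sum, which plays the role of π(y). The prior, noise level, and observation below are made-up illustrative numbers:

```python
import math

def normal_pdf(z, mu, s):
    """Gaussian density, used here for both the prior and the likelihood."""
    return math.exp(-(z - mu) ** 2 / (2 * s ** 2)) / (math.sqrt(2 * math.pi) * s)

grid = [i / 100 for i in range(-300, 301)]        # candidate parameter values x
prior = [normal_pdf(x, 0.0, 1.0) for x in grid]   # pi(x)
y = 1.2                                           # one observation, model y = x + noise
lik = [normal_pdf(y, x, 0.5) for x in grid]       # pi(y | x)

unnorm = [p * l for p, l in zip(prior, lik)]      # pi(x) * pi(y|x)
z = sum(unnorm)                                   # grid approximation of the evidence
posterior = [u / z for u in unnorm]               # sums to 1 on the grid
```

The posterior mode ends up between the prior mean 0 and the observation 1.2, pulled toward the more precise of the two.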

SLIDE 17

BI in computational mechanics

[Figure: stress–strain (σ–ε) curve]

SLIDE 18

Linear elasticity

σ = Eε

[Figure: stress–strain line through the origin with slope E]

SLIDE 19

Linear elasticity

y = Eε + ω,  Ω ∼ π_ω(ω)

Capital letters denote random variables.

SLIDE 20

Linear elasticity

[Figure: stress–strain (σ–ε) data with noise]

π_ω(ω) = (1 / (√(2π) s_ω)) exp(−ω² / (2 s_ω²))

◮ The noise PDF is identified through a calibration test.
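One way to read the calibration remark: measure a known reference value repeatedly and estimate s_ω from the residuals. The reference value, noise level, and sample size here are hypothetical:

```python
import math
import random

random.seed(3)

true_val = 2.0    # known reference quantity being measured (assumption)
s_true = 0.1      # actual noise std the calibration should recover

# Repeated calibration measurements of the same known quantity.
readings = [true_val + random.gauss(0.0, s_true) for _ in range(1000)]

# Residuals against the known value give an estimate of s_omega.
residuals = [r - true_val for r in readings]
s_omega_hat = math.sqrt(sum(e * e for e in residuals) / len(residuals))
```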

SLIDE 21

Linear elasticity

Bayes' formula:

π(E|y) = π(E) π(y|E) / π(y) = π(E) π(y|E) / k,  where k = π(y) is a constant

π(E|y) ∝ π(E) π(y|E)

SLIDE 22

Linear elasticity

y = Eε + ω,  Ω ∼ N(0, s_ω²)

SLIDE 23

Linear elasticity

π(y|E) = (1 / (√(2π) s_ω)) exp(−(y − Eε)² / (2 s_ω²))
SLIDE 24

Linear elasticity

Posterior:

π(E|y) ∝ exp(−(E − Ē)² / (2 s_E²)) × exp(−(y − Eε)² / (2 s_ω²))

where Ē and s_E are the mean and standard deviation of the Gaussian prior on E.
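Since both factors are Gaussian in E, the posterior is itself Gaussian with a closed form: precisions add, and the means combine precision-weighted. A sketch with illustrative numbers, not the presentation's data:

```python
# Prior on E and one stress measurement y = E*eps + omega; all values illustrative.
E_bar, s_E = 200.0, 20.0   # prior mean and std of E
s_w = 0.1                  # noise std s_omega
eps = 0.01                 # applied strain
y = 2.2                    # measured stress (consistent with E around 220)

# Product of two Gaussians in E: precisions add, means combine precision-weighted.
prec = 1.0 / s_E**2 + eps**2 / s_w**2
E_post_mean = (E_bar / s_E**2 + y * eps / s_w**2) / prec
E_post_std = prec ** -0.5
# E_post_mean lies between the prior mean 200 and the data value y/eps = 220,
# and E_post_std is smaller than the prior std s_E.
```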
SLIDE 25

Linear elasticity

◮ Prediction interval: an estimate of an interval in which a future observation will fall with a certain probability.
◮ Credible region: a region of a distribution in which a random variable lies with a certain probability.

SLIDE 26

Linear elasticity

◮ Increasing the number of observations/measurements makes us more confident in the identification result.
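For the Gaussian case this can be made quantitative: with n independent measurements the data precision grows linearly in n, so the posterior standard deviation of E shrinks roughly like 1/√n. A sketch with illustrative numbers:

```python
# Posterior std of E after n measurements (Gaussian prior, Gaussian noise).
# All numeric values are illustrative assumptions.
s_E = 20.0    # prior std of E
s_w = 0.1     # noise std s_omega
eps = 0.01    # strain applied in each measurement

def post_std(n):
    """Posterior std after n i.i.d. measurements: the data precision adds n times."""
    prec = 1.0 / s_E**2 + n * eps**2 / s_w**2
    return prec ** -0.5

stds = [post_std(n) for n in (1, 10, 100)]  # strictly decreasing with n
```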

SLIDE 27

Prior effect

◮ Increasing the number of observations/measurements decreases the effect of the prior.

SLIDE 28–32

Conclusion

◮ Probability is the natural way of dealing with uncertainties/unknowns (what Laplace calls "our ignorance").
◮ From the Bayesian perspective (inverse probability), the parameters are treated as random variables.
◮ The same logic can be used to model other kinds of uncertainties/unknowns, e.g. model uncertainty and material variability.
◮ In the Bayesian paradigm, our assumptions are clearly stated (e.g. the prior, the model, ...).
◮ As the number of observations/measurements increases, we become more certain of our identification results.

SLIDE 33

Acknowledgement

The DRIVEN project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 811099.

SLIDE 34

Some references

◮ P.S. marquis de Laplace, 1902. A philosophical essay on probabilities. Translated by F.W. Truscott and F.L. Emory, Wiley.
◮ E.T. Jaynes, 2003. Probability theory: The logic of science. Cambridge University Press.
◮ D.J. MacKay, 2003. Information theory, inference and learning algorithms. Cambridge University Press.
◮ A. Gelman et al., 2013. Bayesian data analysis. Chapman and Hall/CRC.
◮ Courses by N. Lawrence. http://inverseprobability.com/

References by the Legato team:

◮ H. Rappel et al., 2018. Bayesian inference to identify parameters in viscoelasticity. Mechanics of Time-Dependent Materials.
◮ H. Rappel et al., 2018. Identifying elastoplastic parameters with Bayes' theorem considering output error, input error and model uncertainty. Probabilistic Engineering Mechanics.
◮ H. Rappel et al., 2019. A tutorial on Bayesian inference to identify material parameters in solid mechanics. Archives of Computational Methods in Engineering.
◮ H. Rappel and L.A.A. Beex, 2019. Estimating fibres' material parameter distributions from limited data with the help of Bayesian inference. European Journal of Mechanics-A/Solids.
