

SLIDE 1

Bayesian Networks in Reliability

The Good, the Bad, and the Ugly Helge Langseth

Department of Computer and Information Science Norwegian University of Science and Technology

MMR’07

1 Helge Langseth Bayesian Networks in Reliability

SLIDE 2

Outline

1. Introduction
2. The Good: Why Bayesian Nets are popular
   - Mathematical properties
   - Making decisions
   - Applications
3. The Bad: Building complex quantitative models
   - The model building process
   - The quantitative part
   - Utility theory
4. The Ugly: Continuous variables
   - Introduction
   - Approximations
5. Concluding remarks

SLIDE 3

Introduction

A simple example: “Explosion”

The joint distribution P(E, L, G, X, C) over:
- E: Environment
- L: Leak
- G: GD failed
- X: Explosion
- C: Casualties

SLIDE 4

Introduction

A simple example: “Explosion”

The joint distribution P(E, L, G, X, C) over E: Environment, L: Leak, G: GD failed, X: Explosion, C: Casualties. The parents of X are pa(X) = {L, G}.

SLIDE 5

Introduction

A simple example: “Explosion”

The joint distribution P(E, L, G, X, C) over E: Environment, L: Leak, G: GD failed, X: Explosion, C: Casualties. The parents of X are pa(X) = {L, G}; the non-descendants of X are nd(X) = {E, L, G}.

SLIDE 6

Introduction

A simple example: “Explosion”

The joint distribution P(E, L, G, X, C) over E: Environment, L: Leak, G: GD failed, X: Explosion, C: Casualties, with pa(X) = {L, G} and nd(X) = {E, L, G}. Hence X ⊥⊥ E | {L, G}. Other d-separation rules: Jensen & Nielsen (2007).

SLIDE 7

Introduction

A simple example: “Explosion”

The joint distribution P(E, L, G, X, C) over E: Environment, L: Leak, G: GD failed, X: Explosion, C: Casualties, with pa(X) = {L, G} and nd(X) = {E, L, G}, so X ⊥⊥ E | {L, G}. The CPT P(G | pa(G)) is:

G   | E = hostile    | E = normal
yes | λH · τ/2       | λN · τ/2
no  | 1 − λH · τ/2   | 1 − λN · τ/2

  • Hence, P(X | E, L, G) = P(X | L, G)
  • P(E, L, G, X, C) = P(E) · P(L | E) · P(G | E, L) · P(X | E, L, G) · P(C | E, L, G, X)
                     = P(E) · P(L | E) · P(G | E) · P(X | L, G) · P(C | X)

Markov properties ⇔ Factorization property
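The factorization can be checked numerically. Below is a minimal Python sketch: all CPT numbers are invented for illustration; only the structure P(E)·P(L|E)·P(G|E)·P(X|L,G)·P(C|X) comes from the slide.

```python
from itertools import product

# Hypothetical CPTs for the "Explosion" example; 1 = hostile/fail/yes, 0 otherwise.
P_E = {1: 0.10, 0: 0.90}                    # P(E)
P_L = {(1, 1): 0.05, (1, 0): 0.01}          # P(L=1 | E)
P_G = {(1, 1): 0.02, (1, 0): 0.005}         # P(G=1 | E)
P_X = {(1, (1, 1)): 0.9, (1, (1, 0)): 0.3,  # P(X=1 | L, G)
       (1, (0, 1)): 0.0, (1, (0, 0)): 0.0}
P_C = {(1, 1): 0.4, (1, 0): 0.0}            # P(C=1 | X)

def bernoulli(table, value, cond):
    """Look up P(V=1 | cond) and complement for V=0."""
    p1 = table[(1, cond)]
    return p1 if value == 1 else 1.0 - p1

def joint(e, l, g, x, c):
    """P(E,L,G,X,C) via the factorization encoded by the DAG."""
    return (P_E[e]
            * bernoulli(P_L, l, e)
            * bernoulli(P_G, g, e)
            * bernoulli(P_X, x, (l, g))
            * bernoulli(P_C, c, x))

# A valid factorization must sum to one over all 2^5 configurations.
total = sum(joint(*cfg) for cfg in product([0, 1], repeat=5))
```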

SLIDE 8

The Good: Why Bayesian Nets are popular

Outline

1. Introduction
2. The Good: Why Bayesian Nets are popular
   - Mathematical properties
   - Making decisions
   - Applications
3. The Bad: Building complex quantitative models
   - The model building process
   - The quantitative part
   - Utility theory
4. The Ugly: Continuous variables
   - Introduction
   - Approximations
5. Concluding remarks

SLIDE 9

The Good: Why Bayesian Nets are popular Mathematical properties

What the mathematical foundation has to offer

- Intuitive representation: Almost defined as a "box diagram with formal meaning". A causal interpretation is natural in many cases.
- Efficient representation: The number of required parameters is reduced. If all variables are binary, the example requires 11 "local" parameters, compared to the 31 "global" parameters of the full joint.
- Efficient calculations: Any joint distribution P(xi, xj) or conditional distribution P(xk | xℓ, xm) can be calculated efficiently.
- Model estimation: Parameters (for a fixed structure) are estimated via EM; structure is estimated by discrete optimization techniques.
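The 11-versus-31 count follows directly from the graph: a binary variable with |pa| binary parents needs 2^|pa| free parameters, while the full joint over n binary variables needs 2^n − 1. A quick check for the "Explosion" example:

```python
# Parent sets of the "Explosion" DAG (from the slides).
parents = {"E": [], "L": ["E"], "G": ["E"], "X": ["L", "G"], "C": ["X"]}

# One free parameter per parent configuration: 1 + 2 + 2 + 4 + 2.
local_params = sum(2 ** len(pa) for pa in parents.values())

# The unrestricted joint over 5 binary variables.
global_params = 2 ** len(parents) - 1
```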

SLIDE 10

The Good: Why Bayesian Nets are popular Making decisions

Influence diagrams: The “Explosion” example revisited

[Figure: the Explosion BN (E, L, G, X, C) extended into an influence diagram with decision node SSM and utility nodes Cost1 and Cost2.]

SLIDE 11

The Good: Why Bayesian Nets are popular Making decisions

Influence diagrams: The “Explosion” example revisited

[Figure: the influence diagram further extended with F: Effectiveness of SSM, a Test interval decision, and utility node Cost3.]

SLIDE 12

The Good: Why Bayesian Nets are popular Applications

An application: Troubleshooting

SLIDE 13

The Good: Why Bayesian Nets are popular Applications

Underlying model

[Figure, system layer: TOP with C1–C4 and X1–X5.]

SLIDE 14

The Good: Why Bayesian Nets are popular Applications

Underlying model

[Figure: the system layer as above, extended with node E.]

SLIDE 15

The Good: Why Bayesian Nets are popular Applications

Underlying model

[Figure: an action layer A1–A5 added, connected to X1–X5.]

SLIDE 16

The Good: Why Bayesian Nets are popular Applications

Underlying model

[Figure: a result layer R1–R5 added, with parents among A1–A5 and C1–C4.]

SLIDE 17

The Good: Why Bayesian Nets are popular Applications

Underlying model

[Figure: a question layer added, with QS connected to C1, C2, X3, and X5.]

SLIDE 18

The Good: Why Bayesian Nets are popular Applications

Other applications

- Software reliability
- Modelling organizational factors (e.g., the SAM framework)
- Explicit models of dynamics (e.g., repairable systems, phased-mission systems, monitoring systems)

Some of these can be seen at the Bayes net sessions later today.

SLIDE 19

The Bad: Building complex quantitative models

Outline

1. Introduction
2. The Good: Why Bayesian Nets are popular
   - Mathematical properties
   - Making decisions
   - Applications
3. The Bad: Building complex quantitative models
   - The model building process
   - The quantitative part
   - Utility theory
4. The Ugly: Continuous variables
   - Introduction
   - Approximations
5. Concluding remarks

SLIDE 20

The Bad: Building complex quantitative models The model building process

Phases of the model building process

Step 0 – Decide what to model: Select the boundary for what to include in the model.
Step 1 – Define variables: Select the important variables in the domain.
Step 2 – The qualitative part: Define the graphical structure that connects the variables.
Step 3 – The quantitative part: Fix parameters to specify each P(xi | pa(xi)). This is the 'bad' part.
Step 4 – Verification: Verify the model.

SLIDE 21

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities. All parameters are required if no other assumptions can be made.

SLIDE 22

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free. Y is considered a function of its parents, e.g., {Y = fail} ⟺ {Z1 = fail} ∨ {Z2 = fail} ∨ … ∨ {Zm = fail}.

SLIDE 23

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free.
Noisy-OR relation: m + 1 conditional probabilities. [Figure: independent inhibitors Q1, …, Qm inserted between the Zi and Y.] Assume {Q1 = fail} ∨ … ∨ {Qm = fail} ⟹ {Y = fail}. For each Qi we have P(Qi = fail | Zi = fail) = qi and P(Qi = fail | Zi = ¬fail) = 0. "Leak probability": P(Y = fail | Q1 = … = Qm = ¬fail) = q0.
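The noisy-OR construction collapses to a closed-form CPT entry, P(Y = fail | z) = 1 − (1 − q0) · ∏_{i: zi = fail} (1 − qi), so only the m inhibitor probabilities and the leak are needed. A minimal sketch with illustrative q values:

```python
def noisy_or(q0, q, z):
    """P(Y = fail | parent states z); z[i] == 1 means Zi = fail.

    q0 is the leak probability, q[i] the inhibitor probability for Zi.
    """
    p_all_inhibited = 1.0 - q0          # no leak-induced failure
    for qi, zi in zip(q, z):
        if zi:                          # only failed parents can pass failure on
            p_all_inhibited *= 1.0 - qi
    return 1.0 - p_all_inhibited

q = [0.9, 0.8, 0.7]                     # illustrative inhibitor probabilities
```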

SLIDE 24

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free.
Noisy-OR relation: m + 1 conditional probabilities.
Logistic regression: From m + 1 to 2^m regression parameters. Y is the dependent variable in a logistic regression with the Zi as "covariates":

log( p_{z1,…,zm} / (1 − p_{z1,…,zm}) ) = η0 + Σ_j ηj zj + Σ_i Σ_j ηij zi zj + …
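A sketch of the main-effects-only variant (the m + 1 parameter end of the scale); the η values here are illustrative, not estimated:

```python
import math

def p_fail(eta0, eta, z):
    """P(Y = fail | z) from a main-effects logistic regression:
    logit(p) = eta0 + sum_j eta_j * z_j."""
    s = eta0 + sum(e * zj for e, zj in zip(eta, z))
    return 1.0 / (1.0 + math.exp(-s))

# Illustrative parameters: no intercept, both parents increase failure odds.
eta0, eta = 0.0, [1.0, 1.0]
```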

SLIDE 25

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free.
Noisy-OR relation: m + 1 conditional probabilities.
Logistic regression: From m + 1 to 2^m regression parameters.
IPF procedure: m + 1 marginal distributions, m CPRs. Find a joint probability table over Z1, …, Zm, Y with the given CPRs. Assume m = 1 and initialize p0(z, y) to fit the CPR; then iterate

p′k(z, y) ← p_{k−1}(z, y) · p(z) / Σ_y p_{k−1}(z, y)
pk(z, y) ← p′k(z, y) · p(y) / Σ_z p′k(z, y)
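The two IPF updates are alternating row and column rescalings of the joint table until both target marginals fit. A minimal 2×2 sketch; the target marginals and the initial table below are illustrative:

```python
def ipf(table, p_z, p_y, iters=50):
    """Iterative proportional fitting of a 2x2 joint p(z, y):
    alternately rescale rows to match p(z) and columns to match p(y)."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for z in range(2):                       # row step: fit marginal p(z)
            s = t[z][0] + t[z][1]
            t[z][0] *= p_z[z] / s
            t[z][1] *= p_z[z] / s
        for y in range(2):                       # column step: fit marginal p(y)
            s = t[0][y] + t[1][y]
            t[0][y] *= p_y[y] / s
            t[1][y] *= p_y[y] / s
    return t

# Illustrative: uniform start, target marginals p(z) = (0.3, 0.7), p(y) = (0.6, 0.4).
fitted = ipf([[0.25, 0.25], [0.25, 0.25]], [0.3, 0.7], [0.6, 0.4])
```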

SLIDE 26

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free.
Noisy-OR relation: m + 1 conditional probabilities.
Logistic regression: From m + 1 to 2^m regression parameters.
IPF procedure: m + 1 marginal distributions, m CPRs.
Special structures: From 2 to 2^m conditional probabilities. Y defined, e.g., by rules such as "P(Y = fail | Z1 = fail, …, Zm = fail) = p1, but P(Y = fail | z1, …, zm) = p2 for all other configurations z".

SLIDE 27

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free.
Noisy-OR relation: m + 1 conditional probabilities.
Logistic regression: From m + 1 to 2^m regression parameters.
IPF procedure: m + 1 marginal distributions, m CPRs.
Special structures: From 2 to 2^m conditional probabilities.
Qualitative BNs: No quantitative parameters. Only qualitative effects are modelled (and later calculated): from m to 2^m qualitative effects ('+', '0' or '−').

SLIDE 28

The Bad: Building complex quantitative models The quantitative part

The quantitative part: Defining P(y|pa (y))

Consider a binary node with m binary parents. The CPT P(y | z1, …, zm) contains 2^m parameters.

[Figure: node Y with parents Z1, Z2, …, Zm.]

Naïve approach: 2^m conditional probabilities.
Deterministic relations: Parameter free.
Noisy-OR relation: m + 1 conditional probabilities.
Logistic regression: From m + 1 to 2^m regression parameters.
IPF procedure: m + 1 marginal distributions, m CPRs.
Special structures: From 2 to 2^m conditional probabilities.
Qualitative BNs: No quantitative parameters.
Alternative solutions: No conditional probabilities. Other frameworks (like vines), or parameter estimation.

SLIDE 29

The Bad: Building complex quantitative models Utility theory

Utility Theory

SLIDE 30

The Bad: Building complex quantitative models Utility theory

Utility Theory

[Figure: the Pareto boundary in the (Utility 1, Utility 2) plane.]

SLIDE 31

The Ugly: Continuous variables

Outline

1. Introduction
2. The Good: Why Bayesian Nets are popular
   - Mathematical properties
   - Making decisions
   - Applications
3. The Bad: Building complex quantitative models
   - The model building process
   - The quantitative part
   - Utility theory
4. The Ugly: Continuous variables
   - Introduction
   - Approximations
5. Concluding remarks

SLIDE 32

The Ugly: Continuous variables Introduction

The Ugly: Continuous variables

The original calculation procedure only supports a restricted set of distributional families:

- Continuous variables must have Gaussian distributions.
- Discrete variables should only have discrete parents.
- Gaussian parents of Gaussian variables enter through partial regression coefficients of their children.

[Figure: example parent–child pairs, e.g. X1 discrete → Y1 discrete; X2 discrete → Y2 ~ N(µx, σx²); X3 ~ N(µ, σ²) → Y3 discrete; X4 discrete → Y4 ~ N(µx, Σx); X5 ~ Beta(α, β) → Y5 ~ Bern(x).]

These classes of distributions are not sufficient for reliability analysis. This is the 'ugly' part.

SLIDE 33

The Ugly: Continuous variables Introduction

An example model: The THERP methodology

- Used to model human ability to perform in certain settings (measured as binary variables).
- Known environment variables, like "Level of feedback".

[Figure: logistic regression with always-known covariates Z1, Z2, tasks T1–T4, and weights w11, …, w24.]

This is simple. The probability of a subject failing to perform task Ti is:

P(Ti = ti | z) = (1 + exp(−wi′ z))^(−1)

SLIDE 34

The Ugly: Continuous variables Introduction

An example model: The THERP methodology

- Used to model human ability to perform in certain settings (measured as binary variables).
- Known environment variables, like "Level of feedback".

[Figure: a latent trait Z3 ~ N(µ, σ²) added to the logistic regression over T1–T4.]

We can also have latent traits, which are unknown and vary between subjects (like "Omitting a step in a procedure"). In this case, the model is a "latent trait model" (similar to a binary factor analyzer). In the following we will focus on a situation with two latent traits and one task.

SLIDE 35

The Ugly: Continuous variables Introduction

Why is this difficult?

Assume we have one observation D1 = {1}, and parameters w1 = [1 1]ᵀ.

[Figure: latent variables Z1 and Z2 with the observed child T.]

The likelihood is given by

P(T = 1) = (1 / (2π σ1 σ2)) ∫_{ℝ²} exp( −[ (z1 − µ1)² / (2σ1²) + (z2 − µ2)² / (2σ2²) ] ) · (1 + exp(−z1 − z2))^(−1) dz,

which has no known analytic representation in general. Hence, we cannot do the required calculations in this model. Note! This is true even if we choose not to use Bayesian networks as our modelling language.

SLIDE 36

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Numerical approximation:

[Figure: the prior densities of Z1 and Z2, the link function P(T = 1 | z1 + z2), and the resulting posterior f(z1, z2 | T = 1), computed on a 1000 × 1000 grid.]

P(T = 1) = 0.49945; CPU time: 600 msec.
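For this two-variable model, the numerical approach is just a 2-D quadrature of E[sigmoid(Z1 + Z2)]. The slide does not state the µi and σi, so standard normal priors are assumed in this sketch; by symmetry the exact answer is then 0.5, which the midpoint rule recovers:

```python
import math

def phi(z):
    """Standard normal density (assumed prior; mu, sigma not given on the slide)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Midpoint rule on a 200 x 200 grid over [-6, 6]^2.
n, lo, hi = 200, -6.0, 6.0
h = (hi - lo) / n
grid = [lo + (i + 0.5) * h for i in range(n)]

# P(T = 1) = integral of phi(z1) * phi(z2) * sigmoid(z1 + z2).
p_t1 = sum(phi(z1) * phi(z2) * sigmoid(z1 + z2) * h * h
           for z1 in grid for z2 in grid)
```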

SLIDE 37

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Discretization: Every continuous variable is "translated" into a discrete one. The more discrete states used, the higher both the approximation quality and the computational complexity. "Tricks" are available to choose the number of states and where to set the split points, including dynamic discretization.

SLIDE 38

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Discretization:

[Figure: the same densities and posterior, computed on a 5 × 5 discretization grid.]

P(T = 1) = 0.49761; CPU time: 2 msec.

SLIDE 39

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Mixtures of Truncated Exponentials: In standard discretization, the continuous variable is approximated by a step function. Calculations are also possible if each 'step' is replaced by a truncated exponential. A single-variable density is split into n intervals Ik, k = 1, …, n, each approximated by

f*(z) = a^(k) + Σ_{i=1}^{m} a_i^(k) exp( b_i^(k) z )   for z ∈ Ik.

We typically see 1 ≤ n ≤ 4 and 0 ≤ m ≤ 2. Clever parameter choices are tabulated for many standard distributions.

SLIDE 40

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Mixtures of Truncated Exponentials:

[Figure: the same densities and posterior, computed with the MTE approximation.]

P(T = 1) = 0.49914; CPU time: 4 msec.

Implementation: S. Acid et al.: ELVIRA.

SLIDE 41

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Markov Chain Monte Carlo: Works well with Bayesian networks, as independence statements can be exploited for fast simulation:

- Metropolis-Hastings works directly out of the box.
- Gibbs sampling might sometimes require clever adaptation.
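A minimal random-walk Metropolis-Hastings sketch for the unnormalized posterior f(z1, z2 | T = 1) ∝ φ(z1) φ(z2) σ(z1 + z2), again assuming standard normal priors (the slide leaves the µi and σi unspecified):

```python
import math
import random

random.seed(0)

def log_sigmoid(s):
    """Numerically stable log(1 / (1 + exp(-s)))."""
    if s >= 0:
        return -math.log1p(math.exp(-s))
    return s - math.log1p(math.exp(s))

def log_target(z1, z2):
    """Unnormalized log posterior: standard normal priors times sigmoid likelihood."""
    return -0.5 * (z1 * z1 + z2 * z2) + log_sigmoid(z1 + z2)

def mh(n_samples, step=1.0):
    """Random-walk Metropolis-Hastings over (z1, z2)."""
    z1 = z2 = 0.0
    lp = log_target(z1, z2)
    out = []
    for _ in range(n_samples):
        c1 = z1 + random.gauss(0.0, step)
        c2 = z2 + random.gauss(0.0, step)
        lp_c = log_target(c1, c2)
        if math.log(random.random()) < lp_c - lp:   # accept/reject
            z1, z2, lp = c1, c2, lp_c
        out.append((z1, z2))
    return out

samples = mh(20000)
# The likelihood tilts z1 + z2 towards positive values under the posterior.
mean_sum = sum(a + b for a, b in samples) / len(samples)
```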

SLIDE 42

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Markov Chain Monte Carlo:

[Figure: the same densities and posterior, estimated by MCMC.]

P(T = 1) = 0.49821; CPU time: 32 · 10³ msec.

Implementation: W. Gilks et al.: BUGS.

SLIDE 43

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Variational Approximations:

[Figure: the link function P(T = 1 | v), and the term −log(exp(v/2) + exp(−v/2)) plotted against v².]

Replace a tricky function h(v) with a family of simple functions indexed by ξ, h̃(v, ξ), such that h(v) = sup_ξ h̃(v, ξ). Note: log P(T = 1 | v) = v/2 − log(exp(v/2) + exp(−v/2)), where the last term is convex in v².

SLIDE 44

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Variational Approximations: Define A(z1, z2) = z1 + z2 and

λ(ξ) = (exp(−ξ) − 1) / ( 4ξ (1 + exp(−ξ)) ).

Variational approximation: The variational approximation of P(T = 1 | z) is

P̃(T = 1 | z, ξ) = exp( (A(z) − ξ)/2 + λ(ξ) · (A(z)² − ξ²) ) / (1 + exp(−ξ)).

It can be shown that the best choice is ξ ← √( E[ (Z1 + Z2)² | T = 1 ] ) ⟹ we need to iterate.
SLIDE 45

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Variational Approximations:

[Figure: the same densities and posterior, via the variational approximation.]

P(T = 1) = 0.49828; CPU time: 17 msec.

Implementation: J. M. Winn: VIBES.

SLIDE 46

The Ugly: Continuous variables Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Other approaches: A number of other approaches are also being examined:

- Laplace approximation
- Transformation into Mixture-of-Gaussians models
- Other frameworks, like vines, etc.

SLIDE 47

Concluding remarks

Outline

1. Introduction
2. The Good: Why Bayesian Nets are popular
   - Mathematical properties
   - Making decisions
   - Applications
3. The Bad: Building complex quantitative models
   - The model building process
   - The quantitative part
   - Utility theory
4. The Ugly: Continuous variables
   - Introduction
   - Approximations
5. Concluding remarks

SLIDE 48

Concluding remarks

Summary

Bayesian networks' popularity is increasing, also in the reliability community. The main features (as seen from our community) are:

- They constitute an intuitive modelling 'language'.
- High level of modelling flexibility.
- Efficient calculations based on utilization of the conditional independence structure encoded in the graph.
- Cost-efficient representation.

Building models can still be time consuming, and problem owners lack training in using BNs:

- Users are more confident when using traditional frameworks, like, e.g., fault tree (FT) modelling.
- The calculations may be too complex to understand.

The most important research focus (for this community) is to find good approximations to handle continuous variables.

SLIDE 49

Concluding remarks

Colleagues

A number of people have helped or worked with me on the topics covered in this presentation:

- Bayesian network models: Thomas D. Nielsen, Finn V. Jensen, Jiří Vomlel
- Bayesian networks in reliability: Luigi Portinale, Claus Skaanning
- Continuous variables: Antonio Salmerón, Rafael Rumí
- Reliability models: Bo Lindqvist, Tim Bedford, Roger M. Cooke, Jørn Vatn

SLIDE 50

Concluding remarks

Further reading

- Helge Langseth and Finn V. Jensen. Bayesian networks and decision graphs in reliability. In Encyclopedia of Statistics in Quality and Reliability. John Wiley & Sons, in press.
- Helge Langseth and Luigi Portinale. Bayesian networks in reliability. Reliability Engineering and System Safety, 92(1):92–108, 2007.
- Finn V. Jensen and Thomas D. Nielsen. Bayesian Networks and Decision Graphs. Springer-Verlag, Berlin, Germany, 2007.
- Barry R. Cobb, Prakash P. Shenoy, and Rafael Rumí. Approximating probability density functions in hybrid Bayesian networks with mixtures of truncated exponentials. Statistics and Computing, 16(3):293–308, 2006.
