Welcome to
Probabilistic Graphical Models
Introduction
Daphne Koller
Example applications:
– Medical diagnosis: predisposing factors, symptoms, test results, diseases, treatment outcomes
– Image segmentation: millions of pixels or thousands of superpixels, each of which needs to be labeled {grass, sky, water, cow, horse, …}
Declarative representation: the model is an object separate from the algorithms that use it. The same model can be constructed by elicitation from a domain expert or by learning from data, and can then support many different algorithms.
Clear semantics, grounded in probability theory.
The world is described by a set of random variables X1,…, Xn (diseases, predisposing factors, symptoms, test results, treatment outcomes; or class labels for thousands of superpixels), and our uncertainty is captured by a joint distribution P(X1,…, Xn).
Graphical models encode such joint distributions compactly. Running example: the student network over Intelligence, Difficulty, Grade, SAT, and Letter. Two main families: Bayesian networks (directed) and Markov networks (undirected).
Benefits of the graphical representation:
– efficient algorithms
– feasible elicitation
– learning from data
Image processing applications:
– Image segmentation
– 3D reconstruction
– Holistic scene analysis
– Mapping
Thanks to: Eric Horvitz, Microsoft Research
Multi-Sensor Integration: Traffic
A learned model integrates multiple views of the road network with incident reports and weather data to predict traffic speed, accurate to ±5 MPH in 85% of cases.
Biological Network Reconstruction
Causal protein-signaling networks derived from multiparameter single-cell data (Sachs et al., Science 2005). The model is learned over phospho-proteins and phospho-lipids (PKC, Raf, Mek, Erk, PKA, Akt, Plcγ, Jnk, P38, PIP2, PIP3), some of which were perturbed in the data. Of 17 known arcs: 15 recovered, 2 supported, 1 reversed, 3 missed. Novel predictions were subsequently validated in the wet lab.
Overview
Representation
– Directed and undirected models
– Temporal and plate models
Inference
– Exact and approximate
– Decision making
Learning
– Parameters and structure
– With and without complete data
Joint Distribution
Three random variables from the student example:
– Intelligence I: i0 (low), i1 (high)
– Difficulty D: d0 (easy), d1 (hard)
– Grade G: g1 (A), g2 (B), g3 (C)

 I   D   G    Prob.
 i0  d0  g1   0.126
 i0  d0  g2   0.168
 i0  d0  g3   0.126
 i0  d1  g1   0.009
 i0  d1  g2   0.045
 i0  d1  g3   0.126
 i1  d0  g1   0.252
 i1  d0  g2   0.0224
 i1  d0  g3   0.0056
 i1  d1  g1   0.06
 i1  d1  g2   0.036
 i1  d1  g3   0.024
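This table maps directly onto code. A minimal Python sketch (the dictionary representation and variable names are illustrative, not from the course):

```python
# Joint distribution P(I, D, G) for the student example, stored as a
# dictionary mapping each full assignment to its probability.
joint = {
    ("i0", "d0", "g1"): 0.126,  ("i0", "d0", "g2"): 0.168,  ("i0", "d0", "g3"): 0.126,
    ("i0", "d1", "g1"): 0.009,  ("i0", "d1", "g2"): 0.045,  ("i0", "d1", "g3"): 0.126,
    ("i1", "d0", "g1"): 0.252,  ("i1", "d0", "g2"): 0.0224, ("i1", "d0", "g3"): 0.0056,
    ("i1", "d1", "g1"): 0.06,   ("i1", "d1", "g2"): 0.036,  ("i1", "d1", "g3"): 0.024,
}

# A joint distribution assigns one number per assignment (2 * 2 * 3 = 12 here),
# and those numbers must sum to 1.
assert len(joint) == 12
print(round(sum(joint.values()), 6))  # 1.0
```

Even in this tiny example the table has 12 entries; for n binary variables it would have 2^n, which is the blow-up graphical models are designed to tame.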
Conditioning
Starting from the joint distribution P(I, D, G) tabulated above, condition on the observation G = g1.
Conditioning: Reduction
Eliminate every entry inconsistent with the evidence g1. The result is the unnormalized measure P(I, D, g1):

 I   D   G    Prob.
 i0  d0  g1   0.126
 i0  d1  g1   0.009
 i1  d0  g1   0.252
 i1  d1  g1   0.06
Conditioning: Renormalization
The retained entries of P(I, D, g1) sum to 0.447. Dividing each entry by this normalizing constant yields the conditional distribution P(I, D | g1):

 I   D    Prob.
 i0  d0   0.282
 i0  d1   0.02
 i1  d0   0.564
 i1  d1   0.134
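Both conditioning steps are one-liners over such a table. A Python sketch, assuming the same dictionary encoding of the joint (illustrative, not course code):

```python
# Joint P(I, D, G) from the table above.
joint = {
    ("i0", "d0", "g1"): 0.126,  ("i0", "d0", "g2"): 0.168,  ("i0", "d0", "g3"): 0.126,
    ("i0", "d1", "g1"): 0.009,  ("i0", "d1", "g2"): 0.045,  ("i0", "d1", "g3"): 0.126,
    ("i1", "d0", "g1"): 0.252,  ("i1", "d0", "g2"): 0.0224, ("i1", "d0", "g3"): 0.0056,
    ("i1", "d1", "g1"): 0.06,   ("i1", "d1", "g2"): 0.036,  ("i1", "d1", "g3"): 0.024,
}

# Reduction: drop every entry inconsistent with the evidence G = g1.
reduced = {(i, d): p for (i, d, g), p in joint.items() if g == "g1"}

# Renormalization: divide by the retained mass, which equals P(g1).
z = sum(reduced.values())
conditional = {assignment: p / z for assignment, p in reduced.items()}

print(round(z, 3))                          # 0.447
print(round(conditional[("i0", "d0")], 3))  # 0.282
```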
Marginalization
Marginalizing I out of P(I, D | g1), i.e. summing the entries that agree on D, gives P(D | g1):

 I   D    Prob.
 i0  d0   0.282
 i0  d1   0.02
 i1  d0   0.564
 i1  d1   0.134

Marginalize I:

 D    Prob.
 d0   0.846
 d1   0.154
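Marginalization is likewise a short loop; a Python sketch over the renormalized table (values restated from the slide):

```python
# P(I, D | g1) from the renormalized table.
p_id = {("i0", "d0"): 0.282, ("i0", "d1"): 0.02,
        ("i1", "d0"): 0.564, ("i1", "d1"): 0.134}

# Marginalize I: for each value of D, sum over every value of I.
p_d = {}
for (i, d), p in p_id.items():
    p_d[d] = p_d.get(d, 0.0) + p

print(round(p_d["d0"], 3), round(p_d["d1"], 3))  # 0.846 0.154
```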
Factors: a factor is a function φ : Val(X1,…,Xk) → R that maps each assignment to its variables to a real number; the set {X1,…,Xk} is its scope.
Example: the joint distribution P(I, D, G) tabulated above is itself a factor, with scope {I, D, G} and values summing to 1.
Example: the unnormalized measure P(I, D, g1) obtained by reduction is also a factor. Its scope is {I, D}, since G is fixed to g1, and its values sum to 0.447 rather than 1.
Conditional Probability Distribution (CPD)
P(G | I, D): for each assignment to I and D, a distribution over G (each row sums to 1).

          g1     g2     g3
 i0,d0    0.3    0.4    0.3
 i0,d1    0.05   0.25   0.7
 i1,d0    0.9    0.08   0.02
 i1,d1    0.5    0.3    0.2
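A CPD like this can be recovered from the joint by renormalizing within each parent assignment; a hypothetical Python sketch (not course code):

```python
# Joint P(I, D, G) from the earlier table.
joint = {
    ("i0", "d0", "g1"): 0.126,  ("i0", "d0", "g2"): 0.168,  ("i0", "d0", "g3"): 0.126,
    ("i0", "d1", "g1"): 0.009,  ("i0", "d1", "g2"): 0.045,  ("i0", "d1", "g3"): 0.126,
    ("i1", "d0", "g1"): 0.252,  ("i1", "d0", "g2"): 0.0224, ("i1", "d0", "g3"): 0.0056,
    ("i1", "d1", "g1"): 0.06,   ("i1", "d1", "g2"): 0.036,  ("i1", "d1", "g3"): 0.024,
}

# P(G | i, d) = P(i, d, G) / P(i, d): renormalize each (i, d) row of the joint.
row_mass = {}
for (i, d, g), p in joint.items():
    row_mass[(i, d)] = row_mass.get((i, d), 0.0) + p
cpd = {(i, d, g): p / row_mass[(i, d)] for (i, d, g), p in joint.items()}

print(round(cpd[("i0", "d0", "g1")], 2))  # 0.3
print(round(cpd[("i1", "d0", "g1")], 2))  # 0.9
```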
A general factor φ(A, B) need not be normalized at all:

 A   B    φ
 a0  b0   30
 a0  b1   5
 a1  b0   1
 a1  b1   10
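In code, a general factor looks just like the probability tables above, minus the normalization requirement; an illustrative Python sketch:

```python
# A general factor phi(A, B): a table of nonnegative numbers with no
# requirement that the entries sum to 1.
phi = {("a0", "b0"): 30.0, ("a0", "b1"): 5.0,
       ("a1", "b0"): 1.0,  ("a1", "b1"): 10.0}

print(sum(phi.values()))  # 46.0, so phi is not a distribution
```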
Factor Product
Multiplying φ1(A, B) and φ2(B, C) gives a factor over the union of their scopes, ψ(A, B, C) = φ1(A, B) · φ2(B, C):

 φ1(A, B):          φ2(B, C):
 a1  b1   0.5       b1  c1   0.5
 a1  b2   0.8       b1  c2   0.7
 a2  b1   0.1       b2  c1   0.1
 a2  b2   0         b2  c2   0.2
 a3  b1   0.3
 a3  b2   0.9

 ψ(A, B, C):
 a1  b1  c1   0.5 · 0.5 = 0.25
 a1  b1  c2   0.5 · 0.7 = 0.35
 a1  b2  c1   0.8 · 0.1 = 0.08
 a1  b2  c2   0.8 · 0.2 = 0.16
 a2  b1  c1   0.1 · 0.5 = 0.05
 a2  b1  c2   0.1 · 0.7 = 0.07
 a2  b2  c1   0 · 0.1 = 0
 a2  b2  c2   0 · 0.2 = 0
 a3  b1  c1   0.3 · 0.5 = 0.15
 a3  b1  c2   0.3 · 0.7 = 0.21
 a3  b2  c1   0.9 · 0.1 = 0.09
 a3  b2  c2   0.9 · 0.2 = 0.18
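The row-by-row multiplication above can be sketched in Python (same dictionary encoding as before; illustrative):

```python
# phi1(A, B) and phi2(B, C) from the tables above.
phi1 = {("a1", "b1"): 0.5, ("a1", "b2"): 0.8,
        ("a2", "b1"): 0.1, ("a2", "b2"): 0.0,
        ("a3", "b1"): 0.3, ("a3", "b2"): 0.9}
phi2 = {("b1", "c1"): 0.5, ("b1", "c2"): 0.7,
        ("b2", "c1"): 0.1, ("b2", "c2"): 0.2}

# Factor product: multiply every pair of rows that agrees on the shared
# variable B, producing psi(A, B, C) over the union of the two scopes.
psi = {}
for (a, b), v1 in phi1.items():
    for (b2, c), v2 in phi2.items():
        if b == b2:
            psi[(a, b, c)] = v1 * v2

assert len(psi) == 12
print(psi[("a1", "b1", "c1")])  # 0.25
```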
Factor Marginalization
Summing B out of ψ(A, B, C) gives τ(A, C), e.g. τ(a1, c1) = 0.25 + 0.08 = 0.33:

 ψ(A, B, C):
 a1  b1  c1   0.25
 a1  b1  c2   0.35
 a1  b2  c1   0.08
 a1  b2  c2   0.16
 a2  b1  c1   0.05
 a2  b1  c2   0.07
 a2  b2  c1   0
 a2  b2  c2   0
 a3  b1  c1   0.15
 a3  b1  c2   0.21
 a3  b2  c1   0.09
 a3  b2  c2   0.18

 τ(A, C):
 a1  c1   0.33
 a1  c2   0.51
 a2  c1   0.05
 a2  c2   0.07
 a3  c1   0.24
 a3  c2   0.39
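Summing out a variable is a short fold over the table; a Python sketch using the product ψ from the previous slide (values restated):

```python
# psi(A, B, C) from the factor-product table.
psi = {("a1", "b1", "c1"): 0.25, ("a1", "b1", "c2"): 0.35,
       ("a1", "b2", "c1"): 0.08, ("a1", "b2", "c2"): 0.16,
       ("a2", "b1", "c1"): 0.05, ("a2", "b1", "c2"): 0.07,
       ("a2", "b2", "c1"): 0.0,  ("a2", "b2", "c2"): 0.0,
       ("a3", "b1", "c1"): 0.15, ("a3", "b1", "c2"): 0.21,
       ("a3", "b2", "c1"): 0.09, ("a3", "b2", "c2"): 0.18}

# Marginalize out B: tau(A, C) = sum over b of psi(A, b, C).
tau = {}
for (a, b, c), v in psi.items():
    tau[(a, c)] = tau.get((a, c), 0.0) + v

print(round(tau[("a1", "c1")], 2))  # 0.33
print(round(tau[("a3", "c2")], 2))  # 0.39
```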
Factor Reduction
Reducing ψ(A, B, C) to the context C = c1 keeps only the rows consistent with c1:

 ψ(A, B, C):
 a1  b1  c1   0.25
 a1  b1  c2   0.35
 a1  b2  c1   0.08
 a1  b2  c2   0.16
 a2  b1  c1   0.05
 a2  b1  c2   0.07
 a2  b2  c1   0
 a2  b2  c2   0
 a3  b1  c1   0.15
 a3  b1  c2   0.21
 a3  b2  c1   0.09
 a3  b2  c2   0.18

 ψ[C = c1]:
 a1  b1  c1   0.25
 a1  b2  c1   0.08
 a2  b1  c1   0.05
 a2  b2  c1   0
 a3  b1  c1   0.15
 a3  b2  c1   0.09
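Reduction is a filter over the table; a Python sketch (same dictionary encoding, illustrative):

```python
# psi(A, B, C) from the factor-product table.
psi = {("a1", "b1", "c1"): 0.25, ("a1", "b1", "c2"): 0.35,
       ("a1", "b2", "c1"): 0.08, ("a1", "b2", "c2"): 0.16,
       ("a2", "b1", "c1"): 0.05, ("a2", "b1", "c2"): 0.07,
       ("a2", "b2", "c1"): 0.0,  ("a2", "b2", "c2"): 0.0,
       ("a3", "b1", "c1"): 0.15, ("a3", "b1", "c2"): 0.21,
       ("a3", "b2", "c1"): 0.09, ("a3", "b2", "c2"): 0.18}

# Reduce to the context C = c1: keep rows consistent with c1 and drop C
# from the scope, leaving a factor over A and B.
reduced = {(a, b): v for (a, b, c), v in psi.items() if c == "c1"}

print(len(reduced), reduced[("a3", "b1")])  # 6 0.15
```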
Summary: factors are the fundamental building block for defining distributions in high-dimensional spaces, and the operations of product, marginalization, and reduction are the basic tools for manipulating these probability distributions.