Online Collective Inference
Jay Pujara
- U. Maryland, College Park
Ben London
- U. Maryland, College Park
Lise Getoor
- U. California, Santa Cruz
BIRS Workshop: New Perspectives for Relational Learning, 4/23/2015
Real-world problems benefit from collective inference:
Genre(M1, G) ∧ Genre(M2, G) ∧ Likes(U1, M1) → Likes(U1, M2)
Likes(U1, M1) ∧ Friends(U1, U2) → Likes(U2, M1)
Coworkers(U1, C) ∧ Coworkers(U2, C) → Coworkers(U1, U2)
MutEx(L1, L2) ∧ Label(E, L1) → ¬Label(E, L2)
Rel(R, E1, T) ∧ Rel(R, E2, T) ∧ Label(E1, L) → Label(E2, L)
[Figure labels: Users, Items, Friends, Genre, Artist]
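Rules like these are typically compiled into soft constraints. As a minimal sketch (an illustration, not the authors' implementation), a PSL-style system scores a grounded rule by its distance to satisfaction under Łukasiewicz logic, where atom truth values lie in [0, 1]:

```python
# Sketch (not the authors' code): scoring a grounded soft rule
# A1 ^ ... ^ Ak -> H under Lukasiewicz logic, as in PSL.

def distance_to_satisfaction(body, head):
    """Hinge penalty for a grounded rule.
    Lukasiewicz conjunction of the body: max(0, sum(body) - (len(body) - 1));
    the rule is violated to the degree the body value exceeds the head."""
    body_val = max(0.0, sum(body) - (len(body) - 1))
    return max(0.0, body_val - head)

# Likes(U1,M1)=0.9, Friends(U1,U2)=0.8 -> Likes(U2,M1)=0.3
penalty = distance_to_satisfaction([0.9, 0.8], 0.3)
print(round(penalty, 2))  # 0.4
```

A weighted sum of such penalties over all grounded rules gives the (convex) inference objective minimized below.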
(Jiang et al., ICDM 2012; Pujara et al., ISWC 2013)
Millions of users, thousands of movies
Millions of facts, thousands of …
Millions of users, thousands of genes
– A user rates a new movie?
– New facts are extracted from the Web?
– New user links form?
– A new genetic similarity is discovered?
Online Collective Inference
How can we update our predictions without recomputing inference?
Full Inference
– e.g. Gärdenfors, 1992
– e.g. Buntine, 1991; Friedman & Goldszmidt, 1997
– e.g. Murphy, 2002; Fine et al., 1998
– e.g. Acar et al., 2008
– e.g. Nath & Domingos, 2010
– e.g. London et al., 2013
Questions:
– inference objectives (like PSL!)
– which variables to update during inference, given a budget
Online Collective Inference
Regret: the difference between the full inference result and the partial inference update (when conditioning on Y_S ⊆ Y).

Partial inference fixes Y_S and runs MAP inference over P(Y | X), where

    h(x; ŵ) = argmin_y  ŵ · f(x, y) + (w_p / 2) ‖y‖₂²
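As an illustrative sketch of this objective (toy potentials and solver of my choosing, not the authors' engine), projected gradient descent over y ∈ [0, 1]^n minimizes the weighted potentials plus the L2 prior:

```python
import numpy as np

# Sketch (assumed setup): minimize  w . f(x, y) + (w_p / 2) ||y||^2
# with hinge-style potentials f, by projected gradient descent on [0, 1]^n.

def map_inference(potentials, w, w_p, n, steps=5000, lr=0.01):
    """potentials: list of functions returning a subgradient of f_g at y."""
    y = np.full(n, 0.5)
    for _ in range(steps):
        g = w_p * y                       # gradient of the L2 prior term
        for weight, grad_fn in zip(w, potentials):
            g += weight * grad_fn(y)
        y = np.clip(y - lr * g, 0.0, 1.0)  # project back onto [0, 1]^n
    return y

def evidence_grad(y):
    # toy evidence potential (y0 - 0.9)^2 pulling y0 toward 0.9
    g = np.zeros_like(y)
    g[0] = 2.0 * (y[0] - 0.9)
    return g

def implication_grad(y):
    # subgradient of the hinge potential max(0, y0 - y1),
    # i.e. the soft rule "y0 -> y1", penalized when y0 exceeds y1
    g = np.zeros_like(y)
    if y[0] - y[1] > 0:
        g[0], g[1] = 1.0, -1.0
    return g

# the rule propagates the evidence on y0 to y1
y_hat = map_inference([evidence_grad, implication_grad], w=[5.0, 1.0], w_p=0.1, n=2)
```

The L2 prior (weight w_p) is what makes the objective strongly convex, which the regret bound below relies on.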
Prior weight: w_p

    R_n(x, y_S; ŵ) ≜ (1/n) ‖h(x; ŵ) − h(x, y_S; ŵ)‖₁

Regret ingredients: Lipschitz constant B, 2-norm of the model weights ‖ŵ‖₂, weight of the L2 prior w_p, and the L1 distance between the fixed variables and their values in full inference.

Key Takeaway: Regret depends on the L1 distance between the fixed variables and their “true” values in the MAP state:

    R_n(x, y_S; ŵ) ≤ O( √( (B ‖ŵ‖₂ / (n w_p)) · ‖y_S − ŷ_S‖₁ ) )
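In code, the regret from the slide is just a scaled L1 distance between two MAP states (the values below are hypothetical, for illustration only):

```python
import numpy as np

# Sketch: inference regret  R_n(x, y_S; w) = (1/n) ||h(x; w) - h(x, y_S; w)||_1
# full:    MAP state from full inference, h(x; w)
# partial: MAP state when the variables in S were fixed, h(x, y_S; w)

def inference_regret(full, partial):
    full, partial = np.asarray(full), np.asarray(partial)
    return np.abs(full - partial).sum() / full.size

full_map    = [0.9, 0.2, 0.7, 0.1]   # hypothetical h(x; w)
partial_map = [0.9, 0.2, 0.5, 0.1]   # hypothetical h(x, y_S; w)
print(inference_regret(full_map, partial_map))  # ≈ 0.05
```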
[Plot: inference regret vs. # epochs, with the scaled regret bound, for HighLocal, Balanced, and HighRelational models]

Measure the regret of no updates versus full inference, varying the importance of relational features.
Online Collective Inference
(Boyd et al., 2011; Bach et al., 2012)

[Figure: ADMM consensus optimization. Potentials f1–f4 over variables y1, y2, y3; each potential holds local variable copies (y11, y12, y22, y23, y33, y34), tied to consensus estimates (y1, y2, y3) by Lagrange multipliers (α11, α12, α22, α23, α33, α34)]
How much disagreement is there across potentials? Each potential g solves a local subproblem:

    min_{ỹ_g}  w_g f_g(x, ỹ_g) + (ρ/2) ‖ỹ_g − y_g + (1/ρ) α_g‖²

Where the local copies disagree with the consensus, the Lagrange multipliers are high.
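A minimal runnable sketch of consensus ADMM (a toy quadratic problem of my choosing, not the authors' PSL solver) shows the three steps per iteration: local solves, consensus averaging, and the multiplier update that tracks disagreement:

```python
import numpy as np

# Sketch (toy problem): consensus ADMM over two potentials sharing one variable:
#   f1(y) = (y - a)^2 / 2,   f2(y) = (y - b)^2 / 2
# Each potential keeps a local copy y_g; the consensus y averages the copies;
# multipliers alpha_g grow where copies disagree with the consensus.

def consensus_admm(a, b, rho=1.0, iters=100):
    y = 0.0                      # consensus estimate
    alpha = np.zeros(2)          # Lagrange multipliers, one per copy
    targets = np.array([a, b])
    for _ in range(iters):
        # local step: argmin_yg  (yg - t)^2/2 + (rho/2)(yg - y + alpha/rho)^2
        y_local = (targets + rho * y - alpha) / (1.0 + rho)
        # consensus step: average the copies (plus scaled multipliers)
        y = (y_local + alpha / rho).mean()
        # dual step: penalize remaining disagreement with the consensus
        alpha += rho * (y_local - y)
    return y

print(round(consensus_admm(1.0, 3.0), 3))  # → 2.0, the minimizer of f1 + f2
```

For online updates, this decomposition is convenient: when new evidence arrives, only the subproblems (and multipliers) touching the changed variables need to be re-optimized.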
– Infer attributes of users in a social network as progressively more information is shared
– Infer user ratings of jokes as users provide ratings for an increasing number of jokes
[Plots: regret vs. epochs and RMSE vs. epochs, comparing runs with 50% and 25% of variables activated]
Partial updates can achieve comparable, sometimes lower, error than full inference, even with approximate inference.
Online Collective Inference
– Drug targeting
– Knowledge Graph construction
– Context-aware mobile devices