DeepProbLog: Neural Probabilistic Logic Programming, Robin Manhaeve et al. (PowerPoint PPT Presentation)





SLIDE 1

Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester*, Luc De Raedt*

DeepProbLog: Neural Probabilistic Logic Programming

* Joint last authors

SLIDE 2

DTAI research group

Real-life problems involve two important aspects.

Geiger, Andreas, et al. "Vision meets robotics: The KITTI dataset." The International Journal of Robotics Research (2013)

Sub-symbolic perception (Deep Learning):

P(light = red) = 0.9
P(obj1 = car) = 0.8
P(obj1 turns right) = 0.7

Reasoning with knowledge under uncertainty (probabilistic logic programs):

  • Stop in front of a red light
  • Obey the speed limit
  • Be in the correct lane
  • …

DeepProbLog = ProbLog + neural predicate

SLIDE 3


The neural predicate

[Figure: classifier with softmax output, a probability distribution over the digits 1–9]

  • Classifier defines a probability distribution over its output
  • Uncertainty in the prediction
  • Neural predicate: output = probabilistic choices in program
  • No changes needed in the ProbLog inference or its semantics
  • ProbLog can natively calculate the gradient

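The bullets above can be made concrete with a sketch of a neural predicate declaration, following the annotated-disjunction syntax of the DeepProbLog paper; the network identifier m_digit and the predicate name digit are illustrative choices, not taken from the slides:

```prolog
% The neural network m_digit maps an input image X to a softmax
% distribution over the domain 0..9; each outcome becomes one of the
% probabilistic choices digit(X, 0), ..., digit(X, 9) in the program.
nn(m_digit, [X], Y, [0,1,2,3,4,5,6,7,8,9]) :: digit(X, Y).
```

Because the softmax outputs sum to one, these choices form a proper probability distribution, which is why ProbLog's inference and semantics need no changes.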

SLIDE 4


Perception

SLIDE 5


ProbLog

Perception

  • Stop in front of a red light
  • Obey the speed limit
  • Be in the correct lane

P(light = red) = 0.9

Reasoning

SLIDE 6


ProbLog

Perception

  • Stop in front of a red light
  • Obey the speed limit
  • Be in the correct lane

P(light = red) = 0.9
P(obj1 = car) = 0.8
P(obj1 turns right) = 0.7

Neural predicate

Reasoning

SLIDE 7
  • Neural-symbolic integration (Garcez)
  • Logical constraints as a regularizer (Xu, Diligenti, …)
  • Differentiable logical framework (Rocktäschel and Riedel, Evans and Grefenstette)
  • Differentiable interpreters (Graves, Bosnjak)


Related work

Related work → DeepProbLog:
  • Logic is made less expressive → full expressivity is retained
  • Logic is pushed into the neural part → clean separation
  • Fuzzy logic → probabilistic logic
  • Language semantics is unclear → clear semantics

SLIDE 8
  • Only the sums are labeled, not the individual digits
  • Training with neural networks alone is not well suited to this task
  • DeepProbLog can solve this:
  • Neural predicate
  • From pixels to distribution over digits
  • NN trained from scratch
  • Logic:
  • Combine predictions into larger numbers
  • Perform addition


Example task: MNIST addition (image + image = ?)
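This task is captured by a short DeepProbLog program; the sketch below follows the syntax of the DeepProbLog paper, with the network name mnist_net as an assumption:

```prolog
% Neural predicate: mnist_net turns a digit image into a distribution over 0..9.
nn(mnist_net, [X], Y, [0,1,2,3,4,5,6,7,8,9]) :: digit(X, Y).

% Logic: combine the two digit predictions and perform the addition.
addition(X, Y, Z) :- digit(X, N1), digit(Y, N2), Z is N1 + N2.
```

Training examples are only labeled sums, e.g. addition(img_a, img_b, 8); the gradient of the query's success probability flows through the proof into the network, which is trained from scratch.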

SLIDE 9


Combined reasoning

Unknown distribution

SLIDE 10


Combined reasoning

0.7::Red, 0.8::Blue, 0.6::Heads

Unknown distribution → Perception

SLIDE 11


Combined reasoning

0.7::Red, 0.8::Blue, 0.6::Heads → Rules → 0.7::win

Unknown distribution → Perception → Logical inference
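As a concrete ProbLog sketch of this pipeline: the fact probabilities below come from the slide, but the win clauses are hypothetical, since the slide does not show its actual rules:

```prolog
% Perception outputs, taken from the slide.
0.7 :: red.
0.8 :: blue.
0.6 :: heads.

% Hypothetical rules for illustration (not the slide's actual rules):
% win if the coin lands heads and the ball is red, or tails and blue.
win :- heads, red.
win :- \+heads, blue.
```

For these assumed rules, logical inference gives P(win) = 0.6 · 0.7 + 0.4 · 0.8 = 0.74; the slide's own (unshown) rules produce 0.7::win.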

SLIDE 12


Combined reasoning

0.7::Red, 0.8::Blue, 0.6::Heads → Rules → 0.7::win

Unknown distribution → Perception → Logical inference → Loss

Backpropagation

Update parameters → train NN
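The backward pass can be made concrete with a small worked example; assume (hypothetically, since the slide's rules are not shown) that win holds for heads-and-red or tails-and-blue, with the slide's probabilities p_h = 0.6, p_r = 0.7, p_b = 0.8:

```latex
P(\mathit{win}) = p_h\,p_r + (1 - p_h)\,p_b
\qquad
\frac{\partial P}{\partial p_r} = p_h = 0.6, \quad
\frac{\partial P}{\partial p_h} = p_r - p_b = -0.1, \quad
\frac{\partial P}{\partial p_b} = 1 - p_h = 0.4
```

With a loss such as L = −log P(win), these partial derivatives are chained into each network's softmax outputs by standard backpropagation, and the network parameters are updated.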

SLIDE 13

Conclusion

DeepProbLog: Neural Probabilistic Logic Programming

  • Integration of deep learning (DL) and probabilistic logic programming (PLP)
  • Probabilistic
  • Clean semantics, clear separation
  • Retains the power of both worlds
  • Full power of ProbLog


Poster #118

Code is available at: https://bitbucket.org/problog/deepproblog