SLIDE 1

SMT-Based Weighted Model Integration

Roberto Sebastiani¹

joint work with

Paolo Morettin¹, Andrea Passerini¹

with contributions by

Samuel Kolb², Luc De Raedt², Francesco Sommavilla¹, Pedro Zuidberg²

¹ University of Trento, Italy   ² KU Leuven, Belgium

– 17th International Workshop on Satisfiability Modulo Theories, SMT 2019 –
– 6th Vampire Workshop, Vampire 2019 –

SLIDE 2

Context

Goal: efficiently perform probabilistic inference in hybrid domains:
- both Boolean and continuous variables,
- arithmetical and logical constraints,
using SMT-based Weighted Model Integration.

Brief History
- Weighted Model Counting (WMC) [10] [Chavira & Darwiche, AIJ 2008]: SAT-based probabilistic inference in Boolean domains.
- Weighted Model Integration (WMI) [8] [Belle, Passerini & Van den Broeck, IJCAI 2015]: SMT-based probabilistic inference in hybrid domains (Boolean + arithmetic).
- Weighted Model Integration, revisited [19, 20] [Morettin, Passerini & Sebastiani, IJCAI 2017; AIJ 2019]: WMI reformulated from scratch, with novel SMT-based algorithms.

SLIDE 7

Outline

1 Background
2 Weighted Model Integration, Revisited
3 SMT-Based WMI Computation
4 A Case Study: The Road Network Problem
5 Experimental Evaluations
6 Ongoing and Future Work

SLIDE 9

Weighted Model Counting

Definition (Weighted Model Count). Let ϕ be a propositional formula on A ≝ {A1, ..., AM} and let w be a function associating a non-negative weight to each literal on Atoms(ϕ). Then the Weighted Model Count of ϕ is:

    WMC(ϕ, w) ≝ Σ_{µ ∈ TTA(ϕ)} Π_{ℓ ∈ µ} w(ℓ).

Proposition ([10, 8]). The probability of a query q given evidence e in a Boolean Markov Network N is computed as:

    Pr_N(q | e) = WMC(q ∧ e ∧ ∆, w) / WMC(e ∧ ∆, w),

where ∆ encodes N and w the potentials.

Many efficient computation techniques exist, based on knowledge compilation [12, 21] or on exhaustive DPLL search [23], improved by component-caching techniques [22, 6].
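As a minimal illustration (not part of the original slides), the definition above can be computed by brute-force enumeration of total truth assignments; the toy formula and weights below are made up for the sketch, and real WMC engines use knowledge compilation or DPLL search instead:

```python
from itertools import product

def wmc(clauses, weights, atoms):
    """Brute-force WMC: sum, over all total truth assignments satisfying
    the CNF `clauses`, of the product of per-literal weights.
    A literal is a pair (atom, polarity); `weights` maps literals to floats."""
    total = 0.0
    for values in product([False, True], repeat=len(atoms)):
        mu = dict(zip(atoms, values))
        # a clause is satisfied if some literal holds under mu
        if all(any(mu[a] == pol for (a, pol) in clause) for clause in clauses):
            w = 1.0
            for a in atoms:
                w *= weights.get((a, mu[a]), 1.0)
            total += w
    return total

# phi = (A1 v A2), with w(A1)=0.3, w(-A1)=0.7, w(A2)=0.6, w(-A2)=0.4
clauses = [[("A1", True), ("A2", True)]]
weights = {("A1", True): 0.3, ("A1", False): 0.7,
           ("A2", True): 0.6, ("A2", False): 0.4}
print(wmc(clauses, weights, ["A1", "A2"]))  # 0.42 + 0.12 + 0.18 = 0.72
```

With these weights WMC is 1 minus the weight of the single falsifying assignment (0.7 · 0.4), i.e. the probability of the query A1 ∨ A2.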

SLIDE 12

Weighted Model Integration [8]

Definition (Weighted Model Integral [8]). Let ϕ be a LRA formula on x ≝ {x1, ..., xN} and A ≝ {A1, ..., AM}. Let w be a function associating an expression (possibly constant) over x to each literal whose atom occurs in ϕ. Then the Weighted Model Integral of ϕ is defined as:

    WMIold(ϕ, w) ≝ Σ_{µ ∈ TTA(ϕ)} ∫_{µ^LRA} Π_{ℓ ∈ µ} w(ℓ) dx,   where µ ≝ µ^A ∧ µ^LRA.

Note: ⟨ϕ, w⟩ implicitly defines an un-normalized probability distribution.

If P(x) is a polynomial and µ^LRA(x) is a conjunction of linear constraints, then ∫_{µ^LRA} P(x) dx can be computed exactly [7] (e.g., by LATTE INTEGRALE [18]).

Proposition ([8]). The probability of a query q given evidence e in a Hybrid Markov Network N is computed as:

    Pr_N(q | e) = WMIold(q ∧ e ∧ ∆, w) / WMIold(e ∧ ∆, w),

where ∆ encodes N and w the potential.

SLIDE 15

Weighted Model Integration - Example

Example. Let

    ϕ ≝ (A2 → ((1 ≤ x) ∧ (x ≤ 3))) ∧ (A3 → (¬(x ≤ 3) ∧ (x ≤ 5))) ∧ (A1 ↔ (¬A2 ∧ ¬A3)) ∧ (1 ≤ x) ∧ (x ≤ 5),

and let w(A1) = 0.1, w(A2) = (0.25·x − 0.25), w(A3) = (1.25 − 0.25·x), w(ℓ) = 1 for all other literals.

TTA(ϕ) = {
     A1 ∧ ¬A2 ∧ ¬A3 ∧ (1 ≤ x) ∧ (x ≤ 5) ∧ (x ≤ 3),
     A1 ∧ ¬A2 ∧ ¬A3 ∧ (1 ≤ x) ∧ (x ≤ 5) ∧ ¬(x ≤ 3),
    ¬A1 ∧  A2 ∧ ¬A3 ∧ (1 ≤ x) ∧ (x ≤ 5) ∧ (x ≤ 3),
    ¬A1 ∧ ¬A2 ∧  A3 ∧ (1 ≤ x) ∧ (x ≤ 5) ∧ ¬(x ≤ 3)
}.

WMIold(ϕ, w) = ∫_{(1≤x)∧(x≤5)∧(x≤3)} w(A1) dx + ∫_{(1≤x)∧(x≤5)∧¬(x≤3)} w(A1) dx
             + ∫_{(1≤x)∧(x≤5)∧(x≤3)} w(A2) dx + ∫_{(1≤x)∧(x≤5)∧¬(x≤3)} w(A3) dx
             = ∫_{[1,3]} 0.1 dx + ∫_{(3,5]} 0.1 dx + ∫_{[1,3]} (0.25·x − 0.25) dx + ∫_{(3,5]} (1.25 − 0.25·x) dx
             = (...) = 1.4.

This models an unnormalized distribution over x in [1, 5], which:
- is uniform with w = 0.1 if A1 is true;
- is a triangular distribution with mode w = 0.5 at x = 3 otherwise.

[Plot: the two densities over x ∈ [1, 5] — uniform at height 0.1 (A1), and triangular peaking at 0.5 for x = 3 (¬A1).]
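The four integrals above can be checked with exact rational arithmetic; a minimal sketch (the `integrate_poly` helper is ours, not from the slides — it plays the role of the exact integration back-end for univariate pieces):

```python
from fractions import Fraction as F

def integrate_poly(coeffs, lo, hi):
    """Exact integral over [lo, hi] of sum_i coeffs[i] * x**i."""
    return sum(F(c) * (F(hi)**(i + 1) - F(lo)**(i + 1)) / (i + 1)
               for i, c in enumerate(coeffs))

# The four LRA-consistent assignments of the example, each a pair
# (interval, weight polynomial in coefficient form):
pieces = [
    ((1, 3), [F(1, 10)]),            # A1 branch, x in [1, 3]: w(A1) = 0.1
    ((3, 5), [F(1, 10)]),            # A1 branch, x in (3, 5]: w(A1) = 0.1
    ((1, 3), [F(-1, 4), F(1, 4)]),   # A2 branch: 0.25*x - 0.25 on [1, 3]
    ((3, 5), [F(5, 4), F(-1, 4)]),   # A3 branch: 1.25 - 0.25*x on (3, 5]
]
wmi_old = sum(integrate_poly(p, lo, hi) for (lo, hi), p in pieces)
print(wmi_old)  # 7/5, i.e. 1.4
```

Half-open vs. closed interval endpoints do not matter here, since the boundary points have measure zero.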

SLIDE 16

Weighted Model Integration - Example (cont.)

Example. Given the previous unnormalized distribution ⟨ϕ, w⟩ and the information that A1 = ⊥ (evidence), the probability that x ≤ 2 (query) is:

    P_⟨ϕ,w⟩(x ≤ 2 | A1 = ⊥) = WMIold(ϕ ∧ ¬A1 ∧ (x ≤ 2), w) / WMIold(ϕ ∧ ¬A1, w) = 0.125 / 1.0 = 0.125.

WMIold(ϕ ∧ ¬A1, w) = ∫_{(1≤x)∧(x≤5)∧(x≤3)} w(A2) dx + ∫_{(1≤x)∧(x≤5)∧¬(x≤3)} w(A3) dx
                   = ∫_{[1,3]} (0.25·x − 0.25) dx + ∫_{(3,5]} (1.25 − 0.25·x) dx = (...) = 1.0.

WMIold(ϕ ∧ ¬A1 ∧ (x ≤ 2), w) = ∫_{(1≤x)∧(x≤5)∧(x≤3)∧(x≤2)} w(A2) dx = ∫_{[1,2]} (0.25·x − 0.25) dx = (...) = 0.125.
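The conditional query can likewise be verified exactly; again the small `integrate_poly` helper is our own scaffolding, standing in for the exact integration back-end:

```python
from fractions import Fraction as F

def integrate_poly(coeffs, lo, hi):
    """Exact integral over [lo, hi] of sum_i coeffs[i] * x**i."""
    return sum(F(c) * (F(hi)**(i + 1) - F(lo)**(i + 1)) / (i + 1)
               for i, c in enumerate(coeffs))

w_A2 = [F(-1, 4), F(1, 4)]   # 0.25*x - 0.25
w_A3 = [F(5, 4), F(-1, 4)]   # 1.25 - 0.25*x

# denominator: WMIold(phi AND not A1, w)
den = integrate_poly(w_A2, 1, 3) + integrate_poly(w_A3, 3, 5)
# numerator: WMIold(phi AND not A1 AND (x <= 2), w) -- only the A2 branch survives
num = integrate_poly(w_A2, 1, 2)
print(num / den)  # 1/8, i.e. 0.125
```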

SLIDE 17

Outline

1 Background
2 Weighted Model Integration, Revisited
3 SMT-Based WMI Computation
4 A Case Study: The Road Network Problem
5 Experimental Evaluations
6 Ongoing and Future Work

SLIDE 18

Basic case: WMI Without Atomic Propositions

Definition. Assume ϕ contains no atomic propositions and w : R^N → R+. Then we define the Weighted Model Integral of w over ϕ on x as:

    WMInb(ϕ, w | x) ≝ ∫_{ϕ(x)} w(x) dx,

("nb" meaning "no Booleans"), that is, as the integral of w(x) over the set {x | ϕ(x) is true}.

Proposition.

    WMInb(ϕ, w | x) = Σ_{µ^LRA ∈ TTA(ϕ)} WMInb(µ^LRA, w | x) = Σ_{µ^LRA ∈ TA(ϕ)} WMInb(µ^LRA, w | x).

Note: WMInb(µ^LRA, w | x) ≝ ∫_{µ^LRA(x)} w(x) dx can be computed exactly if w(x) is polynomial [7].
SLIDE 20

General Case: WMI With Atomic Propositions

Definition. We consider an LRA-formula ϕ(x, A) and w(x, A) s.t. w : R^N × B^M → R+. The Weighted Model Integral of w over ϕ is defined as follows:

    WMI(ϕ, w | x, A) ≝ Σ_{µ^A ∈ B^M} WMInb(ϕ[µ^A], w[µ^A] | x) = Σ_{µ^A ∈ B^M} Σ_{µ^LRA ∈ TA(ϕ[µ^A])} ∫_{µ^LRA(x)} w[µ^A](x) dx,

where:
- the µ^A's are all total truth assignments on A;
- ϕ[µ^A](x) denotes (any formula equivalent to) the formula obtained from ϕ by substituting every Boolean atom Ai with its truth value in µ^A (thus ϕ[µ^A] : R^N → B);
- w[µ^A](x) is w computed on x and on the truth values of µ^A (w[µ^A] : R^N → R+ if µ^A is total).

Note:
- w(x, A) is generic, not restricted to the form of products of weights on literals of ϕ;
- if w[µ^A](x) is polynomial for every µ^A, then ∫_{µ^LRA(x)} w[µ^A](x) dx can be computed exactly.
slide-21
SLIDE 21

WMI - Example

Example. Let ϕ ≝ (A ↔ (x ≥ 0)) ∧ (x ≥ −1) ∧ (x ≤ 1), and w(x, A) ≝ If A Then x Else −x. Then:
- if µ^A ≝ {¬A}, then ϕ[µ^A] = ¬(x ≥ 0) ∧ (x ≥ −1) ∧ (x ≤ 1) and w[µ^A] = −x;
- if µ^A ≝ { A}, then ϕ[µ^A] = (x ≥ 0) ∧ (x ≥ −1) ∧ (x ≤ 1) and w[µ^A] = x.

Thus,

    WMI(ϕ, w | x, A) ≝ WMInb(ϕ[{¬A}], w[{¬A}] | x) + WMInb(ϕ[{A}], w[{A}] | x)
                     = ∫_{[−1,0)} −x dx + ∫_{[0,1]} x dx = 1/2 + 1/2 = 1.
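The two-branch decomposition above can be replayed mechanically (the helper is ours; each Boolean assignment contributes one interval and one residual polynomial):

```python
from fractions import Fraction as F

def integrate_poly(coeffs, lo, hi):
    """Exact integral over [lo, hi] of sum_i coeffs[i] * x**i."""
    return sum(F(c) * (F(hi)**(i + 1) - F(lo)**(i + 1)) / (i + 1)
               for i, c in enumerate(coeffs))

# mu = {not A}: phi[mu] gives x in [-1, 0), and w[mu](x) = -x
# mu = {    A}: phi[mu] gives x in [ 0, 1], and w[mu](x) =  x
wmi = integrate_poly([0, -1], -1, 0) + integrate_poly([0, 1], 0, 1)
print(wmi)  # 1/2 + 1/2 = 1
```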

slide-22
SLIDE 22

Some Results on WMI

Proposition. Given x, A, w(x, A), ϕ(x, A) and TTA(ϕ) as above, we have that:

    WMI(ϕ, w | x, A) = Σ_{µ^A∧µ^LRA ∈ TTA(ϕ)} WMInb(µ^LRA, w[µ^A] | x) = Σ_{µ^A∧µ^LRA ∈ TTA(ϕ)} ∫_{µ^LRA(x)} w[µ^A](x) dx.

Proposition. Given x, A, w(x, A), ϕ(x, A) and TTA(ϕ) as above, we have that:

    WMI(ϕ, w | x, A) = Σ_{µ^A ∈ TTA(∃x.ϕ)} WMInb(ϕ[µ^A], w[µ^A] | x) = Σ_{µ^A ∈ TTA(∃x.ϕ)} Σ_{µ^LRA ∈ TA(ϕ[µ^A])} ∫_{µ^LRA(x)} w[µ^A](x) dx.

That is:
- enumerate only LRA-consistent µ^A ∧ µ^LRA's propositionally satisfying ϕ;
- enumerate only µ^A's for which there exists a LRA-consistent (partial) µ^LRA s.t. µ^A ∧ µ^LRA ⊨_B ϕ.

SLIDE 24

Feasibly computable WMIs: FIUCLRA weight functions

FILRA weight functions. A function f(x) is feasibly integrable on a set of LRA constraints (FILRA) if there exists a procedure that can compute the integral ∫_{µ^LRA} f(x) dx for every µ^LRA. Example: polynomials [7].

Definition (FIUCLRA weight function). Given:
- x, A and a set of LRA-conditions Ψ ≝ {ψ1(x, A), ..., ψK(x, A)};
- a support LRA-formula χ(x, A) s.t. w(x, A) = 0 wherever χ(x, A) does not hold (⊤ if not present);
we say that a weight function w(x, A) is feasibly integrable under LRA conditions (FIUCLRA) iff, for every total truth-value assignment µ^A on A and µ^Ψ on Ψ, w[µ^A,µ^Ψ](x) is FILRA.

Property. WMI(ϕ, w | x, A) = WMI(ϕ ∧ χ, w | x, A).

SLIDE 27

Support of FIUCLRA weight functions: example

Example. Let x ≝ {x, y}, A ≝ {A},

    χ(x, A) ≝ (A → x ∈ [0, 2]) ∧ (¬A → (x ∈ [1, 3] ∧ (x + y ≤ 3))) ∧ y ∈ [1, 3],
    w(x, A) ≝ If A Then (−x² − y² + 2x + 3y) Else (−2x − 2y + 6).

(Note that outside the support the two polynomials may take negative values.)

[Plots: the supports χ(x, y, A) and χ(x, y, ¬A) in the (x, y) plane, and the corresponding weight surfaces w(x, y, A) and w(x, y, ¬A).]
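A quick sanity check of this example (ours, not from the slides): integrate w over the support χ for both values of A by seeded Monte Carlo, rejecting samples outside the support. The sampling box and sample count are arbitrary choices; the exact value, 29/3 ≈ 9.67, follows by integrating the two polynomials over their regions by hand:

```python
import random

def w(x, y, A):
    # the two branch polynomials from the example
    return (-x * x - y * y + 2 * x + 3 * y) if A else (-2 * x - 2 * y + 6)

def chi(x, y, A):
    # support: A -> x in [0,2];  not A -> x in [1,3] and x + y <= 3;  y in [1,3]
    if not (1 <= y <= 3):
        return False
    return (0 <= x <= 2) if A else (1 <= x <= 3 and x + y <= 3)

random.seed(0)
n, box_area = 100_000, 3.0 * 2.0   # sample x in [0, 3], y in [1, 3]
est = 0.0
for A in (True, False):
    acc = 0.0
    for _ in range(n):
        x, y = random.uniform(0, 3), random.uniform(1, 3)
        if chi(x, y, A):              # w is 0 outside the support
            acc += w(x, y, A)
    est += box_area * acc / n
print(est)  # close to 29/3 (approximately 9.67)
```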

slide-28
SLIDE 28

A very relevant subcase of FIUCLRA functions: PLRA functions

Definition (PLRA weight function). Given x, A, Ψ and χ as in the FIUCLRA definition, a weight function w(x, A) is called Polynomial under LRA conditions (PLRA) iff, for every total assignment µ^A on A and µ^Ψ on Ψ, w[µ^A,µ^Ψ](x) is a polynomial whose value is non-negative in the domain defined by µ^Ψ.

PLRA functions are FIUCLRA, because polynomials can always be integrated exactly on sets of LRA literals [7].

We define a grammar to express PLRA weight functions:

    w ::= c | x | −w | (w + w) | (w − w) | (w · w) | If ϕ Then w Else w | Case ϕ : w; ϕ : w; ...
    χ ::= ϕ

where c is a real value, x is a real variable, w is a PLRA weight function, and ϕ is an LRA formula.
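As an illustration of the grammar (our sketch, not the paper's implementation), weight terms can be represented as nested tuples and evaluated recursively; conditions are modeled here as Python predicates standing in for LRA formulas, and Case is omitted since it desugars into nested If terms:

```python
# Terms: ("const", c), ("var", name), ("neg", w),
#        ("add"/"sub"/"mul", w1, w2), ("ite", cond, w_then, w_else).

def eval_w(term, env):
    """Evaluate a weight term under an assignment `env` of real and
    Boolean variables."""
    tag = term[0]
    if tag == "const":
        return term[1]
    if tag == "var":
        return env[term[1]]
    if tag == "neg":
        return -eval_w(term[1], env)
    if tag == "add":
        return eval_w(term[1], env) + eval_w(term[2], env)
    if tag == "sub":
        return eval_w(term[1], env) - eval_w(term[2], env)
    if tag == "mul":
        return eval_w(term[1], env) * eval_w(term[2], env)
    if tag == "ite":
        branch = term[2] if term[1](env) else term[3]
        return eval_w(branch, env)
    raise ValueError(f"unknown term tag: {tag}")

# w(x, A) = If A Then x Else -x   (i.e. |x| when A <-> x >= 0)
w = ("ite", lambda e: e["A"], ("var", "x"), ("neg", ("var", "x")))
print(eval_w(w, {"x": -0.5, "A": False}))  # 0.5
```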

SLIDE 31

Computing WMI with FIUCLRA weight functions

Theorem. Let w(x, A), Ψ ≝ {ψ1, ..., ψK} and χ be as above. Let B ≝ {B1, ..., BK} be fresh propositional atoms, and let w∗(x, A ∪ B) be the weight function obtained by substituting in w(x, A) each condition ψk with Bk, for every k ∈ [1..K]. Let ϕ∗ ≝ ϕ ∧ χ ∧ ⋀_{k=1}^{K} (Bk ↔ ψk). Then:

    WMI(ϕ ∧ χ, w | x, A) = WMI(ϕ∗, w∗ | x, A ∪ B).

[Diagram: the branching of w on ⟨x, A⟩ over ψ = ⊤ / ψ = ⊥ matches the branching of w∗ over B = ⊤ / B = ⊥ once B ↔ ψ is conjoined to the formula.]

slide-32
SLIDE 32

Computing WMI with FIUCLRA weight functions - Example

Example. Let A = ∅, x = {x}, χ ≝ x ∈ [−1, 1], ϕ ≝ ⊤, Ψ ≝ {(x ≥ 0)}, and

    w(x) ≝ If (x ≥ 0) Then x Else −x   (i.e., w(x) ≝ |x|).

Then WMI(ϕ, w | x, ∅) = WMInb(ϕ, w | x) = ∫_{[−1,1]} |x| dx = 1.

With labelling, ϕ∗ = x ∈ [−1, 1] ∧ (B ↔ (x ≥ 0)) and w∗ = If B Then x Else −x. Then WMI(ϕ∗, w∗ | x, B) = 1 (see the previous example, modulo reordering and variable renaming).

slide-33
SLIDE 33

From WMIold to WMI and vice versa

From WMIold to WMI. We can easily express and compute WMIold as WMI by an equivalent FIUCLRA weight function:

    WMI(ϕ, Π_{ψ ∈ Atoms(ϕ)} (If ψ Then w(ψ) Else w(¬ψ)) | x, A).

From WMI to WMIold? AFAIK, there is no obvious general way to encode an arbitrary FIUCLRA weight function into a WMIold one while always preventing an explosion in the size of its representation. E.g.:

    w(x, A) = Π_{Aj ∈ A} (If Aj Then wj1(x) Else wj2(x)).

A trivial general solution:
- for every total truth assignment µ ∈ TTA(ϕ), add a fresh Boolean atom Bµ;
- set w(Bµ) ≝ wµ(x), w(¬Bµ) ≝ 1, and w(ℓ) ≝ 1 for every other literal ℓ.
⟹ blows up in size w.r.t. |A|.

SLIDE 35

Outline

1 Background
2 Weighted Model Integration, Revisited
3 SMT-Based WMI Computation
4 A Case Study: The Road Network Problem
5 Experimental Evaluations
6 Ongoing and Future Work

SLIDE 36

Baseline

The Problem. Compute efficiently the WMI of a FIUCLRA weight function w(x, A), with support formula χ and set of conditions Ψ ≝ {ψ1, ..., ψK}, over a formula ϕ(x, A).

Preprocessing. The problem is transformed into ϕ∗ ≝ ϕ ∧ χ ∧ ⋀_{k=1}^{K} (Bk ↔ ψk), w∗ ≝ w[B ← Ψ], and A∗ ≝ A ∪ B by applying the Theorem:

    WMI(ϕ ∧ χ, w | x, A) = WMI(ϕ∗, w∗ | x, A ∪ B).

Baseline Procedure: WMI-AllSMT. Based on the proposition:

    WMI(ϕ∗, w∗ | x, A∗) = Σ_{µ^A∗∧µ^LRA ∈ TTA(ϕ∗)} WMInb(µ^LRA, w∗[µ^A∗] | x).

- TTA(ϕ∗) is computed by AllSMT (e.g., in MATHSAT5) without assignment reduction;
- WMInb(µ^LRA, w∗[µ^A∗] | x) is computed by invoking our background integration procedure for FILRA functions (e.g., LATTE INTEGRALE [18]).

SLIDE 39

Efficient WMI procedure: WMI-PA

Based on the propositions:

    WMI(ϕ∗, w∗ | x, A∗) = Σ_{µ^A∗ ∈ TTA(∃x.ϕ∗)} WMInb(ϕ∗[µ^A∗], w∗[µ^A∗] | x)
    WMInb(ϕ∗[µ^A∗], w∗[µ^A∗] | x) = Σ_{µ^LRA ∈ TA(ϕ∗[µ^A∗])} WMInb(µ^LRA, w∗[µ^A∗] | x).

- TTA(∃x.ϕ∗) is computed by Predicate Abstraction, TTA(PredAbs[ϕ∗](A∗)) [17] (in MATHSAT5);
- WMInb(ϕ∗[µ^A∗], w∗[µ^A∗] | x) is computed by AllSMT with assignment reduction [17] (in MATHSAT5);
- ϕ∗[µ^A∗] is aggressively simplified before invoking TA() on it:
  - this reduces the number of assignments in TA(ϕ∗[µ^A∗]);
  - if ϕ∗[µ^A∗] is reduced to a conjunction of LRA-literals, then there is no need to invoke TA();
- WMInb(µ^LRA, w∗[µ^A∗] | x) can exploit caching of integral values.

SLIDE 40

Efficient WMI procedure: WMI-PA (cont.)

    WMI-PA(ϕ, w, x, A)
      ⟨ϕ∗, w∗, A∗⟩ ← LabelConditions(ϕ, w, x, A)        // apply Theorem
      M_A∗ ← TTA(PredAbs[ϕ∗](A∗))                       // TTA(∃x.ϕ∗)
      vol ← 0
      for µ_A∗ ∈ M_A∗ do
        Simplify(ϕ∗[µ_A∗])            // remove as many LRA-atoms as possible from ϕ∗[µ_A∗]
        if IsLiteralConjunction(ϕ∗[µ_A∗]) then
          vol ← vol + WMInb(ϕ∗[µ_A∗], w∗[µ_A∗] | x)
        else
          M_LRA ← TA(PredAbs[ϕ∗[µ_A∗]](Atoms(ϕ∗[µ_A∗])))   // AllSMT with assignment reduction
          for µ_LRA ∈ M_LRA do
            vol ← vol + WMInb(µ_LRA, w∗[µ_A∗] | x)
          end for
        end if
      end for
      return vol
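A toy rendition of the overall enumerate-then-integrate loop (ours, heavily simplified): naive Boolean enumeration stands in for predicate abstraction and AllSMT, univariate intervals stand in for LRA regions, and exact polynomial integration stands in for LATTE INTEGRALE. It only illustrates the control flow, not the solver machinery:

```python
from fractions import Fraction as F
from itertools import product

def integrate_poly(coeffs, lo, hi):
    """Exact integral over [lo, hi] of sum_i coeffs[i] * x**i."""
    return sum(F(c) * (F(hi)**(i + 1) - F(lo)**(i + 1)) / (i + 1)
               for i, c in enumerate(coeffs))

def wmi_enumerate(bools, phi, weight):
    """For each total Boolean assignment mu: phi(mu) returns the disjoint
    intervals on which phi*[mu] holds ([] if mu has no LRA-consistent
    expansion), playing the role of TA(phi*[mu]); weight(mu) returns the
    residual polynomial w*[mu](x) as a coefficient list."""
    vol = F(0)
    for values in product([False, True], repeat=len(bools)):
        mu = dict(zip(bools, values))
        for lo, hi in phi(mu):
            vol += integrate_poly(weight(mu), lo, hi)
    return vol

# phi* = x in [-1, 1] AND (B <-> x >= 0),  w* = If B Then x Else -x
def phi(mu):
    return [(0, 1)] if mu["B"] else [(-1, 0)]

def weight(mu):
    return [0, 1] if mu["B"] else [0, -1]

print(wmi_enumerate(["B"], phi, weight))  # 1
```

The actual WMI-PA gains come precisely from what this toy omits: pruning Boolean assignments via TTA(∃x.ϕ∗), simplifying ϕ∗[µ_A∗], and enumerating partial LRA assignments.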

SLIDE 41

WMI-PA vs. WMI-ALLSMT

- WMI-PA decouples the enumeration of the µ_A∗'s from that of the µ_LRA's.
- TTA(∃x.ϕ∗) removes a priori all the assignments µ_A∗ which cannot be expanded by any LRA-satisfiable assignment µ_LRA s.t. µ_A∗ ∧ µ_LRA propositionally satisfies ϕ∗.
- Atoms(ϕ∗[µ_A∗]) can be much smaller than Atoms(ϕ∗) thanks to Simplify. (E.g., ((x ≤ 1) ∧ (A2 ∨ (x ≥ 0)))[A2] is simplified into (x ≤ 1), so that (x ≥ 0) is eliminated.)
⟹ the number of assignments µ_A∗ ∧ µ_LRA can be drastically reduced:
- TA(...) searches for a set of partial assignments µ_LRA, each standing for exponentially many (2^k, for k unassigned atoms) total ones.

SLIDE 42

WMI-PA vs. WMI-ALLSMT: Example

Example.
    w(x, A) = If (y ≤ 1) Then f(x, y) Else g(x, y)
    χ(x, A) = (0 ≤ y) ∧ (y ≤ 2) ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2))) ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))
    ϕ(x, A) = ⊤

Note: f(x, y) is defined on x ∈ [0, 2], y ∈ [0, 1]; g(x, y) is defined on x ∈ [1, 3], y ∈ [1, 2].

After labelling:
    w∗(x, A∗) = If B1 Then f(x, y) Else g(x, y)
    ϕ∗(x, A∗) = (B1 ↔ (y ≤ 1)) ∧ (0 ≤ y) ∧ (y ≤ 2) ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2))) ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))

SLIDE 44

WMI-PA vs. WMI-ALLSMT: Example (cont.)

After labelling:
    w∗(x, A∗) = If B1 Then f(x, y) Else g(x, y)
    ϕ∗(x, A∗) = (B1 ↔ (y ≤ 1)) ∧ (0 ≤ y) ∧ (y ≤ 2) ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2))) ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))

WMI-ALLSMT. With WMI-ALLSMT, the integration on 4 total truth assignments is needed:
    { B1, (0 ≤ y), (y ≤ 2), (y ≤ 1), (0 ≤ x), (x ≤ 2), (1 ≤ x), (x ≤ 3)}
    { B1, (0 ≤ y), (y ≤ 2), (y ≤ 1), (0 ≤ x), (x ≤ 2), ¬(1 ≤ x), (x ≤ 3)}
    {¬B1, (0 ≤ y), (y ≤ 2), ¬(y ≤ 1), (0 ≤ x), (x ≤ 2), (1 ≤ x), (x ≤ 3)}
    {¬B1, (0 ≤ y), (y ≤ 2), ¬(y ≤ 1), (0 ≤ x), ¬(x ≤ 2), (1 ≤ x), (x ≤ 3)}

    ∫_0^1 ∫_1^2 f(x, y) dx dy + ∫_0^1 ∫_0^1 f(x, y) dx dy + ∫_1^2 ∫_1^2 g(x, y) dx dy + ∫_1^2 ∫_2^3 g(x, y) dx dy

Two useless partitions: on (1 ≤ x) and on (x ≤ 2).

SLIDE 50

WMI-PA vs. WMI-ALLSMT: Example (cont.)

After labelling:
    w∗(x, A∗) = If B1 Then f(x, y) Else g(x, y)
    ϕ∗(x, A∗) = (B1 ↔ (y ≤ 1)) ∧ (0 ≤ y) ∧ (y ≤ 2) ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2))) ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))

WMI-PA Computation. M_A∗ = {{B1}, {¬B1}}. Consider µ1 ≝ {B1}:
    w∗[µ1](x, A∗) = f(x, y)
    ϕ∗[µ1](x, A∗) = (⊤ ↔ (y ≤ 1)) ∧ (0 ≤ y) ∧ (y ≤ 2) ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2))) ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))
    Simplify(ϕ∗[µ1]) = (y ≤ 1) ∧ (0 ≤ y) ∧ (y ≤ 2) ∧ (0 ≤ x) ∧ (x ≤ 2)
    ∫_{ϕ∗[µ1]} w∗[µ1] dx = ∫_0^1 ∫_0^2 f(x, y) dx dy

slide-51
SLIDE 51

WMI-PA vs. WMI-ALLSMT: Example (cont.)

After labelling:
  w∗(x, A∗) = If B1 Then f(x, y) Else g(x, y)
  ϕ∗(x, A∗) = (B1 ↔ (y ≤ 1)) ∧ (0 ≤ y) ∧ (y ≤ 2)
              ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2)))
              ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))

WMI-PA Computation
  M_A∗ = {{B1}, {¬B1}};  take µ2 = {¬B1}:
  w∗[µ2](x, A∗) = g(x, y)
  ϕ∗[µ2](x, A∗) = (⊥ ↔ (y ≤ 1)) ∧ (0 ≤ y) ∧ (y ≤ 2)
                  ∧ ((y ≤ 1) → ((0 ≤ x) ∧ (x ≤ 2)))
                  ∧ (¬(y ≤ 1) → ((1 ≤ x) ∧ (x ≤ 3)))
  Simplify(ϕ∗[µ2]) = ¬(y ≤ 1) ∧ (0 ≤ y) ∧ (y ≤ 2) ∧ (1 ≤ x) ∧ (x ≤ 3)

  ∫_{ϕ∗[µ2]} w∗[µ2] dx = ∫₁² ∫₁³ g(x, y) dx dy

Overall, WMI-PA computes only 2 integrals instead of 4.
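The two-assignment WMI-PA computation above can be sketched end to end: enumerate M_A∗, pick the weight branch fixed by each µ, and integrate over the box given by Simplify(ϕ∗[µ]). As before, f(x, y) = x·y and g(x, y) = x + y are hypothetical concrete weights standing in for the abstract f and g.

```python
from fractions import Fraction as F

def box_integral(poly, xbox, ybox):
    """Exact integral over a box of a polynomial given as
    {(a, b): coeff} for terms coeff * x^a * y^b."""
    (xlo, xhi), (ylo, yhi) = xbox, ybox
    total = F(0)
    for (a, b), c in poly.items():
        ix = (F(xhi)**(a + 1) - F(xlo)**(a + 1)) / (a + 1)
        iy = (F(yhi)**(b + 1) - F(ylo)**(b + 1)) / (b + 1)
        total += F(c) * ix * iy
    return total

F_POLY = {(1, 1): 1}              # f(x, y) = x*y      (hypothetical)
G_POLY = {(1, 0): 1, (0, 1): 1}   # g(x, y) = x + y    (hypothetical)

def wmi_pa():
    """One integral per assignment in M_A* = {{B1}, {not B1}}."""
    total = F(0)
    for b1 in (True, False):
        if b1:   # Simplify(phi*[mu1]): y in [0, 1], x in [0, 2]; branch f
            total += box_integral(F_POLY, (0, 2), (0, 1))
        else:    # Simplify(phi*[mu2]): y in [1, 2], x in [1, 3]; branch g
            total += box_integral(G_POLY, (1, 3), (1, 2))
    return total
```

The loop performs exactly two integrations, one per µ, which is the saving WMI-PA obtains over the four-piece AllSMT enumeration.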

slide-52
SLIDE 52

Outline

1 Background
2 Weighted Model Integration, Revisited
3 SMT-Based WMI Computation
4 A Case Study: The Road Network Problem
5 Experimental Evaluations
6 Ongoing and Future Work

slide-53
SLIDE 53

Case Study 1: The Road Network Problem, Fixed Path

Given:
  a path of N + 1 consecutive adjacent locations {l0, ..., lN} in a road network (implicit);
  (the part of interest of) the day, partitioned into disjoint consecutive intervals {I1, ..., IM};
  for each pair li, lj of adjacent locations and each time interval Im def= [cm, cm+1), the distribution of the journey time from li to lj at any time t ∈ Im:
    f^m_{li,lj} : R → R+ is such a distribution,
    R^m_{li,lj} def= [a^m_{li,lj}, b^m_{li,lj}) is its support.
  Note: the time slots Im are disjoint; the supports R^m_{li,lj} are not.
  two values: departure time tdep and maximum arrival time tarr;
  variables: xn (n ∈ [1...N]), the journey time between ln−1 and ln, and tn (n ∈ [0...N]), the time at step n, i.e., tn def= t0 + Σ_{i=1}^{n} xi.

Query: P(tN ≤ tarr | t0 = tdep, {li}_{i=0}^{N}). (The locations {li}_{i=0}^{N} are left implicit.)

slide-54
SLIDE 54

Case Study 1: The Road Network Problem, Fixed Path (cont.)

[Figure] Journey-time densities for a pair of consecutive steps, from location li to li+2. Each edge shows the corresponding journey-time distribution for each of the intervals.

slide-55
SLIDE 55

The Road Network Problem, Fixed Path: Encoding

Let x def= {x1, ..., xN}, A def= ∅, and let “tn” be a shortcut for the term “Σ_{i=1}^{n} xi + t0”. Then:

  w(x) def= Π_{n=1}^{N} (Case t_{n−1} ∈ I1 : f^1_{l_{n−1},l_n}(xn); ... ; t_{n−1} ∈ IM : f^M_{l_{n−1},l_n}(xn))

  χ(x) def= ∧_{n=0}^{N} (tn ∈ ∪_{m=1}^{M} Im)
         ∧ ∧_{n=1}^{N} ∧_{m=1}^{M} (t_{n−1} ∈ Im → xn ∈ R^m_{l_{n−1},l_n})

  ϕ(x) def= ⊤

  P(tN ≤ tarr | t0 = tdep, {li}_{i=0}^{N}) =
      WMInb(χ(x) ∧ (tN ≤ tarr) ∧ (t0 = tdep), w(x)|x) / WMInb(χ(x) ∧ (t0 = tdep), w(x)|x)

If each f^m_{li,lj}(x) is polynomial in x ∈ R^m_{li,lj}, then w(x) is PLRA and hence FIUCLRA. Thus we can apply the theorem:

  ϕ∗(x, B) def= ϕ(x) ∧ χ(x) ∧ ∧_{n=1}^{N} ∧_{m=1}^{M} (B^m_{n−1} ↔ t_{n−1} ∈ Im)

  w∗(x, B) def= Π_{n=1}^{N} (Case B^1_{n−1} : f^1_{l_{n−1},l_n}(xn); ... ; B^M_{n−1} : f^M_{l_{n−1},l_n}(xn)).
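The fixed-path constraint χ(x) above can be generated mechanically. Below is an illustrative stdlib-only generator that emits it as an SMT-LIB-style string; the variable names (t0, xn) and the interval/support data follow the slides, but the string format and the function names are our own sketch, and it assumes the time slots are consecutive (so their union is one interval), as stated in the problem setup.

```python
def t_term(n):
    """SMT-LIB term for t_n = t0 + x_1 + ... + x_n."""
    if n == 0:
        return "t0"
    return "(+ t0 {})".format(" ".join(f"x{i}" for i in range(1, n + 1)))

def in_interval(term, lo, hi):
    """Membership in the half-open interval [lo, hi)."""
    return f"(and (<= {lo} {term}) (< {term} {hi}))"

def chi(N, intervals, supports):
    """Build chi(x) for a fixed path of length N.

    intervals: list of (c_m, c_{m+1}) time slots, assumed consecutive;
    supports[n][m]: (a, b), the support R^m for edge (l_{n-1}, l_n) in slot I_m.
    """
    # Union of consecutive slots I_1, ..., I_M is the single interval [c_1, c_{M+1}).
    day_lo, day_hi = intervals[0][0], intervals[-1][1]
    conj = [in_interval(t_term(n), day_lo, day_hi) for n in range(N + 1)]
    # (t_{n-1} in I_m) -> (x_n in R^m_{l_{n-1}, l_n})
    for n in range(1, N + 1):
        for m, (lo, hi) in enumerate(intervals):
            a, b = supports[n - 1][m]
            conj.append(f"(=> {in_interval(t_term(n - 1), lo, hi)} "
                        f"{in_interval(f'x{n}', a, b)})")
    return "(and {})".format(" ".join(conj))
```

For example, `chi(1, [(7, 8), (8, 9)], [[(0.5, 1), (1, 1.5)]])` yields the conjunction of the day-range constraints on t0 and t1 and the two guarded support constraints on x1.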
slide-57
SLIDE 57

The Road Network Problem, Fixed Path: Example

Example

  χ(x) def= t0 ∈ [7, 10) ∧ t0 + x1 ∈ [7, 10)
         ∧ (t0 ∈ [7, 8) → x1 ∈ [0.5, 1)) ∧ (t0 ∈ [8, 9) → x1 ∈ [1, 1.5)) ∧ (t0 ∈ [9, 10) → x1 ∈ [1, 2))
         ∧ (t0 + x1 ∈ [7, 8) → x2 ∈ [1, 1.5)) ∧ (t0 + x1 ∈ [8, 9) → x2 ∈ [1.5, 2)) ∧ (t0 + x1 ∈ [9, 10) → x2 ∈ [1, 2))

  w(x) def= (Case t0 ∈ [7, 8)  : w^1_[l0l1](x1);
                  t0 ∈ [8, 9)  : w^2_[l0l1](x1);
                  t0 ∈ [9, 10) : w^3_[l0l1](x1))
          · (Case t0 + x1 ∈ [7, 8)  : w^1_[l1l2](x2);
                  t0 + x1 ∈ [8, 9)  : w^2_[l1l2](x2);
                  t0 + x1 ∈ [9, 10) : w^3_[l1l2](x2))

  ϕ(x) def= ⊤

where the w^m_[ln−1 ln](xn) are functions which are integrable and positive in their respective domains stated in χ(x) (e.g., w^1_[l0l1](x1) is integrable and positive in x1 ∈ [0.5, 1)).
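The Case-product semantics of this example can be made concrete with a tiny evaluator: given (t0, x1, x2), each Case selects the branch whose time-slot guard holds, and w(x) is the product of the two selected branch weights. The branch weights below are hypothetical placeholders (any functions positive on their supports would do); only the slot boundaries come from the slide.

```python
# Hypothetical branch weights w^m_[l0 l1] and w^m_[l1 l2], positive on their supports:
w_l0l1 = {1: lambda x: 4 * x, 2: lambda x: 2 * x, 3: lambda x: x}
w_l1l2 = {1: lambda x: 2 * x, 2: lambda x: x, 3: lambda x: x / 2}

def slot(t):
    """Index m of the time slot containing t: [7,8) -> 1, [8,9) -> 2, [9,10) -> 3."""
    for m, (lo, hi) in enumerate([(7, 8), (8, 9), (9, 10)], start=1):
        if lo <= t < hi:
            return m
    raise ValueError(f"{t} outside [7, 10)")

def w(t0, x1, x2):
    """Product of the two Case expressions from the slide:
    the first guard tests t0, the second tests t0 + x1."""
    return w_l0l1[slot(t0)](x1) * w_l1l2[slot(t0 + x1)](x2)

# Departing at 7.5 with x1 = 0.75 lands in slot 1, then slot 2:
assert slot(7.5) == 1 and slot(7.5 + 0.75) == 2
```

Note how the second Case is guarded by t0 + x1 rather than by a fresh variable: the slot of the second leg depends on when the first leg ends.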

slide-58
SLIDE 58

The Road Network Problem, Fixed Path: Example (cont.)

Example

Then, by applying the theorem:

  ϕ∗(x, B) def= ϕ(x) ∧ χ(x) ∧ (B^1_0 ↔ t0 ∈ [7, 8)) ∧ ... ∧ (B^3_1 ↔ t0 + x1 ∈ [9, 10))

  w∗(x, B) def= (Case B^1_0 : w^1_[l0l1](x1);
                      B^2_0 : w^2_[l0l1](x1);
                      B^3_0 : w^3_[l0l1](x1))
              · (Case B^1_1 : w^1_[l1l2](x2);
                      B^2_1 : w^2_[l1l2](x2);
                      B^3_1 : w^3_[l1l2](x2))

slide-59
SLIDE 59

Case Study 2: The Road Network Problem under Conditional Plan

Given: time intervals Im, variables xn and tn, values tdep and tarr, distributions of journey times f^m_{li,lj} and supports R^m_{li,lj} (for all adjacent li, lj in the network), as in the fixed-path case.

The path in the road network is not given in advance. Instead, we are given:
  a maximum path length N;
  an initial location ldep and a final target location ltarget;
  a conditional plan s.t., for any current location l and time-interval index m, next(l, m, ltarget) is the next location in the path (mimics the empirical knowledge of the driver);
  (for ltarget: next(ltarget, m, ltarget) def= ltarget and R^m_{ltarget,ltarget} def= [0, 0]).

Query: P(tN ≤ tarr | t0 = tdep, ldep, ltarget, next).

slide-60
SLIDE 60

Case Study 2: The Road Network Problem under Cond. Plan (cont.)

Two alternative (sub)paths from lcurr to ltarget. The successor of lcurr is selected according to the time interval at which the node is reached.

slide-61
SLIDE 61

The Road Network Problem with Conditional Plan: Encoding

Let x def= {x1, ..., xN}, A def= {A_{0,1}, ..., A_{N,L}}, and let “tn” be a shortcut for the term “Σ_{i=1}^{n} xi + t0”.

  χ(x, A) def= ∧_{n=0}^{N} (tn ∈ ∪_{m=1}^{M} Im) ∧ ∧_{n=0}^{N} OneOf{A_{n,l} | l ∈ [1, L]}
            ∧ ∧_{n=1}^{N} ∧_{l=1}^{L} (A_{n−1,l} → ∧_{m=1}^{M} (t_{n−1} ∈ Im → xn ∈ R^m_{l,next(l,m,ltarget)}))

  ϕ(x, A) def= A_{0,ldep} ∧ ∧_{n=1}^{N} ∧_{l=1}^{L} (A_{n−1,l} → ∧_{m=1}^{M} (t_{n−1} ∈ Im → A_{n,next(l,m,ltarget)}))

  w(x, A) def= Π_{n=1}^{N} (Case
      (A_{n−1,l1} ∧ A_{n,l2}) : (Case t_{n−1} ∈ I1 : f^1_{l1,l2}(xn); ... ; t_{n−1} ∈ IM : f^M_{l1,l2}(xn));
      (A_{n−1,l1} ∧ A_{n,l3}) : (Case t_{n−1} ∈ I1 : f^1_{l1,l3}(xn); ... ; t_{n−1} ∈ IM : f^M_{l1,l3}(xn));
      ...
      (A_{n−1,lL} ∧ A_{n,lL−1}) : (Case t_{n−1} ∈ I1 : f^1_{lL,lL−1}(xn); ... ; t_{n−1} ∈ IM : f^M_{lL,lL−1}(xn)))

Note: in w(x, A) the case “A_{n−1,li} ∧ A_{n,lj}” is considered only if li, lj are adjacent and lj = next(li, m, ltarget) for some m.

  P(tN ≤ tarr | t0 = tdep, ldep, ltarget, next) =
      WMI(ϕ(x, A) ∧ χ(x, A) ∧ (tN ≤ tarr) ∧ (t0 = tdep), w(x, A)|x, A) / WMI(ϕ(x, A) ∧ χ(x, A) ∧ (t0 = tdep), w(x, A)|x, A)
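The semantics that ϕ(x, A) encodes symbolically, i.e. "at location l, reached in time slot m, the driver moves to next(l, m, ltarget)", can be illustrated by simulating one concrete journey under a toy plan. The network, time slots, and journey times below are all hypothetical; only the next(l, m, target) selection rule comes from the slides.

```python
SLOTS = [(7, 8), (8, 9), (9, 10)]  # hypothetical time intervals I_1..I_3

def slot(t):
    """Index m of the time slot containing t."""
    for m, (lo, hi) in enumerate(SLOTS, start=1):
        if lo <= t < hi:
            return m
    raise ValueError(f"{t} outside the modelled day")

# next(l, m, target): a hypothetical conditional plan over locations a..d,
# with "d" as the (implicit) target of every entry.
PLAN = {("a", 1): "b", ("a", 2): "c", ("a", 3): "c",
        ("b", 1): "d", ("b", 2): "d", ("b", 3): "d",
        ("c", 1): "d", ("c", 2): "d", ("c", 3): "d"}

def follow_plan(t0, journey_times, start, target, N):
    """Unroll at most N steps; journey_times[(l, l')] gives the sampled x_n
    for edge (l, l'). Returns the visited path and the arrival time."""
    t, l, path = t0, start, [start]
    for _ in range(N):
        if l == target:          # mirrors next(target, m, target) = target
            break
        nxt = PLAN[(l, slot(t))]
        t += journey_times[(l, nxt)]
        l = nxt
        path.append(l)
    return path, t
```

For instance, departing from "a" at 7.5 with first-leg time 0.75 reaches "b" in slot 2, so the plan's slot-2 entry for "b" decides the next hop; the WMI encoding sums (integrates) over all such slot-dependent branches instead of fixing one.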

slide-62
SLIDE 62

Outline

1 Background
2 Weighted Model Integration, Revisited
3 SMT-Based WMI Computation
4 A Case Study: The Road Network Problem
5 Experimental Evaluations
6 Ongoing and Future Work

slide-63
SLIDE 63

Experimental Evaluation: Description

We compared the following tools:
  WMI-BC: our re-implementation of the WMIold procedure in [8];
  WMI-ALLSMT and WMI-PA;
  SVE-XADD: the tool in [24], adapted to parse our input format;
  PRAISE: the tool of Probabilistic Inference Modulo Theories [13].

In WMI-BC, WMI-ALLSMT and WMI-PA we use:
  MATHSAT5 [11, 1] for SMT reasoning;
  LATTE INTEGRALE [18, 2] to compute integrals of polynomials;
  SYMPY [3], a Python library for symbolic mathematics, for weight manipulations.

Experiments:
  synthetic settings [19, 20];
  the real-world Strategic Road Network Dataset [4] by the English Highways Agency (both fixed-path and conditional-plan);
  run on a 7-core virtual machine, 2.2 GHz with 94 GB of RAM;
  timeout of 10,000 seconds for each (query, tool) pair;
  when terminating, all tools returned the same values on the same queries (modulo rounding);
  tools, data, and scripts used for the experiments are publicly available [5].

slide-64
SLIDE 64

Results on Synthetic Settings [19]

(left): Query execution times for all methods; (center): query execution times for the three best-performing algorithms; (right): number of integrals for the three best-performing algorithms.

slide-65
SLIDE 65

Results on Synthetic Settings [20]

(left): Query execution times (in seconds) for all methods on the synthetic experiment; (right): Number of integrals for WMI-BC, WMI-ALLSMT and WMI-PA on the same instances.

slide-66
SLIDE 66

Strategic Road Network with Fixed Path

  #   PRAiSE   WMI-BC   WMI-AllSMT   WMI-PA
  1        2        1            2        –
  2        3       10            8        3
  3        7      425          253        –
  4       22  > 10000         3994        2
  5      174  > 10000      > 10000        8
  6     6722  > 10000      > 10000       86
  7  > 10000  > 10000      > 10000      850
  8  > 10000  > 10000      > 10000     8884

(left): Query execution times in seconds (1st quartile, median and 3rd quartile). (right): Medians (in seconds) for each path length.

slide-67
SLIDE 67

Strategic Road Network with Conditional Plan

  #   PRAiSE   WMI-PA
  1      799        1
  2  > 10000        2
  3  > 10000        4
  4  > 10000        6
  5  > 10000       14
  6  > 10000       77
  7  > 10000      708
  8  > 10000     6203

(left): Query execution times in seconds (1st quartile, median and 3rd quartile). (right): Medians (in seconds) for each path length.

slide-68
SLIDE 68

Outline

1 Background
2 Weighted Model Integration, Revisited
3 SMT-Based WMI Computation
4 A Case Study: The Road Network Problem
5 Experimental Evaluations
6 Ongoing and Future Work

slide-69
SLIDE 69

Conclusion

Novel WMI formulation
  easily captures the previous definition (not vice versa);
  works with weight functions w(x, A) rather than w(lit(x, A));
  w not restricted to products of weights over literals
    ⟹ allows for much more general forms (FIUCLRA).

Novel (WMI-ALLSMT and) WMI-PA procedures
  two steps: predicate abstraction + partial-assignment AllSMT, interleaved with formula simplification
    ⟹ drastically reduces the number of integrals to compute.

Empirical evaluation on both synthetic and real-world problems
  WMI-PA outperforms WMI-ALLSMT and previous approaches.

A WMI-ALLSMT & WMI-PA tool is available: pywmi [16] (https://pypi.org/project/pywmi/)

Note: CPU times for WMI-PA are largely dominated by the WMInb(µ^LRA, w|x) def= ∫_{µ^LRA(x)} w(x) dx calls
  ⟹ the computation of integrals is the current bottleneck.

slide-70
SLIDE 70

Ongoing & Future Work

Efficiency
  look for a more efficient basic integrator for WMInb(ϕ, w|x) def= ∫_{ϕ(x)} w(x) dx;
  TA(ϕ): more effective partial-assignment reduction techniques, exploiting w(x, µ^A) with partial µ^A;
  investigate forms of approximate enumeration [14];
  investigate forms of component caching [22, 6].
Expressiveness
  extend WMI to integers and mixed real/integer domains;
  extend WMI integration domains to (subcases of) non-linear arithmetic constraints?
Others
  find other applications, other than probabilistic reasoning?

slide-71
SLIDE 71

© Warner Bros. Inc.

slide-72
SLIDE 72

References I

[1] http://mathsat.fbk.eu/.
[2] https://www.math.ucdavis.edu/~latte/.
[3] http://www.sympy.org/.
[4] https://data.gov.uk/dataset/dft-eng-srn-routes-journey-times.
[5] https://github.com/unitn-sml/wmi-pa.
[6] F. Bacchus, S. Dalmao, and T. Pitassi. Solving #SAT and Bayesian inference with backtracking search. Journal of Artificial Intelligence Research, 34(1):391–442, 2009.
[7] V. Baldoni, N. Berline, J. D. Loera, M. Köppe, and M. Vergne. How to integrate a polynomial over a simplex. Mathematics of Computation, 80(273):297–325, 2011.
[8] V. Belle, A. Passerini, and G. Van den Broeck. Probabilistic inference in hybrid domains by weighted model integration. In IJCAI, 2015.
[9] R. Cavada, A. Cimatti, A. Franzén, K. Kalyanasundaram, M. Roveri, and R. Shyamasundar. Computing predicate abstractions by integrating BDDs and SMT solvers. In FMCAD, 2007.
[10] M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6-7):772–799, 2008.
[11] A. Cimatti, A. Griggio, B. J. Schaafsma, and R. Sebastiani. The MathSAT5 SMT solver. In TACAS, 2013.
[12] A. Darwiche. New advances in compiling CNF to decomposable negation normal form. In Proceedings of ECAI, pages 328–332, 2004.

slide-73
SLIDE 73

References II

[13] R. de Salvo Braz, C. O'Reilly, V. Gogate, and R. Dechter. Probabilistic inference modulo theories. In IJCAI, 2016.
[14] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Embed and project: Discrete sampling with universal hashing. In NIPS, pages 2085–2093, 2013.
[15] S. Graf and H. Saïdi. Construction of abstract state graphs with PVS. In CAV, 1997.
[16] S. Kolb, P. Morettin, P. Z. D. Martires, F. Sommavilla, A. Passerini, R. Sebastiani, and L. De Raedt. The pywmi framework and toolbox for probabilistic inference using weighted model integration. In Proc. IJCAI, 2019. To appear.
[17] S. K. Lahiri, R. Nieuwenhuis, and A. Oliveras. SMT techniques for fast predicate abstraction. In CAV, 2006.
[18] J. D. Loera, B. Dutra, M. Koeppe, S. Moreinis, G. Pinto, and J. Wu. Software for exact integration of polynomials over polyhedra. ACM Communications in Computer Algebra, 45(3/4):169–172, 2012.
[19] P. Morettin, A. Passerini, and R. Sebastiani. Efficient weighted model integration via SMT-based predicate abstraction. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 720–728, 2017.
[20] P. Morettin, A. Passerini, and R. Sebastiani. Advanced SMT techniques for weighted model integration. Artificial Intelligence, 2019. https://doi.org/10.1016/j.artint.2019.04.003.
[21] C. Muise, S. A. McIlraith, J. C. Beck, and E. I. Hsu. Dsharp: Fast d-DNNF compilation with sharpSAT. In Advances in Artificial Intelligence, pages 356–361. Springer, 2012.

slide-74
SLIDE 74

References III

[22] T. Sang, F. Bacchus, P. Beame, H. A. Kautz, and T. Pitassi. Combining component caching and clause learning for effective model counting. In SAT, 2004.
[23] T. Sang, P. Beame, and H. A. Kautz. Performing Bayesian inference by weighted model counting. In AAAI, volume 5, pages 475–481, 2005.
[24] S. Sanner, K. V. Delgado, and L. N. de Barros. Symbolic dynamic programming for discrete and continuous state MDPs. In UAI, pages 643–652, 2011.