
Towards Machine Learning for Quantification

Mikoláš Janota AITP, 28 March 2018

IST/INESC-ID, University of Lisbon, Portugal

Janota Towards Machine Learning for Quantification 1 / 28


Outline

  • Intro: QBF, Expansion, Games, Careful expansion
  • Solving QBF
  • Learning in QBF
  • Bernays–Schönfinkel (“Effectively Propositional Logic”) — Finite Models


Intro: QBF, Expansion, Games, Careful expansion

SAT and QBF

  • SAT — for a Boolean formula, determine if it is satisfiable
  • Example: {x = 1, y = 0} |= (x ∨ y) ∧ (x ∨ ¬y)

  • QBF — for a Quantified Boolean formula
  • Example: ∀x∃y. (x ↔ y)
  • Quantifications as shorthands for connectives (∀ = ∧, ∃ = ∨). Example:

(1) ∀x∃y. (x ↔ y)
(2) ∀x. (x ↔ 0) ∨ (x ↔ 1)
(3) ((0 ↔ 0) ∨ (0 ↔ 1)) ∧ ((1 ↔ 0) ∨ (1 ↔ 1))
(4) 1 (True)

QBF is a strict subset of Bernays–Schönfinkel (EPR)

  • Consider the QBF:

∀u∃e. u ↔ e

  • 1. Introduce a predicate for truth,
  • 2. replace each existential variable by a predicate,
  • 3. wrap universal variables in the truth predicate:

is-true(t) ∧ ¬is-true(f) ∧ (∀Xu. is-true(Xu) ↔ pe(Xu))

  • Alternatively, use equality:

t ≠ f ∧ (∀Xu. (Xu = t) ↔ pe(Xu))

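The correspondence can be sanity-checked by brute force: over a two-element universe, the equality-based translation has a model exactly when pe is interpreted as the ∃-player's winning strategy e ← u. A minimal Python sketch (the boolean encoding of the universe elements is an illustrative assumption, not from the talk):

```python
from itertools import product

# Two-element universe, encoded as booleans; True plays the role of element t.
UNIVERSE = [False, True]

def is_model(t, f, pe):
    """Check the translation  t ≠ f  ∧  ∀Xu. (Xu = t) ↔ pe(Xu)."""
    return t != f and all((xu == t) == pe[xu] for xu in UNIVERSE)

# Enumerate every interpretation of the constants t, f and the predicate pe.
models = [(t, f, pe)
          for t, f in product(UNIVERSE, repeat=2)
          for bits in product([False, True], repeat=2)
          for pe in [dict(zip(UNIVERSE, bits))]
          if is_model(t, f, pe)]

# A model exists, and in every model pe(Xu) holds exactly when Xu = t,
# i.e. pe encodes the winning strategy e ← u of the original QBF.
assert models
assert all(pe[xu] == (xu == t) for t, f, pe in models for xu in UNIVERSE)
```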
Quantification and Two-player Games

  • In this talk we consider prenex form:

Quantifier-prefix. Matrix. Example: ∀u1u2∃e1e2. (¬u1 ∨ e1) ∧ (u2 ∨ ¬e2)

  • A QBF represents a two-player game between ∀ and ∃.
  • ∀ wins a game if the matrix becomes false.
  • ∃ wins a game if the matrix becomes true.
  • A QBF is false iff there exists a winning strategy for ∀.
  • A QBF is true iff there exists a winning strategy for ∃.

Example: ∀u∃e. (u ↔ e). The ∃-player wins by playing e ← u.


Solving QBF

Solving by CEGAR Expansion

∃E ∀U. φ ≡ ∃E. ∧_{µ∈2^U} φ[µ]

Can be solved by SAT(∧_{µ∈2^U} φ[µ]). Impractical! Observe:

∃E. ∧_{µ∈2^U} φ[µ] ⇒ ∃E. ∧_{µ∈ω} φ[µ] for some ω ⊆ 2^U

What is a good ω?

Solving by CEGAR Expansion Contd.

∃E ∀U. φ ≡ ∃E. ∧_{µ∈2^U} φ[µ]

Expand gradually instead: [Janota and Marques-Silva, 2011]

  • Pick τ0 arbitrary assignment to E
  • SAT(¬φ[τ0]) = µ0 assignment to U
  • SAT(φ[µ0]) = τ1 assignment to E
  • SAT(¬φ[τ1]) = µ1 assignment to U
  • SAT(φ[µ0] ∧ φ[µ1]) = τ2 assignment to E
  • After n iterations: ∃E. ∧_{i∈1..n} φ[µi]


Abstraction-Based Algorithm for a Winning Move

Algorithm for ∃∀. Generalize to arbitrary number of alternations using recursion. [Janota et al., 2012].

1 Function Solve(∃X∀Y. φ)
2   α ← true                  // start with an empty abstraction
3   while true do
4     τ ← SAT(α)              // find a candidate
5     if τ = ⊥ then return ⊥
6     µ ← Solve(¬φ[X ← τ])    // find a countermove
7     if µ = ⊥ then return τ
8     α ← α ∧ φ[Y ← µ]        // refine abstraction

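The two-level case of the pseudocode above can be sketched in Python, with both SAT calls replaced by brute-force enumeration over assignments (purely illustrative; a real solver would call a SAT oracle and recurse for more alternations):

```python
from itertools import product

def solve(xs, ys, matrix):
    """CEGAR for the 2QBF  ∃xs ∀ys. matrix,  mirroring the pseudocode above.
    matrix maps a full assignment (dict var -> bool) to a bool.
    Returns a winning assignment for xs, or None if the formula is false."""
    def sat(pred, vs):
        # brute-force "SAT call": first assignment to vs satisfying pred, else None
        return next((a for bits in product([False, True], repeat=len(vs))
                     for a in [dict(zip(vs, bits))] if pred(a)), None)

    alpha = []  # abstraction: collected countermoves; empty abstraction = true
    while True:
        # τ ← SAT(α): candidate must survive every collected countermove
        tau = sat(lambda a: all(matrix({**a, **mu}) for mu in alpha), xs)
        if tau is None:
            return None        # abstraction unsatisfiable: formula is false
        # µ: countermove falsifying the candidate
        mu = sat(lambda a: not matrix({**tau, **a}), ys)
        if mu is None:
            return tau         # no countermove: τ is a winning move
        alpha.append(mu)       # refine: α ← α ∧ matrix[ys ← µ]

# The bad example ∃x∀y. x ⇔ y is false:
assert solve(['x'], ['y'], lambda a: a['x'] == a['y']) is None
# ∃x∀y. x ∨ ¬y is true, witnessed by x = 1:
assert solve(['x'], ['y'], lambda a: a['x'] or not a['y']) == {'x': True}
```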

Results, QBF-Gallery ’14, Application Track


Careful Expansion: Good Example

∃x . . . ∀y . . . . φ ∧ y

Setting countermove y ← 0 yields false. Stop.

∃x . . . ∀y . . . . x ∨ φ

Setting candidate x ← 1 yields true (impossible to falsify). Stop.


Careful Expansion: Bad Example

∃x∀y. x ⇔ y

  • 1. x ← 1 . . . candidate
  • 2. SAT(¬(1 ⇔ y)) . . . y ← 0, countermove
  • 3. SAT(x ⇔ 0) . . . x ← 0, candidate
  • 4. SAT(¬(0 ⇔ y)) . . . y ← 1, countermove
  • 5. SAT(x ⇔ 0 ∧ x ⇔ 1) . . . UNSAT. Stop.


Careful Expansion: Ugly Example

∃x1x2∀y1y2. x1 ⇔ y1 ∨ x2 ⇔ y2

  • 1. x1, x2 ← 0, 0
  • 2. SAT(¬(0 ⇔ y1 ∨ 0 ⇔ y2)) . . . y1 ← 1, y2 ← 1
  • 3. SAT(x1 ⇔ 1 ∨ x2 ⇔ 1) . . . x1, x2 ← 0, 1
  • 4. SAT(¬(0 ⇔ y1 ∨ 1 ⇔ y2)) . . . y1 ← 1, y2 ← 0
  • 5. SAT((x1 ⇔ 1 ∨ x2 ⇔ 1) ∧ (x1 ⇔ 1 ∨ x2 ⇔ 0)) . . .
  • 6. . . .


Learning in QBF


Issue

  • CEGAR requires 2^n SAT calls for the formula

∃x1 . . . xn ∀y1 . . . yn. ∨_{i∈1..n} xi ⇔ yi

  • BUT: We know that the formula is immediately false if we set yi ← ¬xi:

( ∃x1 . . . xn ∀y1 . . . yn. ∨_{i∈1..n} xi ⇔ ¬xi ) ≡ ( ∃x1 . . . xn. 0 )

  • Idea: instead of plugging in constants, plug in functions.
  • Where do we get the functions?

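The claim can be checked directly: after substituting yi ← ¬xi, the matrix ∨_i (xi ⇔ yi) is identically false, so one refinement with these functions rules out every candidate at once. A small sketch (the choice n = 3 is arbitrary):

```python
from itertools import product

n = 3

def matrix(x, y):
    # ∨_{i∈1..n} x_i ⇔ y_i
    return any(xi == yi for xi, yi in zip(x, y))

# Substituting the functions y_i ← ¬x_i makes the matrix constant false,
# so the refined abstraction excludes all 2^n candidates in a single step.
assert all(not matrix(x, [not xi for xi in x])
           for x in product([False, True], repeat=n))
```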

Use Machine Learning

[Janota, 2018]

  • 1. Enumerate some number of candidate–countermove pairs.
  • 2. Run a machine learning algorithm to learn a Boolean function for each variable in the inner quantifier.
  • 3. Strengthen abstraction with the functions.
  • 4. Repeat.
  • 5. Additional heuristic: If a learned function still works, keep it. “Don’t fix what ain’t broke.”


Machine Learning Example

[table: sampled candidate–countermove pairs, with columns x1, x2, . . . , xn and y1, y2, . . . , yn]

  • After 2 steps: y1 ← ¬x1, yi ← 1 for i ∈ 2..n.
  • SAT(x1 ⇔ ¬x1 ∨ ∨_{i∈2..n} xi ⇔ 1)
  • After 4 steps: y1 ← ¬x1, y2 ← ¬x2, . . .
  • Eventually we learn the right functions.


Current Implementation

  • Use CEGAR as before.
  • Recursion to generalize to multiple levels as before.
  • Refinement as before.
  • Every K refinements, learn new functions from the last K samples. Refine with them.
  • Learning using decision trees by the ID3 algorithm.

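A minimal ID3-style learner over Boolean samples might look as follows (an illustrative sketch, not the talk's implementation): each sample pairs a candidate assignment with the observed countermove value of one universal variable, and the resulting tree is read back as a Boolean function for that variable.

```python
import math
from collections import Counter

def id3(samples, features):
    """samples: list of (assignment dict, bool label).
    Returns a tree: a bool leaf, or (feature, subtree_false, subtree_true)."""
    labels = [lab for _, lab in samples]
    majority = Counter(labels).most_common(1)[0][0]
    if len(set(labels)) == 1 or not features:
        return majority

    def entropy(part):
        if not part:
            return 0.0
        p = sum(1 for _, lab in part if lab) / len(part)
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def gain(f):  # information gain of splitting on feature f
        lo = [s for s in samples if not s[0][f]]
        hi = [s for s in samples if s[0][f]]
        return entropy(samples) - (len(lo) * entropy(lo) + len(hi) * entropy(hi)) / len(samples)

    best = max(features, key=gain)
    rest = [f for f in features if f != best]
    lo = [s for s in samples if not s[0][best]]
    hi = [s for s in samples if s[0][best]]
    branch = lambda part: id3(part, rest) if part else majority
    return (best, branch(lo), branch(hi))

def evaluate(tree, assignment):
    while isinstance(tree, tuple):
        f, lo, hi = tree
        tree = hi if assignment[f] else lo
    return tree

# Samples consistent with the countermove function y1 = ¬x1:
samples = [({'x1': a, 'x2': b}, not a) for a in (False, True) for b in (False, True)]
tree = id3(samples, ['x1', 'x2'])
assert all(evaluate(tree, s) == lab for s, lab in samples)
```

The learner splits on x1 (maximal information gain) and ignores the irrelevant x2, recovering y1 ← ¬x1 from the samples alone.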

Current Implementation: Experiments

[cactus plot: number of instances solved vs. CPU time (s) for qfun-64, qfun-128, rareqs, qfun-64-f, quabs, gq]


Bernays–Schönfinkel (“Effectively Propositional Logic”) — Finite Models


Bernays–Schönfinkel (EPR)

∀X. φ

  • φ has no further quantifiers and no functions (just predicates and constants)
  • φ uses predicates p1, . . . , pm and constants c1, . . . , cn.
  • Finite model property: the formula has a model iff it has a model of size ≤ n.
  • Therefore we can look for a model with the universe {∗1, . . . , ∗n′}, n′ ≤ n.

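The finite model property suggests a naive decision procedure: enumerate all interpretations over universes of size at most n. A toy sketch for one unary predicate and the running example t ≠ f ∧ ∀X. p(X) ⇔ (X = t) (the function and parameter names are illustrative):

```python
from itertools import product

def find_model(num_consts, max_size, check):
    """Brute-force model search over universes {0, ..., k-1}, k <= max_size.
    check(consts, p, universe) must test every ground instance of the matrix,
    where consts is a tuple of universe elements and p a dict element -> bool."""
    for k in range(1, max_size + 1):
        universe = range(k)
        for consts in product(universe, repeat=num_consts):
            for bits in product([False, True], repeat=k):
                p = dict(zip(universe, bits))
                if check(consts, p, universe):
                    return consts, p
    return None  # no model up to max_size

# t ≠ f  ∧  ∀X. p(X) ⇔ (X = t): needs a universe of size at least 2
model = find_model(2, 2, lambda c, p, u: c[0] != c[1]
                   and all(p[x] == (x == c[0]) for x in u))
assert model is not None
(t, f), p = model
assert t != f and p[t] and not p[f]
```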

CEGAR for Finite Models

∃p1 . . . pm ∃c1 . . . cn ∀X. φ   (pi predicates, ci constants, X variables)

  • 1. α ← true
  • 2. Find interpretation for α: I ← SAT(α)
  • 3. Test interpretation: µ ← SAT(∃X. ¬φ[I])
  • 4. If no counterexample, formula is true. STOP.
  • 5. Strengthen abstraction: α ← α ∧ φ[µ/X]
  • 6. GOTO 2


Learning in Finite Models’ CEGAR

  • 1. Consider some finite grounding:

∃p1 . . . pm∃c1 . . . cn ∧

µ∈ω . φ[µ]

pi predicates, ci constants,

  • 2. Calculate interpretation by e.g. Ackermanization.
  • 3. The interpretation only matters on the existing ground

terms.

  • 4. Learn entire interpretation from observing values of

existing terms.

Janota Towards Machine Learning for Quantification 20 / 28


slide-91
SLIDE 91

Learning in Finite Models’ CEGAR, Example

  • 1. ∀X. p(X1, . . . , Xn) ⇔ (X1 = t)
  • 2. Ground by {Xi ↦ ∗0} and {X1 ↦ ∗1, X2 ↦ ∗0, . . . , Xn ↦ ∗0}:
  • 3. (p(∗0, . . . , ∗0) ⇔ ∗0 = t) ∧ (p(∗1, . . . , ∗0) ⇔ ∗1 = t)
  • 4. Partial interpretation: t ↦ ∗1, p(∗0, . . . , ∗0) ↦ False, p(∗1, . . . , ∗0) ↦ True
  • 5. Learn: t ↦ ∗1, p(X1, . . . , Xn) ↦ (X1 = ∗1)

Janota Towards Machine Learning for Quantification 21 / 28
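The "Learn" step (5) can be viewed as a tiny hypothesis search: given the partial interpretation as (arguments, value) observations, look for an index i and domain element e such that p(X1, . . . , Xn) ⇔ (Xi = e) fits every observation. A minimal sketch of this idea, illustrative only and not the actual learning procedure:

```python
def learn_equality_defn(samples):
    """Explain observations of p as p(X1, ..., Xn) <-> (Xi = e), if possible.

    samples: list of (argument tuple, observed truth value) pairs.
    Returns (i, e) for the first fitting hypothesis, or None.
    """
    n = len(samples[0][0])
    seen = {e for args, _ in samples for e in args}
    for i in range(n):
        for e in sorted(seen):  # deterministic search order
            if all((args[i] == e) == val for args, val in samples):
                return i, e
    return None
```

On the slide's observations, p(∗0, . . . , ∗0) ↦ False and p(∗1, . . . , ∗0) ↦ True, the search recovers the definition p(X1, . . . , Xn) ⇔ (X1 = ∗1).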

slide-92
SLIDE 92

Preliminary Results

[Cactus plot: instances solved vs. CPU time (s) for iprover, vam-fm, cegar+learn, cegar, expand, and cvc4.]

Janota Towards Machine Learning for Quantification 22 / 28

slide-93
SLIDE 93

Preliminary Results

[Cactus plot: instances solved vs. CPU time (s) for cegar+learn, cegar, and expand.]

Janota Towards Machine Learning for Quantification 23 / 28

slide-94
SLIDE 94

Preliminary Results (Hard): more than 1 sec

[Cactus plot: instances solved vs. CPU time (s) for cegar+learn, cegar, and expand.]

Janota Towards Machine Learning for Quantification 24 / 28

slide-95
SLIDE 95

Learn vs. CEGAR, Iterations

[Scatter plot, log-log: iterations of cegar+learn vs. iterations of cegar.]

Janota Towards Machine Learning for Quantification 25 / 28

slide-96
SLIDE 96

Learn vs. CEGAR, Iterations — Only True

[Scatter plot, log-log: iterations of cegar+learn vs. iterations of cegar, restricted to true instances.]

Janota Towards Machine Learning for Quantification 26 / 28


slide-100
SLIDE 100

Summary and Future

  • Observe the formula while solving; learn from that.
  • Learning objects in the considered theory (rather than strategies, etc.).
  • Learning from Booleans: for . . . ∃Bn∀Bm . . . , learning Bn → B.
  • Learning interpretations in finite models from partial interpretations: for ∃(D1 × · · · × Dk → B)∀F1 × · · · × Fl . . . , learning D1 × · · · × Dk → B.

  • How can we learn strategies based on functions?
  • Infinite domains?
  • Learning in the presence of theories?

Janota Towards Machine Learning for Quantification 27 / 28


slide-104
SLIDE 104

Thank You for Your Attention! Questions?

Janota Towards Machine Learning for Quantification 28 / 28

slide-105
SLIDE 105

Janota, M. (2018). Towards generalization in QBF solving via machine learning. In AAAI Conference on Artificial Intelligence.

Janota, M., Klieber, W., Marques-Silva, J., and Clarke, E. M. (2012). Solving QBF with counterexample guided refinement. In SAT, pages 114–128.

Janota, M. and Marques-Silva, J. (2011). Abstraction-based algorithm for 2QBF. In SAT.

Janota Towards Machine Learning for Quantification 28 / 28