


Encoding Challenges for Hard Problems

Marijn J.H. Heule Starting at in August

Matryoshka workshop June 12, 2019


Automated Reasoning Has Many Applications

security · planning and scheduling · formal verification · bioinformatics · train safety · exploit generation · automated theorem proving · term rewriting termination

[diagram: encode → automated reasoner → decode]


Breakthrough in SAT Solving in the Last 20 Years

Satisfiability (SAT) problem: Can a Boolean formula be satisfied?

mid ’90s: formulas solvable with thousands of variables and clauses
now: formulas solvable with millions of variables and clauses

Edmund Clarke: “a key technology of the 21st century” [Biere, Heule, van Maaren, and Walsh ’09]

Donald Knuth: “evidently a killer app, because it is key to the solution of so many other problems” [Knuth ’15]


Representations · Matrix Multiplication · The Collatz Conjecture


The Right Representation is Crucial

What makes a problem hard? A new angle: does the representation enable efficient reasoning?

The famous pigeonhole principle: can n + 1 pigeons be placed in n holes such that no hole contains multiple pigeons?

◮ Hard for many automated reasoning approaches
◮ Easy for a little kid given the right representation
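The naive propositional encoding makes the difficulty concrete; a minimal sketch (the function names `pigeonhole_cnf` and `satisfiable` are mine, and the brute-force check stands in for an actual SAT solver):

```python
from itertools import combinations, product

def pigeonhole_cnf(holes):
    """Naive CNF for 'holes + 1 pigeons fit into holes holes'.
    A literal (p, h, True) means pigeon p sits in hole h."""
    pigeons = holes + 1
    clauses = []
    # every pigeon sits in some hole
    for p in range(pigeons):
        clauses.append([(p, h, True) for h in range(holes)])
    # no hole holds two pigeons (pairwise at-most-one)
    for h in range(holes):
        for p1, p2 in combinations(range(pigeons), 2):
            clauses.append([(p1, h, False), (p2, h, False)])
    return clauses

def satisfiable(holes):
    pigeons, clauses = holes + 1, pigeonhole_cnf(holes)
    variables = [(p, h) for p in range(pigeons) for h in range(holes)]
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(any(assign[(p, h)] == sign for p, h, sign in cl) for cl in clauses):
            return True
    return False

print(satisfiable(3))  # False: 4 pigeons do not fit into 3 holes
```

Resolution-based solvers need exponentially many steps on this family, even though the formula has only O(n³) clauses.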

Artisan Representations (joint work)

Architectural 3D Layout [VSMM ’07] (Henriette Bier)
Edge-matching Puzzles [LaSh ’08]
Graceful Graphs [AAAI ’10] (Toby Walsh)
Clique-Width [SAT ’13, TOCL ’15] (Stefan Szeider)
Firewall Verification [SSS ’16] (Mohamed Gouda)
Open Knight Tours (Moshe Vardi)
Van der Waerden numbers [EJoC ’07]
Software Model Synthesis [ICGI ’10, ESE ’13] (Sicco Verwer)
Conway’s Game of Life [EJoC ’13] (Willem van der Poel)
Connect the Pairs (Donald Knuth)
Pythagorean Triples [SAT ’16, CACM ’17] (Victor Marek)
Collatz conjecture [Open] (Scott Aaronson)




Inprocessing [Järvisalo, Heule, and Biere ’12]

How to fix a poor representation fully automatically? Alternate solving (CDCL) with reformulation.

Example: Bounded Variable Addition [Manthey, Heule, and Biere ’12]

Replace

(a ∨ d) (a ∨ e) (b ∨ d) (b ∨ e) (c ∨ d) (c ∨ e)

by

(x ∨ a) (x ∨ b) (x ∨ c) (¬x ∨ d) (¬x ∨ e)

Adds 1 variable, removes 1 clause. This technique is crucial for hard bioinformatics problems and turns the naive encoding of AtMostOne into the optimal one.
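The replacement can be checked by brute force; a small sketch (the polarity of the fresh variable x is one of the two symmetric choices, since the extraction lost the negation bars):

```python
from itertools import product

# Original clauses: every pair from {a, b, c} x {d, e}
def orig(a, b, c, d, e):
    return all([(a or d), (a or e), (b or d), (b or e), (c or d), (c or e)])

# After bounded variable addition with fresh variable x:
# x appears positively with a, b, c and negatively with d, e.
def bva(a, b, c, d, e, x):
    return all([(x or a), (x or b), (x or c), ((not x) or d), ((not x) or e)])

# The formulas agree on a..e once x is projected out (existentially quantified)
for a, b, c, d, e in product([False, True], repeat=5):
    assert orig(a, b, c, d, e) == any(bva(a, b, c, d, e, x) for x in [False, True])
print("equivalent after projecting out x")
```

Resolving each (x ∨ u) with each (¬x ∨ v) regenerates exactly the six original clauses, which is why the two sets are interchangeable for a SAT solver.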



Matrix Multiplication

joint work with Manuel Kauers and Martina Seidl


Matrix Multiplication: Introduction

( a1,1 a1,2 ) ( b1,1 b1,2 )   ( c1,1 c1,2 )
( a2,1 a2,2 ) ( b2,1 b2,2 ) = ( c2,1 c2,2 )

c1,1 = a1,1·b1,1 + a1,2·b2,1
c1,2 = a1,1·b1,2 + a1,2·b2,2
c2,1 = a2,1·b1,1 + a2,2·b2,1
c2,2 = a2,1·b1,2 + a2,2·b2,2

Strassen’s scheme computes the same product via seven auxiliary products M1, . . . , M7:

c1,1 = M1 + M4 − M5 + M7
c1,2 = M3 + M5
c2,1 = M2 + M4
c2,2 = M1 − M2 + M3 + M6

. . . where

M1 = (a1,1 + a2,2)·(b1,1 + b2,2)
M2 = (a2,1 + a2,2)·b1,1
M3 = a1,1·(b1,2 − b2,2)
M4 = a2,2·(b2,1 − b1,1)
M5 = (a1,1 + a1,2)·b2,2
M6 = (a2,1 − a1,1)·(b1,1 + b1,2)
M7 = (a1,2 − a2,2)·(b2,1 + b2,2)
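The seven-product scheme can be checked mechanically; a quick sketch using random integer matrices (function names are mine):

```python
import random

# Strassen's 2x2 scheme, transcribed from the formulas above
def strassen_2x2(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# classical 8-multiplication product for comparison
def classical_2x2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for _ in range(1000):
    a = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    b = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert strassen_2x2(a, b) == classical_2x2(a, b)
print("Strassen's scheme agrees with the classical product")
```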


◮ This scheme needs 7 multiplications instead of 8.
◮ Recursive application makes it possible to multiply n × n matrices with O(n^(log2 7)) operations in the ground ring.
◮ Let ω be the smallest number so that n × n matrices can be multiplied using O(n^ω) operations in the ground domain.
◮ Then 2 ≤ ω < 3. What is the exact value?
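The recursive application can be sketched directly from the seven products (a minimal version assuming n is a power of two; padding for general n is omitted, and no crossover to the classical algorithm is used for small blocks, as a practical implementation would):

```python
# Blockwise helpers on matrices stored as lists of rows
def add(x, y): return [[xi + yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]
def sub(x, y): return [[xi - yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]

def split(m):
    h = len(m) // 2
    return ([r[:h] for r in m[:h]], [r[h:] for r in m[:h]],
            [r[:h] for r in m[h:]], [r[h:] for r in m[h:]])

def strassen(a, b):
    n = len(a)
    if n == 1:
        return [[a[0][0] * b[0][0]]]
    a11, a12, a21, a22 = split(a)
    b11, b12, b21, b22 = split(b)
    # the seven recursive products
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, sub(b12, b22))
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(sub(a21, a11), add(b11, b12))
    m7 = strassen(sub(a12, a22), add(b21, b22))
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(add(sub(m1, m2), m3), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bot = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bot
```

The recursion satisfies T(n) = 7·T(n/2) + O(n²), which resolves to the O(n^(log2 7)) bound in the bullet above.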


Efficient Matrix Multiplication: Theory

◮ Strassen 1969: ω ≤ log2 7 ≤ 2.807
◮ Pan 1978: ω ≤ 2.796
◮ Bini et al. 1979: ω ≤ 2.7799
◮ Schönhage 1981: ω ≤ 2.522
◮ Romani 1982: ω ≤ 2.517
◮ Coppersmith/Winograd 1981: ω ≤ 2.496
◮ Strassen 1986: ω ≤ 2.479
◮ Coppersmith/Winograd 1990: ω ≤ 2.376
◮ Stothers 2010: ω ≤ 2.374
◮ Williams 2011: ω ≤ 2.3728642
◮ Le Gall 2014: ω ≤ 2.3728639


Efficient Matrix Multiplication: Practice

◮ Only Strassen’s algorithm beats the classical algorithm for reasonable problem sizes.
◮ Want: a matrix multiplication algorithm that beats Strassen’s algorithm for matrices of moderate size.
◮ Idea: instead of dividing the matrices into 2 × 2 block matrices, divide them into 3 × 3 block matrices.
◮ Question: what is the minimal number of multiplications needed to multiply two 3 × 3 matrices?
◮ Answer: nobody knows.


The 3x3 Case is Still Open

Question: what is the minimal number of multiplications needed to multiply two 3 × 3 matrices?

◮ naive algorithm: 27
◮ pad with zeros, use Strassen twice, clean up: 25
◮ best known upper bound: 23 (Laderman 1976)
◮ best known lower bound: 19 (Bläser 2003)
◮ maximal number of multiplications allowed if we want to beat Strassen: 21 (because log3 21 < log2 7 < log3 22)
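The bound of 21 in the last bullet is a direct exponent comparison: a recursive scheme using m multiplications on 3 × 3 blocks runs in O(n^(log3 m)), so it beats Strassen exactly when log3 m < log2 7. A quick check:

```python
import math

# Exponent of a recursive 3x3-block scheme with m multiplications vs Strassen
strassen_exp = math.log2(7)
for m in (19, 21, 22, 23, 27):
    exp = math.log(m, 3)
    print(m, round(exp, 4), "beats Strassen" if exp < strassen_exp else "does not")
```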


Laderman’s scheme from 1976

(ai,j)·(bi,j) = (ci,j) for 3 × 3 matrices, with

c1,1 = −M6 + M14 + M19
c2,1 = M2 + M3 + M4 + M6 + M14 + M16 + M17
c3,1 = M6 + M7 − M8 + M11 + M12 + M13 − M14
c1,2 = M1 − M4 + M5 − M6 − M12 + M14 + M15
c2,2 = M2 + M4 − M5 + M6 + M20
c3,2 = M12 + M13 − M14 − M15 + M22
c1,3 = −M6 − M7 + M9 + M10 + M14 + M16 + M18
c2,3 = M14 + M16 + M17 + M18 + M21
c3,3 = M6 + M7 − M8 − M9 + M23


where . . .

M1 = (−a1,1 + a1,2 + a1,3 − a2,1 + a2,2 + a3,2 + a3,3)·b2,2
M2 = (a1,1 + a2,1)·(b1,2 + b2,2)
M3 = a2,2·(b1,1 − b1,2 + b2,1 − b2,2 − b2,3 + b3,1 − b3,3)
M4 = (−a1,1 − a2,1 + a2,2)·(−b1,1 + b1,2 + b2,2)
M5 = (−a2,1 + a2,2)·(−b1,1 + b1,2)
M6 = −a1,1·b1,1
M7 = (a1,1 + a3,1 + a3,2)·(b1,1 − b1,3 + b2,3)
M8 = (a1,1 + a3,1)·(−b1,3 + b2,3)
M9 = (a3,1 + a3,2)·(b1,1 − b1,3)


M10 = (a1,1 + a1,2 − a1,3 − a2,2 + a2,3 + a3,1 + a3,2)·b2,3
M11 = a3,2·(−b1,1 + b1,3 + b2,1 − b2,2 − b2,3 − b3,1 + b3,2)
M12 = (a1,3 + a3,2 + a3,3)·(b2,2 + b3,1 − b3,2)
M13 = (a1,3 + a3,3)·(−b2,2 + b3,2)
M14 = a1,3·b3,1
M15 = (−a3,2 − a3,3)·(−b3,1 + b3,2)
M16 = (a1,3 + a2,2 − a2,3)·(b2,3 − b3,1 + b3,3)
M17 = (−a1,3 + a2,3)·(b2,3 + b3,3)
M18 = (a2,2 − a2,3)·(b3,1 − b3,3)


M19 = a1,2·b2,1
M20 = a2,3·b3,2
M21 = a2,1·b1,3
M22 = a3,1·b1,2
M23 = a3,3·b3,3
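The scheme as transcribed can be verified against the classical product; a sketch (1-based dictionary indexing mirrors the slide notation; the random test suffices because the identities are polynomial):

```python
import random

def laderman_3x3(a, b):
    """Laderman's 23-multiplication scheme, transcribed from the slides.
    a and b are dicts mapping 1-based (row, col) to entries."""
    M = {}
    M[1]  = (-a[1,1] + a[1,2] + a[1,3] - a[2,1] + a[2,2] + a[3,2] + a[3,3]) * b[2,2]
    M[2]  = (a[1,1] + a[2,1]) * (b[1,2] + b[2,2])
    M[3]  = a[2,2] * (b[1,1] - b[1,2] + b[2,1] - b[2,2] - b[2,3] + b[3,1] - b[3,3])
    M[4]  = (-a[1,1] - a[2,1] + a[2,2]) * (-b[1,1] + b[1,2] + b[2,2])
    M[5]  = (-a[2,1] + a[2,2]) * (-b[1,1] + b[1,2])
    M[6]  = -a[1,1] * b[1,1]
    M[7]  = (a[1,1] + a[3,1] + a[3,2]) * (b[1,1] - b[1,3] + b[2,3])
    M[8]  = (a[1,1] + a[3,1]) * (-b[1,3] + b[2,3])
    M[9]  = (a[3,1] + a[3,2]) * (b[1,1] - b[1,3])
    M[10] = (a[1,1] + a[1,2] - a[1,3] - a[2,2] + a[2,3] + a[3,1] + a[3,2]) * b[2,3]
    M[11] = a[3,2] * (-b[1,1] + b[1,3] + b[2,1] - b[2,2] - b[2,3] - b[3,1] + b[3,2])
    M[12] = (a[1,3] + a[3,2] + a[3,3]) * (b[2,2] + b[3,1] - b[3,2])
    M[13] = (a[1,3] + a[3,3]) * (-b[2,2] + b[3,2])
    M[14] = a[1,3] * b[3,1]
    M[15] = (-a[3,2] - a[3,3]) * (-b[3,1] + b[3,2])
    M[16] = (a[1,3] + a[2,2] - a[2,3]) * (b[2,3] - b[3,1] + b[3,3])
    M[17] = (-a[1,3] + a[2,3]) * (b[2,3] + b[3,3])
    M[18] = (a[2,2] - a[2,3]) * (b[3,1] - b[3,3])
    M[19] = a[1,2] * b[2,1]
    M[20] = a[2,3] * b[3,2]
    M[21] = a[2,1] * b[1,3]
    M[22] = a[3,1] * b[1,2]
    M[23] = a[3,3] * b[3,3]
    return {
        (1,1): -M[6] + M[14] + M[19],
        (2,1): M[2] + M[3] + M[4] + M[6] + M[14] + M[16] + M[17],
        (3,1): M[6] + M[7] - M[8] + M[11] + M[12] + M[13] - M[14],
        (1,2): M[1] - M[4] + M[5] - M[6] - M[12] + M[14] + M[15],
        (2,2): M[2] + M[4] - M[5] + M[6] + M[20],
        (3,2): M[12] + M[13] - M[14] - M[15] + M[22],
        (1,3): -M[6] - M[7] + M[9] + M[10] + M[14] + M[16] + M[18],
        (2,3): M[14] + M[16] + M[17] + M[18] + M[21],
        (3,3): M[6] + M[7] - M[8] - M[9] + M[23],
    }

# random testing against the classical product
for _ in range(200):
    a = {(i, j): random.randint(-9, 9) for i in (1, 2, 3) for j in (1, 2, 3)}
    b = {(i, j): random.randint(-9, 9) for i in (1, 2, 3) for j in (1, 2, 3)}
    c = laderman_3x3(a, b)
    for i in (1, 2, 3):
        for j in (1, 2, 3):
            assert c[i, j] == sum(a[i, k] * b[k, j] for k in (1, 2, 3))
print("Laderman's scheme verified on random integer matrices")
```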


Other schemes with 23 multiplications

◮ While Strassen’s scheme is essentially the only way to do the 2 × 2 case with 7 multiplications, there are several distinct schemes for 3 × 3 matrices using 23 multiplications.
◮ If we insist on integer coefficients, there have so far (to our knowledge) been only three other schemes for 3 × 3 matrices and 23 multiplications.
◮ Using altogether about 35 years of computation time, we found more than 13000 new schemes for 3 × 3 and 23, and we expect that there are many others.
◮ Unfortunately, we found no scheme with only 22 multiplications.


How to Search for a Matrix Multiplication Scheme? (1)

M1 = (α(1)1,1·a1,1 + α(1)1,2·a1,2 + · · · )·(β(1)1,1·b1,1 + · · · )
M2 = (α(2)1,1·a1,1 + α(2)1,2·a1,2 + · · · )·(β(2)1,1·b1,1 + · · · )
. . .
c1,1 = γ(1)1,1·M1 + γ(2)1,1·M2 + · · ·
. . .

Set ci,j = ∑k ai,k·bk,j for all i, j and compare coefficients.


How to Search for a Matrix Multiplication Scheme? (2)

This gives the Brent equations (for 3 × 3 with 23 multiplications):

∀ i, j, k, l, m, n ∈ {1, 2, 3} :   ∑q=1..23 α(q)i,j · β(q)k,l · γ(q)m,n = δj,k · δi,m · δl,n

The δu,v on the right is the Kronecker delta, i.e., δu,v = 1 if u = v and δu,v = 0 otherwise.

3^6 = 729 cubic equations, 23 · 9 · 3 = 621 variables

Laderman claims that he solved this system by hand, but he does not say exactly how.


How to Search for a Matrix Multiplication Scheme? (3)

This gives the Brent equations (for 3 × 3 with 23 multiplications):

∀ i, j, k, l, m, n ∈ {1, 2, 3} :   ∑q=1..23 α(q)i,j · β(q)k,l · γ(q)m,n = δj,k · δi,m · δl,n

The search space of the 3 × 3 case is enormous, even if the α(q)i,j, β(q)k,l, γ(q)m,n are restricted to values in {−1, 0, 1}.

Solution: solve the system in Z2. Reading α(q)i,j, β(q)k,l, γ(q)m,n as Boolean variables and + as XOR, the problem becomes a SAT problem. Notice that solutions in Z2 may not be solutions in Z.
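The Z2 reading can be illustrated on the small case; a sketch where coefficients become bits, each product becomes an AND, and the sum becomes an XOR (using Strassen's 2 × 2 scheme reduced mod 2, which necessarily satisfies the mod-2 equations because the integer scheme does):

```python
from itertools import product

# Nonzero coefficient positions of Strassen's scheme; signs disappear mod 2.
alpha = [{(1,1), (2,2)}, {(2,1), (2,2)}, {(1,1)},
         {(2,2)}, {(1,1), (1,2)}, {(2,1), (1,1)}, {(1,2), (2,2)}]
beta  = [{(1,1), (2,2)}, {(1,1)}, {(1,2), (2,2)},
         {(2,1), (1,1)}, {(2,2)}, {(1,1), (1,2)}, {(2,1), (2,2)}]
gamma = [{(1,1), (2,2)}, {(2,1), (2,2)}, {(1,2), (2,2)},
         {(1,1), (2,1)}, {(1,1), (1,2)}, {(2,2)}, {(1,1)}]

ok = True
for i, j, k, l, m, n in product((1, 2), repeat=6):
    acc = False
    for q in range(7):  # one AND per product, XOR-ed together
        acc ^= ((i, j) in alpha[q]) and ((k, l) in beta[q]) and ((m, n) in gamma[q])
    ok &= (acc == ((j == k) and (i == m) and (l == n)))
print(ok)  # True: the mod-2 Brent equations hold
```

In the SAT encoding each XOR constraint is translated to CNF; the converse direction (a mod-2 solution yielding an integer one) is exactly the lifting problem discussed next.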


Lifting

Remember the Brent equations:

∀ i, j, k, l, m, n ∈ {1, 2, 3} :   ∑q=1..23 α(q)i,j · β(q)k,l · γ(q)m,n = δj,k · δi,m · δl,n

◮ Suppose we know a solution in Z2.
◮ Assume it came from a solution in Z with coefficients in {−1, 0, +1}.
◮ Then each 0 ∈ Z2 was 0 ∈ Z, and each 1 ∈ Z2 was −1 ∈ Z or +1 ∈ Z.
◮ Plug the 0s of the Z2-solution into the Brent equations.
◮ Solve the resulting equations.

Can every Z2-solution be lifted to a Z-solution in this way?

◮ No, and we found some which do not admit a lifting.
◮ But they are very rare. In almost all cases, the lifting succeeds.


How to Search for a Matrix Multiplication Scheme? (4)

This gives the Brent equations (for 3 × 3 with 23 multiplications):

∀ i, j, k, l, m, n ∈ {1, 2, 3} :   ∑q=1..23 α(q)i,j · β(q)k,l · γ(q)m,n = δj,k · δi,m · δl,n

Another solution: restrict each equation with a zero right-hand side to have zero or two nonzero terms, still treating the α(q)i,j, β(q)k,l, γ(q)m,n as Boolean variables.

Notice that this restriction removes solutions, but it even works for Laderman.

Important challenge: how to break the symmetries? Most effective approach so far: sort the δj,k·δi,m·δl,n = 1 terms.


So what?

◮ Okay, so there are many more matrix multiplication methods for 3 × 3 matrices with 23 coefficient multiplications than previously known.
◮ In fact, we have shown that the dimension of the algebraic set defined by the Brent equations is much larger than previously known.
◮ But none of this has any immediate implications for the complexity of matrix multiplication, theoretical or practical.
◮ In particular, it remains open whether there is a multiplication method for 3 × 3 matrices with 22 coefficient multiplications. If you find one, let us know.


Scheme Database

Check out our website for browsing through the schemes and families we found: http://www.algebra.uni-linz.ac.at/research/matrix-multiplication/


The Collatz Conjecture

joint work with Scott Aaronson


Beyond NP: The Collatz Conjecture

Resolving foundational algorithm questions:

Col(n) = n/2 if n is even; (3n + 1)/2 if n is odd

Does while(n > 1) n = Col(n); terminate?

Find a non-negative function fun(n) s.t. ∀n > 1 : fun(n) > fun(Col(n))

[comic: xkcd.com/710]

Can we construct a function for which fun(n) > fun(Col(n)) holds, i.e. fun(3) > fun(5) > fun(8) > fun(4) > fun(2) > fun(1) along the trajectory of 3?
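The shortcut Collatz map and the trajectory it produces can be sketched directly (function names are mine):

```python
# The shortcut Collatz map from the slide: odd steps fold the division by 2
# into the 3n + 1 step.
def col(n):
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def trajectory(n):
    """Iterate col until reaching 1 (assuming, as conjectured, that we do)."""
    steps = [n]
    while n > 1:
        n = col(n)
        steps.append(n)
    return steps

print(trajectory(3))  # [3, 5, 8, 4, 2, 1]
```

A termination function fun would have to decrease strictly along every such trajectory, e.g. fun(3) > fun(5) > fun(8) > fun(4) > fun(2) > fun(1).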


Term Rewriting Termination Example

Given a set of rewriting rules, will rewriting always terminate?

Example set of rules (Zantema’s “other” Problem):

◮ aa →R bc
◮ bb →R ac
◮ cc →R ab

bbaa →R bbbc →R bacc →R baab →R bbcb →R accb →R aabb →R aaac →R abcc →R abab

The strongest rewriting solvers use SAT (e.g. AProVE). The example was first solved by Hofbauer and Waldmann (2006).
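The derivation above can be replayed mechanically; a small sketch (helper names are mine):

```python
# Zantema's "other" problem: three string rewriting rules
RULES = {"aa": "bc", "bb": "ac", "cc": "ab"}

def rewrite(s, lhs, pos):
    """Apply one rule at a given position."""
    assert s[pos:pos + 2] == lhs
    return s[:pos] + RULES[lhs] + s[pos + 2:]

def successors(s):
    """All strings reachable in one rewrite step."""
    return {rewrite(s, lhs, i) for lhs in RULES
            for i in range(len(s) - 1) if s[i:i + 2] == lhs}

# The derivation shown on the slide is a valid rewrite sequence:
chain = ["bbaa", "bbbc", "bacc", "baab", "bbcb", "accb",
         "aabb", "aaac", "abcc", "abab"]
for cur, nxt in zip(chain, chain[1:]):
    assert nxt in successors(cur)
print("derivation checked;", "abab is a normal form:", not successors("abab"))
```

The derivation happens to end in a normal form here; proving that every derivation terminates is the hard part, and is what the matrix interpretations below accomplish.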


Term Rewriting Proof Outline

Prove termination of Zantema’s “other” Problem:

◮ aa →R bc
◮ bb →R ac
◮ cc →R ab

Proof outline:

◮ Interpret a, b, c by linear functions [a], [b], [c] from N^4 to N^4
◮ Interpret string concatenation by function composition
◮ Show that if [uaav](0, 0, 0, 0) = (x1, x2, x3, x4) and [ubcv](0, 0, 0, 0) = (y1, y2, y3, y4), then x1 > y1
◮ Similarly for bb → ac and cc → ab
◮ Hence every rewrite step strictly decreases x1 ∈ N, so rewriting terminates


Term Rewriting Termination Proof

The linear functions [a], [b], [c] are affine maps x ↦ A·x + u on N^4.

[figure: the three 4 × 4 coefficient matrices and offset vectors, with small entries such as 1, 2, 3]

Checking the decrease properties using linear algebra.


Collatz Conjecture (2)

Find a non-negative function fun(n) s.t. ∀n > 1 : fun(n) > fun(Col(n))

[figure: fun(3) = t(t(0)), fun(5) = t(f(t(0))), fun(8) = t(f(f(f(0)))), fun(4) = t(f(f(0))), fun(2) = t(f(0)), fun(1) = t(0), using 2-dimensional affine interpretations t(x) and f(x)]

The Collatz Conjecture as Rewriting System

Consider the following functions:

◮ Binary system: f(x) = 2x, t(x) = 2x + 1
◮ Ternary system: p(x) = 3x, q(x) = 3x + 1, r(x) = 3x + 2
◮ Start and end symbols: c(x) = 1, d(x) = x

D1: fd →R d    D2: td →R rd
F1: fp →R pf   F2: fq →R pt   F3: fr →R qf
T1: tp →R qt   T2: tq →R rf   T3: tr →R rt
C1: cp →R ct   C2: cq →R cff  C3: cr →R cft

Interpretation using the functions above:
D1: 2x → x
D2: 2x + 1 → 3x + 2 (= (3(2x + 1) + 1)/2)
F1: 6x → 6x
T3: 6x + 5 → 6x + 5


Collatz Rewriting Example

D1: fd →R d    D2: td →R rd
F1: fp →R pf   F2: fq →R pt   F3: fr →R qf
T1: tp →R qt   T2: tq →R rf   T3: tr →R rt
C1: cp →R ct   C2: cq →R cff  C3: cr →R cft

ctd → crd → cftd → cfrd → cqfd → cfffd → cffd → cfd → cd
(rules applied: D2, C3, D2, F3, C2, D1, D1, D1; values: 3 → 5 → 5 → 8 → 8 → 8 → 4 → 2 → 1)

Can we prove termination of the Collatz rewriting system? The full system is still too hard, but subsystems (removing one of the rules) are doable (although not with existing tools).
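The rewriting system can be executed directly; a sketch (the encoding of n and the leftmost-first strategy are my choices, but they reproduce the derivation for 3 shown on the slide):

```python
# The eleven Collatz rewriting rules from the slide
RULES = [("fd", "d"), ("td", "rd"),
         ("fp", "pf"), ("fq", "pt"), ("fr", "qf"),
         ("tp", "qt"), ("tq", "rf"), ("tr", "rt"),
         ("cp", "ct"), ("cq", "cff"), ("cr", "cft")]

def encode(n):
    """n as 'c' + its binary digits after the leading 1 (0 -> f, 1 -> t) + 'd'."""
    return "c" + bin(n)[3:].replace("0", "f").replace("1", "t") + "d"

def step(s):
    for i in range(len(s)):          # leftmost applicable position first
        for lhs, rhs in RULES:
            if s.startswith(lhs, i):
                return s[:i] + rhs + s[i + len(lhs):]
    return None                      # normal form reached

def normalize(s):
    trace = [s]
    while (s := step(s)) is not None:
        trace.append(s)
    return trace

print(normalize(encode(3)))
# ['ctd', 'crd', 'cftd', 'cfrd', 'cqfd', 'cfffd', 'cffd', 'cfd', 'cd']
```

Reaching the normal form cd corresponds to the Collatz trajectory reaching 1; proving that the system terminates on all inputs would prove the conjecture.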

Results on Proving Termination of Subsystems

missing rule   dimension   value limit     runtime
D1             3           3                  1.40
D2             1           1                  0.00
F1             4           5              5 828.36
F2             2           3                  0.02
F3             2           2                  0.01
T1             4           7             25 340.99
T2             5           7             44 056.35
T3             4           6             37 071.33
C1             2           2                  0.01
C2             3           4                 23.35
C3             4           5                 75.89


Challenges for the Collatz Conjecture

The presented system is just one of many possible rewriting systems that capture the Collatz conjecture. Which system facilitates efficient reasoning?

How to encode the SAT formula?

◮ The order encoding for multiplication is very effective
◮ Reduce the size of the encoding by reusing calculations

Which SAT solving techniques are effective?

◮ Surprisingly, old SAT solvers work better than new ones
◮ Can local search be effective (we only look for solutions)?


Encoding Challenges for Hard Problems

Marijn J.H. Heule Starting at in August

Matryoshka workshop June 12, 2019