
SLIDE 1

SAT-Race 2010

July 14, 2010 – FLoC/SAT’10, Edinburgh, Scotland, UK

SLIDE 2

What is SAT-Race?

 Competition for sequential/parallel SAT solvers

 Only industrial/application category benchmarks (no handcrafted or random)
 Short run-times (15 minutes timeout per instance)
 Mixture of satisfiable/unsatisfiable instances (thus not suitable for local-search solvers)
 "Black-box" solvers permitted
 3 tracks:

 Main Track: Sequential CNF (input format sketched below)
 Special Track 1: Parallel CNF
 Special Track 2: Sequential AIG
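Background, not from the original slides: instances in both CNF tracks are plain DIMACS files. A minimal Python sketch that writes a toy instance; the clause set and file name are illustrative only, not an actual race benchmark:

```python
# Write a toy CNF instance in DIMACS format, the input format of the
# CNF tracks. Each integer is a literal: the sign encodes polarity.
clauses = [[1, -2], [2, 3], [-1, -3]]  # illustrative clause set
num_vars = 3

with open("toy.cnf", "w") as f:
    f.write(f"p cnf {num_vars} {len(clauses)}\n")     # header: p cnf <vars> <clauses>
    for clause in clauses:
        f.write(" ".join(map(str, clause)) + " 0\n")  # each clause is terminated by 0
```

A solver reads such a file and answers SAT (with a model) or UNSAT; because the benchmark mix contains both outcomes, incomplete local-search solvers have no route to a competitive score.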

SLIDE 3

Organizers

 Chair

 Carsten Sinz (Karlsruhe Institute of Technology, Germany)

 Advisory Panel

 Aarti Gupta (NEC Labs America, USA)
 Youssef Hamadi (Microsoft Research, UK)
 Himanshu Jain (Synopsys, USA)
 Daniel Le Berre (Université d'Artois, France)
 Panagiotis Manolios (Northeastern University, USA)
 Yakov Novikov (OneSpin Solutions, Germany)

 Technical Management

 Florian Merz (Karlsruhe Institute of Technology, Germany)

SLIDE 4

Entrants

 Received 32 solvers by 23 submitters from 9 nations
 SAT-Race 2008: 43 solvers by 36 submitters from 16 nations
 SAT-Race 2006: 29 solvers by 23 submitters from 13 nations
 2 industrial solvers, 27 academic, 3 mixed
 21 solvers in Main Track, 8 in Parallel Track, 3 in AIG Track

Solvers by nation:
Australia 1, Austria 4, China 1, France 7, France / UK 4,
Germany 4, Iran 1, Spain 1, Sweden 4, USA 5

SLIDE 5

Qualification

 To ascertain solver correctness and efficiency
 One qualification round:

 100 benchmark instances (from SAT-Race 2008)
 Successful qualification required to enter the finals

 Qualification round took place in May

SLIDE 6

Results Qualification Round

 Main Track

 19 solvers qualified (out of 21) by solving at least 70 out of 100 instances (no solver produced errors)
 2 solvers produced wrong results during the finals

 Parallel Track

 6 solvers qualified (out of 8) by solving at least 70 out of 100 instances (1 solver had produced wrong results and was withdrawn)
 1 solver produced wrong results during the finals

 AIG Track

 All 3 solvers qualified by solving more than 50 out of 100 instances

 Overall result: 28 (out of 32) solvers participated in the finals

 17 in Main Track (plus 3 parallel solvers running in sequential mode), 5 in Parallel Track, 3 in AIG Track
 One solver withdrawn, 3 solvers with wrong results during the finals

SLIDE 7

Solvers Participating in Finals: Main Track

Solver         Affiliation
Barcelogic     TU Catalonia, Spain
borg-sat       U Texas, USA
CircleSAT      Donghua U, China
CryptoMiniSat  INRIA, France
glucose        CRIL, France
glucosER       CRIL-CNRS, France
lingeling      JKU Linz, Austria
LySAT          INRIA-Microsoft JC, France
MiniSat        Sörensson R&D, Sweden
rpailleur      CRIL-CNRS, France
PicoSAT        JKU Linz, Austria
PrecoSAT       JKU Linz, Austria
riss           TU Dresden, Germany
rcl            CRIL-CNRS, France
SApperloT      U Tübingen, Germany
SAT-Power      U Isfahan, Iran
SATHYS         CRIL-CNRS, France

(red in the original slide: new solvers)

SLIDE 8

Solvers Participating in Finals: Special Tracks

Parallel Track:

Solver       Affiliation
antom        U Freiburg, Germany
ManySAT 1.1  INRIA-Microsoft JC, France
ManySAT 1.5  INRIA-Microsoft JC, France
plingeling   JKU Linz, Austria
SArTagnan    U Tübingen, Germany

AIG Track:

Solver     Affiliation
kw_aig     Oepir, Sweden
MiniSat++  Sörensson R&D, Sweden
NFLSAT     CMU, USA

SLIDE 9

Benchmark Instances: CNF

 Corpus of 490 instances
 Hardware verification / software verification / cryptography / mixed
 Mainly from former SAT Competitions/Races
 Additional software verification instances from NEC
 Selected 100 instances randomly:

 30 hardware verification (IBM, Velev, Manolios)
 30 software verification (Babic, Bitverif, Fuhs, NEC, Post)
 15 cryptography (desgen, md5gen, Mironov-Zhang)
 25 mixed (Anbulagan, Bioinformatics, Diagnosis, …)

 Up to 10,950,109 variables, 32,697,150 clauses (sizes extracted as sketched below)
 Smallest instance: 1,694 variables, 5,726 clauses
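The variable and clause counts quoted above (and plotted on the next slide) can be read directly off the DIMACS header; a minimal sketch, assuming well-formed files (illustrative tooling, not the race's own scripts):

```python
def cnf_size(path):
    """Return (#variables, #clauses) from the DIMACS header line 'p cnf V C'."""
    with open(path) as f:
        for line in f:
            if line.startswith("p cnf"):
                _, _, num_vars, num_clauses = line.split()
                return int(num_vars), int(num_clauses)
    raise ValueError(f"no DIMACS header found in {path}")
```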

SLIDE 10

Sizes of CNF Benchmark Instances

[Scatter plot: #variables (x-axis) vs. #clauses (y-axis) per instance, log-log scale up to 1e+08, grouped into Hardware Verification, Software Verification, Cryptanalysis, and Mixed]

SLIDE 11

Benchmark Instances: AIG

 Corpus of 538 instances (input format sketched below)
 9 groups of benchmark sets (Anbulagan / Babic / c32sat / Mironov-Zhang / IBM / Intel / Manolios / Palacios / Mixed)
 Selected 100 instances randomly
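Not from the original slides: AIG-track instances are And-Inverter Graph circuits rather than clause sets, customarily distributed in the AIGER format (assumed here). A minimal header-reading sketch:

```python
def aig_header(path):
    """Parse the AIGER header 'aag|aig M I L O A': maximum variable index,
    #inputs, #latches, #outputs, #AND gates."""
    with open(path, "rb") as f:       # binary mode: 'aig' files are binary
        fields = f.readline().split()
    fmt = fields[0].decode()          # 'aag' = ASCII variant, 'aig' = binary variant
    m, inputs, latches, outputs, ands = (int(x) for x in fields[1:6])
    return fmt, {"max_index": m, "inputs": inputs, "latches": latches,
                 "outputs": outputs, "ands": ands}
```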

SLIDE 12

Parallel Track: Special Rules

 Solvers can use all 8 cores of a machine (2x Intel Xeon Quad-Core)
 Measured wall-clock time instead of CPU time (timing sketch below)
 Run-times of multi-threaded solvers can deviate considerably between runs (especially on satisfiable instances)
 3 runs for each solver on each instance
 Instance considered solved if solved in the first run (SAT-Race 2008: at least 1 out of 3 runs)
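A minimal sketch of this measurement discipline; the solver binary and instance paths are placeholders, and output handling is simplified compared to whatever harness the race actually used:

```python
import subprocess
import time

TIMEOUT = 900  # 15-minute per-instance timeout, as in the race rules

def run_once(solver, instance):
    """One timed run. Wall-clock seconds are what count, not accumulated
    CPU time: a parallel solver may burn up to 8 cores at once."""
    start = time.monotonic()
    try:
        subprocess.run([solver, instance], timeout=TIMEOUT,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return time.monotonic() - start, True
    except subprocess.TimeoutExpired:
        return float(TIMEOUT), False

# Three runs per solver/instance pair expose run-time deviations; under the
# 2010 rule an instance counts as solved only if the *first* run finishes.
# times = [run_once("./solver", "bench.cnf") for _ in range(3)]  # placeholder paths
```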

SLIDE 13

Scoring

 Main criterion: number of solved instances
 Tie-break: average run-time on solved instances (sketched below)
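The two-level ranking rule as a small sketch; the result records are hypothetical:

```python
def rank(results):
    """results maps solver name -> per-instance run-times (None = unsolved).
    Rank by number solved (more is better); break ties by average run-time
    on solved instances only (less is better), matching the rule above."""
    def key(solver):
        times = [t for t in results[solver] if t is not None]
        avg = sum(times) / len(times) if times else float("inf")
        return (-len(times), avg)
    return sorted(results, key=key)

# Both solvers finish 2 of 3 instances; A's average (7.5 s) beats B's (25 s).
print(rank({"A": [10.0, None, 5.0], "B": [20.0, 30.0, None]}))  # ['A', 'B']
```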

SLIDE 14

Computing Environment

 Linux cluster at Karlsruhe Institute of Technology (KIT)
 20 compute nodes

 2 Intel Xeon E5430 processors (quad-core, 2.66 GHz) per node
 32 GB of main memory per node
 Both 32-bit and 64-bit binaries supported

 Sequential/AIG Tracks: only one core per solver
 Parallel Track: 8 cores per solver

SLIDE 15

Results

SLIDE 16

Special Track 2 (AIG Sequential)

1. MiniSat++ (58 solved instances)
2. kw_aig (54 solved instances)
3. NFLSAT (53 solved instances)

SLIDE 17

Runtime Comparison: AIG Track

[Cactus plot: run-time (sec., up to 900) vs. number of solved instances for MiniSat++, kw_aig, and NFLSAT]

SLIDE 18

Special Track 1 (CNF Parallel)

1. plingeling (78 solved instances)
2. ManySAT 1.5 (75 solved instances)
3. ManySAT 1.1 (72 solved instances)

Next best solver: 70 solved instances

SLIDE 19

Runtime Comparison: Parallel Track

[Cactus plot: run-time (sec., up to 900) vs. number of solved instances for plingeling, ManySAT-1.5, ManySAT-1.1, SArTagnan, and antom]

SLIDE 20

Main Track (CNF Sequential)

1. CryptoMiniSat (74 solved instances)
2. lingeling (73 solved instances)
3. SAT-Power (71 solved instances)

Next best solver: 69 solved instances

SLIDE 21

Runtime Comparison: Main Track

[Cactus plot: run-time (sec., up to 900) vs. number of solved instances for all 20 Main Track solvers: CryptoMiniSat, lingeling, SAT-Power, PrecoSAT, MiniSat, Barcelogic, LySAT, rcl, borg-sat, CircleSAT, ManySAT-1.1, SApperloT, antom, PicoSAT, glucose, SATHYS, ManySAT-1.5, glucosER, riss, and rpailleur]

SLIDE 22

Runtime Comparison: CNF Seq.+Par.

[Cactus plot: run-time (sec., up to 900) vs. number of solved instances for all sequential and parallel CNF solvers: plingeling, ManySAT-1.5_par, CryptoMiniSat, lingeling, ManySAT-1.1_par, SAT-Power, SArTagnan, antom_par, PrecoSAT, MiniSat, Barcelogic, LySAT, rcl, borg-sat, CircleSAT, ManySAT-1.1_seq, SApperloT, antom_seq, PicoSAT, glucose, SATHYS, ManySAT-1.5_seq, glucosER, riss, and rpailleur]

SLIDE 23

Student Prize

 Special prize for a solver submitted by a (team of) (PhD) student(s)
 Two prizes:

 Main Track: SAT-Power by Abdorrahim Bahrami (3rd place in Main Track)
 Parallel Track: SArTagnan by Stephan Kottler (4th place in Parallel Track)

SLIDE 24

Conclusion

 Any progress compared to SAT Competition 2009?

 The SAT-Race 2010 winner solves 5 more instances than the SAT Competition 2009 winner (SAT+UNSAT Application category) on our benchmark set
 3 solvers (plus 4 parallel solvers) outperform the SAT Competition 2009 winner

 Parallel solvers gain importance; improved robustness (only small run-time differences across the 3 runs)
 Many new solvers and participants

SLIDE 25

SAT-Race 2010 on the Web: http://baldur.iti.kit.edu/sat-race-2010