SLIDES 1–7 Trapdoor simulation

Daniel J. Bernstein, University of Illinois at Chicago & Technische Universiteit Eindhoven. Joint work with: Tung Chou, Technische Universiteit Eindhoven.

Algorithms in CS courses

“WHAT is your algorithm?”
“Heapsort. Here’s the code.”

“WHAT does it accomplish?”
“It sorts the input array in place. Here’s a proof.”

“WHAT is its run time?”
“O(n lg n) comparisons; and Θ(n lg n) comparisons for most inputs. Here’s a proof.”

“You may pass.”
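The slide says “Here’s the code” without reproducing it; as a minimal sketch (my own, not the slide’s actual code), here is an in-place heapsort with a comparison counter, so the O(n lg n) claim can be checked on sample inputs:

```python
import random

def heapsort(a):
    """Sort a in place with heapsort; return the number of comparisons used."""
    n = len(a)
    comparisons = 0

    def sift_down(start, end):
        nonlocal comparisons
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            # Pick the larger of the two children.
            if child + 1 <= end:
                comparisons += 1
                if a[child] < a[child + 1]:
                    child += 1
            comparisons += 1
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Phase 1: build a max-heap.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    # Phase 2: repeatedly move the maximum to the end of the array.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return comparisons

data = [random.randrange(1000) for _ in range(256)]
count = heapsort(data)
assert data == sorted(data)
```

Counting comparisons directly is how the Θ(n lg n) behavior “for most inputs” can be observed experimentally, a theme the talk returns to below.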
SLIDES 8–14 Algorithms for hard problems

Critical question for ECC security: How hard is ECDLP?

Standard estimate for “strong” ECC groups of prime order ℓ: Latest “negating” variants of “distinguished point” rho methods break an average ECDLP instance using ≈0.886·√ℓ additions.

Is this proven? No! Is this provable? Maybe not!

So why do we think it’s true?
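The √ℓ-type estimate can be made concrete with a plain Pollard rho for the discrete-log problem. This sketch uses a toy prime-order subgroup of Z_p^* with hypothetical parameters, and plain Floyd cycle-finding rather than the negating distinguished-point variant the estimate refers to; it shows the collision-then-solve structure such methods share:

```python
import random

# Toy Pollard rho for discrete logs in a prime-order subgroup of Z_p^*.
# Hypothetical parameters; illustrative only.
p = 1019            # prime; p - 1 = 2 * 509
q = 509             # prime order of the subgroup
g = 4               # 4 = 2^2 generates the order-509 subgroup
h = pow(g, 123, p)  # instance: find k with g^k = h

def rho_dlog(g, h, p, q):
    def step(x, a, b):
        # Pseudo-random walk on elements x = g^a * h^b, branching on x mod 3.
        if x % 3 == 0:
            return (x * x) % p, (2 * a) % q, (2 * b) % q
        if x % 3 == 1:
            return (x * g) % p, (a + 1) % q, b
        return (x * h) % p, a, (b + 1) % q

    while True:
        a, b = random.randrange(q), random.randrange(q)
        x = (pow(g, a, p) * pow(h, b, p)) % p
        X, A, B = x, a, b
        while True:                      # Floyd cycle-finding: tortoise and hare
            x, a, b = step(x, a, b)
            X, A, B = step(*step(X, A, B))
            if x == X:
                break
        if (B - b) % q != 0:             # degenerate collision: retry
            # Collision g^a h^b = g^A h^B gives k = (a - A) / (B - b) mod q.
            return ((a - A) * pow(B - b, -1, q)) % q

k = rho_dlog(g, h, p, q)
assert pow(g, k, p) == h
```

The expected number of walk steps grows like √ℓ; the constant ≈0.886 on the slide is specific to the negating distinguished-point variants, not to this plain version.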
SLIDES 15–21

2000 Gallant–Lambert–Vanstone: inadequately specified statement of a negating rho algorithm.

2010 Bos–Kleinjung–Lenstra: a plausible interpretation of that algorithm is non-functional.

See 2011 Bernstein–Lange–Schwabe for more history and better algorithms.

Why do we believe that the latest algorithms work at the claimed speeds? Experiments!
SLIDES 22–29

Similar story for RSA security: we don’t have proofs for the best factoring algorithms.

Code-based cryptography: we don’t have proofs for the best decoding algorithms.

Lattice-based cryptography: we don’t have proofs for the best lattice algorithms.

MQ-based cryptography: we don’t have proofs for the best system-solving algorithms.

Confidence relies on experiments.
SLIDES 30–33 Where’s my quantum computer?

Quantum-algorithm design is moving beyond the textbook stage into algorithms without proofs. Example: subset-sum exponent ≈0.241 from 2013 Bernstein–Jeffery–Lange–Meurer.

Don’t expect proofs or provability for the best quantum algorithms to attack post-quantum crypto.

How do we obtain confidence in analysis of these algorithms? Quantum experiments are hard.
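As a classical reference point for the subset-sum example, here is a meet-in-the-middle sketch with exponent 0.5 (time and memory roughly 2^(n/2)); the quantum algorithms cited aim for smaller exponents. The instance is hypothetical toy data:

```python
from itertools import combinations

def subset_sum(weights, target):
    """Meet-in-the-middle subset sum: return indices summing to target, or None."""
    n = len(weights)
    left, right = weights[:n // 2], weights[n // 2:]

    def all_sums(ws, offset):
        # Map each achievable sum of ws to one subset of (global) indices.
        sums = {}
        for r in range(len(ws) + 1):
            for idx in combinations(range(len(ws)), r):
                s = sum(ws[i] for i in idx)
                sums.setdefault(s, tuple(i + offset for i in idx))
        return sums

    left_sums = all_sums(left, 0)        # 2^(n/2) entries
    right_sums = all_sums(right, n // 2)  # 2^(n/2) entries
    # Match each left sum s against a right sum target - s.
    for s, idx in left_sums.items():
        if target - s in right_sums:
            return idx + right_sums[target - s]
    return None

weights = [41, 7, 25, 18, 33, 12, 6, 29]
sol = subset_sum(weights, 74)
assert sol is not None
assert sum(weights[i] for i in sol) == 74
```

Beating 2^(n/2), classically or quantumly, requires the kind of unproven algorithmic ideas the slide is pointing at.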
SLIDES 34–39 Where’s my big computer?

Analogy: Public hasn’t carried out a 2^80 NFS RSA-1024 experiment. But public has carried out 2^50, 2^60, 2^70 NFS experiments. Hopefully not too much extrapolation error for 2^80.

Vastly larger extrapolation for the quantum situation. Imagine an attacker performing 2^80 operations on 2^40 qubits; compare to today’s challenges of 2^1, 2^2, 2^3, 2^4, 2^5, 2^6 qubits.
SLIDES 40–44 Simulation

An algorithm simulation is a computer-assisted proof of the algorithm’s performance for a particular input.

Compared to traditional proofs: Theorem statement is easier. Steps in proof are easier. Don’t need to generalize beyond a single input. Provability is guaranteed. Proof has computer assistance, so less chance of error.
SLIDES 45–48 The standard structure of an algorithm simulation

Compute s0, s1, s2, … and t0, t1, t2, … such that si represents the algorithm state at time ti. Prove that the computation matches the original algorithm.

Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof.
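The structure above can be sketched for a toy algorithm (square-and-multiply modular exponentiation; my choice of example, not from the slides): the simulation independently predicts the state si at each time ti, and the check proves it matches an instrumented run of the original algorithm, i.e. the “experiment” special case:

```python
def original(x, e, m):
    """Left-to-right square-and-multiply, printing out (time, state) pairs."""
    trace = []
    acc = 1
    for t, bit in enumerate(bin(e)[2:]):
        acc = (acc * acc) % m
        if bit == '1':
            acc = (acc * x) % m
        trace.append((t, acc))
    return acc, trace

def simulate(x, e, m):
    """Independently predict the state s_i at each time t_i.

    Invariant: after processing the first i+1 exponent bits, the
    accumulator equals x^(value of those bits) mod m.
    """
    bits = bin(e)[2:]
    return [(i, pow(x, int(bits[:i + 1], 2), m)) for i in range(len(bits))]

result, trace = original(3, 45, 1000)
assert trace == simulate(3, 45, 1000)   # simulation matches the algorithm
assert result == pow(3, 45, 1000)
```

Here the theorem is only about this one input (x=3, e=45, m=1000), which is exactly what makes both the statement and the proof easy.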
SLIDE 49 Simulation algorithm simulation computer-assisted proof algorithm’s performance particular input. Compared to traditional proofs: rem statement is easier. in proof are easier. need to generalize
Provability is guaranteed. has computer assistance, chance of error. The standard structure
- f an algorithm simulation:
Compute s0; s1; s2; : : : and t0; t1; t2; : : : such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation “If you can a quantum pre-quantum have an algorithm
SLIDE 50 simulation computer-assisted proof rithm’s performance input. traditional proofs: statement is easier. re easier. generalize input. guaranteed. computer assistance,
The standard structure
- f an algorithm simulation:
Compute s0; s1; s2; : : : and t0; t1; t2; : : : such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation of quantum “If you can efficiently a quantum algorithm pre-quantum computer have an efficient p algorithm for the same
SLIDE 51
rmance roofs: easier. assistance, The standard structure
- f an algorithm simulation:
Compute s0; s1; s2; : : : and t0; t1; t2; : : : such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation of quantum algorithms “If you can efficiently simulate a quantum algorithm using a pre-quantum computer then have an efficient pre-quantum algorithm for the same problem.”
SLIDE 52 The standard structure
- f an algorithm simulation:
Compute s0; s1; s2; : : : and t0; t1; t2; : : : such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation of quantum algorithms “If you can efficiently simulate a quantum algorithm using a pre-quantum computer then you have an efficient pre-quantum algorithm for the same problem.”
SLIDE 53 The standard structure
- f an algorithm simulation:
Compute s0; s1; s2; : : : and t0; t1; t2; : : : such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation of quantum algorithms “If you can efficiently simulate a quantum algorithm using a pre-quantum computer then you have an efficient pre-quantum algorithm for the same problem.” No, not necessarily!
SLIDE 54 The standard structure
- f an algorithm simulation:
Compute s0; s1; s2; : : : and t0; t1; t2; : : : such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation of quantum algorithms “If you can efficiently simulate a quantum algorithm using a pre-quantum computer then you have an efficient pre-quantum algorithm for the same problem.” No, not necessarily! “Yes, you do! Simply run the simulation on the same input and extract the original algorithm’s
- utput from the final state.”
SLIDE 55 The standard structure
of an algorithm simulation:
Compute s0, s1, s2, … and t0, t1, t2, … such that si represents algorithm state at time ti. Prove that the computation matches the original algorithm. Special case: experiment. The computation is the original algorithm plus printouts of state. Particularly easy proof. Simulation of quantum algorithms “If you can efficiently simulate a quantum algorithm using a pre-quantum computer then you have an efficient pre-quantum algorithm for the same problem.” No, not necessarily! “Yes, you do! Simply run the simulation on the same input and extract the original algorithm’s output from the final state.”
Ah, but did I say that the simulation takes only this input?
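The “experiment” special case above — run the original algorithm plus printouts of state — can be sketched in code. This toy is hypothetical, not from the talk: a trivial factorial loop run twice, once plainly and once recording the (si, ti) pairs, so that matching the original is immediate.

```python
# Hypothetical toy (not from the talk): the "experiment" special case
# of simulation -- the original algorithm plus printouts of state.
def original(n):
    # toy original algorithm: compute n! by a simple loop
    acc = 1
    for k in range(1, n + 1):
        acc *= k
    return acc

def simulate(n):
    # identical computation, plus a trace of (state s_i, time t_i) pairs;
    # proving this matches original() is trivial by inspection
    trace = []
    acc, t = 1, 0
    trace.append((acc, t))
    for k in range(1, n + 1):
        acc *= k
        t += 1
        trace.append((acc, t))
    return acc, trace

out, trace = simulate(5)
assert out == original(5) == 120   # simulation output matches original
```

The proof obligation is particularly easy here because the simulation is the original algorithm with printing bolted on; nothing about the control flow changes.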
SLIDE 59 Simulation of quantum algorithms “If you can efficiently simulate a quantum algorithm using a pre-quantum computer then you have an efficient pre-quantum algorithm for the same problem.” No, not necessarily! “Yes, you do! Simply run the simulation on the same input and extract the original algorithm’s output from the final state.” Ah, but did I say that the simulation takes only this input? Trapdoor simulation Input to simulation doesn’t have to be input to original algorithm. Simulation can use extra input that makes simulation much faster than original algorithm. Typical example:
- Algorithm input: f(x).
- Algorithm output: x.
- Simulation input: x.
This is still useful: can try many choices of x, understand algorithm for f(x).
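The typical example — algorithm input f(x), algorithm output x, simulation input x — can be sketched with a toy discrete-log search. Everything below is hypothetical illustration, not code from the talk: f is deliberately tiny (2 is a primitive root mod 101), the original algorithm inverts f by exhaustive search, and the trapdoor simulation, handed the secret x as extra input, computes the state after any step i directly, with no search.

```python
# Hypothetical toy (not from the talk): trapdoor simulation of a
# brute-force discrete-log search.
def f(x):
    # toy hard-to-invert function: discrete exponentiation mod a
    # tiny prime; 2 is a primitive root mod 101, so x is unique
    # in range(100)
    return pow(2, x, 101)

def algorithm(y):
    # original algorithm: input f(x), output x, by exhaustive search
    acc, cand = 1, 0
    while acc != y:
        acc = acc * 2 % 101
        cand += 1
    return cand

def simulation_state(x, i):
    # trapdoor simulation: given the secret x as extra input, the
    # state after i steps -- (2^t mod 101, t), halting at t = x --
    # is computed directly; much faster than running the search
    t = min(i, x)
    return (pow(2, t, 101), t)

x = 71
y = f(x)
assert algorithm(y) == x                     # slow: runs the search
assert simulation_state(x, 10**9) == (y, x)  # fast: uses the trapdoor
```

Running the simulation for many chosen values of x is cheap, which is exactly why it is still useful for understanding how the original algorithm behaves on inputs f(x), even though it never solves the inversion problem itself.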
SLIDE 63 Trapdoor simulation Input to simulation doesn’t have to be input to original algorithm. Simulation can use extra input that makes simulation much faster than original algorithm. Typical example:
- Algorithm input: f(x).
- Algorithm output: x.
- Simulation input: x.
This is still useful: can try many choices of x, understand algorithm for f(x). For comparison: Often see x inside proofs in traditional algorithm analyses. Typical proof has formula (x, i) → (si, ti). Formula is proven inductively. Simulation is more flexible. Given x, for each i, simulation computes (si, ti). Doesn’t need unified formula that works for all x, i. Proof can work “locally”.
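The contrast between a unified formula (x, i) → (si, ti) proven inductively and a simulation that computes each (si, ti) locally can be sketched as follows. The running-sum algorithm and both helper names are hypothetical, chosen only to make the two proof styles concrete.

```python
# Hypothetical toy (not from the talk): two ways to know the state s_i
# of a running-sum algorithm whose step k adds x*k.
def state_by_formula(x, i):
    # traditional analysis: one unified formula (x, i) -> s_i,
    # valid for all x and i, proven by induction over i
    return x * i * (i + 1) // 2

def state_by_simulation(x, i):
    # simulation: for the given x, just execute the first i steps
    # and read off s_i; no formula covering all x, i is needed,
    # and correctness is checked "locally", step by step
    s = 0
    for k in range(1, i + 1):
        s += x * k
    return s

assert state_by_formula(7, 100) == state_by_simulation(7, 100) == 35350
```

For this toy the closed form is easy; the point of the slide is that for real algorithms the simulation route stays available even when no unified formula is in sight.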
SLIDE 67
For comparison: Often see x inside proofs in traditional algorithm analyses. Typical proof has formula (x, i) → (si, ti). Formula is proven inductively. Simulation is more flexible. Given x, for each i, simulation computes (si, ti). Doesn’t need unified formula that works for all x, i. Proof can work “locally”. Proof of concept 2014.04 Chou → Ambainis: Simulation shows error in proof of 2003 Ambainis distinctness algorithm.
SLIDE 68
For comparison: Often see x inside proofs in traditional algorithm analyses. Typical proof has formula (x, i) → (si, ti). Formula is proven inductively. Simulation is more flexible. Given x, for each i, simulation computes (si, ti). Doesn’t need unified formula that works for all x, i. Proof can work “locally”. Proof of concept 2014.04 Chou → Ambainis: Simulation shows error in proof of 2003 Ambainis distinctness algorithm. Ambainis: Yes, thanks, will fix.
SLIDE 69
For comparison: Often see x inside proofs in traditional algorithm analyses. Typical proof has formula (x, i) → (si, ti). Formula is proven inductively. Simulation is more flexible. Given x, for each i, simulation computes (si, ti). Doesn’t need unified formula that works for all x, i. Proof can work “locally”. Proof of concept 2014.04 Chou → Ambainis: Simulation shows error in proof of 2003 Ambainis distinctness algorithm. Ambainis: Yes, thanks, will fix. 2014.04 Chou → Childs: Simulation shows that 2003 Childs–Eisenberg distinctness algorithm is non-functional; need to take half angle.
SLIDE 70
For comparison: Often see x inside proofs in traditional algorithm analyses. Typical proof has formula (x, i) → (si, ti). Formula is proven inductively. Simulation is more flexible. Given x, for each i, simulation computes (si, ti). Doesn’t need unified formula that works for all x, i. Proof can work “locally”. Proof of concept 2014.04 Chou → Ambainis: Simulation shows error in proof of 2003 Ambainis distinctness algorithm. Ambainis: Yes, thanks, will fix. 2014.04 Chou → Childs: Simulation shows that 2003 Childs–Eisenberg distinctness algorithm is non-functional; need to take half angle. Childs: Yes. Typo, already fixed in 2005 journal version.