Quantum neurons
Yudong Cao
with Gian Giacomo Guerreschi, Alán Aspuru-Guzik
Quantum Techniques in Machine Learning 2017, Verona, Italy.
The quest for quantum neural nets
Parametrized quantum system that can be trained to accomplish tasks
a machine that is designed to mimic the way in which the brain performs a particular task or function of interest
e.g. attractor dynamics, synaptic connections, integrate & fire, training rules, structure of a NN
Schuld, M., Sinayskiy, I. & Petruccione, F., Quantum Inf. Process. (2014) 13: 2567
Quantum: superposition and entanglement
a = Σ_i x_i y_i + b  (weighted sum of inputs y_1 … y_n with weights x_1 … x_n and bias b)
compression etc
How to realize on quantum computers, whose dynamics is linear?
Dissipative dynamics
Story of quantum error correction
Reversible circuits: cost scaling?
Neuron ↔ qubit; activation ↔ rotation angle
rest / active states of the neuron ↔ qubit states |0⟩ / |1⟩
Activation: z = f(a)
Rotation: Ry(θ)
a = Σ_i x_i y_i + b  (information y_1 … y_n from the previous layer, with weights x_1 … x_n and bias b)
θ = δ·a + π/4
q(θ) = arctan(tan²(θ))
θ → Measure 0: Ry(q(θ)), success. Measure 1: Ry(π/4), fail but easily corrected.
Nonlinear!
Repeat until success
Ry(θ) = [[cos(θ/2), −sin(θ/2)], [sin(θ/2), cos(θ/2)]]
Repeated k times: iterating the activation q sharpens it toward a step function.
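As a quick numerical sketch (plain Python, not from the slides): the activation q(θ) = arctan(tan²θ) fixes π/4 and pushes every other angle toward 0 or π/2 under iteration, which is why repeated application behaves like a step function:

```python
import math

def q(theta):
    """Quantum-neuron activation: q(theta) = arctan(tan^2(theta))."""
    return math.atan(math.tan(theta) ** 2)

def q_iter(theta, k):
    """Apply the activation k times."""
    for _ in range(k):
        theta = q(theta)
    return theta

# pi/4 is the neutral fixed point
assert abs(q(math.pi / 4) - math.pi / 4) < 1e-12
# below pi/4 the iterates flow to 0, above pi/4 they flow to pi/2
assert q_iter(0.6, 5) < 1e-3
assert math.pi / 2 - q_iter(1.0, 5) < 1e-3
```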
[Circuit: outcome 0 applies Ry(2q(θ)) to the output qubit; outcome 1 applies a known rotation that is undone before repeating]
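A minimal statevector sketch of one repeat-until-success round, assuming the gearbox form of the circuit (Ry(2θ) on the ancilla, a controlled-iY onto the output, Ry(−2θ) on the ancilla, then ancilla measurement); the specific gate sequence here is an assumption for illustration, not taken from the slides:

```python
import numpy as np

def ry(a):
    """Single-qubit y-rotation (half-angle convention)."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def rus_round(theta):
    """One RUS round on ancilla |0> and output |0>.
    Returns (p_success, output_if_0, output_if_1), outputs normalized."""
    I = np.eye(2)
    # controlled-iY: apply iY = [[0, 1], [-1, 0]] to the output when ancilla is |1>
    c_iy = np.block([[I, np.zeros((2, 2))],
                     [np.zeros((2, 2)), np.array([[0.0, 1.0], [-1.0, 0.0]])]])
    psi = np.kron(np.array([1.0, 0.0]), np.array([1.0, 0.0]))  # ancilla (x) output
    psi = np.kron(ry(-2 * theta), I) @ (c_iy @ (np.kron(ry(2 * theta), I) @ psi))
    out0, out1 = psi[:2], psi[2:]  # branches for ancilla measured 0 / 1
    p0 = float(out0 @ out0)
    return p0, out0 / np.linalg.norm(out0), out1 / np.linalg.norm(out1)

theta = 0.3
p0, ok, fail = rus_round(theta)
# success branch: amplitude ratio tan^2(theta) = tan(q(theta)) -- the nonlinearity
assert abs(abs(ok[1] / ok[0]) - np.tan(theta) ** 2) < 1e-9
# failure branch: rotation by the neutral angle pi/4, easily undone
assert abs(abs(fail[1] / fail[0]) - 1.0) < 1e-9
# success probability cos^4 + sin^4
assert abs(p0 - (np.cos(theta) ** 4 + np.sin(theta) ** 4)) < 1e-9
```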
|010…⟩ → RUS × k, with controlled rotations by angles x_i and b
y1 = 0, y2 = 1, y3 = 0
Output close to either 0 or 1 due to the nonlinear function q
Weighted sum → nonlinear
a = Σ_i x_i y_i + b  (inputs y_1, y_2, y_3 from the previous layer; weights x_1, x_2, x_3)
z = f(a)
Prev. layer → weighted sum → nonlinear
“cat”
Inputs y1, y2
Train the network such that t = y1 XNOR y2 (t = 1 iff y1 = y2)
Training state (input register, correct output): (1/2)(|00⟩|1⟩ + |01⟩|0⟩ + |10⟩|0⟩ + |11⟩|1⟩)
Performance measure: (1 + ⟨σz⟩)/2
[Plot: solid = training; dashed = testing on 00, 01, 10, 11 and their average]
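A one-line sketch of the target truth table (reading the target as XNOR, i.e. t = 1 exactly when the inputs agree, consistent with the trained state above):

```python
# target truth table: t = 1 iff y1 == y2 (XNOR)
table = {(y1, y2): int(y1 == y2) for y1 in (0, 1) for y2 in (0, 1)}
assert table == {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```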
Larger example: inputs y1 … y8, target t = Parity
Training state: (1/√(2^8)) Σ_{i=0}^{2^8−1} |i⟩|Parity(i)⟩
Performance measure: (1 + ⟨σz⟩)/2
[Plot: solid = training; dashed = testing on the 2^8 = 256 states 00000000, 00000001, …, 11111111, and their average]
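The 2^8 = 256 training labels pair each basis state with its bit parity; a quick sketch of that labeling:

```python
# parity label for each of the 2^8 basis states |i>
labels = [bin(i).count("1") % 2 for i in range(2 ** 8)]
assert labels[0b00000000] == 0
assert labels[0b00000001] == 1
assert labels[0b11111111] == 0
assert sum(labels) == 128  # exactly half the strings have odd parity
```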
Initial state → Update → Repeat → Final state (attractor)
t_i = +1 if a_i > 0, −1 if a_i < 0
a_i = Σ_{j≠i} x_ij t_j + b_i
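The classical Hopfield update rule, with Hebbian weights storing one pattern, can be sketched as follows (the function and variable names here are illustrative, not from the slides):

```python
import numpy as np

def hopfield_run(x, b, t, updates):
    """Asynchronous updates: t_i <- +1 if sum_{j!=i} x_ij t_j + b_i > 0 else -1."""
    t = t.copy()
    for k in range(updates):
        i = k % len(t)
        a = x[i] @ t - x[i, i] * t[i] + b[i]  # exclude the self term j = i
        t[i] = 1 if a > 0 else -1
    return t

# Hebbian weights storing one pattern make that pattern an attractor
p = np.array([1, -1, 1, -1])
x = np.outer(p, p).astype(float)
b = np.zeros(4)
assert np.array_equal(hopfield_run(x, b, p, 8), p)       # stored pattern is a fixed point
noisy = p.copy()
noisy[0] = -p[0]
assert np.array_equal(hopfield_run(x, b, noisy, 8), p)   # corrupted input is pulled back
```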
[Circuit: neuron states t_1 … t_4 carried on qubits q_1^(0) … q_4^(0); each asynchronous update is an RUS × k block, producing q_3^(1), q_4^(2), q_2^(3), …]
n + u + k qubits for a Hopfield network of n neurons and u updates
Attractors: the letters C and Y on a 3×3 grid
[Figure: initial input → after 1 update → after 2 updates → after 3 updates]
Neuron ↔ Qubit
Sigmoid/step function, attractor
Gian Giacomo Guerreschi, Alán Aspuru-Guzik
Postdocs: Peter Johnson, Jonathan Olson
Graduate students: Jhonathan Romero Fontalvo, Hannah (Sukin) Sim, Tim Menke, Florian Hase
Thanks!