SLIDE 1
SLIDE 2
SLIDE 3
SLIDE 4
SLIDE 5
INPUTS
- We made MNIST images binary (white or black) and computed Shannon entropy for each pixel
- Entropy is close to maximum (50% white) for most pixels in the middle but is close to 0 for pixels on the edge
Network
- We binarized the activity of our hidden layer [active/not active]
- Units with equal probability of being active (firing) carry the most information (max entropy)
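The per-pixel entropy computation described above can be sketched as follows; `pixel_entropy` and its binarization threshold are illustrative names and values, not the presentation's actual code, and MNIST loading is assumed to happen elsewhere.

```python
import numpy as np

def pixel_entropy(images, threshold=0.5):
    """Per-pixel Shannon entropy (bits) of binarized images.

    images: array of shape (n_images, height, width), values in [0, 1].
    """
    binary = (images > threshold).astype(float)   # white = 1, black = 0
    p = binary.mean(axis=0)                       # P(pixel is white) across images
    # H(p) = -p log2 p - (1 - p) log2 (1 - p), with 0 log 0 := 0
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return np.nan_to_num(h)                       # edge pixels with p = 0 get H = 0
```

A pixel that is white in half the images reaches the maximum of 1 bit; an edge pixel that is always black scores 0, matching the pattern noted on the slide.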
SLIDE 6
[Figure: trial-by-trial hidden unit activity]
- High entropy: 9.9/10 (proportion of values > 0.5 = 0.49)
- Low entropy: 9.0/10 (proportion of values > 0.5 = 0.42)
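As a check on the idea (not a reproduction of the slide's exact scores), the Shannon entropy of a single unit firing with probability p can be computed directly; a unit active on 49% of trials carries more information than one active on 42%:

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy (bits) of a unit that is active with probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)   # avoid log(0) at p = 0 or 1
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))
```

Entropy peaks at exactly 1 bit when p = 0.5 and falls off on either side, which is why units with equal probability of firing are the most informative.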
SLIDE 7
SLIDE 8
- Activation Penalty
SLIDE 9
- Unfortunately, while this does reduce overall network activation, it disrupts our training process
[Figure panels: Moderate Activity Penalty vs. No Activity Penalty]
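A minimal sketch of an activation penalty of this kind, assuming an L1 term on the hidden activations added to an MSE reconstruction loss; the weight `penalty` and the L1 form are assumptions, not the presentation's exact formulation.

```python
import numpy as np

def penalized_loss(x, x_hat, hidden, penalty=0.01):
    """MSE reconstruction loss plus a penalty on mean hidden activation."""
    reconstruction = np.mean((x - x_hat) ** 2)   # reconstruction error
    activity = np.mean(np.abs(hidden))           # average hidden-unit activity
    return reconstruction + penalty * activity
```

A larger `penalty` pushes the network toward lower overall activity, but, as the slide notes, too strong a penalty degrades the reconstruction objective the network is actually trained on.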
SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
SLIDE 14
- Performance is significantly worse than the fully connected network
- Overall activity levels are reduced, but only marginally
SLIDE 15
SLIDE 16
SLIDE 17
- Accuracy improves with bottleneck size up to an asymptote
- As expected, entropy and activity levels increase with bottleneck size
SLIDE 18
SLIDE 19
SLIDE 20
[Figure: normal MNIST images vs. spike trains over time]
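One common way to turn static MNIST images into spike trains over time is Poisson-style rate coding; the deck does not say which encoding it used, so the scheme below is an assumption for illustration.

```python
import numpy as np

def rate_code(image, n_steps=20, seed=None):
    """Encode a grayscale image (values in [0, 1]) as a binary spike train.

    At each time step, each pixel emits a spike with probability equal to
    its intensity (Poisson-style rate coding; an assumed encoding).
    Returns an array of shape (n_steps, height, width).
    """
    rng = np.random.default_rng(seed)
    return (rng.random((n_steps,) + image.shape) < image).astype(np.uint8)
```

Bright pixels spike on nearly every step, black pixels never spike, and averaging the train over time recovers an approximation of the original image.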
SLIDE 21
SLIDE 22
SLIDE 23
SLIDE 24
Autoencoder Spiking Neural Network
SLIDE 25
SLIDE 26