INPUTS: We made MNIST images binary (PowerPoint presentation)



SLIDE 1
SLIDE 2
SLIDE 3
SLIDE 4


SLIDE 5

INPUTS

  • We made MNIST images binary (white or black) and computed Shannon entropy for each pixel.
  • Entropy is close to maximum (50% white) for most pixels in the middle, but is close to 0 for pixels on the edge.
  • We binarized the activity of our hidden layer [active/not active].
  • Units with equal probability of being active (firing) carry the most information.

[Figure labels: Network; Max Entropy]
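The per-pixel entropy computation above can be sketched as follows. This is a minimal illustration, not the authors' code: it uses random images with a bright centre patch as a stand-in for MNIST, and assumes the slides' 0.5 binarization threshold.

```python
import numpy as np

def pixel_entropy(images, threshold=0.5):
    """Binarize images and return per-pixel Shannon entropy in bits.

    images: array of shape (n_images, height, width), values in [0, 1].
    """
    binary = images > threshold            # white = True, black = False
    p_white = binary.mean(axis=0)          # fraction of white per pixel
    # Binary entropy H(p) = -p log2 p - (1-p) log2 (1-p), with 0 log 0 := 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p_white * np.log2(p_white) + (1 - p_white) * np.log2(1 - p_white))
    return np.nan_to_num(h)                # always-on/always-off pixels get H = 0

# Toy stand-in for MNIST: dark border, variable centre.
rng = np.random.default_rng(0)
imgs = rng.random((1000, 28, 28)) * 0.2            # mostly dark background
imgs[:, 10:18, 10:18] += rng.random((1000, 8, 8))  # variable centre patch
h = pixel_entropy(imgs)
print(h[14, 14], h[0, 0])   # centre pixel near 1 bit, border pixel exactly 0
```

As the slide notes, entropy peaks at 1 bit when a pixel is white on exactly 50% of images and falls to 0 for pixels that are always black (or always white), such as those on the edge.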

SLIDE 6

[Figure: trial-by-trial activity rasters]

High Entropy: 9.9/10 (proportion of values > 0.5 = 0.49)
Low Entropy: 9.0/10 (proportion of values > 0.5 = 0.42)
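The same binary-entropy measure applies to hidden units: a unit firing on 49% of trials is nearly maximally informative, while one firing on 42% carries a bit less. A minimal sketch, assuming the 0.5 activity threshold from the slide (how the 9.9/10 and 9.0/10 scores aggregate over units is not specified, so the example reports per-unit entropy only):

```python
import numpy as np

def unit_entropy(activations, threshold=0.5):
    """Shannon entropy (bits) of a unit's binarized activity across trials."""
    p = float(np.mean(np.asarray(activations) > threshold))  # firing probability
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

# Synthetic activity matching the slide's firing proportions.
high = np.r_[np.ones(490), np.zeros(510)]   # proportion > 0.5 is 0.49
low  = np.r_[np.ones(420), np.zeros(580)]   # proportion > 0.5 is 0.42
print(unit_entropy(high), unit_entropy(low))  # ~1.00 bit vs ~0.98 bit
```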

SLIDE 7
SLIDE 8
  • Activation Penalty
SLIDE 9
  • Unfortunately, while this does reduce overall network activation, it disrupts our training process.

[Figure panels: Moderate Activity Penalty; No Activity Penalty]
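The slides do not say which penalty form was used; a common choice is an L1 term on hidden activations added to the reconstruction loss. A minimal sketch under that assumption (the coefficient value is illustrative):

```python
import numpy as np

def autoencoder_loss(x, x_hat, hidden, penalty=1e-3):
    """Reconstruction MSE plus an L1 penalty on hidden-layer activity.

    The penalty pushes hidden activations toward zero; set too high,
    it competes with reconstruction and disrupts training.
    """
    recon = np.mean((x - x_hat) ** 2)              # reconstruction error
    activity = penalty * np.mean(np.abs(hidden))   # L1 activity penalty
    return recon + activity

rng = np.random.default_rng(1)
x = rng.random((32, 784))                          # a batch of flat images
x_hat = x + 0.01 * rng.standard_normal((32, 784))  # near-perfect reconstruction
hidden = rng.random((32, 64))                      # hidden activations
loss_no_penalty = autoencoder_loss(x, x_hat, hidden, penalty=0.0)
loss_penalised = autoencoder_loss(x, x_hat, hidden, penalty=1e-3)
print(loss_no_penalty, loss_penalised)
```

The gradient of the penalty term pulls every active unit toward silence regardless of its usefulness for reconstruction, which is one way the trade-off on the slide can arise.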

SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
SLIDE 14
  • Performance is significantly worse than the fully connected network.
  • Overall activity levels are reduced, but only marginally.
SLIDE 15
SLIDE 16
SLIDE 17
  • Accuracy improves with bottleneck size up to an asymptote.
  • As expected, entropy and activity levels increase with bottleneck size.

SLIDE 18
SLIDE 19
SLIDE 20

[Figure panels: Normal MNIST images; Spike trains over time]
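The contrast between static MNIST images and spike trains over time suggests a rate-coding step. A common approach (an assumption here, since the slides do not name the encoding) is Poisson/Bernoulli rate coding, where each pixel's intensity sets its firing probability per timestep:

```python
import numpy as np

def poisson_spike_train(image, timesteps=100, max_rate=1.0, seed=0):
    """Encode a [0, 1] image as a binary spike train of shape (T, H, W).

    Each pixel fires independently at each timestep with probability
    proportional to its intensity (Bernoulli rate coding).
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(image * max_rate, 0.0, 1.0)
    return rng.random((timesteps,) + image.shape) < probs

img = np.zeros((28, 28))
img[10:18, 10:18] = 0.8            # a bright patch stands in for a digit
spikes = poisson_spike_train(img, timesteps=200)
rates = spikes.mean(axis=0)        # time-averaged rate recovers the image
print(rates[14, 14], rates[0, 0])  # ~0.8 inside the patch, 0.0 outside
```

Averaging the spike train over time approximately recovers the original intensities, which is what lets a spiking network process a static image presented as activity over time.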

SLIDE 21
SLIDE 22
SLIDE 23
SLIDE 24

Autoencoder Spiking Neural Network

SLIDE 25
SLIDE 26