Computing Like the Brain: The Path To Machine Intelligence - Jeff Hawkins - PowerPoint PPT Presentation

SLIDE 1

Computing Like the Brain

The Path To Machine Intelligence

Jeff Hawkins GROK - Numenta

jhawkins@groksolutions.com

SLIDE 2

1) Discover operating principles of neocortex 2) Build systems based on these principles

SLIDE 3

Artificial Intelligence - no neuroscience

Alan Turing:
  • "Computers are universal machines" (1935)
  • "Human behavior as test for machine intelligence" (1950)

Major AI Initiatives
  • MIT AI Lab
  • 5th Generation Computing Project
  • DARPA Strategic Computing Initiative
  • DARPA Grand Challenge

AI Projects
  • ACT-R
  • Asimo
  • CoJACK
  • Cyc
  • Deep Blue
  • Global Workspace Theory
  • Mycin
  • SHRDLU
  • Soar
  • Watson
  • Many more

Pros:
  • Good solutions

Cons:
  • Task specific
  • Limited or no learning
SLIDE 4

Artificial Neural Networks – minimal neuroscience

Warren McCulloch & Walter Pitts: "Neurons as logic gates" (1943); proposed the first artificial neural network

ANN techniques:
  • Back propagation
  • Boltzmann machines
  • Hopfield networks
  • Kohonen networks
  • Parallel Distributed Processing
  • Machine learning
  • Deep Learning

Pros:
  • Good classifiers
  • Learning systems

Cons:
  • Limited
  • Not brain-like
SLIDE 5

Whole Brain Simulator – maximal neuroscience

The Human Brain Project / Blue Brain simulation
  • No theory
  • No attempt at machine intelligence
SLIDE 6

1) Discover operating principles of neocortex 2) Build systems based on these principles

Anatomy & Physiology → Theory → Software → Silicon

Good progress is being made: the 2010s in machine intelligence are comparable to the 1940s in computing.
SLIDE 7

The neocortex is a memory system.

[Figure: data streams from the retina, cochlea, and somatic senses feed the neocortex]

The neocortex learns a model from sensor data, producing:
  • predictions
  • anomalies
  • actions

The neocortex learns a sensory-motor model of the world.
SLIDE 8

Principles of Neocortical Function

[Figure: data streams from the retina, cochlea, and somatic senses feed the neocortex]

1) On-line learning from streaming data
SLIDE 9

Principles of Neocortical Function

1) On-line learning from streaming data
2) Hierarchy of memory regions
  • regions are nearly identical
SLIDE 10

Principles of Neocortical Function

1) On-line learning from streaming data
2) Hierarchy of memory regions
3) Sequence memory
  • inference
  • motor
SLIDE 11

Principles of Neocortical Function

1) On-line learning from streaming data
2) Hierarchy of memory regions
3) Sequence memory
4) Sparse Distributed Representations
SLIDE 12

Principles of Neocortical Function

1) On-line learning from streaming data
2) Hierarchy of memory regions
3) Sequence memory
4) Sparse Distributed Representations
5) All regions are sensory and motor
SLIDE 13

Principles of Neocortical Function

1) On-line learning from streaming data
2) Hierarchy of memory regions
3) Sequence memory
4) Sparse Distributed Representations
5) All regions are sensory and motor
6) Attention
SLIDE 14

Principles of Neocortical Function

1) On-line learning from streaming data
2) Hierarchy of memory regions
3) Sequence memory
4) Sparse Distributed Representations
5) All regions are sensory and motor
6) Attention

These six principles are necessary and sufficient for biological and machine intelligence.

  • All mammals from mouse to human have them
  • We can build machines like this
SLIDE 15

Sparse Distributed Representations (SDRs)

  • Many bits (thousands)
  • Few 1's, mostly 0's
  • Example: 2,000 bits, 2% active
  • Each bit has semantic meaning (learned)
  • Representation is semantic

01000000000000000001000000000000000000000000000000000010000…………01000

Dense Representations

  • Few bits (8 to 128)
  • All combinations of 1’s and 0’s
  • Example: 8 bit ASCII
  • Individual bits have no inherent meaning
  • Representation is arbitrary

01101101 = m
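For concreteness, the contrast can be sketched in a few lines of Python, using the sizes from the slide (2,000 bits, 2% active). The function name and the random generation are illustrative only; real SDRs are learned, not sampled.

```python
import random

SDR_SIZE = 2000      # "many bits (thousands)"
NUM_ACTIVE = 40      # 2% of 2,000 bits are 1's, the rest 0's

def random_sdr(size=SDR_SIZE, num_active=NUM_ACTIVE, seed=None):
    """Return an SDR as a sorted tuple of its active-bit indices."""
    rng = random.Random(seed)
    return tuple(sorted(rng.sample(range(size), num_active)))

sdr = random_sdr(seed=1)
sparsity = len(sdr) / SDR_SIZE        # 0.02, i.e. 2% active

# A dense representation, by contrast, uses every combination of few bits,
# and the individual bits carry no inherent meaning:
dense_m = format(ord("m"), "08b")     # 8-bit ASCII for 'm'
```

Storing only the active indices (40 numbers) rather than all 2,000 bits is what makes the "store and compare" property on the next slide cheap.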

SLIDE 16

SDR Properties

1) Similarity: shared bits = semantic similarity

2) Store and Compare: store indices of active bits
  • Indices: 1, 2, … 10
  • Subsampling is OK: store only indices 1, 2, 3, 4, 5 of 40

3) Union membership: OR together SDRs 1), 2), 3), … 10)
  • Each SDR is 2% active; the union is ~20% active
  • "Is this SDR a member?" can still be answered reliably
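These three properties fall out of ordinary set operations. A hedged sketch (random SDRs stand in for learned ones; sizes match the earlier slide):

```python
import random

SIZE, ACTIVE = 2000, 40   # 2% sparsity, as on the earlier slide

def random_sdr(rng):
    return frozenset(rng.sample(range(SIZE), ACTIVE))

rng = random.Random(42)
a = random_sdr(rng)
b = random_sdr(rng)

# 1) Similarity: shared active bits indicate semantic similarity.
#    Unrelated random SDRs overlap in almost no bits.
overlap = len(a & b)

# 2) Store and compare: keep only the indices of active bits;
#    even a subsample of those indices still matches its source SDR.
subsample = frozenset(list(a)[:10])
subsample_matches = subsample <= a

# 3) Union membership: OR ten SDRs together; the union stays sparse
#    enough (~20% here) that membership tests rarely false-positive.
sdrs = [random_sdr(rng) for _ in range(10)]
union = frozenset().union(*sdrs)
is_member = sdrs[3] <= union
```

The key design point is that sparsity makes accidental overlap between unrelated patterns vanishingly unlikely, so subsampling and unions stay trustworthy.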

SLIDE 17

Sequence Memory (for inference and motor)

How does a layer of neurons learn sequences?

Neurons are coincidence detectors.
SLIDE 18

Each cell is one bit in our Sparse Distributed Representation SDRs are formed via a local competition between cells. All processes are local across large sheets of cells.
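One way to sketch that competition in code (a simplified global k-winners-take-all rule standing in for the local inhibition the slide describes; sizes are illustrative):

```python
import random

def k_winners(activations, k):
    """Form an SDR by competition: the k most-activated cells win (become 1)."""
    winners = sorted(range(len(activations)),
                     key=lambda i: activations[i], reverse=True)[:k]
    return sorted(winners)

rng = random.Random(0)
activations = [rng.random() for _ in range(2000)]   # per-cell input drive
active_cells = k_winners(activations, k=40)          # 2% of cells win

# Each cell is one bit: 1 if it won the competition, 0 otherwise.
active_set = set(active_cells)
sdr_bits = [1 if i in active_set else 0 for i in range(2000)]
```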

SLIDE 19

SDR (time =1)

SLIDE 20

SDR (time =2)

SLIDE 21

Cells connect to sample of previously active cells to predict their own future activity.
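That mechanism can be sketched directly: each cell records a sample of the cells that were active one time step earlier, and later enters a predictive state when enough of those cells are active again. All names and thresholds here are illustrative, not Numenta's API.

```python
import random

rng = random.Random(0)
NUM_CELLS, SAMPLE, THRESHOLD = 100, 10, 8

# Each cell keeps the set of presynaptic cells it has learned from.
connections = {c: set() for c in range(NUM_CELLS)}

def learn(cell, previously_active):
    """Connect `cell` to a random sample of the cells active one step earlier."""
    connections[cell] |= set(rng.sample(sorted(previously_active), SAMPLE))

def predictive(cell, currently_active):
    """A cell predicts its own future activity when enough of its
    connected presynaptic cells are active right now."""
    return len(connections[cell] & currently_active) >= THRESHOLD

prev = set(range(0, 20))   # cells active at t-1 (illustrative pattern)
learn(55, prev)            # cell 55 learns the transition
```

Seeing the same pattern again puts cell 55 into the predictive state; an unrelated pattern does not.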

SLIDE 22

Multiple Predictions Can Occur at Once.

This is a 1st order memory. We need a high order memory.

SLIDE 23

High order sequences are enabled with multiple cells per column.

SLIDE 24

High Order Sequence Memory

[Figure: two cell-level activation patterns over the same active columns]

40 active columns, 10 cells per column = 10^40 ways to represent the same input in different contexts

A-B-C-D-E vs. X-B'-C'-D'-Y: the same element ('B' vs. 'B'') activates the same columns but different cells in each context.
SLIDE 25

High Order Sequence Memory
  • Distributed sequence memory
  • High order, high capacity
  • Noise and fault tolerant
  • Multiple simultaneous predictions
  • Semantic generalization
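The columns-versus-cells idea behind this capacity claim can be sketched in a few lines (column indices and cell choices are illustrative):

```python
# Same input (same active columns), different context (different cell per column).
CELLS_PER_COLUMN = 10
active_columns = [3, 17, 42, 108]   # illustrative; the slide uses 40 columns

def represent(columns, context_cell):
    """Pick one cell in each active column; which cell encodes the context."""
    return [(col, context_cell[col]) for col in columns]

# 'B' after 'A' vs. the same 'B' after 'X' share columns but not cells:
b_after_a = represent(active_columns, {3: 0, 17: 4, 42: 9, 108: 2})
b_after_x = represent(active_columns, {3: 7, 17: 1, 42: 3, 108: 5})

same_columns = [c for c, _ in b_after_a] == [c for c, _ in b_after_x]
same_cells = b_after_a == b_after_x

# With 40 active columns and 10 cells each, there are 10**40 distinct
# cell-level codes for one and the same column-level input.
capacity = CELLS_PER_COLUMN ** 40
```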

SLIDE 26

Online learning

  • Learn continuously, no batch processing
  • If pattern repeats, reinforce, otherwise forget it

  • Connection strength is binary (connected / unconnected)
  • Connection permanence is a scalar; training changes permanence
  • A synapse is connected when its permanence crosses a threshold (0.2 in the figure)
  • Learning is the growth of new synapses
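A minimal sketch of this learning rule, assuming the 0.2 threshold shown on the slide (the class name and the step sizes are illustrative):

```python
# Binary connection strength, scalar permanence, threshold from the slide.
CONNECTED_THRESHOLD = 0.2

class Synapse:
    def __init__(self, permanence=0.0):
        self.permanence = permanence

    @property
    def connected(self):
        # Connection strength is binary: connected or not.
        return self.permanence >= CONNECTED_THRESHOLD

    def reinforce(self, delta=0.1):
        # If the pattern repeats, nudge permanence up (online, no batches)...
        self.permanence = min(1.0, self.permanence + delta)

    def forget(self, delta=0.1):
        # ...otherwise decay it; at 0 the synapse has effectively disappeared.
        self.permanence = max(0.0, self.permanence - delta)

syn = Synapse(0.15)          # a potential synapse, not yet connected
syn.reinforce()              # pattern repeated: permanence crosses 0.2
was_connected = syn.connected
for _ in range(10):
    syn.forget()             # pattern never recurs: permanence decays to 0
```

Because strength is binary, learning never tunes weights; it only moves synapses across the connected threshold, which is how "growth of new synapses" looks in the model.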

SLIDE 27

"Cortical Learning Algorithm" (CLA)

Not your typical computer memory! A building block for:
  • neocortex
  • machine intelligence
SLIDE 28

Cortical Region

[Figure: a cortical region (~2 mm across) with several layers; each layer is a sequence memory (CLA) handling feedforward inference, motor output, or feedback/attention]

Evidence suggests each layer is implementing a CLA variant.
SLIDE 29

What Is Next? Three Current Directions

1) Commercialization
  • GROK: Predictive analytics using CLA
  • Commercial value accelerates interest and investment

2) Open Source Project
  • NuPIC: CLA open source software and community
  • Improve algorithms, develop applications

3) Custom CLA Hardware
  • Needed for scaling research and commercial applications
  • IBM, Seagate, Sandia Labs, DARPA
SLIDE 30

GROK: Predictive Analytics Using CLA

[Figure: data records (Field 1, Field 2, Field 3, … Field N) of numbers, categories, text, and date/time flow through encoders into sequence memory, producing SDRs, predictions, and anomalies]

Encoders: convert each native data type to an SDR

Sequence Memory: 2,000 cortical columns, 60,000 neurons
  • variable order
  • online learning
  • learns spatial/temporal patterns

Outputs: predictions and anomalies, which drive actions
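The encoder stage is the easiest piece to make concrete. A hedged sketch of a scalar encoder (all parameters are illustrative, not Grok's actual encoder): a number maps to a contiguous run of active bits whose position tracks the value, so nearby values share bits, giving the SDR its semantic overlap.

```python
def scalar_encoder(value, min_val=0.0, max_val=100.0, size=400, num_active=21):
    """Encode a number as an SDR: a sliding window of active bits whose
    position tracks the value, so nearby values share bits."""
    value = max(min_val, min(max_val, value))      # clip to the encoder range
    num_buckets = size - num_active + 1
    bucket = int((value - min_val) / (max_val - min_val) * (num_buckets - 1))
    return set(range(bucket, bucket + num_active))

a = scalar_encoder(50.0)
b = scalar_encoder(52.0)   # close to 50 -> many shared bits
c = scalar_encoder(90.0)   # far from 50 -> no shared bits
```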

SLIDE 31

GROK example: Factory Energy Usage

SLIDE 32

Customer need: at midnight, make 24 hourly predictions

SLIDE 33

GROK Predictions and Actuals

SLIDE 34

GROK example: Predicting Server Demand

[Chart: server demand over time, actual vs. predicted]

Grok used to predict server demand; approximately 15% reduction in AWS cost
SLIDE 35

GROK example: Detecting Anomalous Behavior

Grok builds a model of the data and detects changes in predictability.

[Chart: gear bearing temperature and the Grok anomaly score]

GROK going to market for anomaly detection in IT, 2014
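A change in predictability can be scored very simply: measure what fraction of the current input the model failed to predict. This is a sketch of that idea, not Grok's actual scoring code.

```python
def anomaly_score(predicted, actual):
    """Fraction of the currently active bits that were not predicted.
    0.0 = fully expected input, 1.0 = completely surprising input."""
    if not actual:
        return 0.0
    return len(actual - predicted) / len(actual)

predicted = {1, 2, 3, 4, 5, 6, 7, 8}
normal = {1, 2, 3, 4, 5, 6, 7, 8}        # input behaving as modeled
drifting = {1, 2, 3, 4, 21, 22, 23, 24}  # half the pattern is unexpected

score_normal = anomaly_score(predicted, normal)      # 0.0
score_drifting = anomaly_score(predicted, drifting)  # 0.5
```

A sustained rise in this score, rather than any single threshold on the raw data, is what flags the bearing-temperature change in the chart above.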

SLIDE 36

2) Open Source Project

NuPIC: www.Numenta.org
  • CLA source code (single tree), GPLv3
  • Papers, videos, docs

Community
  • 200+ mailing list subscribers, growing
  • 20+ messages per day
  • Full-time manager: Matt Taylor

What you can do
  • Get educated
  • New applications for CLA
  • Extend CLA: robotics, language, vision
  • Tools, documentation

2nd Hackathon: November 2-3 in San Francisco
  • Natural language processing using SDRs
  • Sensory-motor integration discussion
  • 2014 hackathon in Ireland?
SLIDE 37

3) Custom CLA Hardware

HW companies looking "Beyond von Neumann"
  • Distributed memory
  • Fault tolerant
  • Hierarchical

New HW architectures needed
  • Speed (research)
  • Cost, power, embedded (commercial)

IBM
  • Almaden Research Labs
  • Joint research agreement

DARPA
  • New program called "Cortical Processor"
  • HTM (Hierarchical Temporal Memory)
  • CLA is prototype primitive

Seagate, Sandia Labs
SLIDE 38

Future of Machine Intelligence

SLIDE 39

Future of Machine Intelligence

Definite

  • Faster, Bigger
  • Super senses
  • Fluid robotics
  • Distributed hierarchy

Maybe

  • Humanoid robots
  • Computer/brain interfaces for all

Not

  • Uploaded brains
  • Evil robots
SLIDE 40

Why Create Intelligent Machines?

  • Live better
  • Learn more

Thank You