Neural Programs: Towards Adaptive Control in Cyber-Physical Systems (PowerPoint presentation)

SLIDE 1

Outline: Motivation · Background · Neural Code · C.elegans tap withdrawal simulation · Parallel parking · Conclusion

Neural Programs: Towards Adaptive Control in Cyber-Physical Systems

Konstantin Selyunin1, Denise Ratasich1, Ezio Bartocci1, M.A. Islam2, Scott A. Smolka2, Radu Grosu1

1Vienna University of Technology

Cyber-Physical Systems Group E182-1, Institute of Computer Engineering

2Stony Brook University, NY, USA

SLIDE 2

Motivation

  • Programs are not robust
  • Case studies: neural circuit simulation & parallel parking
  • Parameter synthesis: plateaus are bad for optimization

[Figure: potentials (V) of neurons AVA and AVB over time (s)]
SLIDE 3

Motivation II

This presentation:

  • How to incorporate “smooth” decisions in CPS to make systems more robust, using neural circuits and GBNs (Gaussian Bayesian Networks)
  • A technique to learn the parameters of a model
  • Application to two case studies and the relation between them
SLIDE 6

Background

  • Bayesian Networks
    • express probabilistic dependencies between variables
    • are represented as DAGs (directed acyclic graphs)
    • allow compact representation using CPDs (conditional probability distributions)
  • Gaussian Distributions
    • univariate and multivariate Gaussian distributions
  • Step function vs. sigmoid

[Figure: example Bayesian network DAG with nodes D, I, G, L, S]

SLIDE 7

Background

  • Passing random variables through conditions

if( q > 0.15 ) { q; }    q ~ N(0, 0.5)
if( p <= 3.0 ) { p; }    p ~ N(4, 0.7)
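The effect of passing a Gaussian through a hard condition can be seen with a quick Monte Carlo sketch (the function name is mine, and treating the second parameter of N(·,·) as the variance is an assumption):

```python
import random

def pass_through_if(mu, var, threshold, n=10_000, seed=0):
    """Sample x ~ N(mu, var) and keep only the samples for which
    `x > threshold` holds, mimicking what a hard if does to a
    random variable's distribution."""
    rng = random.Random(seed)
    sigma = var ** 0.5
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return samples, [x for x in samples if x > threshold]

samples, kept = pass_through_if(0.0, 0.5, 0.15)
# The survivors form a truncated Gaussian: everything below the
# threshold has been cut away.
assert min(kept) > 0.15
```

The cut distribution is exactly the problem the following slides address.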

SLIDE 10

Towards the nif statement

Our setting:

  • Program operates on random variables (RVs)
  • RVs are mutually dependent Gaussians

Questions:

  • How to incorporate the uncertainty of making a decision and make decisions “smooth”?
  • How to avoid cutting distributions when passing a variable through a condition or a loop?

We propose to use nifs instead of traditional if statements.

SLIDE 11

Neural if

The nif statement: nif( x # y, σ² )

  • # is an inequality relation from {≥, >, <, ≤}
  • σ² is a variance (it represents our confidence in making a decision)

Example:

nif( x >= a, σ² ) S1 else S2
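A minimal sketch of the nif semantics, assuming the probit interpretation described later in the talk (the then-branch is taken with probability Φ((x − a)/σ); all function names are mine):

```python
import math
import random

def probit(z):
    """Standard normal CDF Phi(z), computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def nif(x, a, var, s1, s2, rng=None):
    """Sketch of `nif( x >= a, var ) S1 else S2`: instead of branching
    deterministically, take the then-branch with probability
    Phi((x - a) / sigma), so decisions near the boundary stay smooth."""
    rng = rng or random.Random(0)
    p_then = probit((x - a) / math.sqrt(var))
    return s1() if rng.random() < p_then else s2()

# Far from the boundary the behavior matches a plain if:
assert nif(10.0, 0.0, 0.01, lambda: "S1", lambda: "S2") == "S1"
assert nif(-10.0, 0.0, 0.01, lambda: "S1", lambda: "S2") == "S2"
```

Near x = a both branches have roughly equal probability, which is what removes the hard cut in the distribution.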

SLIDE 14

nif( x # a, σ² ): Evaluation

  • 1. Compute the difference between x and a:

    diff(x, a) =  x − a − ε  if # is >,
                  x − a      if # is ≥,
                  a − x − ε  if # is <,
                  a − x      if # is ≤.

  • 2. Compute quantiles of the probability density function

    [Figure: PDFs of N(0, 0.4) and N(0, 4) with quantiles q1 and q2 marked around diff(x, a)]

  • 3. Check if a random sample is within the interval
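The three evaluation steps can be sketched as follows (collapsing steps 2-3 into one equivalent draw ε ~ N(0, σ²) is my reading of the slide; testing diff > ε takes the then-branch with probability Φ(diff/σ)):

```python
import math
import random

EPS = 1e-9  # strict inequalities are handled via a small offset

def diff(x, a, rel):
    """Step 1: the signed difference for each of the four relations."""
    return {">": x - a - EPS, ">=": x - a,
            "<": a - x - EPS, "<=": a - x}[rel]

def nif_decide(x, a, rel, var, rng):
    """Steps 2-3 in one sampling form (an assumption of mine): draw
    eps ~ N(0, var) and take the then-branch iff diff > eps, which
    happens with probability Phi(diff / sigma)."""
    return diff(x, a, rel) > rng.gauss(0.0, math.sqrt(var))

rng = random.Random(0)
hits = sum(nif_decide(0.15, 0.15, ">=", 0.1, rng) for _ in range(10_000))
# diff = 0 at the boundary, so the branch is taken about half the time.
assert 4500 < hits < 5500
```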
SLIDE 15

nif: Example

if( x > 0.15 ) { x; }        nif( x > 0.15, 0.1 ) { x; }

x ~ N(0, 0.1)

[Figure: histograms of x (number of samples vs. x) before filtering, after the hard if (sharp cutoff at 0.15), and after the nif (smooth cutoff)]
SLIDE 17

Limit case σ² → 0

  • For the case with “no uncertainty” (σ² → 0) the PDF is expressed as the Dirac delta function:

    δ(x) = +∞ if x = 0, else 0;    ∫−∞..+∞ δ(x) dx = 1

  • σ² → 0: the nif statement is equivalent to the if statement

nif( x >= a, σ² ) S1 else S2
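The limit can be checked numerically: as σ² shrinks, the then-branch probability Φ(diff/σ) converges to the Heaviside step, so nif degenerates to a plain if (helper names are mine):

```python
import math

def probit(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_then(d, var):
    """Probability that nif takes the then-branch, for signed difference d."""
    return probit(d / math.sqrt(var))

# With a large variance the decision is genuinely uncertain ...
assert 0.5 < p_then(0.3, 1.0) < 1.0
# ... while a vanishing variance recovers the deterministic branch:
assert p_then(0.3, 1e-6) > 0.999999
assert p_then(-0.3, 1e-6) < 1e-6
```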

SLIDE 19

nwhile

Extension of the traditional while statement that incorporates uncertainty:

nwhile( x # a, σ² ){ P1 }

Evaluation:

  • 1. Compute diff(x, a) and obtain the quantiles q1 and q2
  • 2. Check if a random sample is within the interval
  • 3. If the sample is within the interval, execute P1 and go to 1; otherwise exit
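A sketch of the nwhile evaluation under the same probit reading used for nif (names and the max_iter guard are mine):

```python
import math
import random

def probit(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def nwhile(cond_diff, var, body, rng=None, max_iter=1_000_000):
    """Sketch of `nwhile( x # a, var ){ body }`: at each iteration the
    loop continues with probability Phi(diff / sigma), where diff is the
    current signed difference of the condition."""
    rng = rng or random.Random(0)
    sigma = math.sqrt(var)
    iterations = 0
    while iterations < max_iter and rng.random() < probit(cond_diff() / sigma):
        body()
        iterations += 1
    return iterations

# With a tiny variance nwhile behaves like a plain while loop:
state = {"i": 0}
n = nwhile(lambda: 9.5 - state["i"],              # continue while i <= 9.5
           1e-4,
           lambda: state.update(i=state["i"] + 1))
assert n == 10 and state["i"] == 10
```

With a larger variance the loop may randomly exit early or overshoot, which is exactly the "smooth" stopping behavior used for the parking controller later.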
SLIDE 20

Case study 1: C.elegans

C.elegans

  • a 1 mm roundworm
  • each adult individual has exactly 302 neurons
  • extensively studied in evolutionary biology and neurobiology

Tap withdrawal response

  • apply a stimulus to the mechanosensory (input) neurons
  • observe the behavior: forward / backward movement

Goal

  • express the behavior using a neural program
SLIDE 23

Neural connections 101

[Figure: schematics of the two connection types between neurons V(i) and V(j): a gap junction with weight wgap and conductance ggap, and a chemical synapse with weight wsyn, conductance gsyn(V(i)), and reversal potential E(ij)]

Synaptic connection

  • chemical nature
  • either active or not
  • synaptic weight wsyn
  • use nif to model each synaptic connection

Gap junction connection

  • instantaneous resistive connection
  • linear combination of inputs
  • gap junction weight wgap
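The two connection types can be sketched as plain functions (anticipating equations (2)-(4) of the "Tap Withdrawal Simulations" slide; the sign conventions and argument names here are my assumptions):

```python
import math

def gap_current(w_gap, g_gap, v_j, v_i):
    """Gap junction: instantaneous resistive coupling, linear in the
    difference of the two membrane potentials."""
    return w_gap * g_gap * (v_j - v_i)

def syn_conductance(v_pre, g_bar, v_eq, v_range, k):
    """Sigmoidal synaptic activation: smoothly switches the synapse
    between inactive (~0) and active (~g_bar) as v_pre crosses v_eq."""
    return g_bar / (1.0 + math.exp(k * (v_pre - v_eq) / v_range))

def syn_current(w_syn, g_syn, e_rev, v):
    """Chemical synapse current, driven toward the reversal potential."""
    return w_syn * g_syn * (e_rev - v)

# The gap junction is linear in the potential difference:
assert abs(gap_current(2.0, 3.0, 0.5, 0.2) - 1.8) < 1e-12
# The synaptic conductance stays strictly between 0 and g_bar:
assert 0.0 < syn_conductance(-0.03, 1.0, -0.02, 0.01, 1.0) < 1.0
```

The sigmoidal conductance is the "smooth decision" of the circuit; the neural program below replaces it with nested nif gates.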
SLIDE 24

C.elegans Tap withdrawal circuit

[Figure: tap withdrawal circuit, first build: sensory input neurons AVM and PLM feeding the output behaviors REV and FWD]

SLIDE 25

C.elegans Tap withdrawal circuit

[Figure: circuit extended with interneurons AVA, AVB, AVD, PVC and synapse counts (e.g. 56 into REV, 28 into FWD)]

Why the synapses and neurons AVD, PVC?

SLIDE 26

C.elegans Tap withdrawal circuit

[Figure: circuit with further synapse counts between AVM, PLM, AVD, PVC, AVA, AVB]

SLIDE 27

C.elegans Tap withdrawal circuit

[Figure: circuit with the full synapse and gap junction counts]

Why both synapses and gap junctions between PVC-AVA?

SLIDE 28

C.elegans Tap withdrawal circuit

[Figure: complete tap withdrawal circuit with neurons PLM, PVD, ALM, AVM, DVA, AVA, AVB, AVD, PVC, outputs REV and FWD, and connection weights]

SLIDE 29

Tap Withdrawal Simulations as Neural Program

Biological Model:

dV(i)/dt = (VLeak − V(i)) / (R(i)m C(i)m) + ( Σj=1..N (I(ij)syn + I(ij)gap) + I(i)stim ) / C(i)m    (1)

I(ij)gap = w(ij)gap g(ij)gap (V(j) − V(i))    (2)

I(ij)syn = w(ij)syn g(ij)syn (E(ij) − V(j))    (3)

g(ij)syn(V(j)) = ḡsyn / ( 1 + exp( K (V(j) − Veqj) / Vrange ) )    (4)

Neural Program:

1: nwhile( t ≤ tdur, 0 )
2:   compute I(ij)gap using equation (2)
3:   nwhile( k ≤ w(ij)syn, 0 )
4:     nif( V(j) ≤ Veq, K/Vrange )
5:       g(ij)syn ← g(ij)syn + ḡsyn
6:   compute I(ij)syn using equation (3)
7:   compute dV(i) using equation (1)
8:   V(i) ← V(i) + dV(i)
9:   t ← t + dt
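A toy Python rendition of the loop body above for a single neuron pair (the inner nwhile over k is unrolled as a for loop; all parameter values below are placeholders of mine, not the paper's constants):

```python
import math
import random

def probit(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def step(v_i, v_j, p, rng):
    """One pass through the loop body for one pre/post neuron pair."""
    i_gap = p["w_gap"] * p["g_gap"] * (v_j - v_i)            # line 2, eq. (2)
    g_syn = 0.0
    sigma = math.sqrt(p["K"] / p["v_range"])                 # nif variance K/Vrange
    for _ in range(p["w_syn"]):                              # lines 3-5
        if rng.random() < probit((p["v_eq"] - v_j) / sigma): # nif( V(j) <= Veq, . )
            g_syn += p["g_bar"]
    i_syn = g_syn * (p["e_rev"] - v_j)                       # line 6, eq. (3)
    dv = ((p["v_leak"] - v_i) / (p["r_m"] * p["c_m"])
          + (i_syn + i_gap + p["i_stim"]) / p["c_m"]) * p["dt"]  # line 7, eq. (1)
    return v_i + dv, g_syn                                   # lines 8-9

p = {"w_gap": 1, "g_gap": 0.1, "w_syn": 3, "g_bar": 0.5, "K": 0.01,
     "v_range": 1.0, "v_eq": 0.0, "e_rev": 0.0, "v_leak": -0.02,
     "r_m": 1.0, "c_m": 1.0, "i_stim": 0.0, "dt": 0.01}
v_new, g_syn = step(-0.02, -1.0, p, random.Random(0))
# v_j is far below v_eq, so every nif round fires: g_syn = w_syn * g_bar.
assert abs(g_syn - 1.5) < 1e-12
```

Repeating the nif w_syn times makes the expected conductance track the sigmoid of equation (4), which is why the neural program reproduces the ODE on average.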

SLIDE 30

C.elegans Tap withdrawal simulations

[Figure: membrane-potential traces from the ODE model, from one execution of the neural program, and from the average over executions of the neural program]

SLIDE 31

Case study 2: Parallel parking

Given:

  • P3AT-SH Pioneer rover
  • Carma Devkit
  • ROS on Ubuntu 12.04
  • Pi with Gertboard

Goal: Write a parallel parking controller as a neural program

[Figure: photo of the rover, annotated: Raspberry Pi with accelerometer and gyroscope, Carma board running Ubuntu & ROS, sonars, bumpers]

SLIDE 33

Program skeleton

nwhile( currentDistance < targetLocation1, sigma1 ){
    moving();
    currentDistance = getPose();
}
updateTargetLocations();
nwhile( currentAngle < targetLocation2, sigma2 ){
    turning();
    currentAngle = getAngle();
}
updateTargetLocations();
nwhile( currentDistance < targetLocation3, sigma3 ){
    moving();
    currentDistance = getPose();
}
...

Question: how to find the unknown parameters and how uncertain are we about each of them?

SLIDE 35

Learning

Parking example:

  • Sequence of moves and turns
  • Each action depends on the previous one
  • The dependence is probabilistic
  • RVs are normally distributed (assumption)

Gaussian Bayesian Network:

[Figure: chain-structured GBN over l1, α1, l2, α2, l3, α3, l4 with edge weights b21, b32, b43, b54, b65, b76]
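Sampling a trace from such a chain can be sketched as follows (the linear-Gaussian conditional form is the standard GBN parameterization; the talk's exact convention may differ):

```python
import random

def sample_chain_gbn(means, sigmas, weights, rng):
    """Draw one trace (l1, a1, l2, ...) from a chain-shaped Gaussian
    Bayesian network: the root is N(mu1, s1^2) and each later node is
    N(mu_i + b_{i,i-1} * (x_{i-1} - mu_{i-1}), s_i^2)."""
    x = [rng.gauss(means[0], sigmas[0])]
    for i in range(1, len(means)):
        cond_mean = means[i] + weights[i - 1] * (x[-1] - means[i - 1])
        x.append(rng.gauss(cond_mean, sigmas[i]))
    return x

# With all standard deviations at zero the chain is deterministic:
trace = sample_chain_gbn([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [0.5, 0.5],
                         random.Random(0))
assert trace == [1.0, 2.0, 3.0]
```

Each draw corresponds to one resampled sequence of moves and turns, so deviations in one action propagate to the next, which is the dependence the parking controller exploits.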

SLIDE 37

Learning: Good traces

[Figure: chain GBN over l1, α1, l2, α2, l3, α3, l4 (weights b21 … b76) and a recorded good parking trajectory]

Task: learn the parameters of the GBN from the good traces

  • 1. Convert the GBN to the MGD (multivariate Gaussian distribution) [HG95]
  • 2. Update the precision matrix T of the MGD [Nea03]
  • 3. Extract the σ²s and the bijs from T
SLIDE 39

Learning: Update step

  • Iterative learning procedure
  • Incrementally update the mean µ and the covariance matrix β of the prior
  • Mean update:

    x̄ = (1/M) Σh=1..M x(h),    µ* = (v µ + M x̄) / (v + M)

  • Covariance matrix update:

    s = Σh=1..M (x(h) − x̄)(x(h) − x̄)ᵀ

    β* = β + s + (v M / (v + M)) (x̄ − µ)(x̄ − µ)ᵀ,    (T*)⁻¹ ∼ β*
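In one dimension the two updates reduce to the following sketch (v plays the role of an equivalent prior sample size; the matrix case replaces squares by outer products):

```python
def update_mean(mu, v, xs):
    """Mean update from the slide: mu* = (v*mu + M*xbar) / (v + M)."""
    m = len(xs)
    xbar = sum(xs) / m
    return (v * mu + m * xbar) / (v + m)

def update_beta(beta, v, mu, xs):
    """Covariance update in one dimension:
    beta* = beta + s + v*M/(v + M) * (xbar - mu)^2,
    where s is the scatter of the batch around its own mean."""
    m = len(xs)
    xbar = sum(xs) / m
    s = sum((x - xbar) ** 2 for x in xs)
    return beta + s + (v * m / (v + m)) * (xbar - mu) ** 2

# Three observations pull the prior mean mu = 0 toward their average 2:
assert update_mean(0.0, 1.0, [1.0, 2.0, 3.0]) == 1.5
assert update_beta(1.0, 1.0, 0.0, [1.0, 2.0, 3.0]) == 6.0
```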

SLIDE 40

Learning: Data

“Good trajectories”:

  • *.bag files (collections of messages that are broadcast in ROS)
  • extract coordinates in the 2-D space and the angle
  • find important points
  • obtain samples in the form: l1, α1, l2, α2, l3, α3, l4

Sample excerpt (three columns of the recorded values):

  …       …       …
  1.204   0.911   1.221
  1.207   0.920   1.221
  1.209   0.927   1.221
  1.211   0.930   1.221
  1.211   0.931   1.221
  1.211   0.931   1.221
  1.211   0.931   1.221
  1.211   0.931   1.215
  …       …       …

SLIDE 44

Parking system architecture

[Figure: system architecture. The Engine executes the neural program (nwhile(.) moving(); nwhile(.) turning(); …), drawing resampled motion commands from the GBN (distributions); it sends velocity commands (vl, vr, a, ω) to the Rover Interface, while Sensor Fusion feeds back the actual position (x, y, θ)]

SLIDE 45

Conclusion and Future Work

Recap:

  • use of the smooth Probit distribution in conditional and loop statements
  • use of a Gaussian Bayesian Network to capture dependencies between Probit distributions
  • Case studies: robust parking controller and tap withdrawal simulation

Future work:

  • Apply these techniques to monitoring
SLIDE 46

References I

[HG95] David Heckerman and Dan Geiger. Learning Bayesian networks: A unification for discrete and Gaussian domains. In UAI, pages 274–284, 1995.

[Nea03] Richard E. Neapolitan. Learning Bayesian Networks. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2003.

SLIDE 48

Learning Parameters in a Neural Program

[Figure: workflow. Learning phase: a set of traces is fed to the learning procedure, which produces learned parameters for the neural program with uncertain parameters. Execution phase: commands are sampled from the GBN with the learned parameters]

SLIDE 49

Integration into ROS

  • Rover Interface
  • Sensor Fusion
  • GBN and Engine
SLIDE 50

Denotational Semantics

E ::= xi | c | bop(E1, E2) | uop(E1)
S ::= skip | xi := E | S1; S2
    | nif( xi # c, σ² ) S1 else S2
    | nwhile( xi # c, σ² ){ S1 }
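The abstract syntax can be transcribed directly into, e.g., Python dataclasses (an illustration of mine; the slide gives only the grammar):

```python
from dataclasses import dataclass

@dataclass
class Skip:
    pass

@dataclass
class Assign:
    var: str
    expr: object          # E ::= xi | c | bop(E1, E2) | uop(E1)

@dataclass
class Seq:
    first: object
    second: object

@dataclass
class Nif:
    var: str              # xi
    rel: str              # one of >=, >, <, <=
    const: float          # c
    variance: float       # sigma^2
    then_s: object
    else_s: object

@dataclass
class Nwhile:
    var: str
    rel: str
    const: float
    variance: float
    body: object

# nif( x >= 0.15, 0.1 ) { x := x } else skip, as a syntax tree:
prog = Nif("x", ">=", 0.15, 0.1, Assign("x", "x"), Skip())
assert prog.variance == 0.1
```

An interpreter over this tree would dispatch on the node type exactly as the denotational equations on the next slide do.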

SLIDE 51

Denotational Semantics

⟦skip⟧(x) = x
⟦xi := E⟧(x) = x[E(x) → xi]
⟦S1; S2⟧(x) = ⟦S2⟧(⟦S1⟧(x))
⟦nif( xi # c, σ² ) S1 else S2⟧(x) = check(xi, c, σ², #)(x) · ⟦S1⟧(x) + ¬check(xi, c, σ², #)(x) · ⟦S2⟧(x)
⟦nwhile( xi # c, σ² ){ S1 }⟧(x) = ¬check(xi, c, σ², #)(x) · x + check(xi, c, σ², #)(x) · ⟦nwhile( xi # c, σ² ){ S1 }⟧(⟦S1⟧(x))

SLIDE 53

Learning: Conversion step

  • 1. Define an ordering starting from the initial node:

    t1 = 1/σ1²,    bi = (bi1, …, bi,i−1)ᵀ,    µ = (µ1, …, µn)ᵀ

  • 2. Use the iterative algorithm from Heckerman and Geiger, 1995 [HG95]:

    T1 = (t1);
    for( i = 2; i ≤ n; i++ )
        Ti = [ Ti−1 + ti bi biᵀ    −ti bi ]
             [ −ti biᵀ              ti    ];
    T = Tn;

[Figure: chain GBN l1, α1, l2, α2, l3, α3, l4 with weights b21 … b76]

SLIDE 54

Learning: Conversion step

  • l1, α1:

    T1 = (t1) = ( 1/σ1² )

    T2 = [ 1/σ1² + b21²/σ2²    −b21/σ2² ]
         [ −b21/σ2²             1/σ2²   ]

    b2 = ( b21 )

[Figure: chain GBN l1, α1, l2, α2, l3, α3, l4 with weights b21 … b76]

SLIDE 55

Learning: Conversion step

  • l2:

    T3 = [ 1/σ1² + b21²/σ2²    −b21/σ2²              0        ]
         [ −b21/σ2²             1/σ2² + b32²/σ3²    −b32/σ3² ]
         [ 0                   −b32/σ3²              1/σ3²    ]

    b3 = ( 0, b32 )ᵀ

[Figure: chain GBN l1, α1, l2, α2, l3, α3, l4 with weights b21 … b76]

SLIDE 56

Learning: Conversion step

  • α2:

    T4 = [ 1/σ1² + b21²/σ2²    −b21/σ2²              0                     0        ]
         [ −b21/σ2²             1/σ2² + b32²/σ3²    −b32/σ3²              0        ]
         [ 0                   −b32/σ3²              1/σ3² + b43²/σ4²    −b43/σ4² ]
         [ 0                    0                   −b43/σ4²              1/σ4²    ]

    b4 = ( 0, 0, b43 )ᵀ

[Figure: chain GBN l1, α1, l2, α2, l3, α3, l4 with weights b21 … b76]

SLIDE 57

Learning: Conversion step

  • l3:

    T5 = [ 1/σ1² + b21²/σ2²    −b21/σ2²              0                     0                     0        ]
         [ −b21/σ2²             1/σ2² + b32²/σ3²    −b32/σ3²              0                     0        ]
         [ 0                   −b32/σ3²              1/σ3² + b43²/σ4²    −b43/σ4²              0        ]
         [ 0                    0                   −b43/σ4²              1/σ4² + b54²/σ5²    −b54/σ5² ]
         [ 0                    0                    0                   −b54/σ5²              1/σ5²    ]

    b5 = ( 0, 0, 0, b54 )ᵀ

We can generalize T for an arbitrary number of moves.

[Figure: chain GBN l1, α1, l2, α2, l3, α3, l4 with weights b21 … b76]
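The generalization can be sketched directly from the pattern above: in a chain, node i has the single parent i−1, so bi has one nonzero entry and each step of the iterative algorithm touches only a 2×2 corner of T:

```python
def chain_precision(variances, weights):
    """Build the precision matrix T of a chain GBN via the iterative
    scheme from the conversion-step slides, specialized to a chain:
        T_1 = (1/s1^2)
        T_i = [[T_{i-1} + t_i b_i b_i^T, -t_i b_i], [-t_i b_i^T, t_i]]
    with t_i = 1/s_i^2 and b_i's only nonzero entry weights[i-1]."""
    n = len(variances)
    t = [[0.0] * n for _ in range(n)]
    t[0][0] = 1.0 / variances[0]
    for i in range(1, n):
        ti = 1.0 / variances[i]
        b = weights[i - 1]
        t[i - 1][i - 1] += ti * b * b   # parent diagonal picks up t_i b^2
        t[i - 1][i] -= ti * b           # off-diagonal block -t_i b_i
        t[i][i - 1] -= ti * b
        t[i][i] = ti
    return t

# Reproduces T2 from the 'l1, alpha1' slide:
t2 = chain_precision([1.0, 4.0], [2.0])
assert t2 == [[1.0 + 4.0 / 4.0, -2.0 / 4.0], [-2.0 / 4.0, 1.0 / 4.0]]
```

The resulting T is tridiagonal, matching T3, T4, and T5 above for any number of moves.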