

  1. Invertebrate Learning. Computational Models of Neural Systems, Lecture 3.3. David S. Touretzky, October 2007

  2. Eric Kandel, Nobel Prize in Physiology or Medicine, 2000

  3. Aplysia californica

  4. Aplysia

  5. Mantle, Siphon, and Gill (figure: siphon and gill withdrawal)

  6. Abdominal Ganglion

  7. Review of Learning Terms
     ● Non-associative learning
       – habituation: response to a repeated stimulus gradually decreases
       – dishabituation: response restored to a more normal level
       – sensitization: elevated response to a stimulus
     ● Associative learning
       – classical conditioning: train: CS + UCS → UCR; test: CS → CR
       – instrumental (operant) conditioning: behavior → reinforcement

  8. Habituation in Aplysia
     ● A tactile stimulus to the siphon causes brief withdrawal of the gill and siphon.
     ● With repeated exposure, the withdrawal response is greatly reduced.
     ● The effect can last from minutes to weeks, depending on the stimulus protocol.
     ● Short-term mechanism: decreased transmitter release at the synapse from
       sensory to motor neuron, due to decreased Ca2+ influx, due to inactivation
       of presynaptic calcium channels (a toy sketch follows below).
     ● Long-term mechanism: decrease in the number and size of active zones in
       synapses.
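The short-term mechanism above can be caricatured numerically. The following Python sketch is not from the lecture; the depression fraction, recovery time constant, and inter-stimulus interval are illustrative assumptions, chosen only to show how repeated stimulation drives the response down while rest lets it creep back.

    import math

    def habituate(n_stimuli, depression=0.3, recovery_tau=120.0, interval=10.0):
        """Relative gill-withdrawal response to each of n_stimuli tactile stimuli.

        Each stimulus inactivates presynaptic Ca2+ channels, reducing transmitter
        release; the channels partially recover during the inter-stimulus interval.
        """
        release = 1.0                      # relative transmitter release (naive = 1.0)
        responses = []
        for _ in range(n_stimuli):
            responses.append(release)      # response strength tracks release
            release *= (1.0 - depression)  # less Ca2+ influx, less release
            release += (1.0 - release) * (1.0 - math.exp(-interval / recovery_tau))
        return responses

    print([round(r, 2) for r in habituate(10)])   # steadily shrinking responses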

  9. Sensitization
     ● The animal initially responds weakly to a weakly aversive or neutral CS.
     ● A noxious stimulus (strong shock to the neck or tail) enhances defensive
       responses to subsequent weak or neutral stimuli.
     ● Defensive responses include the siphon- and gill-withdrawal reflexes,
       inking, and walking.
     ● Dishabituation has been shown to be a special case of sensitization.
     ● Effects last for minutes to weeks.
     ● Cause: increase in sensory neuron transmitter release onto the motor neurons.
     ● Mechanism: presynaptic facilitation by a facilitator interneuron.

  10. Inking Behavior

  11. Presynaptic Facilitation Mechanism
      ● Tail stimulation activates a group of facilitator interneurons.
      ● Transmitter released by them (may be serotonin or a related amine)
        activates an adenylate cyclase in the presynaptic terminals and elevates
        free cAMP.
      ● Free cAMP activates a cAMP-dependent protein kinase. The protein kinase
        closes a particular type of K+ channel in the presynaptic terminal.
      ● Reduction in K+ current leads to a broadening of action potentials,
        allowing more Ca2+ to enter the terminal.
      ● Increased Ca2+ leads to a higher probability of transmitter release onto
        the motor neuron (caricatured in the sketch below).
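A minimal numerical caricature of this cascade (my own, not from the lecture; the functional forms and constants are arbitrary and serve only to make each arrow in the chain explicit):

    def facilitated_release(serotonin, baseline_release=0.2):
        """Chain: serotonin -> cAMP -> protein kinase -> K+ channel closure ->
        spike broadening -> extra Ca2+ influx -> higher release probability."""
        camp = serotonin                   # adenylate cyclase raises free cAMP
        pka = camp                         # cAMP-dependent protein kinase activity
        k_current = 1.0 / (1.0 + pka)      # the kinase closes K+ channels
        spike_width = 1.0 / k_current      # less K+ current, broader action potential
        ca_influx = spike_width            # broader spike, more Ca2+ enters terminal
        return min(1.0, baseline_release * ca_influx)

    print(facilitated_release(serotonin=0.0))   # 0.2  baseline release
    print(facilitated_release(serotonin=2.0))   # 0.6  facilitated after tail shock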

  12. Classical Conditioning
      ● CS1: weak stimulus to the siphon; produces a feeble response.
      ● UCS: strong shock to the tail; produces a strong response.
      ● Training: CS1, then UCS. Repeat for 15 trials.
      ● Conditioning result:
        – CS1 → strong withdrawal response
        – CS2 (weak stimulus to the mantle) → feeble response

  13. Classical Conditioning
      ● Training: CS1 and UCS, intermixed with CS2 and no UCS.
      ● Stimulus specificity result:
        – CS1 → strong response
        – CS2 → less strong response
      ● Mechanism: presynaptic facilitation is increased at a synapse that has
        recently been activated. Elevated presynaptic Ca2+ enhances the effect of
        serotonin (sketched below).
      ● Generalization result: the strength of the CS2 response depends on how
        similar CS2 is to CS1.
        – Suggests shared representations.
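The "recently activated synapse" idea can be sketched with a decaying presynaptic Ca2+ trace per synapse. This toy training loop is my own simplification (the trace decay, gain, and alternating trial structure are assumptions), but it reproduces the specificity pattern above: CS1 ends up strong and CS2 much weaker.

    def train(trial_sequence, reinforced, trace_decay=0.2, gain=0.3):
        """trial_sequence: which CS fires on each trial, e.g. ['CS1', 'CS2', ...].
        reinforced: set of CS names whose trials are followed by the tail shock."""
        weight = {'CS1': 0.1, 'CS2': 0.1}   # CS-to-motor-neuron synaptic strengths
        trace = {'CS1': 0.0, 'CS2': 0.0}    # residual presynaptic Ca2+ per synapse
        for cs in trial_sequence:
            trace = {k: v * trace_decay for k, v in trace.items()}
            trace[cs] = 1.0                 # this CS's terminal was just active
            if cs in reinforced:            # UCS drives the facilitator interneuron;
                for k in weight:            # facilitation scales with each Ca2+ trace
                    weight[k] += gain * trace[k]
        return weight

    # CS1 paired with shock, CS2 interleaved without shock, 15 trials of each:
    print(train(['CS1', 'CS2'] * 15, reinforced={'CS1'}))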

  14. Stimulus Specificity and Generalization (figure: shared representations)

  15. Cellular Mechanisms of Conditioning

  16. Classical Conditioning Phenomena
      ● Basic result: CS1 + UCS → UCR, then CS1 → CR
        – specificity: CS2 → small CR (if similar to CS1) or no CR
        – extinction: CS (repeated presentations with no UCS) → no response
        – recovery: a strong stimulus reinstates CS → CR
      (The dashed line in the figure shows the normal slow decay of learning in
      the absence of extinction trials.)

  17. Classical Conditioning Phenomena
      ● Second-order conditioning:
        – First: CS1 + UCS → UCR
        – Second: CS2 + CS1 → CR (CS1 may extinguish)
        – Test: CS2 → CR

  18. Classical Conditioning Phenomena
      ● Blocking:
        – First: CS1 + UCS → UCR
        – Second: (CS1, CS2) + UCS → UCR
        – Test: CS2 → little or no CR
      ● Preconditioning:
        – First: UCS → UCR
        – Second: CS + UCS → UCR
        – Test: CS → reduced CR

  19. A Cell-Biological Alphabet
      ● Hawkins and Kandel (1984):
        – Habituation, sensitization, and classical conditioning may form the
          basic “alphabet” from which higher-order learning behaviors are
          synthesized.
        – Showed how a variety of classical conditioning effects could be
          implemented in known Aplysia circuitry.
        – Didn't actually do any simulations.
      ● Gluck and Thompson (1987):
        – Simulated the Hawkins and Kandel model.
        – Found that blocking didn't work.
        – Proposed modifications that could produce blocking.

  20. Extinction
      ● After training, presenting the CS alone, with no UCS, eventually reduces
        the CR back to its baseline level.
      ● Explanation: this is the result of habituation (decreased transmitter
        release due to decreased presynaptic Ca2+).
      ● Note that habituation involves a presynaptic mechanism that is independent
        of the sensitization/conditioning mechanism.

  21. Recovery From Extinction
      ● Trained animals will spontaneously recover from extinction.
      ● Explanation: after a rest period, habituation wears off, while the separate
        changes caused by conditioning are still in effect.
      ● Dishabituation: a strong stimulus can undo the effects of habituation.
      ● Explanation: dishabituation is a special case of sensitization; its effect
        counters that of habituation (sketched below).
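A tiny numerical sketch of this account (my own; the constants are arbitrary): the conditioned strength is left untouched, while a separate habituation factor depresses during unreinforced CS presentations and then wears off with rest.

    def extinction_and_recovery(conditioned=1.0, extinction_trials=10,
                                depression=0.3, rest_recovery=0.9):
        habituation = 1.0                        # 1.0 means no habituation
        for _ in range(extinction_trials):       # CS alone, no UCS
            habituation *= (1.0 - depression)    # presynaptic depression accumulates
        after_extinction = conditioned * habituation
        # Rest period: habituation largely wears off; conditioning is unchanged.
        habituation += (1.0 - habituation) * rest_recovery
        after_rest = conditioned * habituation
        return after_extinction, after_rest

    print(extinction_and_recovery())   # roughly (0.03, 0.90): extinguished, then recovered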

  22. Second-Order Conditioning
      ● First: siphon stimulus + tail shock → gill withdrawal
        Second: mantle stimulus + siphon stimulus → gill withdrawal
        Test: mantle stimulus → gill withdrawal?
      ● To explain second-order conditioning, Hawkins & Kandel introduced three new
        features to their model:
        – Facilitator interneurons can be excited by the CS, not just the UCS.
        – Facilitator interneurons also facilitate the sensory-to-facilitator
          neuron synapses.
        – Facilitator interneurons also excite motor neurons, either directly or
          indirectly.

  23. Extended Model

  24. Model of Second-Order Conditioning
      1. Stimulate the siphon (CS1) and then shock the tail (UCS).
      2. Activity-dependent presynaptic facilitation occurs:
         • at the siphon-to-motor neuron synapse (S-R learning)
         • at the siphon-to-facilitator neuron synapse (S-S learning)
      3. Now the siphon-to-facilitator neuron synapse is strong enough to fire the
         facilitator neuron.
      4. Stimulate the mantle (CS2) followed by the siphon (CS1).
      5. The siphon sensory neuron fires the facilitator interneuron, and also the
         motor neuron, producing a CR.
      6. Activity-dependent facilitation occurs at the mantle-to-motor neuron
         synapse.
      7. Now the mantle-to-motor neuron synapse will cause a CR.
      8. Note: there is no way to get third-order conditioning; we have run out of
         facilitation stages. (A toy walk-through of steps 1-7 follows below.)
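The following toy circuit is my own discretization of steps 1-7, not Hawkins & Kandel's circuit diagram or Gluck & Thompson's simulation; the threshold and increment values are arbitrary. It makes no attempt to capture step 8, the absence of third-order conditioning.

    THRESHOLD = 0.5   # synaptic strength needed for one neuron to fire another

    class Circuit:
        def __init__(self):
            self.to_motor = {'siphon': 0.2, 'mantle': 0.2}   # sensory -> motor neuron
            self.to_facil = {'siphon': 0.1, 'mantle': 0.1}   # sensory -> facilitator

        def responds(self, cs):
            """Does this CS alone now evoke gill withdrawal (a CR)?"""
            return self.to_motor[cs] > THRESHOLD

        def pair(self, first_cs, second_cs=None, ucs=False):
            """Present first_cs, then either the UCS (tail shock) or a second CS."""
            # The facilitator fires if the UCS arrives, or if the second CS's synapse
            # onto the facilitator has already been strengthened (step 3).
            facilitator_fires = ucs or (second_cs is not None
                                        and self.to_facil[second_cs] > THRESHOLD)
            if facilitator_fires:
                # Activity-dependent facilitation at the synapses of the CS whose
                # presynaptic Ca2+ is still elevated, i.e. the one presented first
                # (steps 2 and 6): both S-R and S-S synapses are strengthened.
                self.to_motor[first_cs] += 0.2
                self.to_facil[first_cs] += 0.2

    c = Circuit()
    for _ in range(5):
        c.pair('siphon', ucs=True)               # step 1: CS1 (siphon) + tail shock
    for _ in range(5):
        c.pair('mantle', second_cs='siphon')     # step 4: CS2 (mantle), then CS1
    print(c.responds('mantle'))                  # step 7: True, mantle now evokes a CR

The design point is that the same facilitation rule is applied to both the sensory-to-motor (S-R) and sensory-to-facilitator (S-S) synapses, which is what lets a trained CS1 stand in for the UCS.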

  25. Rescorla-Wagner Learning Rule
      Predicted UCS strength:
          V = Σᵢ Vᵢ Xᵢ
      Vᵢ is the response strength associated with stimulus i.
      Xᵢ is 1 if the stimulus is present, else 0.
      Learning rule:
          ΔVᵢ = β (λ − V) αᵢ Xᵢ
      λ is the actual strength of the UCS.
      β is the learning rate.
      αᵢ is the salience of stimulus i.
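A direct transcription of the rule into Python (my own; variable names follow the slide's notation, with the per-stimulus saliences passed as a dict):

    def rescorla_wagner(V, X, lam, beta, alpha):
        """One learning trial of the Rescorla-Wagner rule.

        V:     dict of response strengths V_i (updated in place)
        X:     dict mapping each stimulus to 1 (present) or 0 (absent)
        lam:   actual strength of the UCS (lambda)
        beta:  learning rate
        alpha: dict of per-stimulus saliences alpha_i
        """
        prediction = sum(V[i] * X[i] for i in V)            # V = sum_i V_i X_i
        for i in V:
            V[i] += beta * (lam - prediction) * alpha[i] * X[i]
        return V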

  26. How Rescorla-Wagner Produces Blocking
      ● Train on CS1 + UCS.
      ● V₁ becomes roughly equal to λ.
      ● Train on (CS1, CS2) + UCS.
      ● Since (λ − V) is close to zero, ΔV₂ is also close to zero.
      ● No learning occurs for CS2 (demonstrated below).
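Running the rescorla_wagner() sketch from the previous slide through the two training phases makes the argument concrete (trial counts and parameter values are arbitrary):

    V = {'CS1': 0.0, 'CS2': 0.0}
    alpha = {'CS1': 1.0, 'CS2': 1.0}
    for _ in range(50):     # phase 1: CS1 + UCS; V1 approaches lambda
        rescorla_wagner(V, {'CS1': 1, 'CS2': 0}, lam=1.0, beta=0.2, alpha=alpha)
    for _ in range(50):     # phase 2: (CS1, CS2) + UCS; (lambda - V) is already near 0
        rescorla_wagner(V, {'CS1': 1, 'CS2': 1}, lam=1.0, beta=0.2, alpha=alpha)
    print(V)                # CS1 near 1.0, CS2 near 0.0: blocking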

  27. How Hawkins-Kandel Produce Blocking
      ● Train on CS1 + UCS.
      ● The UCS fires the facilitator interneuron, causing facilitation:
        – of the CS1-to-motor neuron synapse
        – of the CS1-to-facilitator interneuron synapse
      ● After training, CS1 can fire the facilitator interneuron, which then goes
        into a refractory state.
        – When the UCS arrives, it can't fire the facilitator interneuron.

  28. How Hawkins-Kandel Produce Blocking
      ● Now train on (CS1, CS2) + UCS.
      ● Since CS1 fires the facilitator interneuron at the same time as CS2 is
        firing, the timing is not right for presynaptic facilitation to help CS2.
      ● When the UCS arrives, it can't do anything to help CS2, because the
        facilitator neuron is already in a refractory state.
      ● Result: no learning for CS2 (sketched below).
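A toy timeline of a single trial under these assumptions (my own discretization; all constants are arbitrary). Facilitation requires both that the facilitator fires and that a synapse's presynaptic Ca2+ trace is already elevated, and that conjunction is exactly what fails for CS2:

    THRESHOLD, INCREMENT = 0.5, 0.2

    def trial(to_motor, to_facil, present, ucs=True):
        """One trial: CS onset, then (a little later) the UCS, if any."""
        trace = {cs: 0.0 for cs in to_motor}     # presynaptic Ca2+ traces
        refractory = False

        def facilitate():
            for cs in present:
                if trace[cs] > 0:                # timing: the CS must precede the signal
                    to_motor[cs] += INCREMENT
                    to_facil[cs] += INCREMENT

        # CS onset: a CS with a trained synapse fires the facilitator at once,
        # before any Ca2+ trace has built up, so nothing is facilitated, and the
        # facilitator then becomes refractory.
        if any(to_facil[cs] > THRESHOLD for cs in present):
            facilitate()
            refractory = True
        for cs in present:                       # Ca2+ traces are now elevated
            trace[cs] = 1.0
        if ucs and not refractory:               # UCS arrives; facilitator may be spent
            facilitate()

    to_motor = {'CS1': 0.2, 'CS2': 0.2}
    to_facil = {'CS1': 0.1, 'CS2': 0.1}
    for _ in range(5):
        trial(to_motor, to_facil, present=['CS1'])          # phase 1: CS1 + UCS
    for _ in range(5):
        trial(to_motor, to_facil, present=['CS1', 'CS2'])   # phase 2: compound + UCS
    print(to_motor)   # CS1 well above threshold, CS2 unchanged: blocking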
