Neural network models of the mirror neuron system

Igor Farkaš, Centre for Cognitive Science, DAI FMPI, Comenius University

Grounded cognition course, 2016

Schematic view of the MNS1 model

(Oztop & Arbib, 2002)

(Figure: posterior and frontal cortex regions of the macaque brain; the role of PFC in action selection is not included.)

Neural network model (MNS1)

  • 2-layer perceptron for trajectory recognition
  • error BP learning used
  • artificially preprocessed input

(Oztop & Arbib, 2002)

(Figure: macaque brain regions; input features extracted (7-dim); input trajectory converted to a spatial pattern (210-dim) to be classified.)
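The core of the MNS1 recognition pathway, a 2-layer perceptron trained with error back-propagation, can be sketched as follows. This is a toy stand-in, not the original code: the sizes, training patterns, and learning rate are invented (the real model classifies 210-dim trajectory patterns).

```python
# Minimal 2-layer perceptron with error back-propagation, in the spirit of
# the MNS1 classifier. All sizes and data here are toy assumptions.
import math, random

random.seed(0)

N_IN, N_HID, N_OUT = 6, 4, 2   # toy sizes (MNS1 uses a 210-dim input)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# small random weight initialisation
W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    return h, y

def train_step(x, target, lr=0.5):
    h, y = forward(x)
    # output deltas: (t - y) * y * (1 - y)
    dy = [(t - yi) * yi * (1 - yi) for t, yi in zip(target, y)]
    # hidden deltas back-propagated through W2
    dh = [hi * (1 - hi) * sum(dy[k] * W2[k][j] for k in range(N_OUT))
          for j, hi in enumerate(h)]
    for k in range(N_OUT):
        for j in range(N_HID):
            W2[k][j] += lr * dy[k] * h[j]
    for j in range(N_HID):
        for i in range(N_IN):
            W1[j][i] += lr * dh[j] * x[i]

# two toy "trajectory patterns" with one-hot action labels
data = [([1, 0, 1, 0, 1, 0], [1, 0]),
        ([0, 1, 0, 1, 0, 1], [0, 1])]

for _ in range(2000):
    for x, t in data:
        train_step(x, t)

for x, t in data:
    _, y = forward(x)
    print([round(v, 2) for v in y])
```

After training, each pattern activates its own output unit, which is all the MNS1 core pathway needs: the biological plausibility concerns lie in the preprocessing, not in this classifier.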

MNS2 model

(Bonaiuto, Rosta, Arbib, 2007)

  • audio-visual mirror neurons added (characteristic sounds of actions)
  • recurrent architecture (with BPTT training)
  • ability of MNs to respond to a recently visible but currently hidden object – enabled by working memory and dynamic remapping
  • neural correlates of MN congruence identified at the hidden layer

(Figure: hidden-layer units grouped into mirror and canonical populations.)
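The hidden-object ability can be illustrated with a toy sketch (not the MNS2 implementation): the object's position is remapped into working memory while visible and merely held afterwards, so a "mirror" response can still be computed once the hand reaches the remembered location. All positions and the 0.1 threshold are invented.

```python
# Toy working-memory / dynamic-remapping sketch for the MNS2 hidden-object
# claim. Positions are scalars; obs is None while the object is occluded.

def update_memory(memory, obs):
    """Store the observation when the object is visible, else hold memory."""
    return obs if obs is not None else memory

def mirror_response(hand_pos, remembered_obj):
    """Fire when the hand is close to the (possibly remembered) object."""
    if remembered_obj is None:
        return 0.0
    return 1.0 if abs(hand_pos - remembered_obj) < 0.1 else 0.0

# object visible for two steps, then hidden while the hand approaches it
observations = [0.50, 0.50, None, None, None]
hand = [0.00, 0.20, 0.35, 0.45, 0.50]

memory = None
responses = []
for obs, h in zip(observations, hand):
    memory = update_memory(memory, obs)
    responses.append(mirror_response(h, memory))

print(responses)  # → [0.0, 0.0, 0.0, 1.0, 1.0]
```

The response appears only at the last two steps, when the hand reaches the location held in memory, even though the object is no longer visible.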


RNNPB model

(Tani, Ito, Sugita, 2004)

  • Parametric Bias (PB) nodes (~ bifurcation parameters in dynamical systems) ~ mirror neurons
  • Goal of learning = self-organized mapping b/w PB nodes and behavioral spatio-temporal patterns
  • NN: I/O=12, hid=40, ctx=30, PB=4; back-propagation through time (Rumelhart et al, 1986)
  • PB learning inspired by Miikkulainen (1999)

(Figure: robot arm, joint angles, user hand positions.)
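The RNNPB forward pass can be sketched as an Elman-style network whose input is augmented with a small constant PB vector; different PB settings select different generated sequences. This is an assumption-laden miniature: sizes are toy versions of the paper's I/O=12, hid=40, ctx=30, PB=4, the weights are random, and the BPTT and PB-learning loops are omitted.

```python
# RNNPB-style forward dynamics (sketch): context units carry the hidden
# state across time, and the parametric bias (PB) vector is held constant
# over each rollout. Untrained random weights, toy sizes.
import math, random

random.seed(1)
N_IN, N_HID, N_PB = 3, 5, 2   # toy sizes; context size equals N_HID here

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

W_in = rand_matrix(N_HID, N_IN + N_HID + N_PB)
W_out = rand_matrix(N_IN, N_HID)

def step(x, ctx, pb):
    z = x + ctx + pb                      # concatenated input vector
    h = [math.tanh(sum(w * v for w, v in zip(row, z))) for row in W_in]
    y = [math.tanh(sum(w * v for w, v in zip(row, h))) for row in W_out]
    return y, h                           # h feeds back as the next context

def rollout(pb, steps=4):
    x, ctx = [0.0] * N_IN, [0.0] * N_HID
    outputs = []
    for _ in range(steps):
        x, ctx = step(x, ctx, pb)
        outputs.append(x)
    return outputs

# two PB settings -> two different generated trajectories
traj_a = rollout([1.0, -1.0])
traj_b = rollout([-1.0, 1.0])
print(traj_a[-1], traj_b[-1])
```

In the trained model the same PB vector is active both when generating a behavior and when recognizing it, which is what motivates the mirror-neuron reading of the PB units.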

RNNPB features

  • dynamical systems approach is appealing – continuous state and action spaces
  • distributed representations lend themselves to generalization (cf. Devlaminck et al, 2009)
  • PB units connect execution and observation modes
  • motor loop to be interpreted as premotor (rather than motor) activity
  • But: artificial training scheme used – the mapping b/w the user's and the robot's proprioceptive information is exploited → difficult to account for invariance in observing a motor act
  • Framework extended by a link to
    • language (command understanding) (Tani & Sugita, 2005)
    • “authentic” selves (Tani, 2009)

Forward and inverse models

  • Forward model
    • predicts the next state of a dynamical system given the current state and action
    • causal relationship
    • unique (easy to learn)
  • Inverse model
    • predicts the action required to move the system from the current state to a desired future state
    • anti-causal relationship
    • (mostly) not unique (difficult to learn)
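The uniqueness asymmetry can be made concrete with a worked toy example (invented here, not from the slides): a 1-D system driven by a redundant 2-component action. The forward mapping is a function, but many action pairs realise the same displacement, so the inverse is one choice among infinitely many.

```python
# Forward vs. inverse model on a toy redundant system:
# next_state = state + a1 + a2.

def forward_model(state, action):
    """Causal and unique: next state from current state and action."""
    a1, a2 = action
    return state + a1 + a2

def an_inverse_model(state, desired):
    """One of infinitely many valid inverses: put all effort into a1."""
    return (desired - state, 0.0)

s, target = 0.0, 1.0
print(forward_model(s, an_inverse_model(s, target)))  # reaches the target
print(forward_model(s, (0.5, 0.5)))                   # different action, same next state
```

Both calls print 1.0: distinct actions lead to the same next state, which is exactly why inverse models are (mostly) not unique and harder to learn.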

Internal models in motor control

(Kawato, 1999)

  • Feedforward control: forward model, (visual state, action) → next visual state – crucial for biological systems (due to the visual-loop delay)
  • Feedback control: controller, (current visual state, target) → action
  • brain most probably exploits internal models

Link b/w MNs and forward models

  • 1. visually-guided reach
  • 2. action observation ~ STS-PF-F5 ~ inverse model
  • 3. execution of imitation action ~ F5-PF-STS ~ forward model
  • cerebellum might also be involved, though

(Miall, 2003)

Visual feedback control mechanism

(Oztop, Wolpert, Kawato, 2005)

  • mathematical model

Mental state inference via visual feedback

(Oztop, Wolpert, Kawato, 2005)

MSI features

  • 3 action types to be distinguished (prediction before the sequence ends)
  • mathematical model used
  • feature extraction in parietal cortex (control variables)
  • object-centered frame of reference used
    • no need for any coordinate transformation?
  • Mental state search
    • for a discrete (mental) state space – exhaustive
    • for a continuous (mental) state space – stochastic gradient descent
  • premotor cortex ~ forward model (MN)
  • analogy to mental imagery
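The discrete-case search can be sketched as follows: the observer simulates its own forward model under each candidate intention and keeps the one whose predicted trajectory best matches the observation so far (exhaustive search; the continuous case would use stochastic gradient descent instead). The three intention labels and the linear dynamics are invented for illustration.

```python
# Mental state inference by exhaustive search over a discrete intention
# space, using the observer's own forward model. Toy dynamics: each
# intention is just a hand speed.

def forward_model(intention, steps):
    """Predicted hand trajectory under a hypothesised intention."""
    speeds = {"grasp": 0.2, "reach": 0.1, "withdraw": -0.1}
    v = speeds[intention]
    return [v * t for t in range(steps)]

def infer_mental_state(observed):
    def error(intention):
        pred = forward_model(intention, len(observed))
        return sum((p - o) ** 2 for p, o in zip(pred, observed))
    return min(["grasp", "reach", "withdraw"], key=error)

# noisy partial observation of a "reach"; note the inference can be made
# before the sequence ends
observed = [0.00, 0.11, 0.19, 0.31]
print(infer_mental_state(observed))  # → reach
```

Running the prediction loop without actually executing the action is the sense in which the scheme is analogous to mental imagery.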

MNS2 model extension

(Bonaiuto & Arbib, 2010)

(Alstermark et al, 1981)

Features of MNS2 extended model

  • New role for MNs suggested: monitoring the execution of one's own actions (as distinct from recognizing observed actions)
  • High-level approach: action schemas modeled (Arbib, 1981)
    • preconditions – action – effects
  • Concept of reinforcement learning exploited
    • TD learning used (full observability assumed)
  • Optimal action chosen by WTA, with max priority, where priority = desirability × executability
  • Accounts for rapid reorganization (of the motor program) after lesion (in cats)
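The selection rule stated on the slide can be sketched directly; the schema names and numbers below are invented, and the TD(0) update is the textbook form rather than the model's exact equations.

```python
# WTA action selection over priority = desirability * executability,
# with a TD(0) update for desirability. Schemas and values are toy.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """TD(0) update of a schema's desirability (full observability)."""
    return value + alpha * (reward + gamma * next_value - value)

def select_action(schemas):
    """Winner-take-all: pick the schema with maximal priority."""
    return max(schemas, key=lambda s: s["desirability"] * s["executability"])

schemas = [
    {"name": "grasp_with_jaw", "desirability": 0.9, "executability": 0.0},  # blocked by lesion
    {"name": "grasp_with_paw", "desirability": 0.6, "executability": 1.0},
    {"name": "ignore_food",    "desirability": 0.1, "executability": 1.0},
]
print(select_action(schemas)["name"])  # the high-value but inexecutable action loses

# desirability of the chosen schema rises after a rewarded execution
print(td_update(0.6, reward=1.0, next_value=0.0))
```

Dropping a schema's executability to zero (the lesion) instantly reroutes the WTA to the next-best schema, which is how the model captures rapid reorganization without relearning desirabilities.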

Sketch of our MNS model

  • view-dependent data used: 4 visual perspectives, 1 motor
  • visual inputs from the right camera
  • 16 DoF in iCub’s right arm
  • bidirectional communication b/w visual and motor systems assumed (via the BAL algorithm)
  • self-organizing maps (SOMs, Kohonen, 1990) used as representations

(Rebrová, Pecháč, Farkaš, ICDL 2013)
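A minimal SOM of the kind used for the visual and motor representations can be sketched as below: find the best-matching unit on a 1-D map and pull its neighbourhood toward the input. The map size, data clusters, and rates are toy choices, and the BAL linkage between the maps is not shown.

```python
# Minimal 1-D SOM (Kohonen, 1990): best-matching unit plus a Gaussian
# neighbourhood update. Toy parameters throughout.
import math, random

random.seed(2)
MAP, DIM = 10, 2
weights = [[random.random() for _ in range(DIM)] for _ in range(MAP)]

def bmu(x):
    """Index of the best-matching unit (closest weight vector)."""
    return min(range(MAP),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def som_update(x, lr=0.3, sigma=1.5):
    b = bmu(x)
    for i in range(MAP):
        g = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))  # neighbourhood gain
        weights[i] = [w + lr * g * (v - w) for w, v in zip(weights[i], x)]

# two input clusters -> two separated regions on the map
data = [[0.1, 0.1], [0.9, 0.9]]
for t in range(100):
    som_update(data[t % 2])

print(bmu([0.1, 0.1]), bmu([0.9, 0.9]))
```

After training, the two clusters win at different map locations, giving the topographically organized codes that the visual and motor systems then exchange.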

Open issues in MNS modeling

  • Issue of reference frames (RFr)
    • egocentric, allocentric (observed agent- or object-centered), absolute
    • in primates, multiple RFr are used: LIP (retinotopic), VIP (also head-centered), F5 (hand-centered)
    • for the generation of a movement vector, hand and target must be in a common RFr
  • Does using an object-centered RFr alleviate the problem?
    • selective attention can operate on an allocentric RFr (Frischen et al, 2009)
  • Is the need for RFr transformation task-dependent?
    • if necessary, how to achieve positional and rotational invariance?
  • Where do MNs come from?
    • adaptation (Arbib, ...) vs. associative hypotheses (Heyes)
  • How to model the acquisition of mirror skills?