Neurodynamics of expression coding in the core face network
Yuanning Li, Michael J. Ward, Witold J. Lipski, R. Mark Richardson, and Avniel Singh Ghuman
Carnegie Mellon University / University of Pittsburgh

Does neural activity in fusiform code for facial expression information?
- Contradictory evidence and theories about coding in the fusiform face area (FFA) can be found in the literature.
- Classical model (Haxby et al., 2000): FFA codes invariant aspects of faces; does not contribute to expression recognition; timing of activity not specified.
- Recently proposed model (Duchaine & Yovel, 2015): FFA codes general structural and shape information of faces; contributes to expression recognition; ~170 ms after stimulus onset.
Does neural activity in fusiform code for facial expression information?
- Meta-analysis: 53 studies found on Neurosynth.org with whole-brain functional mapping and contrasts between emotions.
- 14/53 report a significant contrast in the fusiform.
Research questions
- Can facial expression information be decoded from the fusiform?
- What are the spatiotemporal dynamics of such encoding in the fusiform?

Approach: intracranial EEG
- 19 subjects, 29 electrodes recording directly from the human fusiform
- Sensitive multivariate classification approach
Methods: intracranial EEG
- 19 human epileptic patients
- 29 fusiform electrodes selected
- anatomical criterion: electrode located in the fusiform
- functional criterion: face sensitivity over other categories in the event-related potential (ERP) and broadband activity (BB)
[Figure: example ERP and BB responses, peaking ~200 ms after stimulus onset; electrode locations on left (L) and right (R) hemispheres]
Methods:
- Cognitive task: gender discrimination
- 40 individuals (20 male), 5 expressions (neutral, angry, happy, fearful, sad)
- Data analysis:
- sliding time window
- multivariate pattern classification
- both ERP and BB considered
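The sliding-window multivariate classification above can be sketched as follows. The deck does not specify the classifier or window parameters, so the 100 ms window, 10 ms step, leave-one-trial-out cross-validation, and nearest-centroid classifier below are illustrative stand-ins run on synthetic single-electrode data, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sliding_window_decode(X, y, fs=1000, win_ms=100, step_ms=10):
    """Sliding-window binary decoding with a nearest-centroid classifier.

    X : (n_trials, n_features, n_samples) iEEG feature time courses
    y : (n_trials,) binary expression labels (e.g. happy vs. sad)
    Returns window-centre times (ms) and leave-one-trial-out accuracy.
    """
    win = int(win_ms * fs / 1000)
    step = int(step_ms * fs / 1000)
    n_trials, _, n_samp = X.shape
    times, accs = [], []
    for start in range(0, n_samp - win + 1, step):
        # all features x samples inside the window, flattened per trial
        W = X[:, :, start:start + win].reshape(n_trials, -1)
        correct = 0
        for i in range(n_trials):  # leave-one-trial-out cross-validation
            train = np.arange(n_trials) != i
            c0 = W[train & (y == 0)].mean(axis=0)
            c1 = W[train & (y == 1)].mean(axis=0)
            pred = int(np.linalg.norm(W[i] - c1) < np.linalg.norm(W[i] - c0))
            correct += pred == y[i]
        times.append((start + win / 2) * 1000 / fs)
        accs.append(correct / n_trials)
    return np.array(times), np.array(accs)

# Synthetic demo: 40 trials, 2 feature channels, 600 ms at 1 kHz,
# with a class difference injected only around 150-250 ms.
X = rng.standard_normal((40, 2, 600))
y = np.repeat([0, 1], 20)
X[y == 1, :, 150:250] += 1.0
t, acc = sliding_window_decode(X, y)
```

Accuracy should hover near chance in windows before the injected effect and rise sharply in windows overlapping 150-250 ms.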
Results: expression decoding
- Mean binary expression classification across all fusiform electrodes
- Peak accuracy 52.34% at 190 ms after stimulus onset (p < 0.05, Bonferroni corrected)
[Figure: classification accuracy (%) vs. time (ms), -100 to 600 ms; electrode locations (L/R)]
Results: spatiotemporal dynamics
- Select the electrodes with significant facial expression decoding (permutation test)
- 17/29 electrodes show significant expression decoding
[Figure: decoding accuracy (%) vs. time (ms) for the significant electrodes; electrode locations (L/R)]
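The per-electrode significance assessment can be sketched as a label-permutation test: shuffle the labels many times, recompute the decoding score, and ask how often a shuffled score reaches the observed one. The simple mean-difference statistic, permutation count, and toy data below are illustrative stand-ins, not the deck's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_pvalue(score_fn, X, y, n_perm=1000, rng=rng):
    """One-sided permutation test: fraction of label shuffles whose
    score is at least as large as the observed score."""
    observed = score_fn(X, y)
    null = np.array([score_fn(X, rng.permutation(y)) for _ in range(n_perm)])
    # +1 in numerator and denominator keeps the p-value strictly positive
    return observed, (1 + np.sum(null >= observed)) / (n_perm + 1)

# Toy stand-in "decoding score": absolute difference of class means.
def score_fn(X, y):
    return abs(X[y == 1].mean() - X[y == 0].mean())

X = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.5, 1.0, 50)])
y = np.repeat([0, 1], 50)
obs, p = permutation_pvalue(score_fn, X, y)
```

With a real effect in the data, the observed score sits far in the tail of the shuffled-label null distribution, giving a small p-value.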
Research questions
- Can facial expression information be decoded from the fusiform?
- What are the spatiotemporal dynamics of such encoding in the fusiform?
Yes, fusiform activity encodes facial expressions.
Results: spatiotemporal dynamics
- No significant difference between the time courses of left fusiform and right fusiform
[Figure: left vs. right decoding accuracy (%) vs. time (ms); electrode locations (L/R)]
Results: spatiotemporal dynamics
- Significant difference between the time courses of posterior fusiform and anterior fusiform (***)
[Figure: posterior vs. anterior decoding accuracy (%) vs. time (ms); electrode locations (L/R)]
Results: spatiotemporal dynamics
- Fusiform electrodes cluster into posterior and anterior clusters: early posterior, late anterior
[Figure: peak decoding time (ms) vs. electrode y coordinate (mm), showing the posterior vs. anterior split; electrode locations (L/R)]
Research questions
- Can facial expression information be decoded from the fusiform?
- What are the spatiotemporal dynamics of such encoding in the fusiform?
Yes, bilateral fusiform activity encodes facial expressions.
Posterior fusiform encodes expressions at the early stage; anterior fusiform encodes expressions at the late stage.
Discussion
- Timing is an important factor in analyzing facial expression processing.
- Early (100-200 ms): core processing, intrinsic coding of structural and general shape information
- Late (300-500 ms): reciprocal, more deliberative processing (Freiwald & Tsao, 2010)
- Note: the FFA encodes face category in the early stage and individual faces in the late stage (Ghuman et al., 2014).
Discussion
- Spatial heterogeneity may explain the discrepancy of expression encoding in fusiform reported in the literature, especially in fMRI studies.
[Figure: posterior vs. anterior fusiform decoding accuracy (%) vs. time (ms); electrode locations (L/R)]
Acknowledgments
Coauthors:
- Dr. Avniel Singh Ghuman (UPMC, CNBC)
- Dr. R. Mark Richardson (UPMC, CNBC)
- Dr. Witold Lipski (UPMC)
- Michael Ward (UPMC)
iEEG data collection and preprocessing:
- EMU staff (UPMC Presbyterian)
- Matthew Boring (CNUP, CNBC)
- Ari Kappel (UPMC)
[Institution and funding-support logos]
Thank you!
Future directions
- Identity × Expression
- The FFA encodes face individuation in the late stage (200-500 ms after stimulus onset) (Ghuman et al., 2014)
[Figure: identity vs. expression decoding accuracy (%) vs. time (ms)]
Future directions
- What facial features underlie such spatiotemporal processing?
Methods: intracranial EEG
- 19 human epileptic patients
- 29 fusiform electrodes selected
- anatomical criterion: electrode located in the fusiform
- functional criterion: face sensitivity over other categories in the event-related potential (ERP) and broadband activity (BB)
[Figure: face sensitivity (d') vs. time (ms); electrode locations (L/R)]
Results: face sensitivity
- Mean face sensitivity across all fusiform electrodes (face vs. non-face), posterior vs. anterior
[Figure: face sensitivity (d') vs. time (ms), posterior vs. anterior; electrode locations (L/R)]
Results: face sensitivity
- Mean face sensitivity across all fusiform electrodes (face vs. non-face), left vs. right
[Figure: face sensitivity (d') vs. time (ms), left vs. right; electrode locations (L/R)]
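Face sensitivity above is reported as d'. A minimal sketch, assuming d' is computed between the distributions of neural responses on face and non-face trials (the deck does not spell out the formula, so this variance-pooled version and the synthetic responses are assumptions):

```python
import numpy as np

def d_prime(face_resp, nonface_resp):
    """Sensitivity index between two response distributions:
    d' = (difference of means) / sqrt(mean of the two variances)."""
    m1, m2 = np.mean(face_resp), np.mean(nonface_resp)
    v1, v2 = np.var(face_resp, ddof=1), np.var(nonface_resp, ddof=1)
    return (m1 - m2) / np.sqrt((v1 + v2) / 2)

# Toy demo: responses 1 s.d. higher on face trials give d' near 1.
rng = np.random.default_rng(2)
faces = rng.normal(1.0, 1.0, 200)     # responses on face trials
nonfaces = rng.normal(0.0, 1.0, 200)  # responses on non-face trials
d = d_prime(faces, nonfaces)
```

Identical distributions give d' = 0; larger positive d' means stronger face preference.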
Results: representational dissimilarity matrix (RDM)
- Early (50-250 ms) vs. late (250-450 ms) RDMs for bilateral, left, and right fusiform
- Expression labels: AF, AN, HA, NE, SA (the five expressions: fearful, angry, happy, neutral, sad)
[Figure: 5 x 5 RDMs per hemisphere and time window; electrode locations (L/R)]
Results: representational dissimilarity matrix (RDM)
- Early (50-250 ms) vs. late (250-450 ms) RDMs for bilateral and anterior fusiform
[Figure: 5 x 5 RDMs per region and time window]
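A common way to build such an RDM is one minus the Pearson correlation between condition-mean response patterns; the deck does not state its dissimilarity measure, so this metric and the synthetic patterns below are assumptions.

```python
import numpy as np

EXPRESSIONS = ["AF", "AN", "HA", "NE", "SA"]  # five expression conditions

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between each pair of condition-mean patterns.

    patterns : (n_conditions, n_features) array, one row per condition
    """
    return 1.0 - np.corrcoef(patterns)

# Toy demo: 5 expression-mean patterns over 50 electrode/time features.
rng = np.random.default_rng(3)
patterns = rng.standard_normal((5, 50))
D = rdm(patterns)
```

The result is a symmetric 5 x 5 matrix with zeros on the diagonal; similar expression representations yield small off-diagonal entries.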