  1. Using Machine Learning to Study the Neural Representations of Language Meanings Tom M. Mitchell Carnegie Mellon University June 2017

  2. How does neural activity encode word meanings?

  3. How does neural activity encode word meanings? How does the brain combine word meanings into sentence meanings?

  4. Neurosemantics Research Team. Research Scientists: Marcel Just, Tom Mitchell, Erika Laing, Kai-Min Chang, Dan Howarth. Recent/Current PhD Students: Leila Wehbe, Dan Schwartz, Alona Fyshe, Mariya Toneva, Mark Palatucci, Gustavo Sudre, Nicole Rafidi. Funding: NSF, NIH, IARPA, Keck.

  5. Functional MRI

  6. Typical stimuli

  7. fMRI activation for "bottle". [Three images: the "bottle" fMRI activation; the mean activation averaged over 60 different stimuli; and "bottle" minus the mean activation. Color scale runs from below average to high average.]

  8. Classifiers trained to decode the stimulus word: a trained classifier (SVM, logistic regression, deep net, Bayesian classifier, ...) maps the fMRI activation to "Hammer" or "Bottle" (the classifier acts as a virtual sensor of mental state).

  9. Classification task: is the person viewing a "tool" or a "building"? [Bar chart: classification accuracy for each of 12 participants (p1 through p12), with the p<0.05 statistical-significance threshold marked; y-axis from 0 to 1.]
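Slides 8 and 9 describe training a classifier on fMRI images to act as a virtual sensor of the stimulus category. Below is a minimal sketch of that setup, assuming a trials-by-voxels matrix X and tool/building labels y; the array shapes, the random placeholder data, and the choice of scikit-learn logistic regression are illustrative assumptions, not the original pipeline.

```python
# Sketch of the "classifier as virtual sensor" idea from slides 8-9.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 2000))       # hypothetical data: 120 trials x 2000 voxels
y = np.repeat(["tool", "building"], 60)    # hypothetical stimulus labels

# Standardize voxels, then fit a regularized linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))

# Cross-validated accuracy, to be compared against the 0.5 chance level
# (the slide assesses significance at p < 0.05 per participant).
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean())
```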

  10. Are neural representations similar across people? Can we train classifiers on one group of people, then decode from a new person?

  11. Are representations similar across people? YES. [Plot: rank accuracy for classifying which of 60 items, when training on other people.]
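Slide 11 scores cross-person decoding with rank accuracy over the 60 stimulus items. Here is a small sketch of that metric under my own assumptions about the inputs: scores[i, j] is the model's score for candidate item j on test trial i, and truth[i] is the index of the correct item.

```python
import numpy as np

def rank_accuracy(scores: np.ndarray, truth: np.ndarray) -> float:
    """Mean normalized rank of the correct item: 1.0 = always ranked first,
    0.5 = chance, 0.0 = always ranked last."""
    n_items = scores.shape[1]
    order = np.argsort(-scores, axis=1)            # candidates sorted best-first
    ranks = np.array([np.where(order[i] == truth[i])[0][0]
                      for i in range(len(truth))]) # position of the true item
    return float(np.mean(1.0 - ranks / (n_items - 1)))

# Toy usage with random scores over 60 items: expect roughly 0.5 (chance).
rng = np.random.default_rng(1)
print(rank_accuracy(rng.random((30, 60)), rng.integers(0, 60, size=30)))
```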

  12. Lessons from fMRI word classification. Neural representations are similar across: people; languages; word vs. picture presentation. Easier to decode: concrete nouns; emotion nouns. Harder to decode: abstract nouns; verbs* (*except when placed in context).

  13. Predictive model? [Diagram: an arbitrary noun is mapped to predicted fMRI activity.]

  14. Predictive model [Mitchell et al., Science, 2008]: the input noun (e.g. "telephone") is mapped to a vector representing its meaning, retrieved from co-occurrence statistics over a trillion-word text collection; the predicted fMRI activity at voxel v is Σ_{i=1..25} f_i(w) c_vi, where the coefficients c_vi are trained on fMRI data for other nouns.

  15. Represent the stimulus noun by its co-occurrences with 25 verbs* (* counted in a trillion-word text collection):

      Semantic feature values: "celery"        Semantic feature values: "airplane"
      0.8368  eat                              0.8673  ride
      0.3461  taste                            0.2891  see
      0.3153  fill                             0.2851  say
      0.2430  see                              0.1689  near
      0.1145  clean                            0.1228  open
      0.0600  open                             0.0883  hear
      0.0586  smell                            0.0771  run
      0.0286  touch                            0.0749  lift
      ...                                      ...
      0.0000  drive                            0.0049  smell
      0.0000  wear                             0.0010  wear
      0.0000  lift                             0.0000  taste
      0.0000  break                            0.0000  rub
      0.0000  ride                             0.0000  manipulate
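Slide 15's feature vectors are co-occurrence statistics between the stimulus noun and 25 verbs in a trillion-word corpus. The toy sketch below only illustrates the idea on a handful of sentences; the shortened verb list and the max-count normalization are my simplifications, not the original feature definition.

```python
from collections import Counter

VERBS = ["eat", "taste", "fill", "see", "ride", "lift", "wear", "open"]  # subset, for illustration

def verb_cooccurrence_features(noun: str, sentences: list[str]) -> list[float]:
    """Count how often the noun appears in the same sentence as each verb,
    then scale the counts to [0, 1]."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        if noun in tokens:
            for verb in VERBS:
                if verb in tokens:
                    counts[verb] += 1
    total = max(counts.values(), default=1)
    return [counts[v] / total for v in VERBS]

corpus = ["you can eat celery and taste it", "people ride an airplane they see"]
print(verb_cooccurrence_features("celery", corpus))
print(verb_cooccurrence_features("airplane", corpus))
```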

  16. Predicted activation is a sum of feature contributions: the predicted "celery" image = 0.84 x (learned "eat" image) + 0.35 x (learned "taste" image) + 0.32 x (learned "fill" image) + ..., i.e. predicted activation at voxel v = Σ_{i=1..25} f_i(celery) c_vi. The feature values such as f_eat(celery) = 0.84 come from corpus statistics; the roughly 500,000 coefficients c_vi (e.g. c_{14382,eat}) are learned parameters.
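Slides 14 through 16 define the predictive model: activation at each voxel is a weighted sum over 25 semantic features, with roughly 500,000 learned coefficients c_vi. A hedged sketch of fitting those coefficients with multi-output ridge regression follows; the regularization choice and the random placeholder arrays are assumptions, while the paper fits a linear model per voxel, which a single multi-output regression reproduces jointly.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 58, 25, 500    # 58 training nouns (60 minus 2 held out)
F = rng.random((n_words, n_features))          # semantic features f_i(w) from corpus statistics
A = rng.standard_normal((n_words, n_voxels))   # observed fMRI activation per noun (hypothetical)

model = Ridge(alpha=1.0).fit(F, A)             # learns the coefficients c_vi

f_new = rng.random((1, n_features))            # features for an unseen noun, e.g. "telephone"
a_pred = model.predict(f_new)                  # predicted image: activation_v = sum_i f_i(w) c_vi
print(a_pred.shape)                            # (1, 500): one predicted value per voxel
```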

  17. Predicted and observed fMRI images for "celery" and "airplane" after training on other nouns. [Color scale from below average to high average activation.] [Mitchell et al., Science, 2008]

  18. Evaluating the computational model: leave two words out during training, then ask which predicted image is "celery" and which is "airplane". Over the 1770 test pairs in leave-2-out: random guessing gives 0.50 accuracy; accuracy above 0.61 is significant (p<0.05).
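Slide 18's leave-2-out test holds out two words, predicts both fMRI images, and asks whether the correct pairing of predicted to observed images beats the swapped pairing. Here is a sketch of that decision rule, using cosine similarity as the match score (an assumption consistent with the description above; the helper names are mine).

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def leave_two_out_correct(pred1, pred2, obs1, obs2) -> bool:
    """True if the correct matching (pred1->obs1, pred2->obs2) scores higher
    than the swapped matching."""
    correct = cosine(pred1, obs1) + cosine(pred2, obs2)
    swapped = cosine(pred1, obs2) + cosine(pred2, obs1)
    return correct > swapped

# Toy usage: noisy copies of the predictions should still match correctly.
rng = np.random.default_rng(0)
p1, p2 = rng.standard_normal(500), rng.standard_normal(500)
print(leave_two_out_correct(p1, p2, p1 + 0.1 * rng.standard_normal(500),
                            p2 + 0.1 * rng.standard_normal(500)))

# Averaging this decision over all 1770 held-out pairs gives the accuracy
# compared against the 0.50 chance level and the 0.61 significance threshold.
```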

  19. Learned activations associated with meaning components (Participant P1). Semantic feature "eat": "gustatory cortex" (pars opercularis); "push": "somato-sensory" (postcentral gyrus); "run": "biological motion" (superior temporal sulcus, posterior). [Axial slices at z = 24 mm, 30 mm, and 12 mm.]

  20. Alternative semantic feature sets: predefined corpus features and their mean accuracy.

      PREDEFINED corpus features                               Mean Acc.
      25 verb co-occurrences                                   .79
      486 verb co-occurrences                                  .79
      50,000 word co-occurrences                               .76
      300 Latent Semantic Analysis features                    .73
      50 corpus features from Collobert & Weston (ICML 2008)   .78

  21. Alternative semantic feature sets.

      PREDEFINED corpus features                               Mean Acc.
      25 verb co-occurrences                                   .79
      486 verb co-occurrences                                  .79
      50,000 word co-occurrences                               .76
      300 Latent Semantic Analysis features                    .73
      50 corpus features from Collobert & Weston (ICML 2008)   .78
      218 features collected using Mechanical Turk             .83

      Example Mechanical Turk questions: Is it heavy? Is it flat? Is it curved? Is it colorful? Is it hollow? Is it smooth? Is it fast? Is it bigger than a car? Is it usually outside? Does it have corners? Does it have moving parts? Does it have seeds? Can it break? Can it swim? Can it change shape? Can you sit on it? Can you pick it up? Could you fit inside of it? Does it roll? Does it use electricity? Does it make a sound? Does it have a backbone? Does it have roots? Do you love it? ...
      Features authored by Dean Pomerleau; feature values range from 1 to 5; each feature was collected from at least three people, provided by Amazon's "Mechanical Turk".

  22. Alternative semantic feature sets.

      PREDEFINED corpus features                               Mean Acc.
      25 verb co-occurrences                                   .79
      486 verb co-occurrences                                  .79
      50,000 word co-occurrences                               .76
      300 Latent Semantic Analysis features                    .73
      50 corpus features from Collobert & Weston (ICML 2008)   .78
      218 features collected using Mechanical Turk*            .83
      20 features discovered from the data**                   .86

      * developed by Dean Pomerleau   ** developed by Indra Rustandi

  23. Discovering a shared semantic basis [Rustandi et al., 2009]. Step 1: use CCA to discover latent features across subjects. For each subject and study (subjects 1-9 from a word+picture study, subjects 10-20 from a word-only study), a study/subject-specific CCA abstraction maps that subject's fMRI image onto 20 learned latent features: f_k(w) = Σ_v x_v c_vk.

  24. Each column is one fMRI image [slide courtesy of Indra Rustandi]

  25. Discovering a shared semantic basis [Rustandi et al., 2009]. Step 1 (recap): the study/subject-specific CCA abstractions map each subject's fMRI images (word+picture study and word-only study alike) onto the 20 shared latent features, f_k(w) = Σ_v x_v c_vk.

  26. Discovering a shared semantic basis [Rustandi et al., 2009]. Step 2: train a regression that predicts the 20 latent features from the 218 MTurk semantic features b_k(w) of word w, which are independent of study and subject: f_i(w) = Σ_k b_k(w) c_ik.

  27. Discovering a shared semantic basis [Rustandi et al., 2009]. Step 3: invert the CCA mapping, so that for a new word w each subject's study-specific fMRI representation is predicted from the latent features: predicted activation at voxel v = Σ_i f_i(w) c_vi.
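Slides 23 through 27 outline the Rustandi et al. (2009) pipeline: (1) CCA finds latent features shared across subjects, (2) a regression predicts those latent features from the 218 MTurk features, and (3) the CCA mapping is inverted to predict a subject-specific fMRI image for a new word. The sketch below is only a two-subject stand-in built on scikit-learn's two-view CCA (the original work uses a multi-subject CCA over roughly 20 people), with hypothetical data sizes throughout.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, n_voxels, n_latent, n_mturk = 60, 50, 20, 218   # toy sizes; real data has far more voxels
X1 = rng.standard_normal((n_words, n_voxels))   # subject 1 fMRI images (hypothetical)
X2 = rng.standard_normal((n_words, n_voxels))   # subject 2 fMRI images (hypothetical)
B  = rng.random((n_words, n_mturk))             # 218 MTurk features b_k(w) per word

# Step 1: latent features shared by the two subjects, f_k(w) = sum_v x_v c_vk.
cca = CCA(n_components=n_latent).fit(X1, X2)
F1, _ = cca.transform(X1, X2)

# Step 2: regression from the MTurk features to the latent features.
reg = Ridge(alpha=1.0).fit(B, F1)

# Step 3: for a new word, predict its latent features from MTurk features and
# invert the CCA mapping back into subject 1's voxel space.
b_new = rng.random((1, n_mturk))
f_new = reg.predict(b_new)
x1_pred = cca.inverse_transform(f_new)          # predicted fMRI image for subject 1
print(x1_pred.shape)                            # (1, 50)
```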

  28. CCA components: top stimulus words (stimuli that most activate each component).
      Component 1: apartment, church, closet, house, barn ("shelter?")
      Component 2: screwdriver, pliers, refrigerator, knife, hammer ("manipulation?")
      Component 3: telephone, butterfly, bicycle, beetle, dog
      Component 4: pants, dress, glass, coat, chair ("things that touch my body?")

  29. Timing?

  30. MEG: Stimulus “hand” (word plus line drawing) [Sudre et al., NeuroImage 2012]

  31. 50-100 ms after stimulus onset: word length is decodable at several sensor locations, along with the right-diagonalness and verticality of the stimulus. [Timeline from 0 to 800 ms; Sudre et al., NeuroImage 2012]

  32. 100 ms: word length remains decodable at several locations, along with right-diagonalness and verticality. [Sudre et al., NeuroImage 2012]

  33. 150 ms: aspect ratio, word length, and internal details of the stimulus are decodable. [Sudre et al., NeuroImage 2012]

  34. 200 ms: aspect ratio and internal details are decodable, and the first semantic feature appears: IS IT HAIRY? [Sudre et al., NeuroImage 2012]

  35. 250 ms: perceptual features (white pixel count, horizontalness) together with semantic features: IS IT HOLLOW? IS IT MADE OF WOOD? IS IT HAIRY? IS IT AN ANIMAL? [Sudre et al., NeuroImage 2012]

  36. 300 ms: semantic features are decodable at multiple sensor locations: WAS IT EVER ALIVE? IS IT MAN-MADE? DOES IT GROW? IS IT ALIVE? CAN IT BITE OR STING? CAN YOU PICK IT UP? CAN YOU HOLD IT? IS IT BIGGER THAN A CAR? [Sudre et al., NeuroImage 2012]

  37. 350 ms: IS IT MAN-MADE? COULD YOU FIT INSIDE IT? WAS IT EVER ALIVE? DOES IT HAVE FOUR LEGS? CAN YOU PICK IT UP? CAN YOU HOLD IT? CAN YOU HOLD IT IN ONE HAND? IS IT ALIVE? CAN IT BEND? [Sudre et al., NeuroImage 2012]
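Slides 30 through 37 track when each perceptual or semantic feature becomes decodable from MEG after stimulus onset. One common way to produce such a timeline is sliding-window decoding; the sketch below cross-validates a regressor per 50 ms window, where the sensor count, sampling rate, target feature, and ridge regressor are my assumptions, not the exact Sudre et al. analysis.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 60, 306, 160     # e.g. 160 samples covering 0-800 ms (5 ms/sample)
meg = rng.standard_normal((n_trials, n_sensors, n_times))   # hypothetical MEG recordings
feature = rng.random(n_trials)                  # one semantic feature value per stimulus

window = 10                                     # 10 samples = 50 ms per window
for start in range(0, n_times - window, window):
    # Flatten the sensors-by-time window into one feature vector per trial.
    X = meg[:, :, start:start + window].reshape(n_trials, -1)
    score = cross_val_score(Ridge(alpha=10.0), X, feature, cv=5, scoring="r2").mean()
    print(f"{start * 5:4d}-{(start + window) * 5:4d} ms  cross-validated R^2 = {score:.3f}")
```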
