Alternative Representations: A Case Study of Proportional Judgements


  1. Alternative Representations: A Case Study of Proportional Judgements
  Jakub Szymanik, Shane Steinert-Threlkeld, Gert-Jan Munneke
  Institute for Logic, Language and Computation, University of Amsterdam
  LCQ'15

  2. Outline
  ◮ Introduction
  ◮ Proof-of-concept
  ◮ Semantic automata
  ◮ Experiments
  ◮ Discussion

  3–6. General background
  ◮ Meaning = 'collection of algorithms'
  ◮ Experimental testing by psycholinguists
  ◮ How are algorithms selected in a particular context?
  ◮ Inseparability of algorithms and data structures
  ◮ Sensitivity of tasks in other domains to the manner of presentation
  ◮ Different presentations of the data create different mental representations and thereby trigger different algorithms.

  7. Abstract
  ◮ Sensitivity of verification to the visual presentation of a scene.
  ◮ A computational model that made empirically verified predictions.
  ◮ Extending it to handle different representations.
  ◮ Predicting that these representations will affect working memory.
  ◮ Experiments and discussion.

  8–9. Outline
  ◮ Introduction
  ◮ Proof-of-concept
  ◮ Semantic automata
  ◮ Experiments
  ◮ Discussion

  10. Model-theoretic view of quantifiers
  [Figure: a model with universe U and sets A and B; the elements c1–c5 are distributed over the regions S0–S3 of the diagram.]

  11. How do we encode models?
  [Figure: the same model, with universe U, sets A and B, elements c1–c5, and regions S0–S3.]
  This model is uniquely described by the word 01.

  12–15. Step by step
  ◮ Restriction to finite models of the form M = (U, A, B).
  ◮ List all elements belonging to A: c2, c3.
  ◮ Write a 0 for each element of A \ B and a 1 for each element of A ∩ B.
  ◮ Q is represented by the set of words describing all models in its class.
  ◮ That is, Q is a formal language.

  16–17. More formally
  Definition. Let $M = \langle M, A, B \rangle$ be a model, $\bar{a}$ an enumeration of $A$, and $n = |A|$. We define $\tau(\bar{a}, B) \in \{0,1\}^n$ by
  $$\tau(\bar{a}, B)_i = \begin{cases} 0 & a_i \in A \setminus B \\ 1 & a_i \in A \cap B \end{cases}$$
  Thus, τ defines the string corresponding to a particular finite model.
  Definition. For a type $\langle 1,1 \rangle$ quantifier $Q$, define the language of $Q$ as
  $$L_Q = \{\, s \in \{0,1\}^* \mid \langle \#_0(s), \#_1(s) \rangle \in Q^c \,\},$$
  where $Q^c$ is $Q$ construed as a relation on cardinalities.
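To make the encoding concrete, here is a minimal Python sketch (not the authors' code; the names `tau` and `in_language` are illustrative) of the map τ and of membership in L_Q, with Q given as a predicate on the pair (#0, #1):

```python
# Minimal sketch of the encoding above; names are illustrative.

def tau(A, B):
    """Encode an enumeration A of the restrictor set as a {0,1}-string:
    1 for elements of the intersection with B, 0 for elements outside B."""
    return "".join("1" if a in B else "0" for a in A)

def in_language(s, Q):
    """Membership in L_Q, with Q given as a predicate on (#0(s), #1(s))."""
    return Q(s.count("0"), s.count("1"))

# The model of slide 11, assuming c3 is the element of A that lies in B:
assert tau(["c2", "c3"], {"c3"}) == "01"
```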

  18. Examples of quantifier languages
  ◮ L_every = { w | #_0(w) = 0 }
  ◮ L_some = { w | #_1(w) > 0 }
  ◮ L_most = { w | #_1(w) > #_0(w) }
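Continuing the sketch, these three languages become one-line predicates, reusing the hypothetical `in_language` helper from the previous block:

```python
# The example languages as predicates on (#0, #1); illustrative only.
every = lambda n0, n1: n0 == 0   # L_every: no A-element outside B
some  = lambda n0, n1: n1 > 0    # L_some: at least one element of A in B
most  = lambda n0, n1: n1 > n0   # L_most: strictly more 1s than 0s

assert in_language("111", every) and not in_language("101", every)
assert in_language("010", some) and not in_language("000", some)
assert in_language("110", most) and not in_language("10", most)
```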

  19. Aristotelian quantifiers
  "all", "some", "no", and "not all"
  [Figure: a two-state finite automaton (states q0, q1) recognizing L_all: the accepting state q0 loops on 1, a 0 moves to q1, and q1 loops on 0 and 1.]
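A minimal sketch of that two-state acceptor in code (state names follow the figure; this is our rendering, not the authors' implementation):

```python
# Two-state automaton for "all": q0 accepts and loops on 1; reading a 0
# (an A-element outside B) moves permanently to the sink state q1.
def accepts_all(s):
    state = "q0"
    for ch in s:
        if ch == "0":
            state = "q1"  # counterexample seen; q1 loops on 0 and 1
    return state == "q0"

assert accepts_all("111") and not accepts_all("1101")
```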

  20. Cardinal quantifiers
  E.g. "more than 2", "less than 7", and "between 8 and 11"
  [Figure: a four-state finite automaton (states q0–q3) recognizing L_more than two: each state loops on 0, reading a 1 advances one state, and the accepting state q3 loops on 0 and 1.]
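The cardinal case only needs a bounded counter held in the state. A hedged sketch of the four-state machine for "more than two":

```python
# Automaton for "more than two": the state records how many 1s have been
# read, saturating at 3 (the accepting state q3); every state loops on 0.
def accepts_more_than_two(s):
    state = 0                      # q0..q3, with q3 accepting
    for ch in s:
        if ch == "1" and state < 3:
            state += 1
    return state == 3

assert accepts_more_than_two("010111") and not accepts_more_than_two("1100")
```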

  21. Proportional quantifiers
  ◮ E.g. "most", "less than half".
  ◮ Most As are B iff card(A ∩ B) > card(A − B).
  ◮ There is no finite automaton recognizing this language.
  ◮ We need internal memory.
  ◮ A push-down automaton will do.
  [Figure: a PDA computing L_most; its transitions include 1,x/1x; 0,x/0x; 1,0/ε; 0,1/ε; 1,#/#; 0,#/#.]
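One way to realize the PDA's counting behaviour in code, a hedged simulation rather than a transcription of the figure: opposite symbols cancel against the stack, and a string is accepted iff a surplus 1 survives.

```python
# Push-down computation for "most": opposite symbols cancel (cf. the
# 1,0/eps and 0,1/eps transitions), identical symbols are pushed (cf.
# 1,x/1x and 0,x/0x); '#' marks the stack bottom. Accept iff an
# unmatched 1 remains, i.e. #1(s) > #0(s).
def pda_most(s):
    stack = ["#"]
    for ch in s:
        if stack[-1] not in ("#", ch):
            stack.pop()            # cancel against an opposite symbol
        else:
            stack.append(ch)       # push on empty stack or matching top
    return "1" in stack

assert pda_most("10111") and not pda_most("0101")
```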

  22. Does it say anything about processing?
  Question: Do minimal automata predict differences in verification?

  23. Neurobehavioral studies
  Differences in brain activity:
  ◮ All quantifiers are associated with numerosity: they recruit the right inferior parietal cortex.
  ◮ Only higher-order quantifiers activate working-memory capacity: they recruit the right dorsolateral prefrontal cortex.
  McMillan et al., "Neural basis for generalized quantifier comprehension", Neuropsychologia, 2005.
  Szymanik, "A comment on a neuroimaging study of natural language quantifier comprehension", Neuropsychologia, 2007.

  24. Schizophrenic patients
  Zajenkowski et al., "A computational approach to quantifiers as an explanation for some language impairments in schizophrenia", Journal of Communication Disorders, 2011.

  25. What about input variations?

  26. Model saliency
  ◮ We take models of the form ⟨M, A, B, R⟩ where R ⊆ M × M.
  ◮ τ′ maps such models into an alphabet containing pairs of symbols in addition to 0s and 1s.
  ◮ τ′ maps all pairs in R to pairs of symbols in the natural way:
  ◮ e.g., if ⟨a, b⟩ ∈ R with a ∈ A ∩ B and b ∈ A \ B, then ⟨a, b⟩ is mapped to ⟨1, 0⟩.
  ◮ Any elements of the model that are not paired are mapped to 0 or 1.
  ◮ L′_most is the set of strings in which the only pairs are ⟨1, 0⟩ or ⟨0, 1⟩ and all individual symbols are 1s.
  ◮ This language, however, is paradigmatically regular (see the sketch below).
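A hedged sketch of why no stack is needed for the paired presentation: a single left-to-right pass with one boolean of state decides membership. (The strict-surplus requirement, at least one unpaired 1, is our reading of the definition above.)

```python
# Deciding L'_most over the richer alphabet. Symbols are "0", "1", or
# pairs such as ("1", "0"). Accept iff every pair mixes a 1 and a 0,
# every unpaired symbol is a 1, and at least one unpaired 1 occurs
# (so that #1 > #0 strictly). No counter or stack is needed, which is
# what makes the language regular.
def in_L_most_paired(symbols):
    surplus = False
    for sym in symbols:
        if isinstance(sym, tuple):
            if set(sym) != {"0", "1"}:
                return False       # a non-mixed pair falsifies "most"
        elif sym == "1":
            surplus = True         # an unpaired 1 tips the proportion
        else:
            return False           # an unpaired 0 falsifies "most"
    return surplus

assert in_L_most_paired([("1", "0"), ("0", "1"), "1"])
assert not in_L_most_paired([("1", "0"), "0"])
```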

  27. So theoretically it should make a difference . . .

  28. Outline
  ◮ Introduction
  ◮ Proof-of-concept
  ◮ Semantic automata
  ◮ Experiments
  ◮ Discussion

  29. Method
  Four experiments:
  1. Are more than half of the dots yellow?
  2. Are most of the dots yellow?
  3. Are most of the letters 'O'?
  4. Are more than half of the letters 'E'?
  Manipulations:
  ◮ random and paired stimuli
  ◮ 8/7, 9/8 and 10/9 proportions

  30. [Figure: example stimuli, with panels for Exp. 1, 2 (dot displays) and Exp. 3 and Exp. 4 (letter displays).]

  31. Manipulating WM
  ◮ A digit recall task.
  ◮ A string of 5 digits shown for 1500 ms.
  ◮ One digit is probed afterwards.
  ◮ Low-memory blocks: the same sequence of digits on every trial.
  ◮ High-memory blocks: the digits were randomized.
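For concreteness, a hypothetical sketch of the trial generation this design implies (the fixed sequence "38274", the function name, and all parameters are invented for illustration):

```python
import random

# Hypothetical sketch of the digit-recall manipulation: low-memory blocks
# reuse one fixed 5-digit sequence; high-memory blocks draw a fresh random
# sequence on each trial. The sequence is shown for 1500 ms, after which
# one position is probed.
def digit_trial(condition, fixed="38274"):
    if condition == "low":
        digits = fixed
    else:
        digits = "".join(random.choice("0123456789") for _ in range(5))
    probe_position = random.randrange(5)   # which digit to probe
    return digits, probe_position
```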

  32. Participants
  ◮ M'Turk workers with a HIT approval rate of at least 99%.
  ◮ Exp. 1: N = 59, 28 male, age 20–59 (M = 33, SD = 9.9)
  ◮ Exp. 2: N = 57, 28 male, age 20–68 (M = 35, SD = 9.6)
  ◮ Exp. 3: N = 56, 18 male, age 19–75 (M = 40, SD = 14)
  ◮ Exp. 4: N = 54, 27 male, age 20–69 (M = 35, SD = 12)

  33. Effects of the interaction of stimulus type and WM in the digit recall task
  [Figure: differences in digit-task performance (high minus low) for paired vs. random stimuli; panels for experiment 4 (proportions 8/7 and 9/8) and experiment 1 (proportion 8/7), plotting RT (ms) and accuracy (% error); significant differences marked with *.]

  34. Effects of stimulus type on verification RT and accuracy
  [Figure: verification reaction times (ms) and accuracy (% errors), differences (random minus paired), for Experiments 1–4 at proportions 8/7, 9/8, 10/9, and overall; significant differences marked with *.]

  35. Outline
  ◮ Introduction
  ◮ Proof-of-concept
  ◮ Semantic automata
  ◮ Experiments
  ◮ Discussion

  36–38. Summary
  ◮ WM involvement depends on the presentation of a visual scene.
  ◮ Considering different representations leads to new predictions.
  ◮ Why, however, do we only see the interaction effect in certain cases?
  ◮ a controlled lab setting
  ◮ approximating/counting ≈ most/more than half
  ◮ making a speed-accuracy tradeoff
  ◮ looking for mixed strategies
  ◮ understanding visual search
  ◮ Using WM to distinguish verification strategies.

  39. Outlook
  ◮ Back-and-forth between logic and cognition.
  ◮ Logic brings complexity classification.
  ◮ Cognitive science brings representations and strategies.
  ◮ We need to put these together in the form of a cognitive model.
