Alternative Representations. A Case Study of Proportional Judgements



SLIDE 1

Alternative Representations. A Case Study of Proportional Judgements

Jakub Szymanik Shane Steinert-Threlkeld Gert-Jan Munneke

Institute for Logic, Language and Computation University of Amsterdam

LCQ’15

SLIDE 2

Outline

Introduction
Proof-of-concept
Semantic automata
Experiments
Discussion

SLIDE 3

General background

◮ Meaning = ‘collection of algorithms’
◮ Experimental testing by psycholinguists

SLIDE 4

General background

◮ Meaning = ‘collection of algorithms’
◮ Experimental testing by psycholinguists
◮ How are algorithms selected in a particular context?

SLIDE 5

General background

◮ Meaning = ‘collection of algorithms’
◮ Experimental testing by psycholinguists
◮ How are algorithms selected in a particular context?
◮ Inseparability of algorithms and the data structures
◮ Sensitivity of tasks in other domains to the manner of presentation

SLIDE 6

General background

◮ Meaning = ‘collection of algorithms’
◮ Experimental testing by psycholinguists
◮ How are algorithms selected in a particular context?
◮ Inseparability of algorithms and the data structures
◮ Sensitivity of tasks in other domains to the manner of presentation
◮ Different presentations of the data create different mental representations and thereby trigger different algorithms.

SLIDE 7

Abstract

◮ Sensitivity of verification to a visual presentation.
◮ A computational model which made empirically verified predictions.
◮ Extending it to handle different representations.
◮ Predicting that they will affect working memory.
◮ Experiments and discussion.

SLIDE 8

Outline

Introduction
Proof-of-concept
Semantic automata
Experiments
Discussion

SLIDE 9

Outline

Introduction
Proof-of-concept
Semantic automata
Experiments
Discussion

SLIDE 10

Model-theoretic view of quantifiers

[Figure: a finite model with universe U, subsets A and B, and elements c1–c5]

SLIDE 11

How do we encode models?

[Figure: the same model, now annotated with its encoding]

This model is uniquely described by the word 01.

SLIDE 12

Step by step

◮ Restriction to finite models of the form M = (U, A, B).

SLIDE 13

Step by step

◮ Restriction to finite models of the form M = (U, A, B).
◮ List of all elements belonging to A: c2, c3.

SLIDE 14

Step by step

◮ Restriction to finite models of the form M = (U, A, B).
◮ List of all elements belonging to A: c2, c3.
◮ Write a 0 for each element of A \ B and a 1 for each element of A ∩ B.

SLIDE 15

Step by step

◮ Restriction to finite models of the form M = (U, A, B).
◮ List of all elements belonging to A: c2, c3.
◮ Write a 0 for each element of A \ B and a 1 for each element of A ∩ B.
◮ Q is represented by the set of words describing all elements of the class.
◮ That is, Q is a formal language.

SLIDE 16

More Formally Defined

Definition

Let M = (M, A, B) be a model, a an enumeration of A, and n = |A|. We define τ(a, B) ∈ {0, 1}^n by

τ(a, B)_i = 0 if a_i ∈ A \ B, and τ(a, B)_i = 1 if a_i ∈ A ∩ B.

Thus, τ defines the string corresponding to a particular finite model.

SLIDE 17

More Formally Defined

Definition

Let M = (M, A, B) be a model, a an enumeration of A, and n = |A|. We define τ(a, B) ∈ {0, 1}^n by

τ(a, B)_i = 0 if a_i ∈ A \ B, and τ(a, B)_i = 1 if a_i ∈ A ∩ B.

Thus, τ defines the string corresponding to a particular finite model.

Definition

For a type ⟨1, 1⟩ quantifier Q, define the language of Q as

L_Q = { s ∈ {0, 1}* | (#0(s), #1(s)) ∈ Q }
SLIDE 18

Examples of Quantifier Languages

◮ L_every = {w | #0(w) = 0}
◮ L_some = {w | #1(w) > 0}
◮ L_most = {w | #1(w) > #0(w)}
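Since each of these languages is defined purely by symbol counts, membership can be checked directly; a minimal sketch (the function names are mine):

```python
# Direct count-based membership tests for the three example languages.
def in_L_every(w):
    return w.count("0") == 0            # no element of A outside B

def in_L_some(w):
    return w.count("1") > 0             # at least one element in A ∩ B

def in_L_most(w):
    return w.count("1") > w.count("0")  # A ∩ B strictly outnumbers A \ B
```

The interesting question, taken up next, is which machine models can decide these languages online, symbol by symbol.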

SLIDE 19

Aristotelian quantifiers

“all”, “some”, “no”, and “not all”

[Figure: two-state finite automaton recognizing L_All — q0 loops on 1 and moves to q1 on 0; q1 loops on 0,1]

SLIDE 20

Cardinal quantifiers

E.g. “more than 2”, “less than 7”, and “between 8 and 11”

[Figure: finite automaton recognizing L_More than two — states q0–q3, advancing on each 1, with accepting state q3 looping on 0,1]
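The counting behaviour of such an automaton can be simulated with a bounded state variable; a sketch assuming the standard four-state machine with q3 as the only accepting state:

```python
# Simulate the DFA for "more than two": states q0..q3 encoded as 0..3.
# Each 1 advances the state (saturating at q3); 0s leave it unchanged.
def more_than_two(w):
    state = 0
    for symbol in w:
        if symbol == "1" and state < 3:
            state += 1  # count 1s up to three, then stay in q3
    return state == 3   # accept iff at least three 1s were seen
```

Because the state space is finite and fixed in advance, no working memory beyond the current state is needed.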

SLIDE 21

Proportional quantifiers

◮ E.g. “most”, “less than half”.
◮ Most As are B iff card(A ∩ B) > card(A − B).
◮ There is no finite automaton recognizing this language.
◮ We need internal memory.
◮ A push-down automaton will do.

[Figure: a PDA computing L_most, with transitions 1,0/ε; 0,x/0x; 1,#/#; 1,x/1x; 0,1/ε; 0,#/#]

SLIDE 22

Does it say anything about processing?

Question

Do minimal automata predict differences in verification?

SLIDE 23

Neurobehavioral studies

Differences in brain activity.

◮ All quantifiers are associated with numerosity: they recruit the right inferior parietal cortex.

◮ Only higher-order quantifiers engage working-memory capacity: they recruit the right dorsolateral prefrontal cortex.

McMillan et al., Neural basis for generalized quantifiers comprehension, Neuropsychologia, 2005
Szymanik, A Note on some neuroimaging study of natural language quantifiers comprehension, Neuropsychologia, 2007

SLIDE 24

Schizophrenic patients

Zajenkowski et al., A computational approach to quantifiers as an explanation for some language impairments in schizophrenia, Journal of Communication Disorders, 2011.

SLIDE 25

What about input variations?

SLIDE 26

Model saliency

◮ We take models of the form (M, A, B, R) where R ⊆ M × M.
◮ τ′ maps such models into an alphabet containing pairs of symbols in addition to 0s and 1s.
◮ τ′ will map all pairs in R to pairs of symbols in the natural way,
◮ e.g., if ⟨a, b⟩ ∈ R where a ∈ A ∩ B and b ∈ A \ B, then ⟨a, b⟩ will get mapped to ⟨1, 0⟩.
◮ Any elements of the model that are not paired will get mapped to 0 or 1.
◮ L′_most is the set of strings where the only pairs are ⟨1, 0⟩ or ⟨0, 1⟩ and all individual symbols are 1s.
◮ This language, however, is regular.
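A single pass with no counting suffices to decide the paired language; a hedged sketch, where the string encoding of the pair alphabet ("10" and "01" for mixed pairs, "1" and "0" for unpaired elements) is my own choice, not the slides':

```python
# Finite-state check for L'_most over the paired alphabet: every pair
# must be mixed, every unpaired element must be a 1, and at least one
# unpaired 1 is needed for a strict majority (a tie is not "most").
def in_L_prime_most(symbols):
    saw_extra_one = False
    for s in symbols:
        if s in ("10", "01"):
            continue              # a mixed pair cancels out
        elif s == "1":
            saw_extra_one = True  # an unpaired element of A ∩ B
        else:
            return False          # an unpaired 0 (or a same-valued pair)
    return saw_extra_one
```

Only one bit of state is carried across the input, so a finite automaton recognizes this language, whereas L_most required a stack.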

SLIDE 27

So theoretically it should make a difference . . .

SLIDE 28

Outline

Introduction
Proof-of-concept
Semantic automata
Experiments
Discussion

SLIDE 29

Method

4 experiments:

  • 1. Are more than half of the dots yellow?
  • 2. Are most of the dots yellow?
  • 3. Are most of the letters ‘O’?
  • 4. Are more than half of the letters ‘E’?

Manipulations:

◮ random and paired
◮ 8/7, 9/8 and 10/9 proportions

SLIDE 30
[Figure: example stimuli for Experiments 1–2, 3, and 4]
SLIDE 31

Manipulating WM

◮ A digit recall task.
◮ A string of 5 digits for 1500 ms.
◮ Probing one digit.
◮ Blocks with low memory condition: the same sequence of digits.
◮ Blocks with high memory condition: the digits were randomized.

SLIDE 32

Participants

◮ M’Turk with HIT approval rate of at least 99%.
◮ Exp. 1: N = 59, 28 male, age 20–59 (M = 33, SD = 9.9)
◮ Exp. 2: N = 57, 28 male, age 20–68 (M = 35, SD = 9.6)
◮ Exp. 3: N = 56, 18 male, age 19–75 (M = 40, SD = 14)
◮ Exp. 4: N = 54, 27 male, age 20–69 (M = 35, SD = 12)

SLIDE 33

Effects of the interaction of stimulus type and WM in the digit recall task

[Figure: differences in digit-task performance (high − low memory load) for paired vs. random stimuli, in RT (ms) and accuracy (% error); panels for Experiment 1 at proportion 8/7 and Experiment 4 at proportions 8/7 and 9/8, with the first two differences significant and the third marginal]

SLIDE 34

Effects of stimulus type on verification RT and accuracy

[Figure: verification reaction times (ms) and accuracy (% errors), difference (random − paired), for Experiments 1–4 at proportions 8/7, 9/8, 10/9, and overall; asterisks mark significant differences]

SLIDE 35

Outline

Introduction
Proof-of-concept
Semantic automata
Experiments
Discussion

SLIDE 36

Summary

◮ WM involvement depends on the presentation of a visual scene
◮ Consideration of different representations leads to new predictions

SLIDE 37

Summary

◮ WM involvement depends on the presentation of a visual scene
◮ Consideration of different representations leads to new predictions
◮ Why, however, do we only see the interaction effect in certain cases?

◮ a controlled lab setting
◮ approximating/counting ≈ most/more than half
◮ making a speed-accuracy tradeoff
◮ looking for mixed strategies
◮ understanding visual search

SLIDE 38

Summary

◮ WM involvement depends on the presentation of a visual scene
◮ Consideration of different representations leads to new predictions
◮ Why, however, do we only see the interaction effect in certain cases?

◮ a controlled lab setting
◮ approximating/counting ≈ most/more than half
◮ making a speed-accuracy tradeoff
◮ looking for mixed strategies
◮ understanding visual search

◮ Using WM to distinguish verification strategies

SLIDE 39

Outlook

◮ Back-and-forth between logic and cognition
◮ Logic brings complexity classification
◮ Cognitive science brings representations/strategies
◮ We need to put it together in the form of a cognitive model