The Co-evolution of Speech and the Lexicon: The Interaction of Functional Pressures, Redundancy, and Category Variation
Winter & Wedel (2016)
Presented by Miriam Schulz
Seminar: Exemplar Theory, 3 June 2020
Lecturer: Prof. Bernd Möbius
Outline
1. Introduction
2. The computational model
3. Simulations
4. Conclusion
The evolution of spoken language
- Change in form vs. stability in function
- If language changes constantly, how can we maintain meaning?
- e.g. cot and caught both pronounced [kɔt]
A multi-level exemplar framework
- Speech as a repeated cycle of production and perception (Pierrehumbert 2001)
- Word pronunciation is influenced by stored exemplars of the word, as well as of its individual sounds
○ Production of 'cot' draws on stored exemplars of the word 'cot' and of its sounds /k/, /ɔ/, /t/
○ Perception of 'cot' triggers a word-level update of 'cot' and a sound-level update of /k/, /ɔ/, /t/
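The two-level update in perception can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the class and method names (`ExemplarStore`, `perceive`) are invented for the example.

```python
# Minimal sketch of a multi-level exemplar store (illustrative only;
# class and method names are invented, not from Winter & Wedel 2016).
from collections import defaultdict

class ExemplarStore:
    def __init__(self):
        self.words = defaultdict(list)   # word label -> stored tokens
        self.sounds = defaultdict(list)  # sound label -> stored tokens

    def perceive(self, word, sounds, token):
        """Perceiving one token updates BOTH levels:
        the word category and each component sound category."""
        self.words[word].append(token)   # word-level update
        for s in sounds:                 # sound-level update
            self.sounds[s].append(token)

store = ExemplarStore()
store.perceive("cot", ["k", "ɔ", "t"], token=(21, 33))
print(len(store.words["cot"]), len(store.sounds["k"]))  # 1 1
```

Because every perceived token feeds both levels, a pronunciation shift in one word can gradually drag the sound categories shared by many other words.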
Hidden variation
- Variation in pronunciation does not always impact categorization success
○ English native: "It doesn't matter if I pronounce /k/ or /x/!"
○ Spanish native: "Are you saying 'roca' and 'roja' sound the same to you?!"
○ Schematized variation: the single English /k/ category spans the phonetic range that Spanish splits into the two categories /k/ and /x/
- Analogy to neutral or cryptic variation in biology (Wagner 2005): "variation that is not visible to evolution"
○ [Figure from Félix & Wagner (2008), annotated with the analogies: ≈ variation in stored exemplars, ≈ production noise, ≈ word recognition]
This paper
Research questions:
➔ How do the distribution of word categories and the distribution of sound categories interact?
➔ How can the system of sound categories evolve?
Hypothesis:
The more words a specific sound contrast distinguishes, the less likely that contrast is to be lost.
Method:
Simulation, using a computational model as a conceptual tool
Outline
1. Introduction
2. The computational model
3. Simulations
4. Conclusion
The computational model: setup
- Two agents, each with a mental lexicon
○ e.g. a 4-word lexicon: {ba, pa, bi, pi}
- Every word is seeded with some exemplars
○ e.g. ba: [ba1, ba2, …, ban]
- Each exemplar varies along two continuous phonetic dimensions
○ e.g. dimension 1 = voicing (VOT, /b/ vs. /p/); dimension 2 = vowel height (tongue height, /a/ vs. /i/)
○ /ba/ ≈ (30, 30) vs. /pi/ ≈ (70, 70)
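The setup can be sketched in Python as follows. A minimal illustration under assumptions: the exemplar count and noise SD are invented, and the prototype coordinates for /pa/ and /bi/ are inferred by analogy from the /ba/ ≈ (30, 30) and /pi/ ≈ (70, 70) values on the slide.

```python
# Sketch of the model setup: seed each word with exemplars scattered
# around a prototype point in a 2-D phonetic space (dim 1 ≈ VOT,
# dim 2 ≈ tongue height). Exemplar count and SD are illustrative
# assumptions, not the paper's parameters.
import random

PROTOTYPES = {            # (voicing, vowel height)
    "ba": (30, 30),       # from the slide
    "pa": (70, 30),       # inferred: voiceless onset, low vowel
    "bi": (30, 70),       # inferred: voiced onset, high vowel
    "pi": (70, 70),       # from the slide
}

def seed_lexicon(n_exemplars=20, sd=3.0, rng=None):
    rng = rng or random.Random(0)
    lexicon = {}
    for word, (d1, d2) in PROTOTYPES.items():
        lexicon[word] = [(rng.gauss(d1, sd), rng.gauss(d2, sd))
                         for _ in range(n_exemplars)]
    return lexicon

lexicon = seed_lexicon()
print(len(lexicon["ba"]))  # 20
```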
The computational model: dialogue
Speaker:
- Choose a word from the vocabulary, e.g. /ba/
- Select an exemplar for production: more recent exemplars have a higher activation value (memory decay), e.g. /ba/: (23, 29)
- Apply two biases:
a. Random production noise, e.g. (23, 29) + (–2, +4) → (21, 33)
b. Similarity bias: production is attracted toward similar stored exemplars, including those of other words sharing the same sounds (e.g. /bi/ for /ba/)
Listener:
- Categorize the word based on the incoming sound: (21, 33) → /ba/!
- Store it as a new exemplar of the identified category
→ Anti-ambiguity bias: distinctive outputs are prioritized, e.g. (21, 33) is more likely than the ambiguous (45, 56) to be stored under /ba/
Switch roles & repeat...
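One round of this loop can be sketched as follows. The recency weighting, nearest-mean categorization, and margin threshold are simplified stand-ins for the mechanisms named above (exemplar activation, categorization, anti-ambiguity bias), not the paper's actual equations; the similarity bias is omitted for brevity.

```python
# Sketch of one production-perception round over a lexicon of the form
# {word: [(d1, d2), ...]}. All parameters here are assumptions.
import math, random

rng = random.Random(1)

def produce(lexicon, word, noise_sd=2.0):
    exemplars = lexicon[word]
    # Memory decay: later (more recent) exemplars get higher activation,
    # hence a higher selection probability.
    weights = [i + 1 for i in range(len(exemplars))]
    d1, d2 = rng.choices(exemplars, weights=weights, k=1)[0]
    # Random production noise on both dimensions.
    return (d1 + rng.gauss(0, noise_sd), d2 + rng.gauss(0, noise_sd))

def categorize_and_store(lexicon, token):
    def dist(word):
        xs = lexicon[word]
        mx = sum(x for x, _ in xs) / len(xs)
        my = sum(y for _, y in xs) / len(xs)
        return math.hypot(token[0] - mx, token[1] - my)
    ranked = sorted(lexicon, key=dist)
    best, runner_up = ranked[0], ranked[1]
    # Anti-ambiguity bias: only tokens clearly closer to one category
    # than to the runner-up are stored; near-boundary tokens are not.
    if dist(runner_up) - dist(best) > 1.0:  # threshold is an assumption
        lexicon[best].append(token)
    return best

lexicon = {"ba": [(30, 30)], "pa": [(70, 30)],
           "bi": [(30, 70)], "pi": [(70, 70)]}
token = produce(lexicon, "ba")
print(categorize_and_store(lexicon, token))  # prints "ba"
```

Run repeatedly with the agents switching roles, this loop lets categories drift under noise while the anti-ambiguity filter keeps them apart.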
Outline
1. Introduction
2. The computational model
3. Simulations
4. Conclusion
Simulation 1: the impact of vocabulary size
- Simulation results after 500 time steps (dimensions: e.g. VOT and tongue height)
- Less variation with a larger vocabulary
- Higher error rate with a larger vocabulary
Simulation 2: the impact of redundancy
Hypothesis: a redundant system will be less constrained by functional load, so more redundancy → more variation
(A) Add a third, independent dimension, e.g. another vowel contrast, such as front vs. back → new vowel space: /i~ɯ~æ~ɑ/
(B) Add a second "syllable", e.g. /bapi/
Results: more variation with three dimensions than with two, and with two syllables than with one
Simulation 3: hidden variation
- Hidden variation creates pathways for future change
Simulation 4: the anti-ambiguity bias
- Figure from Wedel (2012)
Outline
1. Introduction
2. The computational model
3. Simulations
4. Conclusion
Simulating the evolution of spoken language
- Change in form: driven by similarity biases
- Stability in function: driven by the anti-ambiguity bias
- Their interaction yields a relative optimum of variation
Conclusion
- Goal: simulate variation & evolution of the sound system (e.g. cot and caught, both [kɔt])
- Framework: exemplar theory as a model of evolutionary change through the production-perception loop
- Method: simulation using an exemplar-based architecture
- Key findings:
○ More words → less variation, due to the anti-ambiguity bias
○ Extra syllables or phonetic dimensions → more variation, due to increased redundancy
○ Hidden variation as a pathway to exploit new dimensions for more efficiency
- Implications:
○ Selection at the word level impacts selection at the sound level!
○ The structure of human languages is shaped by cultural evolution
References
Félix, M. A., & Wagner, A. (2008). Robustness and evolution: Concepts, insights and challenges from a developmental model system. Heredity, 100(2), 132–140.
Wedel, A. (2012). Lexical contrast maintenance and the organization of sublexical contrast systems. Language and Cognition, 4(4), 319–355.
Winter, B., & Wedel, A. (2016). The co-evolution of speech and the lexicon: The interaction of functional pressures, redundancy, and category variation. Topics in Cognitive Science, 8(2), 503–513. https://doi.org/10.1111/tops.12202
Questions & Discussion
Sample discussion topics
➔ Computational modeling as a tool for linguistic investigation
➔ A single-agent production-perception feedback loop? (see note 2)
➔ The biological metaphor of cryptic/neutral variation