Class 1: Introduction and OT Basics
Adam Albright (albright@mit.edu)


SLIDE 1

Class 1: Introduction and OT Basics

Adam Albright (albright@mit.edu)

LSA 2017 Phonology University of Kentucky

SLIDE 2

Mechanics

▶ Syllabus
▶ Office hours
▶ Background
▶ Class website: lsa2017.phonology.party

Introduction Constraints Ranking Modeling distributions Practice References 1/42

SLIDE 3

What is the goal of phonological analysis?

▶ Describing corpora
▶ Describing lexicons
▶ Describing speakers


SLIDE 5

Phonology as a function

We seek to model the function that speakers use to assign probability distributions over surface (output) representations

▶ In general (unconditioned): possible/probable vs. impossible/improbable morphemes, words, etc.
  ▶ What is a possible word/surface form
▶ Conditioned: the morpheme /ætam/ is pronounced [æɾəm] when no overt suffix is added, not *[ətʰam].
  ▶ But the morpheme /ɔtʌm/ is pronounced [ɔɾəm], not *[æɾəm]
  ▶ What is a possible output for a given word
▶ This function is the grammar
▶ We indirectly observe the distribution that it assigns (pronunciations, acceptability judgments, etc.), and infer the function
▶ Language learners have even less evidence, and yet they converge on similar functions

SLIDE 6

Frequency vs. grammaticality

▶ We seek to model what speakers actually know about distributions
▶ Just because we can observe a restriction in a wordlist, there is no guarantee that speakers encode it in precisely this form (or at all)
▶ Sanity check: generalization

SLIDE 7

Generalization: new words

▶ If a sequence is illegal, it will be avoided in new words, e.g., coined or borrowed
▶ English: Acronyms/initialisms create many #Cl items
  ▶ PLoS (Public Library of Science), vlog (v(ideo) log)
  ▶ Clippings sometimes do as well: (we)blog
▶ However, #tl, #dl are never generated

SLIDE 8

Generalization: acceptability judgments

Halle (1978) ‘Knowledge unlearned and untaught’

▶ Which of the following would be possible English words?
  ▶ ptak, thole, hlad, plast, sram, mgla, vlas, flitch, dnom, rtut
▶ Native English speakers tend to agree that…
  ▶ Some would be perfectly fine English words: plast, flitch, thole
  ▶ Some are completely impossible: ptak, hlad, mgla, dnom, rtut
  ▶ Some are in between: vlas, sram
▶ Generally mirrors attestation of clusters
  ▶ Attested: #pl, #fl, #θ
  ▶ Marginally attested: #vl
  ▶ Unattested: #pt, #hl, #rt, #mgl
▶ ‘Blick’ test: confirms speakers generalize certain facts about phonotactic distributions

SLIDE 9

Underlearning

▶ English has no words ending in [ɛsp]¹
▶ Apparently not avoided when the result of truncation
  ▶ OED: resp(ectable), Thesp(ian)
  ▶ Urban Dictionary: desp(ondent)
▶ Acronyms/initialisms
  ▶ DESP: Disability & Educational Support Program, Department of Environmental Science and Policy, Division of Extramural Science Programs, Deployment Extension Stabilization Pay
  ▶ DJ Devin Skylar Post → [dɛsp]²
▶ Typicality judgments (1 = very non-typical, 9 = very typical), Bailey and Hahn (2001):

  dɹɛsp 4.67   dɹɪsp 4.58   dɹʌsp 4.04
  gɹɛsp 6.17   gɹʌsp 5.54
  kɹɛsp 5.67   kɹʌsp 4.96
  ɹɛsp 5.13    ɹʌsp 5.46
  ʃɹɛsp 2.79   ʃɹɪsp 2.33
  tɹɛsp 4.33   tɹʌsp 5.04

¹ The OED lists ‘(the) resp’, a Lincolnshire dialect word from the 18th and 19th centuries referring to a sheep disease caused by brassica poisoning.
² http://www.soundclick.com/bands/default.cfm?bandID=340811

SLIDE 10

Underlearning

▶ (Colloquial) English lacks words beginning with #skl

        p      t      k
   l    ✓spl   —      —
   r    ✓spɹ   ✓stɹ   ✓skɹ

▶ Nonetheless acceptable?
  ▶ Blick test: [sklæb]
  ▶ Learned words: sclerosis, sclerenchyma
  ▶ Borrowings: Sklar, Sklodowski, Skluzacek
▶ Clements and Keyser (1983): an accidental gap
  ▶ Unattested in the language, but permitted by the grammar

SLIDE 11

Accounting for such discrepancies

▶ These gaps arguably bump up against a limitation on the complexity or nature of phonological restrictions
▶ Final ɪsp# and æsp# are both attested; a *ɛsp# restriction must specifically target ɛsp#
▶ Clements and Keyser: C1C2C3 is tolerated if C1C2 and C2C3 are both tolerated
▶ More generally: the grammatical formalism determines which facts can be encoded

SLIDE 12

Overlearning

▶ Many #CC clusters are unattested in the data available to ordinary learners
  ▶ #pt, #lb, #zʒ, #hɹ, #vl, #mg, #jw, #sɹ, #bw
▶ Yet some are judged more acceptable than others
  ▶ ?vl, ?sɹ, ?bw
  ▶ *pt, *zʒ, *jw, *mg, …
▶ May reflect prior/innate preferences for some sequences over others
▶ Or, generalization based on properties that go beyond the specific segments involved
  ▶ E.g., phonological features: fricative+liquid

SLIDE 13

The upshot

▶ If our goal is to model speakers, we should not assume that the grammar includes all observable distributional restrictions
▶ Refined goal: formulate a grammar that distinguishes between sounds/sequences that are accepted by native speakers (‘grammatical’) and ones that aren’t (‘ungrammatical’)
▶ In many cases, a restriction is so robust/abundantly supported in the language that we will take it for granted that the grammar encodes it
  ▶ English lacks uvular consonants
  ▶ Japanese lacks word-final stops
▶ Promissory note: we must confirm that speakers generalize patterns and restrictions

SLIDE 14

Encoding restrictions

A useful assumption: existing words and new words are generated by the same mechanism

▶ I.e., the only difference is that known words have been encountered before
▶ Clearly too strong (exceptions to grammar)
▶ Allows us to make predictions about lexicons/corpora
▶ Allows learners to reverse-engineer the grammar from the lexicon/corpus

SLIDE 15

Encoding restrictions

A baseline: an unrestricted model

▶ With some probability α, draw a previously generated morpheme from the lexicon
▶ Otherwise (probability 1−α), generate a new morpheme:
  ▶ Randomly draw a segment
  ▶ With some probability p, stop
  ▶ Otherwise, repeat
▶ Flat distribution: all sounds contrast in all contexts (no predictability)
▶ Generative models vs. discriminative models

Demo: GenerateWords.Unconstrained.pl
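The baseline can be sketched in a few lines of Python (a hypothetical stand-in for GenerateWords.Unconstrained.pl, not the actual demo script; the inventory, α, and p values below are made up for illustration):

```python
import random

def generate_morpheme(segments, p_stop=0.3, rng=random):
    """Generate a new morpheme: draw segments until a stop event fires."""
    out = [rng.choice(segments)]           # at least one segment
    while rng.random() >= p_stop:          # with probability p_stop, stop
        out.append(rng.choice(segments))
    return "".join(out)

def produce_word(lexicon, segments, alpha=0.5, p_stop=0.3, rng=random):
    """With probability alpha reuse a known morpheme, else coin a new one."""
    if lexicon and rng.random() < alpha:
        return rng.choice(lexicon)
    word = generate_morpheme(segments, p_stop, rng)
    lexicon.append(word)                   # newly coined forms enter the lexicon
    return word
```

Because nothing filters the output, every segment is equally possible in every context: the flat distribution the slide describes.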

SLIDE 16

The function of phonology

We need the grammar to…

▶ Eliminate outputs containing certain sounds
  ▶ I.e., only certain sounds are allowed
▶ Eliminate outputs containing certain sounds in particular contexts, or particular sequences
  ▶ I.e., only certain sound combinations are allowed
▶ Or, make these outputs less probable
  ▶ We’ll ignore gradient distributions for now, and focus on binomial (all-or-nothing) distributions

SLIDE 17

But what about transformations?

▶ You might have thought that we need the grammar to change certain sounds into other sounds
▶ This is equivalent to ‘eliminate outputs containing certain sounds, in the context where they are in the input’
▶ More on this below

SLIDE 18

Phonological constraints

▶ Constraint-based approaches to phonology provide a convenient and intuitive way to model functions that eliminate particular sounds or strings
▶ Starting simply: allowing some sounds and not others (an inventory of surface phones)
▶ Markedness constraints: specify a configuration that is penalized (marked)
  ▶ Each occurrence in a surface form incurs a violation
  ▶ Indicator functions: register presence or absence of a given configuration
  ▶ E.g., *b is violated by [tæb], [bɔl], [bɪb] (twice), etc.
  ▶ Satisfied by [tæp], [kɔl], etc.
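Concretely, a markedness constraint can be implemented as a violation-counting function over surface strings; this is a sketch (the helper name `star` is my own, not from the slides):

```python
def star(configuration):
    """Markedness constraint *X: return a function counting occurrences
    of the penalized configuration X in a surface form."""
    def violations(output):
        # count (possibly overlapping) occurrences of the configuration
        return sum(1 for i in range(len(output) - len(configuration) + 1)
                   if output[i:i + len(configuration)] == configuration)
    return violations

star_b = star("b")
# [bɪb] incurs two violations of *b; [tæp] incurs none
```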

SLIDE 19

Filtering outputs

▶ Constraints act as a filter on outputs
▶ Outputs with fewer violations are better (more harmonic) than outputs with more violations
▶ The output(s) with the fewest violations are optimal

        *b
  pa    ✓
  da    ✓
  ba    *

SLIDE 20

What do markedness constraints penalize?

▶ Constraint *b penalizes outputs that contain [b]
▶ Features: *b = *[−syllabic, +consonantal, −sonorant, −continuant, +voice, +labial, …]
▶ Features allow us to express more general constraints that penalize sets of segments (natural classes)
  ▶ *[−sonorant, +voice] (no voiced obstruents)
▶ Guide to features: see Canvas site

SLIDE 21

What do markedness constraints penalize?

Reasons to think there are general constraints

▶ Inventories typically contain/ban featurally coherent classes of segments (voiced stops, nasals, fricatives, etc.)
  ▶ Could be expressed one-by-one with segmental constraints, but featural coherence would be a coincidence
▶ Models how humans generalize: the Bach test
  ▶ Evidence from existing words: *fz, *fd, *kz, *kd, etc.
  ▶ Generalize: *xz, *xd
  ▶ Features: *[−voi][−son,+voi]
▶ Not only are constraints on classes of segments necessary, but speakers seem to prefer them
  ▶ Economy? (one constraint covers all cases)
  ▶ Generality? (broadest constraint consistent with the data)
▶ A working hypothesis/analytical strategy: constraints formulated as broadly as possible

SLIDE 22

Filtering outputs

A first stab (to be modified)

▶ With some probability α, draw a previously generated morpheme from the lexicon
▶ Otherwise (probability 1−α), generate a new morpheme:
  ▶ Randomly draw a segment
  ▶ With some probability p, stop; assess constraint violations, and start again if there are violations
  ▶ Otherwise, repeat

Demo: GenerateWords.Inventory.pl, GenerateWords.Sequences.pl
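One way to read the ‘start again if there are violations’ step is as rejection sampling against the constraint set; a sketch (not the actual GenerateWords.Inventory.pl or GenerateWords.Sequences.pl), where constraints are violation-counting functions:

```python
import random

def generate_filtered(segments, constraints, p_stop=0.4, rng=random):
    """Rejection sampling: regenerate until the form violates no constraint.

    `constraints` is a list of violation-counting functions over strings;
    a form is legal iff every count is zero.
    """
    while True:
        form = [rng.choice(segments)]
        while rng.random() >= p_stop:
            form.append(rng.choice(segments))
        form = "".join(form)
        if all(c(form) == 0 for c in constraints):
            return form
```

With a constraint like *b in the set, the generator concentrates all probability on b-less forms, giving the filtered inventory the slide describes.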

SLIDE 23

Filtering outputs

▶ This simple model successfully concentrates probability on ‘legal’ outputs, but it is insufficient
  ▶ Can’t handle restrictions involving complementary distribution
  ▶ No way to make the output conditional on the input
  ▶ I.e., to choose different outputs for different inputs/target morphemes

SLIDE 24

Complementary distribution

[h] vs. [ɸ] in Japanese ([ɴ] = uvular nasal, [ɯ] = back high unrounded vowel)

hako ‘box’            hoʃi ‘star’         nohohonto ‘without a care’
heɴ ‘strange’         saiɸɯ ‘wallet’      toːɸɯ ‘tofu’
tehoɴ ‘model’         ɸɯkɯ ‘clothes’      ɸɯkai ‘deep’
gyaɸɯɴ (speechless)   ʃihai ‘control’     ɸɯwaɾi ‘softly’
gohaɴ ‘cooked rice’   kahei ‘currency’    hai ‘yes’
ɸɯtatsɯ ‘two’         kaɸɯ ‘widow’        heɾɯ ‘decrease’

SLIDE 25

Complementary distribution

▶ Predictable distribution: mutually exclusive contexts
  ▶ ɸ / __ɯ; h elsewhere
▶ Restrictions:
  ▶ No h / __ɯ
  ▶ No ɸ elsewhere

Indicating contexts: X __ Y

SLIDE 26

Describing complementary distribution

▶ The simple ‘filter outputs with violations’ model can’t derive complementary distribution

        *h[+lab]  *ɸ
  pa    ✓         ✓
  ha    ✓         ✓
  hɯ    *         ✓
  ɸa    ✓         *
  ɸɯ    ✓         *

▶ Intuition: [ɸɯ] contains [ɸ], but it’s better than [hɯ]
▶ Constraint ranking: *h[+lab] ≫ *ɸ
▶ Candidate competition: force the model to choose between [hɯ] and [ɸɯ]

SLIDE 27

Inputs and faithfulness

▶ A common solution: condition the output on a specific input, such as /hɯ/
  ▶ Notation: /input/, [output]
▶ Constraints penalize deviations between input and output
  ▶ Ident([±high]): corresponding segments must agree in vowel height
  ▶ Ident([±labial]): corresponding segments must agree in labiality

SLIDE 28

EVAL in OT

/h1ɯ2/          Ident([±high])  *hɯ   *ɸ   Ident([±lab])
☞ a. ɸ1ɯ2                             *    *
   b. h1ɯ2                      *!
   c. h1a2      *!

▶ Input–Output Correspondence (indicated with indices)
▶ Ranking (underdetermined: dashed line)
▶ Competition: candidate elimination
  ▶ Fatal violations: *!
  ▶ Optimal/most harmonic: ☞, or →

Comparative notation
▶ W: winner has fewer violations than the loser
▶ L: winner has more violations than the loser
▶ e (or blank): equal violations
▶ Ranking condition: at least one ‘W’ above all ‘L’s

SLIDE 29

EVAL in OT

/h1ɯ2/          Ident([±high])  *hɯ    *ɸ    Ident([±lab])
☞ a. ɸ1ɯ2                              *     *
   b. h1ɯ2                      W *!   L     L
   c. h1a2      W *!                   L     L

▶ Input–Output Correspondence (indicated with indices)
▶ Ranking (underdetermined: dashed line)
▶ Competition: candidate elimination
  ▶ Fatal violations: *!
  ▶ Optimal/most harmonic: ☞, or →
▶ Comparative notation
  ▶ W: winner has fewer violations than the loser
  ▶ L: winner has more violations than the loser
  ▶ e (or blank): equal violations
  ▶ Ranking condition: at least one ‘W’ above all ‘L’s
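EVAL's candidate elimination can be made explicit as successive filtering over violation profiles, highest-ranked constraint first; a sketch (the violation counts follow the /hɯ/ tableau, but the function and data-structure names are my own):

```python
def eval_ot(candidates, ranking):
    """EVAL as successive filtering: at each constraint (highest first),
    keep only the candidates with the fewest violations."""
    survivors = list(candidates)
    for i in range(len(ranking)):
        best = min(candidates[c][i] for c in survivors)
        survivors = [c for c in survivors if candidates[c][i] == best]
    return survivors

# The /hɯ/ tableau: Ident([±high]) >> *hɯ >> *ɸ >> Ident([±lab])
ranking = ["Ident-high", "*hɯ", "*ɸ", "Ident-lab"]
candidates = {
    "ɸɯ": (0, 0, 1, 1),   # unfaithful in labiality, but escapes *hɯ
    "hɯ": (0, 1, 0, 0),   # fatal *hɯ violation
    "ha": (1, 0, 0, 0),   # fatal Ident([±high]) violation
}
```

Because elimination is lexicographic, a single violation of a high-ranked constraint is fatal no matter how well the candidate does lower down.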

SLIDE 30

Partial and total hierarchies

▶ Every W/L pair establishes a necessary ranking
▶ Between multiple W’s, L’s, e’s: ranking is harmless, but unnecessary
▶ For most data sets, no crucial rankings can be established for many pairs of constraints
▶ Convention: “stratified hierarchies”
  ▶ Constraints within a stratum may be ranked either way (coin toss at evaluation time)
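Reading necessary rankings off a comparative row can be done mechanically; a sketch (the dict-based row format and function name are my own, not from the slides):

```python
def necessary_rankings(row):
    """From one comparative (winner~loser) row, read off necessary rankings.

    `row` maps constraint names to 'W', 'L', or 'e'. The row is satisfied
    iff some W-constraint outranks all L-constraints; when the row contains
    exactly one W, that W must dominate every L (a necessary ranking).
    """
    ws = [c for c, v in row.items() if v == "W"]
    ls = [c for c, v in row.items() if v == "L"]
    if len(ws) == 1:
        return {(ws[0], lo) for lo in ls}   # pairs (higher, lower)
    return set()  # multiple Ws: only a disjunction, no single necessary pair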

SLIDE 31

Deriving complementary distribution

/h1ɯ2/          Ident([±high])  *hɯ    *ɸ    Ident([±lab])
   a. h1ɯ2                      W *!   L     L
☞ b. ɸ1ɯ2                              *     *
   c. h1a2      W *!                   L     L
   d. ɸ1a2      W *!                   *     *

/ɸ1ɯ2/          Ident([±high])  *hɯ    *ɸ    Ident([±lab])
   a. h1ɯ2                      W *!   L     W *
☞ b. ɸ1ɯ2                              *
   c. h1a2      W *!                   L     W *
   d. ɸ1a2      W *!                   *

/h1a2/          Ident([±high])  *hɯ    *ɸ    Ident([±lab])
☞ a. h1a2
   b. ɸ1a2                             W *   W *
   c. h1ɯ2      W *!            W *
   d. ɸ1ɯ2      W *!                   W *   W *

/ɸ1a2/          Ident([±high])  *hɯ    *ɸ    Ident([±lab])
☞ a. h1a2                                    *
   b. ɸ1a2                             W *   L
   c. h1ɯ2      W *!            W *          *
   d. ɸ1ɯ2      W *!                   W *   L

SLIDE 32

Revised model

▶ With some probability α, draw a previously generated morpheme from the lexicon
▶ Otherwise (probability 1−α), generate a new morpheme:
  ▶ Randomly draw a segment
  ▶ With some probability p, stop; evaluate to select the optimal output
  ▶ Otherwise, repeat

This model correctly concentrates probability on legal outputs

▶ Conditioned: choose the optimal output for a specific input
▶ Unconditioned: choose the set of outputs that can ever emerge as optimal (i.e., for any input): Richness of the Base (ROTB)

SLIDE 33

Constructing and arguing for an OT analysis

▶ Finding restrictions and hypothesizing constraints
▶ Ranking arguments, comparative notation
▶ Underdetermined rankings, and Hasse diagrams

SLIDE 34

Phonological distributions

▶ Ban: X never occurs (predictably absent)
▶ Complementary distribution: X never occurs except in a specific context; Y occurs in general, but not in the context where X occurs
▶ Contrast: X, Y are unpredictable (occur in the same or overlapping contexts)
▶ Contextual neutralization: X, Y contrast in some contexts, but in specific contexts only one of them occurs

SLIDE 35

Japanese fricatives

hako ‘box’            hoʃi ‘star’         nohohonto ‘without a care’
heɴ ‘strange’         saiɸɯ ‘wallet’      toːɸɯ ‘tofu’
tehoɴ ‘model’         ɸɯkɯ ‘clothes’      ɸɯkai ‘deep’
gyaɸɯɴ (speechless)   ʃihai ‘control’     ɸɯwaɾi ‘softly’
gohaɴ ‘cooked rice’   kahei ‘currency’    hai ‘yes’
ɸɯtatsɯ ‘two’         kaɸɯ ‘widow’        heɾɯ ‘decrease’

osoi ‘slow’           mɯʃi ‘insect’       ase ‘sweat’
miso ‘soybean paste’  sakin ‘gold dust’   ʃotokɯ ‘income, earnings’
ʃako ‘garage’         senaka ‘back’       soʃitsɯ ‘aptitude’
kesa ‘this morning’   toʃi ‘year’         satoɾi ‘realization’
sewawo sɯɾɯ ‘take care of’   kaisoː ‘reminiscence’   ʃotokɯ ‘income’
haʃi ‘chopsticks’     tʃɯːʃi ‘stop’       sakɯsei sɯɾɯ ‘prepare’
soʃitsɯ ‘aptitude’    kɯsaɾɯ ‘rot’        kagakɯʃa ‘scientist’
sɯʃi ‘sushi’

▶ Distribution of [ɸ], [h]
▶ Distribution of [s], [ʃ], [x]

SLIDE 36

Linking distributions to constraint rankings

▶ Space of possible phonological grammars

▶ Different rankings of constraints
▶ 3 constraints ⇒ 3! = 6 possible grammars
▶ However, not all produce distinct outputs
  ▶ Ident([±A]) > *A: preserve [+A] and [−A] in the output (contrast)
  ▶ *A > Ident([±A]): value is predictable (neutralization, no contrast)
  ▶ *[−A]/C__D > *[+A]: value is predictable, but depends on context (complementary distribution)
SLIDE 37

Different rankings, different outcomes

Contrast everywhere: Ident(asp) > others

Initial:
/pa/      Id(asp)  *Unasp/#_  *Asp
☞ pa                *
   pʰa     *                   *

/pʰa/     Id(asp)  *Unasp/#_  *Asp
   pa      *        *
☞ pʰa                          *

Final:
/ap/      Id(asp)  *Unasp/#_  *Asp
☞ ap
   apʰ     *                   *

/apʰ/     Id(asp)  *Unasp/#_  *Asp
   ap      *
☞ apʰ                          *

▶ More important to preserve aspiration than to obey other constraints
▶ Grammar lets through outputs with both values → contrast
  ▶ pa vs. pʰa, ap vs. apʰ

SLIDE 38

Different rankings, different outcomes

Contextual neutralization: Specific > Ident > General

Initial:
/pa/      *Unasp/#_  Id(asp)  *Asp
   pa      *!
☞ pʰa                 *        *

/pʰa/     *Unasp/#_  Id(asp)  *Asp
   pa      *!         *
☞ pʰa                          *

Final:
/ap/      *Unasp/#_  Id(asp)  *Asp
☞ ap
   apʰ                *        *

/apʰ/     *Unasp/#_  Id(asp)  *Asp
   ap                 *
☞ apʰ                          *

▶ Ident(asp) > *Asp: preserves aspiration in the output (contrast)
▶ *Unasp/#_ > Ident(asp): stops are aspirated initially
▶ Contrast in some contexts, neutralization in others
  ▶ ap vs. apʰ (contrast), but only pʰa, *pa (neutralization)

SLIDE 39

Different rankings, different outcomes

Unaspirated everywhere: *Asp > others

Initial:
/pa/      *Asp  *Unasp/#_  Id(asp)
☞ pa             *
   pʰa     *!               *

/pʰa/     *Asp  *Unasp/#_  Id(asp)
☞ pa             *          *
   pʰa     *!

Final:
/ap/      *Asp  *Unasp/#_  Id(asp)
☞ ap
   apʰ     *!               *

/apʰ/     *Asp  *Unasp/#_  Id(asp)
☞ ap                        *
   apʰ     *!

▶ Giving Ident(asp) lower priority means aspiration can be adjusted to satisfy higher-ranked constraints
▶ More important to remove aspiration than to preserve it, or to aspirate word-initially
▶ Grammar selects unaspirated outputs → no contrast (neutralization)

SLIDE 40

Different rankings, different outcomes

Complementary distribution: Specific > General > Ident

Initial:
/pa/      *Unasp/#_  *Asp  Id(asp)
   pa      *!
☞ pʰa                 *     *

/pʰa/     *Unasp/#_  *Asp  Id(asp)
   pa      *!               *
☞ pʰa                 *

Final:
/ap/      *Unasp/#_  *Asp  Id(asp)
☞ ap
   apʰ                *!    *

/apʰ/     *Unasp/#_  *Asp  Id(asp)
☞ ap                        *
   apʰ                *!

▶ *Unasp/#_ > *Asp: normally ban aspirated stops, but ban unaspirated stops specifically word-initially
▶ No contrast, but values occur predictably in complementary distribution

SLIDE 41

Aspiration in Ossetic

▶ Find a ranking of constraints that can predict aspiration in Ossetic
▶ tsʰ = aspirated affricate

tʰəχ ‘strength’      kʰɔttaɡ ‘linen’       χɔstɔɡ ‘near’
ɔftən ‘be added’     fadatʰ ‘possibility’  kʰastɔn ‘I looked’
tsʰɔst ‘eye’         kʰarkʰ ‘hen’          akkaɡ ‘adequate’
dəkkaɡ ‘second’      tsʰəppar ‘four’       tsʰətʰ ‘honor’
tsʰəχt ‘cheese’      kʰɔm ‘where’          fɔste ‘behind’
kʰom ‘mouth’         pʰirən ‘comb wool’    zaχta ‘he told’
χɔskard ‘scissors’   χɔston ‘military’     pʰɔrrɔst ‘fluttering’

SLIDE 42

Lakhota nasal vowels

ʃpã cooked                  nũnĩ wander lost       ʃkate played
ʃpa break off, divide       paptã turn over        oʃkã motion
nãʒĩ stand                  ʃixtĩ poorly made      papsaka break with pressure
pʃũ shed                    luta red               ɡnũɡnũʃka grasshopper
nũpa two                    kʃu to bead            owãʒi at rest
t’anũs’e practically dead   opʰestola pencil       igmũ cat
tohã sometime               zaptã five             mãza metal
ekta to, at                 ʃnĩʃnĩʒe withered      tkapa gummy
mnũɣe eat crunchily         nãʒitʃa flee           hĩxpæ to fall
gnãʃka frog                 papta through          nãʃpi break w/ foot
ɣã bushy                    ʃakpe six              otʃeti fireplace
igleglepa vomit             waʃte good             ohãgle follow around

SLIDE 43

More reading

For the material from today

▶ If you need some background on speech sounds: Kenstowicz (1994) Phonology in Generative Grammar, chapter 1 (and talk to me)
▶ Kager (1999, chapter 1)
▶ McCarthy (2002) Thematic Guide to OT, chapter 1

SLIDE 44

Readings for next week

▶ Monday: Steriade (1997), Wright (2004), Jun (2004)
▶ Thursday: Flemming (2006)

SLIDE 45

References

Bailey, T. M. and U. Hahn (2001). Determinants of wordlikeness: Phonotactics or lexical neighborhoods? Journal of Memory and Language 44, 568–591.
Clements, G. N. and S. J. Keyser (1983). CV Phonology. Cambridge, MA: MIT Press.
Halle, M. (1978). Knowledge unlearned and untaught: What speakers know about the sounds of their language. In M. Halle, J. Bresnan, and G. Miller (Eds.), Linguistic Theory and Psychological Reality, pp. 294–303. MIT Press.
Kager, R. (1999). Optimality Theory. Cambridge University Press.
Steriade, D. (1997). Phonetics in phonology: The case of laryngeal neutralization. UCLA ms.
Wright, R. (2004). A review of perceptual cues and cue robustness. In B. Hayes, R. Kirchner, and D. Steriade (Eds.), Phonetically-Based Phonology, pp. 34–57. Cambridge University Press.