
FINITE STATE MORPHOLOGY (Statistical Natural Language Processing, 24.05.19)



  1. FINITE STATE MORPHOLOGY
  Reference: Jurafsky, D. and Martin, J. H. (2009): Speech and Language Processing. An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Second Edition. Pearson: New Jersey, Chapter 3.
  Topics: Transducers, Compact Patricia Tries and DAWGs.

  2. Morphology with FSAs
  • Morphology works fairly regularly, so FSAs are an appropriate machinery for morphological analysis.
  • Tasks for automated morphology: analyze a given word into its morphemes; generate a full form from a base form plus morphological information.
  • Example surface/lexical pairs:
    runs -> run+Verb+Present+3sg, run+Noun+Pl
    Boote -> boot+Nomen+Plural+NomAkk
    verlangsamte -> verlangsam+Verb+Imperf+3sg, verlangsamt+Adj
    largest -> large+Adj+Sup
    better -> good+Adj+Comp
  • Plain word lists are a possibility, but redundancies are not utilized and access can be slow. Further, they have no generalization properties: regularities from inflection classes cannot be exploited, and unseen words cannot be guessed.

  3. Finite State Transducer
  A finite state transducer is a 6-tuple FST = (Φ, Σ, Γ, δ, S, F) consisting of
  • a set of states Φ
  • an input alphabet Σ, disjoint from Φ
  • an output alphabet Γ, disjoint from Φ
  • a set of start states S ⊆ Φ
  • a set of final states F ⊆ Φ
  • a transition relation δ ⊆ Φ × (Σ ∪ {ε}) × (Γ ∪ {ε}) × Φ
  An FST is essentially an FSA with two tapes. It is useful to think of them as input tape and output tape, or upper tape and lower tape. An FST transduces an input string x to an output string y if there is a sequence of transitions that starts in a start state, ends in a final state, and has x as its input string and y as its output string. FSTs accept regular relations.
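  A minimal Python sketch of such a transducer: the 6-tuple is stored as a named tuple, and transduce explores all paths nondeterministically, with ε written as the empty string (illustrative toy code, not an FST toolkit; ε-cycles are not handled).

    from collections import namedtuple

    # FST = (states, input alphabet, output alphabet, transitions, start states, final states)
    FST = namedtuple("FST", ["states", "sigma", "gamma", "delta", "starts", "finals"])
    # delta: set of (state, input symbol or "", output symbol or "", next state)

    def transduce(fst, word):
        """Collect every output string produced on a path from a start to a final state."""
        results = set()

        def step(state, rest, out):
            if not rest and state in fst.finals:
                results.add(out)
            for (p, a, b, q) in fst.delta:
                if p != state:
                    continue
                if a == "":                      # epsilon on the input tape
                    step(q, rest, out + b)
                elif rest and rest[0] == a:      # consume one input symbol
                    step(q, rest[1:], out + b)

        for s in fst.starts:
            step(s, word, "")
        return results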

  4. Regular Relations, Closure
  The set of regular relations is defined as follows:
  • For all (x, y) ∈ (Σ ∪ {ε}) × (Γ ∪ {ε}), {(x, y)} is a regular relation.
  • The empty set ∅ is a regular relation.
  • If Q and R are regular relations, so are Q·R = {(x1x2, y1y2) | (x1, y1) ∈ Q, (x2, y2) ∈ R}, Q ∪ R, and Q*.
  • Nothing else is a regular relation.
  Like regular languages, regular relations are closed under
  • union
  • concatenation
  • Kleene closure
  Unlike regular languages, regular relations are NOT closed under
  • intersection
  • difference
  • complementation
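  As a quick illustration of the concatenation and union cases, here is how the definitions play out on small finite relations written as Python sets of pairs (finite sets only; the Kleene closure Q* would be infinite and is omitted):

    Q = {("a", "A"), ("b", "B")}
    R = {("c", "C")}

    def concat(Q, R):
        """Q.R = {(x1 x2, y1 y2) | (x1, y1) in Q, (x2, y2) in R}"""
        return {(x1 + x2, y1 + y2) for (x1, y1) in Q for (x2, y2) in R}

    print(concat(Q, R))   # {('ac', 'AC'), ('bc', 'BC')}
    print(Q | R)          # union of finite relations is just set union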

  5. Closure of Regular Relations (ctd.)
  New operations for regular relations:
  • Composition: Q ∘ R = {(x, z) | ∃ y: (x, y) ∈ Q and (y, z) ∈ R}
  • Projection: {x | ∃ y: (x, y) ∈ R}
  • Inversion: {(y, x) | (x, y) ∈ R}
  • From a regular language L to the identity regular relation: {(x, x) | x ∈ L}
  • From two regular languages L and M, the cross-product relation: {(x, y) | x ∈ L, y ∈ M}
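  The same operations, sketched on finite relations represented as sets of string pairs (real FST toolkits implement these directly on transducers rather than on enumerated sets; the example pairs are made up):

    Q = {("runs", "run+V+3p"), ("run", "run+V")}
    R = {("run+V+3p", "run/VERB/3sg")}

    def compose(Q, R):
        """Q o R = {(x, z) | there is a y with (x, y) in Q and (y, z) in R}"""
        return {(x, z) for (x, y1) in Q for (y2, z) in R if y1 == y2}

    def project(R):
        """First projection: {x | there is a y with (x, y) in R}"""
        return {x for (x, _) in R}

    def invert(R):
        return {(y, x) for (x, y) in R}

    def identity(L):
        """Identity relation of a language L."""
        return {(x, x) for x in L}

    def cross(L, M):
        """Cross-product relation of two languages L and M."""
        return {(x, y) for x in L for y in M}

    print(compose(Q, R))   # {('runs', 'run/VERB/3sg')}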

  6. Examples for Morphology FSTs
  [Diagram: two example FSTs. One covers German verb inflection, with stem arcs such as wat:V, rast:V, hast:V over states 0-3 and ending arcs e:1sg, est:2sg, et:3sg, en:1pl, et:2pl, en:3pl, et:impf. The other covers umlaut plurals, with arcs such as B:B, Tr:Tr, Sch:Sch, a:ä, u:u, m:m, ε:e.]
  Note that FSTs can be non-deterministic and can have ε-transitions.

  7. Handling Nondeterminism and Ambiguities
  • Since language is ambiguous on many levels, we embrace nondeterminism as a mechanism to reflect that.
  • As long as we do not know how to resolve ambiguities, we carry along several possibilities.
  • Nondeterminism for FSAs: we do not know which path we took.
  • Nondeterminism for FSTs: different paths produce different output strings.
  • Nondeterminism requires keeping track of a set of current states.
  • A nondeterministic automaton accepts if there is at least one path to a final state.

  8. Running Example
  [FST diagram, states 0-9: 0 -r:r-> 1 -u:u-> 2 -n:n-> 3; 3 -ε:+V-> 4; 4 -s:+3p-> 6; 4 -ε:+n3p-> 7; 3 -ε:+N-> 5; 5 -s:+Pl-> 8; 5 -ε:+Sg-> 9.]
  Input string on the input tape: r u n s

  9. Running Example
  [Same FST diagram; a dot sits at the start of the input tape "r u n s".]
  Dots keep track of the current state and the output generated so far.

  10. Running Example
  [Dot after "r" on the input tape; output so far: r]
  Transition: the dot moves on the input tape and to the next state, generating output.

  11. Running Example
  [Dot after "u" on the input tape; output so far: r u]
  Transition: the dot moves on the input tape and to the next state, generating output.

  12. Running Example
  [Dot after "n" on the input tape; output so far: r u n]
  Transition: the dot moves on the input tape and to the next state, generating output.

  13. Running Example
  [Two dots after the ε-transitions from state 3; outputs so far: "r u n +V" and "r u n +N".]
  Non-determinism: the dot splits and the output tape is copied.

  14. Running Example
  [Further ε-transitions add dots with outputs "r u n +V +n3p" and "r u n +N +Sg", alongside "r u n +V" and "r u n +N".]
  ε-transitions are also a source of non-determinism.

  15. Running Example
  [Remaining dots after reading "s": outputs "r u n +V +3p" and "r u n +N +Pl".]
  Dots that do not have a follow-up state are abandoned.

  16. Running Example
  [Output strings: "r u n +V +3p" and "r u n +N +Pl".]
  End of input is reached. All dots at final states have successfully transduced.
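  The whole run can be replayed in a few lines of Python: the transitions below are read off the diagram, and a recursive search keeps all live paths, returning both analyses of "runs" (a self-contained sketch, independent of the class sketch after slide 3):

    # Transitions (state, input symbol or "", output, next state), read off the diagram
    DELTA = [
        (0, "r", "r",    1),
        (1, "u", "u",    2),
        (2, "n", "n",    3),
        (3, "",  "+V",   4),   # ε: verb reading
        (4, "s", "+3p",  6),
        (4, "",  "+n3p", 7),   # bare verb reading, abandoned for the input "runs"
        (3, "",  "+N",   5),   # ε: noun reading
        (5, "s", "+Pl",  8),
        (5, "",  "+Sg",  9),   # singular noun reading, abandoned for the input "runs"
    ]
    FINALS = {6, 7, 8, 9}

    def analyze(word, state=0, out=""):
        results = set()
        if not word and state in FINALS:
            results.add(out)
        for (p, a, b, q) in DELTA:
            if p != state:
                continue
            if a == "":
                results |= analyze(word, q, out + b)
            elif word and word[0] == a:
                results |= analyze(word[1:], q, out + b)
        return results

    print(analyze("runs"))   # {'run+V+3p', 'run+N+Pl'}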

  17. Two-Level Morphology
  • A single morphology FST gets very complex when it has to accommodate large word lists across a large number of inflection classes.
  • We need to express word lists and spelling rules separately: use concatenation.
  • Two-level morphology works by introducing an intermediate level (use composition and intersection):
    – surface to intermediate level: from surface form to morphemes
    – intermediate to lexical level: from morphemes to morphological analysis
  The introduction of levels is guided by linguistic intuition and is merely a way to make writing and maintaining FST morphological components simpler. In practice, everything is compiled into one big FST.

  18. The foxes Example (I)
  Synthesis/analysis of "foxes":

  19. The foxes Example (II)
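  The figures are not reproduced in this transcript, but the textbook's foxes example can be sketched with two plain functions standing in for the two mappings: the lexicon maps the lexical form fox+N+Pl to an intermediate form with explicit morpheme (^) and word (#) boundaries, and an e-insertion spelling rule maps that to the surface form. The intermediate notation and the rule here are simplified assumptions based on Jurafsky & Martin, not taken from these slides.

    def lexical_to_intermediate(lexical):
        """Lexicon step: fox+N+Pl -> fox^s# (sketched as a lookup)."""
        lexicon = {"fox+N+Pl": "fox^s#", "fox+N+Sg": "fox#"}
        return lexicon[lexical]

    def intermediate_to_surface(intermediate):
        """Spelling step: insert e between a sibilant and ^s, then drop the boundary symbols."""
        form = intermediate
        for sibilant in ("x", "s", "z"):
            form = form.replace(sibilant + "^s", sibilant + "es")
        return form.replace("^", "").replace("#", "")

    print(intermediate_to_surface(lexical_to_intermediate("fox+N+Pl")))   # foxes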

  20. Why Intersection?
  • Spelling rules are constraints, each capturing some phenomenon of spelling while not constraining the cases where it does not apply.
  • Spelling is correct if all constraints are satisfied.
  • Intersection handles the parallel check of whether all constraints are satisfied, i.e. whether no spelling rule is violated.
  • For intersection, the rules have to be modified to treat ε as part of the alphabet, to ensure equal length.
  [Overall scheme diagram: the declaration (lexicon) is composed with the intersection of the spelling rules.]
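  The "all constraints must hold" idea can be mimicked with ordinary predicates: each rule only vetoes the configurations it is about, and the parallel check is an all(...) over the rule set. These are illustrative stand-ins for real two-level rule transducers, not the actual rule formalism.

    def e_insertion_ok(intermediate, surface):
        """If the intermediate form contains x^s, the surface must show xes."""
        return "x^s" not in intermediate or "xes" in surface

    def no_boundaries_on_surface(intermediate, surface):
        """Boundary symbols ^ and # never appear on the surface."""
        return "^" not in surface and "#" not in surface

    RULES = [e_insertion_ok, no_boundaries_on_surface]

    def spelling_ok(intermediate, surface):
        return all(rule(intermediate, surface) for rule in RULES)

    print(spelling_ok("fox^s#", "foxes"))   # True
    print(spelling_ok("fox^s#", "foxs"))    # False (e-insertion rule violated)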

  21. Applications of FSTs in Language Technology
  • Lexicon data structure, e.g. for a speller
  • Morphological analysis and synthesis
  • Segmentation
  • Tokenization
  • Sentence boundary detection
  • Chunk parsing (cascaded)
  • Decoding in speech recognition

  22. Motivation for Search Trees
  Tasks:
  • memory-efficient storage of word lists
  • classification on the word level, e.g. lemmatization
  • generalization capabilities: e.g. lemmatize "googled" / "googelte" even if it is not in the list of known/given words
  In applications, full FSTs are too complex. Simpler structures, tries and DAWGs, offer:
  • determinism: only one path per input
  • no output tape
  • compression of word lists
  • generalization capabilities
  Prerequisite: search trees.

  23. Tries (a.k.a. Prefix Trees): combine common prefixes
  • A trie is a tree structure. Each node has 0 to N daughters (N = number of possible characters in the alphabet).
  • Example for Markus, Maria, Jutta, Malte: 17 nodes with 16 characters, 16 edges.
  [Trie diagram: from the root, branches M and J; the M branch continues a, then r-k-u-s, r-i-a, and l-t-e; the J branch continues u-t-t-a.]
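  A minimal trie sketch in Python, built for the four example words; every node maps a character to a child, and a flag marks end-of-word:

    class TrieNode:
        def __init__(self):
            self.children = {}     # character -> child TrieNode
            self.is_word = False

    def trie_insert(root, word):
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def trie_contains(root, word):
        node = root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word

    root = TrieNode()
    for w in ["Markus", "Maria", "Jutta", "Malte"]:
        trie_insert(root, w)

    print(trie_contains(root, "Maria"))   # True
    print(trie_contains(root, "Mark"))    # False: only a prefix, not a stored word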

  24. Patricia Trie (PT) (a.k.a. Radix Tree)
  • Decrease the number of edges by putting several characters in one node.
  • Example for Markus, Maria, Jutta, Malte: 7 nodes, 16 characters, 6 edges ("<" designates end-of-word).
  [PT diagram: root with daughters "Ma" and "Jutta<"; "Ma" has daughters "r" and "lte<"; "r" has daughters "kus<" and "ia<".]

  25. Search in PTs
  • Recursively walk down; the search word gets eaten up.
  • Return the last reached node.
  • If the remaining search word is empty: exact match; otherwise: partial match.
  [Diagram: searching the example PT for "Maria<" ends at node "ia<" with nothing left over (exact match); searching for "Julia<" stops at node "Jutta<" with "lia<" left over (partial match).]
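  A sketch of the search, with the "<" end-of-word marker stored as an ordinary character and the example PT from slide 24 built by hand. The search returns the last reached node together with the search word as it arrived there, so the common prefix with that node's string is still included; this matches the w = uv / k.string = ux description on the next slide.

    class PTNode:
        def __init__(self, string="", children=()):
            self.string = string            # substring stored at this node
            self.children = list(children)

    def common_prefix_len(a, b):
        n = 0
        while n < len(a) and n < len(b) and a[n] == b[n]:
            n += 1
        return n

    def pt_search(node, word):
        """Return (last reached node k, search word w as it arrived at k)."""
        n = common_prefix_len(word, node.string)
        if n == len(node.string):           # node's string fully matched: try to descend
            rest = word[n:]
            for child in node.children:
                if rest and child.string and rest[0] == child.string[0]:
                    return pt_search(child, rest)
        return node, word

    def match(root, word):
        k, w = pt_search(root, word)
        rest = w[common_prefix_len(w, k.string):]
        return "exact match" if not rest else "partial match, rest " + repr(rest)

    # PT for Markus<, Maria<, Jutta<, Malte<
    root = PTNode(children=[
        PTNode("Ma", [PTNode("r", [PTNode("kus<"), PTNode("ia<")]), PTNode("lte<")]),
        PTNode("Jutta<"),
    ])
    print(match(root, "Maria<"))   # exact match (at node "ia<")
    print(match(root, "Julia<"))   # partial match, rest 'lia<' (at node "Jutta<")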

  26. Insert in PTs
  Insert of w: searching for w returns the appropriate node k.
  • If exact match: the word was in the PT already.
  • If partial match: split the string contained in k and attach daughter nodes.
  At k the following holds: w = uv and k.string = ux, where u is the common prefix.
  • Case 1: k.string = u, |x| = 0: insert one node with string v as a daughter of k.
  • Case 2: k.string = ux, |x| > 0: insert two nodes with strings v and x as daughters of k.
  [Diagram: inserting Manuela< and Johannes< into the example PT. Manuela< is case 1: "nuela<" becomes a new daughter of "Ma". Johannes< is case 2: "Jutta<" is split into "J" with daughters "utta<" and "ohannes<".]
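  The two insert cases follow directly. PTNode, common_prefix_len and pt_search are as in the search sketch above and are repeated so this block runs on its own; the last two lines replay the Manuela< / Johannes< example from the slide.

    class PTNode:
        def __init__(self, string="", children=()):
            self.string = string
            self.children = list(children)

    def common_prefix_len(a, b):
        n = 0
        while n < len(a) and n < len(b) and a[n] == b[n]:
            n += 1
        return n

    def pt_search(node, word):
        n = common_prefix_len(word, node.string)
        if n == len(node.string):
            rest = word[n:]
            for child in node.children:
                if rest and child.string and rest[0] == child.string[0]:
                    return pt_search(child, rest)
        return node, word

    def pt_insert(root, word):
        k, w = pt_search(root, word)
        n = common_prefix_len(w, k.string)     # w = u+v, k.string = u+x
        v, x = w[n:], k.string[n:]
        if not v and not x:
            return                             # exact match: word already stored
        if not x:                              # case 1: k.string == u
            k.children.append(PTNode(v))       # one new daughter holding v
        else:                                  # case 2: split k after u
            lower = PTNode(x, k.children)      # old suffix x keeps k's daughters
            k.string, k.children = k.string[:n], [lower, PTNode(v)]

    root = PTNode(children=[
        PTNode("Ma", [PTNode("r", [PTNode("kus<"), PTNode("ia<")]), PTNode("lte<")]),
        PTNode("Jutta<"),
    ])
    pt_insert(root, "Manuela<")    # case 1: "nuela<" becomes a daughter of "Ma"
    pt_insert(root, "Johannes<")   # case 2: "Jutta<" splits into "J" + {"utta<", "ohannes<"}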
