
Maximum Entropy Models for Realization Ranking
Erik Velldal and Stephan Oepen


1. Maximum Entropy Models for Realization Ranking
Erik Velldal <erik.velldal@iln.uio.no>, Department of Linguistics and Scandinavian Studies, University of Oslo (Norway)
Stephan Oepen <oe@csli.stanford.edu>, Center for the Study of Language and Information, Stanford (USA)

2. Realization Ranking
- The problem: ambiguity in generation; there are many ways to formulate a given meaning.
- A solution: use statistics for modeling preferences and soft constraints (grammaticality is already guaranteed).
- Trained and tested three types of models:
  1) n-gram language models (surface oriented)
  2) maximum entropy models (structural features)
  3) a combination of 1) and 2)

3. Overview
- Generation in the LOGON MT-system and the problem of realization ranking.
- Reference experiments: random choice and n-gram language models.
- The relation to parse selection. Treebank data and maximum entropy models (MaxEnt).
- A combined model: MaxEnt + language model.
- Results, future work and discussion.

4. Generation in the LOGON MT-system
- Aims at high-precision Norwegian–English MT of texts in the tourism domain.
- Symbolic, rule-based system, centered on semantic transfer using Minimal Recursion Semantics (MRS; Copestake, Flickinger, Malouf, Riehemann, & Sag, 1995).
- Includes stochastic methods for ambiguity management.

5. Generation in the LOGON MT-system (Cont'd)
- The LKB chart generator (Carroll, Copestake, Flickinger, & Poznanski, 1999; Carroll & Oepen, 2005): lexically-driven, bottom-up chart generation from MRSs.
- Generation is based on the LinGO English Resource Grammar (ERG; Flickinger, 2002), a general-purpose, wide-coverage grammar designed using HPSG and MRS.

6. Generator Ambiguity
- The average number of realizations in the current data set is 73 (max = 5712).
- Ambiguity is caused by e.g. the optionality of complementizers and relative pronouns, permutation of (intersective) modifiers, different possible topicalizations, as well as lexical and orthographic alternations.
- All realizations of a given MRS are guaranteed to be semantically (truth-conditionally) equivalent; grammaticality is ensured wrt. the underlying grammar (LinGO ERG). For example:
  Remember that dogs must be on a leash.
  Remember dogs must be on a leash.
  On a leash remember that dogs must be.
  On a leash remember dogs must be.
  A leash remember that dogs must be on.
  A leash remember dogs must be on.
  Dogs remember must be on a leash.

7. A Language Model Ranker
- The most common approach to the problem of generator ambiguity is to use n-gram statistics (Langkilde & Knight, 1998; White, 2004; Callison-Burch & Flournoy, 2001).
- Score and rank strings using a language model:

    p(w_1, \ldots, w_k) = \prod_{i=1}^{k} p(w_i \mid w_{i-n}, \ldots, w_{i-1})

- Trained a 4-gram model on the BNC (100 million words).
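As a concrete illustration of the scoring step, here is a minimal Python sketch that evaluates the product above in log space and ranks candidates by it. The probability table and the unseen-event floor are placeholder assumptions, not the slide's BNC-trained 4-gram model.

    import math

    N = 4  # a 4-gram model, as on the slide

    def lm_score(words, logprob):
        """Sum of log n-gram probabilities for one candidate realization.
        `logprob` maps (context, word) -> log p(word | context); unseen
        events get a crude floor value here instead of real smoothing."""
        padded = ["<s>"] * (N - 1) + words + ["</s>"]
        total = 0.0
        for i in range(N - 1, len(padded)):
            context = tuple(padded[i - N + 1:i])
            total += logprob.get((context, padded[i]), math.log(1e-7))
        return total

    def lm_rank(candidates, logprob):
        # All candidates realize the same MRS and are grammatical by
        # construction; the language model only arbitrates fluency.
        return max(candidates, key=lambda c: lm_score(c.split(), logprob))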

8. A Language Model Ranker (Cont'd)
- Results on the LOGON data set 'Rondane' (864 test items, detailed later in the talk):
  - Exact match accuracy: 48.46% (random choice baseline: 18.03%)
  - BLEU score: 0.8776 (random choice baseline: 0.727)
- Limitations: cannot model dependencies between non-contiguous words; uses no linguistic information; does not condition the output (string) on the input (MRS).
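Exact match accuracy here is simply the fraction of test items whose top-ranked string is identical to the treebanked reference; a minimal sketch (the function name is illustrative):

    def exact_match_accuracy(top_ranked, references):
        """Fraction of items where the ranker's first choice equals the
        reference realization exactly."""
        hits = sum(best == ref for best, ref in zip(top_ranked, references))
        return hits / len(references)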

9. The Relation to Parse Selection
- The problem of selecting the best realization can be seen as "inversely similar" to the problem of selecting the best parse:

    p(analysis | utterance) vs. p(utterance | analysis)

10. The Relation to Parse Selection (Cont'd)
- Toutanova, Manning, Shieber, Flickinger, & Oepen (2002) implement a MaxEnt model for parse disambiguation using the Redwoods HPSG treebank.
- Features are defined over derivation trees, with non-terminals representing the construction types and lexical types of the grammar.

11. The Relation to Parse Selection (Cont'd)
- We train a realization ranker in much the same way.
- This requires a different type of treebank for training.

12. Treebanks for Parse Selection
[Diagram: utterances linked to analyses]
Training data for parse selection models is typically given by (1) a treebank of utterances paired with their optimal analyses, together with (2) all their competing (sub-optimal) analyses.

13. Symmetric Treebanks
[Diagram: utterances linked to analyses]
To produce a symmetric treebank, exhaustively generate all paraphrases of the treebanked analyses, and assume the optimality relation to be bidirectional (Velldal, Oepen, & Flickinger, 2004).

14. Treebanks for Realization Ranking
[Diagram: analyses linked to utterances]
We now have the training data for a realization ranking model, given by (1) a treebank of analyses paired with their optimal utterances, together with (2) all competing (suboptimal) candidates.
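The resulting training records can be pictured with a small data structure; this is a sketch with hypothetical names, not the actual LOGON representation:

    from dataclasses import dataclass, field

    @dataclass
    class RankingItem:
        """One training item for realization ranking (slides 12-14): an
        analysis paired with its optimal utterance and all competitors."""
        mrs: str                 # the input semantics (the treebanked analysis)
        reference: str           # the optimal realization from the treebank
        competitors: list = field(default_factory=list)  # suboptimal paraphrases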

15. The Rondane Treebank

    Aggregate               items   words   ambiguity   baseline (%)
    100 ≤ readings             87    20.5       580.8          0.42
    50 ≤ readings < 100        61    17.3        73.0          1.44
    10 ≤ readings < 50        269    15.1        22.5          5.61
    5 < readings < 10         172    11.1         6.9         15.66
    1 < readings < 5          275     8.8         2.8         40.90
    Total                     864    13.0        72.9         18.03

The treebank data binned with respect to generator ambiguity, for each group showing the total number of items, the average string length (words), the average number of paraphrases (ambiguity), and a random choice baseline for accuracy.
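A note on the baseline column: assuming random choice means picking one realization uniformly at random per item, the expected accuracy of a bin is the mean of 1/n over its items, which is why the baseline exceeds 1 over the average ambiguity. A sketch under that assumption:

    def random_choice_baseline(readings_per_item):
        """Expected exact-match accuracy (in %) of uniform random choice:
        the mean of 1/n over items with n candidate realizations each."""
        return 100.0 * sum(1.0 / n for n in readings_per_item) / len(readings_per_item)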

16. Maximum Entropy Models
- The model is given by a set of features {f_1, ..., f_m} and a set of associated weights {\lambda_1, ..., \lambda_m}.
- The real-valued feature functions describe relevant properties of the data items.
- The lambda weights determine the contribution or importance of each feature.
- The probability of a realization r given a semantics s is

    p(r \mid s) = \frac{1}{Z(s)} \exp\Big( \sum_i \lambda_i f_i(r) \Big)

- Learning amounts to finding the optimal weights \lambda that maximize the likelihood of the training corpus.
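The formula is a standard conditional log-linear model; here is a minimal sketch of evaluating it over the candidate realizations of one input semantics (the container formats are assumptions):

    import math

    def maxent_distribution(candidate_feats, weights):
        """p(r | s) = exp(sum_i lambda_i * f_i(r)) / Z(s), where Z(s) sums
        the unnormalized scores over all candidate realizations of s.
        `candidate_feats`: one {feature: value} dict per candidate;
        `weights`: {feature: lambda} learned from the treebank."""
        scores = [sum(weights.get(f, 0.0) * v for f, v in feats.items())
                  for feats in candidate_feats]
        m = max(scores)                # shift by the max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)                  # the normalizer Z(s)
        return [e / z for e in exps]

For ranking alone, the candidate with the highest unnormalized score already wins; Z(s) matters only when training the weights by maximum likelihood.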

17. MaxEnt Features
Sample HPSG derivation tree for "the dog barks":

    subjh
    ├── hspec
    │   ├── det_the_le ("the")
    │   └── sing_noun
    │       └── n_intr_le ("dog")
    └── third_sg_fin_verb
        └── v_unerg_le ("barks")

Features record local derivation sub-trees with different degrees of lexicalization, levels of grandparenting, etc. Additional features record n-grams over lexical types.
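To make the feature templates concrete, here is a rough sketch of collecting local sub-tree features with up to two levels of grandparenting from a derivation tree like the one above. The tree encoding and feature shapes are illustrative assumptions, not the actual LOGON/ERG feature inventory.

    def subtree_features(node, ancestors=(), max_gp=2):
        """Collect (grandparents, local_subtree) features from a derivation
        tree. A node is (label, children); surface words are leaves such as
        ("dog", []). Each internal node contributes its local sub-tree once
        per grandparenting level 0..max_gp."""
        label, children = node
        if not children:               # a surface word: nothing to record
            return []
        local = (label, tuple(child[0] for child in children))
        feats = [(ancestors[len(ancestors) - k:], local)
                 for k in range(min(len(ancestors), max_gp) + 1)]
        for child in children:
            feats.extend(subtree_features(child, ancestors + (label,), max_gp))
        return feats

    # The slide's derivation tree for "the dog barks":
    tree = ("subjh",
            [("hspec",
              [("det_the_le", [("the", [])]),
               ("sing_noun", [("n_intr_le", [("dog", [])])])]),
             ("third_sg_fin_verb",
              [("v_unerg_le", [("barks", [])])])])

Calling subtree_features(tree) then yields, e.g., the bare local sub-tree (hspec -> det_the_le, sing_noun) as well as its grandparented variant prefixed with subjh.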

