Making Sense of Word Sense Variation

Rebecca J. Passonneau and Ansaf Salleb-Aouissi
Center for Computational Learning Systems, Columbia University
New York, NY, USA
(becky@cs|ansaf@ccls).columbia.edu

Nancy Ide
Department of Computer Science, Vassar College
Poughkeepsie, NY, USA
ide@cs.vassar.edu


Abstract

We present a pilot study of word-sense annotation using multiple annotators, relatively polysemous words, and a heterogeneous corpus. Annotators selected senses for words in context, using an annotation interface that presented WordNet senses. Interannotator agreement (IA) results show that annotators agree well or not depending primarily on the individual words and their general usage properties. Our focus is on identifying systematic differences across words and annotators that can account for IA variation. We identify three lexical use factors: semantic specificity of the context, sense concreteness, and similarity of senses. We discuss systematic differences in sense selection across annotators, and present the use of association rules to mine the data for such differences.

1 Introduction

Our goal is to grapple seriously with the natural sense variation arising from individual differences in word usage. It has been widely observed that usage features such as vocabulary and syntax vary across corpora of different genres and registers (Biber, 1995) and across corpora that serve different functions (Kittredge et al., 1991). Still, we are far from able to predict specific morphosyntactic and lexical variations across corpora (Kilgarriff, 2001), much less quantify them in a way that makes it possible to apply the same analysis tools (taggers, parsers) without retraining. In comparison to morphosyntactic properties of language, word and phrasal meaning is fluid and, to some degree, generative (Pustejovsky, 1991; Nunberg, 1979). Based on our initial observations from a word sense annotation task for relatively polysemous words, carried out by multiple annotators on a heterogeneous corpus, we hypothesize that different words lead to greater or lesser interannotator agreement (IA) for reasons that, in the long run, should be explicitly modelled in order for Natural Language Processing (NLP) applications to handle usage differences more robustly. This pilot study is a step in that direction.

We present related work in the next section, then describe the annotation task in the following one. In Section 4, we present examples of variation in agreement on a matched subset of words. In Section 5, we discuss why we believe the observed variation depends on the words and present three lexical use factors we hypothesize to lead to greater or lesser IA. In Section 6, we use association rules to mine our data for systematic differences among annotators, and thus to explain the variation in IA. We conclude with a summary of our findings and goals.
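To make the kind of mining planned for Section 6 concrete, the following is a minimal sketch, not the authors' implementation. It assumes each multiply annotated occurrence of a word is represented as a set of annotator:sense items (e.g., "A1:sense2" means annotator A1 chose sense 2), and it extracts one-to-one association rules whose support and confidence pass illustrative thresholds; the annotator IDs, senses, and thresholds below are all hypothetical.

```python
from collections import Counter
from itertools import permutations

def mine_pairwise_rules(transactions, min_support=0.2, min_confidence=0.8):
    """Mine one-to-one association rules (lhs -> rhs) from sets of items."""
    n = len(transactions)
    item_counts = Counter(item for t in transactions for item in t)
    pair_counts = Counter()
    for t in transactions:
        for lhs, rhs in permutations(t, 2):
            pair_counts[(lhs, rhs)] += 1
    rules = []
    for (lhs, rhs), joint in pair_counts.items():
        support = joint / n                    # fraction of occurrences with both items
        confidence = joint / item_counts[lhs]  # P(rhs | lhs)
        if support >= min_support and confidence >= min_confidence:
            rules.append((lhs, rhs, support, confidence))
    return sorted(rules, key=lambda r: -r[3])

# Hypothetical toy data: three occurrences of one word, each a set of
# annotator:sense choices (annotator IDs and sense labels are made up).
occurrences = [
    {"A1:sense2", "A2:sense2"},
    {"A1:sense2", "A2:sense1"},
    {"A1:sense2", "A2:sense2"},
]
for lhs, rhs, sup, conf in mine_pairwise_rules(occurrences, 0.3, 0.6):
    print(f"{lhs} -> {rhs}  support={sup:.2f}  confidence={conf:.2f}")
```

A high-confidence rule such as A1:sense2 -> A2:sense2 would flag a systematic pattern in how one annotator's sense choices track another's; the representation and thresholds here are assumptions for illustration, not the paper's settings.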

2 Related Work

There has been a decade-long community-wide effort to evaluate word sense disambiguation (WSD) systems across languages in the four Senseval efforts (1998, 2001, 2004, and 2007; cf. Kilgarriff, 1998; Pedersen, 2002a; Pedersen, 2002b; Palmer et al., 2005), with a corollary effort to investigate the issues pertaining to preparation of manually annotated gold standard corpora tagged for word senses (Palmer et al., 2005). Differences in IA and system performance across parts of speech have been examined, as in (Ng et al., 1999; Palmer et al., 2005). Pedersen (2002a) examines variation across individual words in evaluating WSD systems, but does not attempt to explain it.

Factors that have been proposed as affecting human or system sense disambiguation include whether annotators are allowed to assign multilabels (Veronis, 1998; Ide et al., 2002; Passonneau et al., 2006), the number or granularity of senses (Ng et al., 1999), merging of related senses (Snow et al., 2007), sense similarity (Chugur et al., 2002), sense perplexity (Diab, 2004), entropy (Diab, 2004; Palmer et al., 2005), and, in psycholinguistic experiments, reaction times required to distinguish senses (Klein and Murphy, 2002; Ide and Wilks, 2006).

With respect to using multiple annotators, Snow et al. included disambiguation of the word president (a relatively non-polysemous word with three senses) in a set of tasks given to Amazon Mechanical Turkers, aimed at determining how to combine data from multiple non-experts for machine learning tasks. The word sense task comprised 177 sentences taken from the SemEval Word Sense Disambiguation Lexical Sample task. Majority voting among three annotators achieves 99% accuracy.

3 The Annotation Task

The Manually Annotated Sub-Corpus (MASC) project is creating a small, representative corpus of American English written and spoken texts drawn from the Open American National Corpus (OANC; http://www.anc.org). The MASC corpus includes hand-validated or manually produced annotations for a variety of linguistic phenomena. One of the goals of the project is to support efforts to harmonize WordNet (Miller et al., 1993) and FrameNet (Ruppenhofer et al., 2006), in order to bring the sense distinctions each makes into better alignment. As a starting sample, we chose ten fairly frequent, moderately polysemous words for sense tagging, targeting in particular words that do not yet exist in FrameNet, as well as words with different numbers of senses in the two resources. The ten words, with part of speech, number of senses, and occurrences in the OANC, are shown in Table 1.

Word    POS    No. senses    No. occurrences
fair    Adj    10            463
long    Adj    9             2706
quiet   Adj    6             244
land    Noun   11            1288
time    Noun   10            21790
work    Noun   7             5780
know    Verb   11            10334
say     Verb   11            20372
show    Verb   12            11877
tell    Verb   8             4799

Table 1: Ten Words

[Figure 1: MASC word sense annotation tool]

One thousand occurrences of each word, including all occurrences appearing in the MASC subset and others semi-randomly chosen from the remainder of the 15 million word OANC (drawn equally from each of the genre-specific portions of the OANC), were annotated by at least one of six undergraduate annotators at Vassar College and Columbia University.

Fifty occurrences of each word in context were sense-tagged by all six annotators for the in-depth study of inter-annotator agreement (IA) reported here. We have just finished collecting annotations of fifty new occurrences. All annotations are pro-
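As a rough illustration of the agreement analysis this design supports (fifty occurrences per word tagged by all six annotators), here is a minimal sketch of pairwise observed agreement. The excerpt does not name the IA measure the authors report, so this simple proportion-of-matches stands in for whatever chance-corrected coefficient they actually use; the annotator IDs and sense tags are hypothetical.

```python
from itertools import combinations

def pairwise_agreement(labels):
    """labels: dict mapping annotator ID -> list of sense tags, one per occurrence.
    Returns observed agreement (proportion of matching tags) for each annotator pair."""
    annotators = sorted(labels)
    n_items = len(labels[annotators[0]])
    return {
        (a, b): sum(x == y for x, y in zip(labels[a], labels[b])) / n_items
        for a, b in combinations(annotators, 2)
    }

# Hypothetical sense tags for five occurrences of one word by three annotators.
tags = {
    "A1": ["s1", "s2", "s2", "s3", "s1"],
    "A2": ["s1", "s2", "s1", "s3", "s1"],
    "A3": ["s1", "s1", "s2", "s3", "s2"],
}
for pair, score in sorted(pairwise_agreement(tags).items()):
    print(pair, round(score, 2))
```

Averaging these pairwise scores per word would allow the ten words in Table 1 to be compared directly, which is the kind of per-word IA variation the paper sets out to explain.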
