
Natural language processing using constraint-based grammars (Ann Copestake) - PowerPoint PPT Presentation



  1. Natural language processing using constraint-based grammars Ann Copestake University of Cambridge Computer Laboratory Center for the Study of Language and Information, Stanford aac@cl.cam.ac.uk

  2. Overview of course • NLP applications. State-of-the-art, deep vs shallow processing, deep processing modules. What are constraint-based grammars, why use constraint-based grammars? • Implementing and using constraint-based grammars. Formalism (inheritance, type constraints), semantic representation and generation, grammar engineering. • Test suites and efficiency issues. Some research issues: stochastic HPSG, multiword expressions, combining deep and shallow processing. Although these are mostly general issues, specific examples and demos will mostly be of LinGO technology.

  3. Overview of lecture 1 • NLP applications • Deep vs. shallow processing • Architecture of deep processing systems • Constraint-based grammar • Constraint-based grammar formalisms in NLP applications: why and how? • Demo of LKB and ERG

  4. Some NLP applications • spelling and grammar checking • screen readers and OCR • augmentative and alternative communication • machine aided translation • lexicographers’ tools • information retrieval • document classification (filtering, routing) • document clustering • information extraction • question answering • summarization

  5. • text segmentation • exam marking • report generation (mono- and multi-lingual) • machine translation • natural language interfaces to databases • email understanding • dialogue systems

  6. Example 1: Email routing Email sent to a single address (e.g. a company's) is sorted into categories depending on its subject, so that it can be routed to the right department. For instance: new orders; questions about orders; general queries; junk email. Most such systems depend on a mixture of types of evidence: e.g., words in the email body, the sender's address, the number of exclamation marks (useful for detecting junk email), and so on. Systems can be trained on manually classified data.
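To make the "trained on manually classified data" point concrete, here is a minimal routing-classifier sketch in Python, assuming scikit-learn is available; the categories, example messages, and feature choice are illustrative and not from the lecture.

    # A minimal email-routing sketch: a bag-of-words classifier trained on
    # manually labelled messages (the data and categories are illustrative).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    training_messages = [
        "I would like to order two laptops",
        "Has my order 4291 been shipped yet?",
        "What are your opening hours?",
        "WIN A FREE PRIZE!!! Click here now!!!",
    ]
    training_labels = ["new-order", "order-query", "general", "junk"]

    # Words in the email body are one kind of evidence; a real system would add
    # features such as the sender's address and punctuation counts.
    router = make_pipeline(CountVectorizer(), MultinomialNB())
    router.fit(training_messages, training_labels)

    print(router.predict(["When will my order 4292 be shipped?"]))
    # With realistic amounts of training data this would route to 'order-query'.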

  7. Example 2: automatic response to email Within-domain questions 1. Has my order number 4291 been shipped yet? 2. Is FD5 compatible with a Vaio 505G? 3. What is the speed of the Vaio 505G? 4. How long will 4291 take? 5. How long is FD5? Out of domain 1. My order did not arrive on time. You will be hearing from my lawyers. 2. What is the speed of an African swallow?

  8. How automatic question response works 1. Analyze the incoming question to produce a query in some formal meaning representation 2. If no possible query can be constructed, pass the question to a human 3. Otherwise, run the query against the relevant database 4. Generate a response
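A schematic version of this loop, with hypothetical helper functions standing in for the real analysis, database, and generation modules (a sketch of the control flow only, not of any particular system):

    # Sketch of the question-response loop from slide 8.
    # analyze(), run_query(), generate_response() and pass_to_human() are
    # hypothetical placeholders for the real components.
    def answer_email(question: str) -> str:
        query = analyze(question)          # step 1: question -> formal query
        if query is None:                  # step 2: no query could be built
            return pass_to_human(question)
        result = run_query(query)          # step 3: run the query on the database
        return generate_response(result)   # step 4: generate a natural-language reply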

  9. Database querying

     ORDER table:
     Order number | Date ordered | Date shipped
     4290         | 2/2/02       | 2/2/02
     4291         | 2/2/02       | 2/2/02
     4292         | 2/2/02       | -

     1. USER QUESTION: Have you shipped 4291?
     2. DB QUERY: order(number=4291, date shipped=?)
     3. RESPONSE TO USER: Order number 4291 was shipped on 2/2/02
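As a concrete (assumed) illustration of the lookup step, storing the ORDER table as simple records and running the example query:

    # The ORDER table from slide 9 as simple records; None marks "not yet shipped".
    orders = [
        {"number": 4290, "date_ordered": "2/2/02", "date_shipped": "2/2/02"},
        {"number": 4291, "date_ordered": "2/2/02", "date_shipped": "2/2/02"},
        {"number": 4292, "date_ordered": "2/2/02", "date_shipped": None},
    ]

    def answer_shipping_query(number: int) -> str:
        """order(number=N, date shipped=?) followed by a canned response."""
        for record in orders:
            if record["number"] == number:
                if record["date_shipped"] is None:
                    return f"Order number {number} has not been shipped yet"
                return f"Order number {number} was shipped on {record['date_shipped']}"
        return f"No order with number {number} was found"

    print(answer_shipping_query(4291))  # Order number 4291 was shipped on 2/2/02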

  10. Shallow and deep processing Most NLP applications fall into one of two categories: 1. Narrow coverage deep processing (e.g., email response): target is a fully described data or knowledge base. 2. Broad-coverage shallow processing (e.g., email routing): extract partial information from (relatively) unstructured text. Some applications are intermediate: good MT requires limited domains, but MT on unrestricted text can involve relatively deep processing (semantic transfer). Recently, systems for question answering on unrestricted text have been developed: some of these use relatively deep processing.

  11. Methodology The deep/shallow distinction is partially aligned with methodology: 1. Knowledge-intensive NLP methods (i.e., methods that require extensive ‘linguistic’ hand-coding) are generally used for deep processing (though also sometimes for shallow processing, like POS tagging). 2. Machine-learning techniques are generally used for shallow processing (though there have been some attempts to use them for deep processing). 3. Statistical NLP is always associated with machine learning, and generally with shallow processing, but most full systems combine statistical and symbolic techniques. Most deep processing assumes a limited domain, but this isn’t true of question answering and machine translation.

  12. Some history Natural language interfaces were the ‘classic’ NLP problem in the 70s and early 80s. LUNAR was a natural language interface to a database (Woods, 1978; note that most of the work was done several years earlier): it was capable of translating elaborate natural language expressions into database queries. SHRDLU (Winograd, 1973) was a system capable of participating in a dialogue about a microworld (the blocks world) and manipulating this world according to commands issued in English by the user. LUNAR and SHRDLU both exploited the limitations of the domain to make the natural language understanding problem tractable: for instance, in disambiguation, compound noun analysis, quantifier scope, and pronoun reference.

  13. Domain knowledge for disambiguation Schematically, in the blocks world: 1. Context: blue(b1), block(b1), on(b1,b2), red(b2), block(b2), pyramid(p3), green(p3), on(p3,ground) etc. 2. Input: Put the green pyramid on the blue block on the red blocks 3. Parser: (a) (Put (the (green pyramid on the blue block)) (on the red blocks)) (b) (Put (the green pyramid) (on the (blue block (on the red blocks)))) 4. Context resolves this to: (Put (the green pyramid) (on the (blue block (on the red blocks)))) But this approach doesn't scale up well: it is AI-complete for arbitrary domains.
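A toy Python sketch of how the context facts can select between the two parses: each reading is reduced to the description it presupposes, and only readings whose description is satisfied in the context survive. The encoding of facts and the hand-coded presuppositions are assumptions for illustration, not part of the lecture.

    # Blocks-world context from slide 13, as a set of ground facts.
    facts = {
        ("blue", "b1"), ("block", "b1"), ("on", "b1", "b2"),
        ("red", "b2"), ("block", "b2"),
        ("pyramid", "p3"), ("green", "p3"), ("on", "p3", "ground"),
    }
    entities = {f[1] for f in facts} | {f[2] for f in facts if len(f) == 3}

    def holds(*fact):
        return tuple(fact) in facts

    # Reading (a) presupposes "the green pyramid on the blue block";
    # reading (b) presupposes "the blue block on the red block(s)".
    def reading_a_possible():
        return any(holds("green", x) and holds("pyramid", x)
                   and holds("on", x, y) and holds("blue", y) and holds("block", y)
                   for x in entities for y in entities)

    def reading_b_possible():
        return any(holds("blue", x) and holds("block", x)
                   and holds("on", x, y) and holds("red", y) and holds("block", y)
                   for x in entities for y in entities)

    print(reading_a_possible())  # False: no green pyramid sits on a blue block
    print(reading_b_possible())  # True:  b1 (blue block) is on b2 (red block)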

  14. Developments since the 1970s No really good way of building large-scale detailed knowledge bases has been found, but there have been advances in deep NLP since LUNAR: 1. powerful, declarative grammar formalisms 2. more motivated approaches to semantics 3. better methodology for evaluation 4. modularity reduces the difficulty of porting between domains 5. large-scale, domain-independent grammars have been built 6. disambiguation etc. is yielding (slowly) to corpus-based methods 7. systems are much easier to build Commercial systems remain rare.

  15. Domain-independent linguistic processing Most linguistically-motivated deep processing work assumes a level of representation constructed by a (somewhat) domain-independent grammar that can be mapped into the domain-dependent application. For instance: 1. USER QUESTION: Have you shipped 4291? 2. SEMANTIC REP: ynq(2pers(y) and def(x, id(x,4291), ship(e,y,x) and past(e))) 3. DB QUERY: order(number=4291, date shipped=?) So the grammar doesn't have to be completely rewritten for each new application. (Currently deployed spoken dialogue systems don't do this, however.)
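A sketch of the domain-dependent mapping step, assuming the semantic representation arrives as nested Python tuples mirroring the formula above; the encoding and the helper names are assumptions for illustration only.

    # ynq(2pers(y) and def(x, id(x,4291), ship(e,y,x) and past(e)))
    # encoded as nested tuples: (predicate, arg1, arg2, ...)
    sem = ("ynq",
           ("and",
            ("2pers", "y"),
            ("def", "x", ("id", "x", 4291),
             ("and", ("ship", "e", "y", "x"), ("past", "e")))))

    def find(pred, term):
        """Return the first sub-term whose predicate is `pred` (depth-first)."""
        if isinstance(term, tuple):
            if term[0] == pred:
                return term
            for arg in term[1:]:
                found = find(pred, arg)
                if found is not None:
                    return found
        return None

    def to_db_query(sem):
        """Domain-dependent mapping: a 'ship' event about an identified order
        becomes order(number=N, date shipped=?)."""
        if find("ship", sem) and (ident := find("id", sem)):
            return {"relation": "order", "number": ident[2], "field": "date_shipped"}
        return None

    print(to_db_query(sem))
    # {'relation': 'order', 'number': 4291, 'field': 'date_shipped'}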

  16. Generic NLP application architecture • input preprocessing: speech recogniser or text preprocessor (non-trivial in languages like Chinese) or gesture recogniser. • morphological analysis • parsing: this includes syntax and compositional semantics • disambiguation • context module • text planning: the part of language generation that’s concerned with deciding what meaning to convey • tactical generation: converts meaning representations to strings. • morphological generation • output processing: text-to-speech, text formatter, etc.
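Each module maps one level of representation to the next, so the architecture amounts to composing stage functions; a bare-bones sketch with hypothetical stage names (the function names and types are illustrative only):

    # Hypothetical stage functions for the generic architecture of slide 16.
    def analyse(raw_input: str):
        tokens = preprocess(raw_input)           # input preprocessing
        morphs = morphological_analysis(tokens)  # morphological analysis
        parses = parse(morphs)                   # syntax + compositional semantics
        reading = disambiguate(parses)           # pick one analysis
        return resolve_in_context(reading)       # context module

    def generate(meaning) -> str:
        plan = text_plan(meaning)                # decide what meaning to convey
        surface = tactical_generation(plan)      # meaning representation -> string
        surface = morphological_generation(surface)
        return output_processing(surface)        # text-to-speech, formatting, ...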

  17. Natural language interface to a knowledge base (architecture diagram): on the analysis side, user input feeds INPUT PROCESSING, then MORPHOLOGY, then PARSING, then the KB INTERFACE/CONTEXT module, which consults the KB; on the generation side, the KB feeds KB OUTPUT/TEXT PLANNING, then TACTICAL GENERATION, then MORPHOLOGY GENERATION, then OUTPUT PROCESSING, which produces the output to the user.

  18. MT using semantic transfer (architecture diagram): source language input goes through INPUT PROCESSING, MORPHOLOGY and PARSING to SEMANTIC TRANSFER; the transferred representation then goes through TACTICAL GENERATION, MORPHOLOGY GENERATION and OUTPUT PROCESSING to produce the target language output.

  19. Candidates for the parser/generator 1. Finite-state or simple context-free grammars. Used for domain-specific grammars. 2. Augmented transition networks. Used in the 1970s, most significantly in LUNAR. 3. Induced probabilistic grammars 4. Constraint-based grammars • Linguistic frameworks: FUG, GPSG, LFG, HPSG, categorial grammar (various), TAG, dependency grammar, construction grammar, . . . • Formalisms: DCGs (Prolog), (typed) feature structures, TAG . . . • Systems: PATR, ANLT parser, XTAG parser, XLE, CLE, LKB, ALE . . . • Grammars: ANLT grammar, LinGO ERG, XTAG grammar, PARGRAM, CLE grammars.

  20. What is a constraint-based grammar (CBG)? A grammar expressed in a formalism which specifies a natural language using a set of independently specifiable constraints, without imposing any conditions on processing or processing order. For example, consider a conventional CFG:

     S -> NP VP
     VP -> V S
     VP -> V NP
     V -> believes
     V -> expects
     NP -> Kim
     NP -> Sandy
     NP -> Lee

     The rule notation suggests a procedural description (production rules).

  21. CFG rules as tree fragments. Each rule corresponds to a local tree: [S NP VP], [VP V S], [VP V NP], [V expects], [V believes], [NP Kim], [NP Sandy], [NP Lee]. A tree is licensed by the grammar if it can be put together from the tree fragments in the grammar.

  22. Example of valid tree: the sentence Kim believes Lee believes Sandy is licensed by the grammar, with the structure [S [NP Kim] [VP [V believes] [S [NP Lee] [VP [V believes] [NP Sandy]]]]].
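To check that this tree really is licensed by the toy CFG from slide 20, here is a small sketch using NLTK's chart parser; the choice of NLTK is an assumption of mine, not part of the lecture.

    import nltk

    # The toy CFG from slide 20.
    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        VP -> V S
        VP -> V NP
        V  -> 'believes' | 'expects'
        NP -> 'Kim' | 'Sandy' | 'Lee'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("Kim believes Lee believes Sandy".split()):
        print(tree)
    # Bracketed form of the tree from slide 22:
    # (S (NP Kim) (VP (V believes) (S (NP Lee) (VP (V believes) (NP Sandy)))))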
