
HANDLING UNCERTAINTY IN INFORMATION EXTRACTION
Maurice van Keulen and Mena Badieh Habib, URSW, 23 Oct 2011



  1. HANDLING UNCERTAINTY IN INFORMATION EXTRACTION
     Maurice van Keulen and Mena Badieh Habib, URSW, 23 Oct 2011

  2. INFORMATION EXTRACTION
     Information extraction turns unstructured text into a web of data. It is an inherently imperfect process. Consider the word “Paris” (source: GeoNames):
     • First name? City? …
     • City => over 60 cities are named “Paris”
     • Toponyms: 46% have more than 2 possible references
     “We humans happily deal with doubt and misinterpretation every day; why shouldn’t computers?”
     Goal: technology to support the development of domain-specific information extractors.

  3. SHERLOCK HOLMES-STYLE INFORMATION EXTRACTION
     “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
     Information extraction is about gathering enough evidence to decide upon a certain combination of annotations among many possible ones. Evidence comes from machine learning + the developer (generic knowledge) + the end user (instances).
     • Annotations are uncertain: maintain alternatives + probabilities throughout the process, including the result (see the sketch after this slide).
     • Unconventional starting point: not “no annotations”, but “no knowledge, hence anything is possible”.
     • The developer interactively defines the information extractor until it is “good enough”. Iterations: add knowledge, apply to sample texts, evaluate the result.
     • Scalability for storage, querying, and manipulation of annotations. From my own field (databases): probabilistic databases?
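To make “maintain alternatives + probabilities” concrete, here is a minimal sketch, assuming a simple (start, end, type, probability) record per candidate annotation; the Annotation class and the probability values are illustrative, not the authors’ actual data model.

```python
# A minimal sketch (illustrative, not the authors' data model) of keeping
# every alternative annotation with a probability instead of committing
# to a single interpretation early.
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    start: int    # index of the first word of the phrase
    end: int      # index one past the last word of the phrase
    etype: str    # entity type, e.g. "Person", "Toponym", "City"
    prob: float   # current belief that this annotation is correct

# Unconventional starting point: nothing is ruled out yet, so all
# candidate annotations coexist, each with its own probability.
candidates = [
    Annotation(0, 2, "Person", 0.8),   # "Paris Hilton" as a Person
    Annotation(0, 1, "City", 0.6),     # the first "Paris" as a City
    Annotation(5, 7, "Toponym", 0.5),  # the second "Paris Hilton" as a place
]
```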

  4. SHERLOCK HOLMES-STYLE INFORMATION EXTRACTION. EXAMPLE: NAMED ENTITY RECOGNITION (NER)
     “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
     [Figure: the sentence “Paris Hilton stayed in the Paris Hilton” with candidate annotations a1..a28, linked to the entity types Person, Toponym, and City through interactively defined relationships such as “isa” and “dnc”.]

  5. SHERLOCK HOLMES-STYLE INFORMATION EXTRACTION. EXAMPLE: NAMED ENTITY RECOGNITION (NER)
     For “Paris Hilton stayed in the Paris Hilton”: |A| = O(k·l·t). Linear?!?
     • k: length of the string
     • l: maximum length of phrases considered
     • t: number of entity types
     Here: 28 * 3 = 84 possible annotations. For the URSW call for papers: about 1300 words, say 20 types, say max length 6 (I saw one with 5) = roughly 1300 * 20 * 6 = roughly 156,000 possible annotations (see the counting sketch after this slide).
     Although conceptual/theoretical, this doesn’t seem to be a severe challenge for a probabilistic database. The problem is not in the amount of alternative annotations!
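A quick counting sketch of the |A| = O(k·l·t) estimate; count_candidates is a hypothetical helper written for this summary, and it counts spans exactly rather than with the slide’s rough k·l·t product.

```python
# Counting candidate annotations: every phrase of at most l consecutive
# words, combined with every entity type (a hypothetical helper).

def count_candidates(k: int, l: int, t: int) -> int:
    """(span, type) pairs for a k-word text with phrases of up to l words."""
    spans = sum(k - length + 1 for length in range(1, min(l, k) + 1))
    return spans * t

# Slide example: 7 words, any phrase length, 3 types -> 28 * 3 = 84.
assert count_candidates(7, 7, 3) == 84

# Rough k * l * t estimate for the URSW call for papers from the slide:
print(1300 * 20 * 6)  # roughly 156,000 possible annotations
```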

  6. ADDING KNOWLEDGE = CONDITIONING
     “Paris Hilton stayed in the Paris Hilton”, with two candidate annotations: a = “Paris” is a City, P(a) = 0.6; b = “Paris Hilton” is a Person, P(b) = 0.8.
     Independent:
     • ∅ (neither): 0.08
     • a ∧ ¬b: 0.12
     • b ∧ ¬a: 0.32
     • a ∧ b: 0.48
     Adding the rule “Person dnc City” makes a and b mutually exclusive (a ∧ b is no longer possible), so the probability mass is redistributed over the remaining worlds (see the sketch after this slide):
     • ∅ (neither): 0.15
     • a ∧ ¬b: 0.23
     • b ∧ ¬a: 0.62
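A minimal conditioning sketch, assuming explicit enumeration of the possible worlds over a and b (fine at this size; a probabilistic database such as MayBMS would use a compact representation instead); all names are mine.

```python
# Conditioning by hand (illustrative): start from independent annotation
# probabilities, forbid the world where a and b hold together, renormalize.
from itertools import product

P_A, P_B = 0.6, 0.8   # a: "Paris" is a City, b: "Paris Hilton" is a Person

# Joint distribution over the four possible worlds under independence.
worlds = {}
for a, b in product([True, False], repeat=2):
    worlds[(a, b)] = (P_A if a else 1 - P_A) * (P_B if b else 1 - P_B)

# Rule "Person dnc City": the world where both a and b hold is impossible.
allowed = {w: pr for w, pr in worlds.items() if not (w[0] and w[1])}

# Redistribute the probability mass over the remaining worlds.
z = sum(allowed.values())                      # 0.52
conditioned = {w: pr / z for w, pr in allowed.items()}

for w, pr in conditioned.items():
    print(w, round(pr, 2))   # (T,F) 0.23, (F,T) 0.62, (F,F) 0.15
```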

  7. ADDING KNOWLEDGE CREATES DEPENDENCIES
     The number of dependencies is orders of magnitude smaller than the number of possible combinations: in the running example, the added rules yield 8 + 8 + 15 = 31 dependencies, against up to 2^84 combinations of the 84 candidate annotations.
     [Figure: dependency graph over the candidate annotations of “Paris Hilton stayed …”, with edges labeled Person, City, dnc, and neq.]

  8. PROBLEM AND SOLUTION DIRECTIONS
     I’m looking for a scalable approach to reason and redistribute probability mass, considering all these dependencies, to find the remaining possible interpretations and their probabilities.
     • Feasibility of the approach hinges on efficient representation and conditioning of probabilistic dependencies.
     • Solution directions (in my own field): Koch et al., VLDB 2008 (conditioning in MayBMS); Getoor et al., VLDB 2008 (shared correlations).
     • This is not about only learning a joint probability distribution. Here I’d like to estimate a joint probability distribution based on initial independent observations, and then batch-by-batch add constraints/dependencies and recalculate (see the sketch after this slide).
     Are there techniques out there that fit this problem? Questions / suggestions?
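To illustrate the batch-by-batch recalculation described above, a sketch that generalizes the two-annotation example; initial_joint and condition are hypothetical helpers, and the explicit world enumeration used here is exactly what does not scale, which is the open problem of this slide.

```python
# Batch-by-batch conditioning (illustrative sketch, not a scalable system):
# estimate a joint distribution from independent observations, then apply
# batches of constraints and recalculate after each batch.
from itertools import product
from typing import Callable, Dict, Tuple

World = Tuple[bool, ...]

def initial_joint(probs) -> Dict[World, float]:
    """Joint distribution assuming all annotations are independent."""
    joint = {}
    for world in product([True, False], repeat=len(probs)):
        pr = 1.0
        for present, p in zip(world, probs):
            pr *= p if present else 1 - p
        joint[world] = pr
    return joint

def condition(joint: Dict[World, float],
              constraint: Callable[[World], bool]) -> Dict[World, float]:
    """Drop worlds that violate the constraint and renormalize the rest."""
    kept = {w: pr for w, pr in joint.items() if constraint(w)}
    z = sum(kept.values())
    return {w: pr / z for w, pr in kept.items()}

# Annotations: a ("Paris" is a City, 0.6), b ("Paris Hilton" is a Person, 0.8).
joint = initial_joint([0.6, 0.8])

# Constraints arrive in batches as the developer adds knowledge.
batches = [lambda w: not (w[0] and w[1])]   # Person dnc City
for c in batches:
    joint = condition(joint, c)

print(joint)   # matches the conditioned distribution on slide 6
```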
