
Chapter 8: Information Extraction (IE)



  1. Chapter 8: Information Extraction (IE)
     8.1 Motivation and Overview
     8.2 Rule-based IE
     8.3 Hidden Markov Models (HMMs) for IE
     8.4 Linguistic IE
     8.5 Entity Reconciliation
     8.6 IE for Knowledge Acquisition

  2. IE by Text Segmentation
     Source: a concatenation of structured elements with limited reordering and some missing fields.
     Examples: addresses, bibliographic records.
     Address: "4089 Whispering Pines Nobel Drive San Diego CA 92122"
       → House number: 4089 | Building: Whispering Pines | Road: Nobel Drive | City: San Diego | State: CA | Zip: 92122
     Bibliographic record: "P.P. Wangikar, T.P. Graycar, D.A. Estell, D.S. Clark, J.S. Dordick (1993) Protein and Solvent Engineering of Subtilisin BPN' in Nearly Anhydrous Organic Media. J. Amer. Chem. Soc. 115, 12231-12237."
       → Author: P.P. Wangikar, T.P. Graycar, D.A. Estell, D.S. Clark, J.S. Dordick | Year: 1993 | Title: Protein and Solvent Engineering of Subtilisin BPN' in Nearly Anhydrous Organic Media | Journal: J. Amer. Chem. Soc. | Volume: 115 | Page: 12231-12237
     Source: Sunita Sarawagi: Information Extraction Using HMMs, http://www.cs.cmu.edu/~wcohen/10-707/talks/sunita.ppt

  3. 8.3 Hidden Markov Models (HMMs) for IE
     Idea: the text document is assumed to be generated by a regular grammar (i.e., a finite-state automaton, FSA) with some probabilistic variation and uncertainty → stochastic FSA = hidden Markov model (HMM).
     Intuitive explanation:
     • associate with each state a tag or symbol category (e.g., noun, verb, phone number, person name) that matches some words in the text;
     • the instances of the category are given by a probability distribution of possible outputs in this state;
     • the goal is to find a state sequence from an initial to a final state with maximum probability of generating the given text;
     • the outputs are known, but the state sequence cannot be observed, hence the name hidden Markov model.

  4. Hidden Markov Models in a Nutshell
     [Figure: four-state HMM (S1–S4) with transition probabilities and per-state emission probabilities over the alphabet {A, C}]
     • Doubly stochastic models
     • Efficient dynamic-programming algorithms exist for
       – finding Pr(S)
       – finding the highest-probability path P that maximizes Pr(S, P) (Viterbi)
     • Training the model (Baum-Welch algorithm)
     Source: Sunita Sarawagi: Information Extraction Using HMMs, http://www.cs.cmu.edu/~wcohen/10-707/talks/sunita.ppt

  5. Hidden Markov Model (HMM): Formal Definition
     An HMM is a discrete-time, finite-state Markov model with
     • state set S = {s_1, ..., s_n}; the state in step t is denoted X(t),
     • initial state probabilities p_i (i = 1, ..., n),
     • transition probabilities p: S × S → [0,1], denoted p(s_i → s_j),
     • output alphabet Σ = {w_1, ..., w_m}, and
     • state-specific output probabilities q: S × Σ → [0,1], denoted q(s_i ↑ w_k)
       (or transition-specific output probabilities).
     The probability of emitting the output sequence o_1 ... o_k ∈ Σ^k is
       Σ_{x_1 ... x_k ∈ S^k}  Π_{i=1}^{k}  p(x_{i-1} → x_i) · q(x_i ↑ o_i)    with  p(x_0 → x_1) := p(x_1),
     which can be computed iteratively with clever caching and reuse of intermediate results („memoization“):
       α_i(t) := P[o_1 ... o_{t-1}, X(t) = i]
       α_j(t+1) = Σ_{i=1}^{n} α_i(t) · p(s_i → s_j) · q(s_i ↑ o_t),    α_i(1) = p(i)
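
To make the memoized α recursion concrete, here is a minimal Python sketch of the forward computation described above. The dict-based HMM encoding (`states`, `init`, `trans`, `emit`) is an assumption made for illustration, not part of the slides; any transition or emission not listed is treated as probability 0.

```python
# Minimal sketch of the forward algorithm (alpha recursion) from this slide.
# Convention per the slide: state X(t) emits o_t while moving to X(t+1).

def forward(states, init, trans, emit, outputs):
    """Return P[o_1 ... o_k], computed iteratively via
    alpha_i(t) := P[o_1 ... o_{t-1}, X(t) = i]."""
    alpha = {i: init.get(i, 0.0) for i in states}          # alpha_i(1) = p(i)
    for o in outputs:
        # alpha_j(t+1) = sum_i alpha_i(t) * p(s_i -> s_j) * q(s_i ^ o_t)
        alpha = {j: sum(alpha[i] * trans.get((i, j), 0.0) * emit.get((i, o), 0.0)
                        for i in states)
                 for j in states}
    return sum(alpha.values())                             # sum_j alpha_j(k+1)
```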

  6. Example of a Hidden Markov Model
     States: start, title, author, address, email, abstract, section, ...
     p(start) = 1
     p[author → author] = 0.5     q[author ↑ <firstname>] = 0.1
     p[author → address] = 0.2    q[author ↑ <initials>] = 0.2
     p[author → email] = 0.3      q[author ↑ <lastname>] = 0.5
     ...                          ...
     q[email ↑ @] = 0.2
     q[email ↑ .edu] = 0.4
     q[email ↑ <lastname>] = 0.3
     ...
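
As an illustration, the fragment of parameters shown on this slide could be encoded for the forward sketch above roughly as follows. The representation is an assumption, and it is deliberately incomplete: only the probabilities visible on the slide are included, so missing entries would simply count as probability 0.

```python
# Partial encoding of the example HMM above (illustration only).

states = ["start", "title", "author", "address", "email", "abstract", "section"]

init = {"start": 1.0}                       # p(start) = 1

trans = {                                   # p[s_i -> s_j]
    ("author", "author"): 0.5,
    ("author", "address"): 0.2,
    ("author", "email"): 0.3,
    # ... remaining transitions not shown on the slide
}

emit = {                                    # q[s_i ^ w_k]
    ("author", "<firstname>"): 0.1,
    ("author", "<initials>"): 0.2,
    ("author", "<lastname>"): 0.5,
    ("email", "@"): 0.2,
    ("email", ".edu"): 0.4,
    ("email", "<lastname>"): 0.3,
    # ... remaining emissions not shown on the slide
}
```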

  7. Example
     [Figure: HMM with begin state 0, states 1–4, and end state 5 over the output alphabet {A, C, G, T}; each state has its own emission distribution and transition probabilities]
     Pr(AAC, π) = a_01 · b_1(A) · a_11 · b_1(A) · a_13 · b_3(C) · a_35
                = 0.5 · 0.4 · 0.2 · 0.4 · 0.8 · 0.3 · 0.6
     Source: Sunita Sarawagi: Information Extraction Using HMMs, http://www.cs.cmu.edu/~wcohen/10-707/talks/sunita.ppt
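
Multiplying out the path probability above gives 0.5 · 0.4 · 0.2 · 0.4 · 0.8 · 0.3 · 0.6 = 0.002304. Such products shrink quickly with sequence length, which is why implementations typically work with log-probabilities.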

  8. Training of HMMs
     Maximum-likelihood estimation (MLE) of the HMM parameters (based on fully tagged training sequences):
       p(s_i → s_j) = #transitions(s_i → s_j) / Σ_x #transitions(s_i → s_x)
       q(s_i ↑ w_k) = #outputs(s_i ↑ w_k) / Σ_o #outputs(s_i ↑ o)
     or use a special case of EM (the Baum-Welch algorithm) to incorporate unlabeled data (training with output sequences only, state sequences unknown).
     Learning the HMM structure (number of states, connections): some work exists, but it is very difficult.
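
A minimal sketch of the two counting-based MLE estimators above, assuming fully tagged training data given as parallel (state sequence, output sequence) pairs; this data layout is an assumption made for illustration.

```python
from collections import Counter

# MLE sketch: estimate p(s_i -> s_j) and q(s_i ^ w_k) by counting over
# fully tagged training sequences.

def train_hmm(tagged_sequences):
    trans_counts, emit_counts = Counter(), Counter()
    trans_totals, emit_totals = Counter(), Counter()
    for states_seq, outputs in tagged_sequences:
        for s, o in zip(states_seq, outputs):               # emission counts
            emit_counts[(s, o)] += 1
            emit_totals[s] += 1
        for s, s_next in zip(states_seq, states_seq[1:]):    # transition counts
            trans_counts[(s, s_next)] += 1
            trans_totals[s] += 1
    trans = {k: c / trans_totals[k[0]] for k, c in trans_counts.items()}
    emit = {k: c / emit_totals[k[0]] for k, c in emit_counts.items()}
    return trans, emit

# Hypothetical tagged address fragment, for illustration only:
# trans, emit = train_hmm([
#     (["housenumber", "road", "city"], ["4089", "Nobel Drive", "San Diego"]),
# ])
```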

  9. Viterbi Algorithm for the Most Likely State Sequence
     Find  argmax_{x_1 ... x_t}  P[state sequence x_1 ... x_t | output o_1 ... o_t].
     The Viterbi algorithm (uses dynamic programming):
       δ_i(t) := max_{x_1 ... x_{t-1}} P[x_1 ... x_{t-1}, o_1 ... o_{t-1}, X(t) = i]
       δ_i(1) = p(i)
       δ_j(t+1) = max_{i=1,...,n} δ_i(t) · p(s_i → s_j) · q(s_i ↑ o_t)
     Store the argmax in each step (backpointers) to recover the best state sequence.
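
A compact Python sketch of this δ recursion with backpointers, using the same dict-based HMM encoding assumed for the forward sketch above (the encoding is an assumption, not part of the slides). It assumes the output sequence has nonzero probability under the model.

```python
# Viterbi sketch following the delta recursion on this slide.
# Convention as above: state s_i emits o_t while moving to s_j, so the
# returned path has len(outputs) + 1 states X(1), ..., X(k+1).

def viterbi(states, init, trans, emit, outputs):
    delta = {i: init.get(i, 0.0) for i in states}       # delta_i(1) = p(i)
    backptr = []                                        # stored argmax per step
    for o in outputs:
        new_delta, pointers = {}, {}
        for j in states:
            # delta_j(t+1) = max_i delta_i(t) * p(s_i -> s_j) * q(s_i ^ o_t)
            best_i, best_val = None, 0.0
            for i in states:
                val = delta[i] * trans.get((i, j), 0.0) * emit.get((i, o), 0.0)
                if val > best_val:
                    best_i, best_val = i, val
            new_delta[j], pointers[j] = best_val, best_i
        delta = new_delta
        backptr.append(pointers)
    # follow the backpointers from the best final state
    state = max(delta, key=delta.get)
    path = [state]
    for pointers in reversed(backptr):
        state = pointers[state]
        path.append(state)
    return list(reversed(path))
```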

  10. HMMs for IE
      The following 6 slides are from: Sunita Sarawagi: Information Extraction Using HMMs, http://www.cs.cmu.edu/~wcohen/10-707/talks/sunita.ppt

  11. Combining HMMs with Dictionaries
      • Augment the dictionary
        – Example: a list of cities
      • Exploit functional dependencies
        – Example: Santa Barbara → USA; Piskinov → Georgia
      Example: "2001 University Avenue, Kendall Sq., Piskinov, Georgia"
        House number: 2001 | Road Name: University Avenue | Area: Kendall Sq. | City: Piskinov | State: Georgia
        House number: 2001 | Road Name: University Avenue | Area: Kendall Sq. | City: Piskinov | Country: Georgia
      (two candidate labelings of the last field; the functional dependency Piskinov → Georgia identifies "Georgia" as a Country here)
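
A small sketch of the functional-dependency idea: a lookup table City → Country (its contents here are just the slide's two examples) can override an ambiguous State/Country label after segmentation. The field names and function name are assumptions for illustration.

```python
# Sketch of exploiting a functional dependency City -> Country, as in the
# "Piskinov, Georgia" example above.

CITY_TO_COUNTRY = {"Santa Barbara": "USA", "Piskinov": "Georgia"}

def apply_city_country_fd(fields):
    """fields: list of (label, text) pairs produced by the segmenter."""
    city = next((text for label, text in fields if label == "City"), None)
    expected = CITY_TO_COUNTRY.get(city)
    if expected is None:
        return fields
    # Relabel a State field that actually names the expected country.
    return [("Country", text) if label == "State" and text == expected
            else (label, text)
            for label, text in fields]

fields = [("House number", "2001"), ("Road Name", "University Avenue"),
          ("Area", "Kendall Sq."), ("City", "Piskinov"), ("State", "Georgia")]
print(apply_city_country_fd(fields))   # last field becomes ('Country', 'Georgia')
```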

  12. Combining HMMs with Frequency Constraints
      • Include constraints of the form: the same tag cannot appear in two disconnected segments
        – e.g., the title in a citation cannot appear twice
        – a street name cannot appear twice
      • Not relevant for named-entity-tagging kinds of problems
      → extend the Viterbi algorithm with constraint handling
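
As a small illustration of this constraint type, the sketch below checks whether a tag sequence contains the same tag in two disconnected segments; a constrained Viterbi search would prune candidate paths that violate it. The function name is an assumption.

```python
from itertools import groupby

# "The same tag cannot appear in two disconnected segments":
# e.g. Title ... Author ... Title violates the constraint.

def violates_segment_constraint(tags):
    segments = [tag for tag, _ in groupby(tags)]    # collapse runs of equal tags
    return len(segments) != len(set(segments))      # repeated segment tag => violation

print(violates_segment_constraint(["Title", "Title", "Author", "Title"]))  # True
print(violates_segment_constraint(["Title", "Author", "Journal"]))         # False
```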

  13. Comparative Evaluation
      • Naïve model: one state per element in the HMM
      • Independent HMM: one HMM per element
      • Rule-learning method: Rapier
      • Nested model: each state in the naïve model replaced by an HMM

  14. Results: Comparative Evaluation
      Dataset                  | Instances | Elements
      IITB student addresses   | 2388      | 17
      Company addresses        | 769       | 6
      US addresses             | 740       | 6
      The Nested model does best in all three cases (from Borkar 2001).

  15. Results: Effect of Feature Hierarchy
      Feature selection showed at least a 3% increase in accuracy.

  16. Results: Effect of Training Data Size
      HMMs are fast learners: accuracy comes very close to the maximum with just 50 to 100 addresses.

  17. Semi-Markov Models for IE
      The following 4 slides are from: William W. Cohen: A Century of Progress on Information Integration: a Mid-Term Report, http://www.cs.cmu.edu/~wcohen/webdb-talk.ppt

  18. Features for Information Extraction
      Sentence: "I met Prof. F. Douglas at the zoo."
      t:  1      2      3       4       5        6      7      8
      x:  I      met    Prof.   F.      Douglas  at     the    zoo.
      y:  Other  Other  Person  Person  Person   other  other  Location
      Question: how can we guide this using a dictionary D?
      Simple answer: make membership in D a feature f_d.
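
A minimal sketch of turning membership in a dictionary D into a feature f_d, as suggested on this slide; the dictionary contents and the span-window trick are illustrative assumptions.

```python
# Dictionary-membership feature f_d: fires when the token (or a short span
# ending at it) appears in D.

D = {"F. Douglas", "Douglas"}               # tiny person-name dictionary (example)

def f_d(tokens, t, dictionary=D):
    """1 if token t, or a 2- or 3-token span ending at t, is in the dictionary."""
    spans = [" ".join(tokens[max(0, t - k):t + 1]) for k in range(3)]
    return int(any(span in dictionary for span in spans))

tokens = ["I", "met", "Prof.", "F.", "Douglas", "at", "the", "zoo."]
print([f_d(tokens, t) for t in range(len(tokens))])
# -> [0, 0, 0, 0, 1, 0, 0, 0]: the feature fires at "Douglas"
```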

  19. Existing Markov Models for IE
      • Feature vector for each position: previous label, i-th label, word i & its neighbors
      • Parameters: a weight W for each feature (vector)
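
A sketch of a per-position feature vector of the kind described here (previous label, i-th label, and the word with its neighbors); the feature-naming convention is an assumption. Each such feature would receive a weight W, as the slide notes.

```python
# Per-position features for a Markov-style tagger (illustrative naming scheme).

def position_features(tokens, i, label, prev_label):
    return {
        f"prev_label={prev_label}",
        f"label={label}",
        f"label={label}|word={tokens[i]}",
        f"label={label}|word-1={tokens[i - 1] if i > 0 else '<s>'}",
        f"label={label}|word+1={tokens[i + 1] if i + 1 < len(tokens) else '</s>'}",
        f"prev_label={prev_label}|label={label}",
    }

tokens = ["I", "met", "Prof.", "F.", "Douglas", "at", "the", "zoo."]
print(sorted(position_features(tokens, 4, "Person", "Person")))
```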

  20. Semi-Markov Models for IE
      Token-level labeling:
        t:  1      2      3       4       5        6      7      8
        x:  I      met    Prof.   F.      Douglas  at     the    zoo.
        y:  Other  Other  Person  Person  Person   other  other  Location
      Segment-level labeling (segments [l_j, u_j]):
        l_1 = u_1 = 1;  l_2 = u_2 = 2;  l_3 = 3, u_3 = 5;  l_4 = u_4 = 6;  l_5 = u_5 = 7;  l_6 = u_6 = 8
        x:  I | met | Prof. F. Douglas | at | the | zoo.
        y:  Other | Other | Person | other | other | Location
      COST: requires additional search in Viterbi; learning and inference are slower by O(maxNameLength).
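
A compact sketch of segment-level Viterbi for a semi-Markov model, showing where the extra O(maxNameLength) factor enters: the recursion additionally searches over segment lengths up to `max_len`. The `score` function is a placeholder assumption; a trained model would compute it from weighted segment features.

```python
# Semi-Markov Viterbi sketch: the best labeling is over segments (l, u, y)
# rather than single tokens.

def semi_markov_viterbi(tokens, labels, score, max_len=4):
    """Return the best segmentation as a list of 1-based (l, u, label) triples."""
    n = len(tokens)
    # V[u][y]: best score of a labeling of tokens[0:u] whose last segment has
    # label y; back[u][y] stores (l, previous label) for backtracking.
    V = [{} for _ in range(n + 1)]
    back = [{} for _ in range(n + 1)]
    V[0]["<start>"] = 0.0
    for u in range(1, n + 1):
        for y in labels:
            best = None
            for d in range(1, min(max_len, u) + 1):      # extra O(max_len) search
                l = u - d                                # segment = tokens[l:u]
                for y_prev, prev_score in V[l].items():
                    s = prev_score + score(tokens[l:u], y, y_prev)
                    if best is None or s > best[0]:
                        best = (s, l, y_prev)
            V[u][y], back[u][y] = best[0], (best[1], best[2])
    # backtrack from the best label at position n
    u, y = n, max(V[n], key=V[n].get)
    segments = []
    while u > 0:
        l, y_prev = back[u][y]
        segments.append((l + 1, u, y))                   # 1-based inclusive span
        u, y = l, y_prev
    return list(reversed(segments))

def toy_score(segment, label, prev_label):
    # placeholder scorer, just to make the sketch runnable:
    # reward a Person segment that starts with an honorific
    return 1.0 if label == "Person" and segment[0] == "Prof." else 0.0

tokens = ["I", "met", "Prof.", "F.", "Douglas", "at", "the", "zoo."]
print(semi_markov_viterbi(tokens, ["Person", "Location", "Other"], toy_score))
# prints one maximum-scoring segmentation (ties are broken arbitrarily)
```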

  21. Features for Semi-Markov Models
      • previous label
      • j-th label
      • start of segment S_j
      • end of segment S_j
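
A small sketch of segment-level features of the kind listed here (previous label, j-th label, properties of the start and end of S_j); the naming convention and the extra length/capitalization features are illustrative assumptions, not from the slide.

```python
# Features for a segment S_j covering 1-based positions l..u.

def segment_features(tokens, l, u, label, prev_label):
    first, last = tokens[l - 1], tokens[u - 1]
    return {
        f"prev_label={prev_label}|label={label}",
        f"label={label}|first={first}",
        f"label={label}|last={last}",
        f"label={label}|length={u - l + 1}",
        f"label={label}|starts_capitalized={first[:1].isupper()}",
    }

tokens = ["I", "met", "Prof.", "F.", "Douglas", "at", "the", "zoo."]
print(sorted(segment_features(tokens, 3, 5, "Person", "Other")))
```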

  22. Problems and Extensions of HMMs
      • Individual output letters/words may not show learnable patterns
        → output words can be entire lexical classes (e.g., numbers, zip codes); see the sketch after this list
      • Geared for flat sequences, not for structured text documents
        → use a nested HMM where each state can hold another HMM
      • Cannot capture long-range dependencies (e.g., in addresses: with the first word being „Mr.“ or „Mrs.“, the probability of later seeing a P.O. box rather than a street address decreases substantially)
        → use dictionary lookups in critical states and/or combine HMMs with other techniques for long-range effects
        → use semi-Markov models
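
A minimal sketch of the first extension: mapping raw tokens to lexical classes so that emission probabilities q(s ↑ w) are estimated over classes such as numbers or zip codes rather than individual strings. The class inventory and regexes are assumptions for illustration.

```python
import re

# Map raw tokens to lexical classes before estimating/applying HMM emissions.

def lexical_class(token):
    if re.fullmatch(r"\d{5}(-\d{4})?", token):
        return "<zipcode>"
    if re.fullmatch(r"\d+", token):
        return "<number>"
    if re.fullmatch(r"[A-Z]\.", token):
        return "<initial>"
    if token[:1].isupper():
        return "<capitalized>"
    return token.lower()

print([lexical_class(t) for t in ["4089", "Whispering", "Pines", "92122"]])
# -> ['<number>', '<capitalized>', '<capitalized>', '<zipcode>']
```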

  23. 8.4 Linguistic IE
      Preprocess the input text using NLP methods:
      • Part-of-speech (PoS) tagging: each word (group) → grammatical role (NP, ADJ, VT, etc.)
      • Chunk parsing: sentence → labeled segments (temporal adverb phrase, etc.)
      • Link parsing: bridges between logically connected segments
      NLP-driven IE tasks:
      • Named Entity Recognition (NER)
      • Coreference resolution (anaphor resolution)
      • Template element construction
      • Template relation construction
      • Scenario template construction
      • ...
      • Logical representation of sentence semantics (e.g., FrameNet)
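
As one possible way to run the first preprocessing steps listed above, here is a sketch using NLTK; the choice of toolkit is an assumption (the slides do not prescribe one), and it requires the NLTK data packages punkt, averaged_perceptron_tagger, maxent_ne_chunker, and words to be installed.

```python
import nltk

# PoS tagging and shallow NER with NLTK, as one illustration of the
# preprocessing steps listed above (toolkit choice is an assumption).

sentence = "P. P. Wangikar published the paper in San Diego in 1993."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)        # PoS tagging: word -> grammatical role
tree = nltk.ne_chunk(tagged)         # named-entity chunking on top of PoS tags

print(tagged[:4])
for subtree in tree.subtrees():
    if subtree.label() != "S":       # print only the recognized entities
        print(subtree.label(), [word for word, tag in subtree.leaves()])
```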
