Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
Yadollah Yaghoobzadeh and Hinrich Schütze, LMU Munich, Germany
Presented by: Xiaotao Gu

Knowledge Graph/Base
Image retrieved from: https://www.ambiverse.com/knowledge-graphs-encyclopaedias-for-machines/
How to extract high-quality feature representations?
Character-level Representation

[Figure: a character-level CNN over the example word "spanish". A lookup table maps each character (e.g. "s") to a character embedding. Convolution filters of width 2 produce a 3 * 6 feature map and filters of width 4 produce a 4 * 4 feature map; max pooling over each feature map yields fixed-size vectors, which are concatenated.]
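Below is a minimal sketch of such a character-level CNN in PyTorch. The filter counts and widths mirror the figure (3 filters of width 2, 4 filters of width 4); the alphabet size and embedding dimension are illustrative assumptions, not the paper's hyperparameters.

    import torch
    import torch.nn as nn

    class CharCNN(nn.Module):
        def __init__(self, alphabet_size=70, char_dim=16):
            super().__init__()
            # Lookup table: one embedding per character.
            self.char_emb = nn.Embedding(alphabet_size, char_dim)
            # Two filter widths, as in the figure: 3 filters of width 2
            # and 4 filters of width 4.
            self.conv2 = nn.Conv1d(char_dim, 3, kernel_size=2)
            self.conv4 = nn.Conv1d(char_dim, 4, kernel_size=4)

        def forward(self, char_ids):
            # char_ids: (batch, seq_len) integer character indices.
            x = self.char_emb(char_ids).transpose(1, 2)  # (batch, char_dim, seq_len)
            # Convolve, max-pool over positions, then concatenate.
            f2 = self.conv2(x).max(dim=2).values         # (batch, 3)
            f4 = self.conv4(x).max(dim=2).values         # (batch, 4)
            return torch.cat([f2, f4], dim=1)            # (batch, 7)

For the 7-character input "spanish", the width-2 filters see 6 positions (the 3 * 6 feature map) and the width-4 filters see 4 positions (the 4 * 4 feature map), matching the figure.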
The semantic meaning of words is useful for typing! E.g. “XXX Lake” implies $LOCATION
Image retrieved from: Professor Julia Hockenmaier’s slides for CS447: Natural Language Processing
Word embeddings based on the Distributional Hypothesis
Entity embedding = average word embedding
E(Lake Michigan) = 0.5 * { E(Lake) + E(Michigan) }

Subwords/morphology within words are also useful for typing!
E.g. “Spanish” implies $LANGUAGE
FastText: n-gram subword embeddings! (sketched below)
FastText: https://research.fb.com/fasttext/
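The sketch below illustrates both ideas: averaging word vectors to get an entity embedding, and extracting FastText-style character n-grams. The toy vectors and the n-gram extractor are illustrative stand-ins, not the authors' trained models.

    import numpy as np

    def entity_embedding(name, word_vectors):
        # E(Lake Michigan) = 0.5 * (E(Lake) + E(Michigan))
        return np.mean([word_vectors[w] for w in name.split()], axis=0)

    def char_ngrams(word, n_min=3, n_max=6):
        # FastText-style subword units: pad the word with boundary
        # markers, then take all character n-grams. FastText sums the
        # embeddings of these n-grams with the word's own vector.
        padded = "<" + word.lower() + ">"
        return [padded[i:i + n]
                for n in range(n_min, n_max + 1)
                for i in range(len(padded) - n + 1)]

    word_vectors = {"Lake": np.ones(3), "Michigan": np.zeros(3)}  # toy vectors
    print(entity_embedding("Lake Michigan", word_vectors))  # [0.5 0.5 0.5]
    print(char_ngrams("Spanish", 3, 3))
    # ['<sp', 'spa', 'pan', 'ani', 'nis', 'ish', 'sh>']

Subword n-grams let the model pick up suffixes such as "-ish", which is why "Spanish" can be typed as $LANGUAGE even when the full word is rare.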
Capture Context Information (Entity-level Representation)

[Figure: every mention of the entity "Lake Michigan" in the corpus is replaced with a unique identifier, "Lake_Michigan"; running SkipGram over this corpus then yields an entity embedding for the identifier.]
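A minimal sketch of this entity-level embedding, assuming gensim (>= 4.0 API) for SkipGram; the two-sentence corpus and the simple string replacement stand in for a real corpus with proper mention detection.

    from gensim.models import Word2Vec

    corpus = [
        "Lake Michigan is one of the five Great Lakes",
        "Chicago lies on the shore of Lake Michigan",
    ]
    # Replace each entity mention with its unique identifier.
    tokenized = [s.replace("Lake Michigan", "Lake_Michigan").split()
                 for s in corpus]

    model = Word2Vec(tokenized, vector_size=50, window=5,
                     min_count=1, sg=1)  # sg=1 selects SkipGram
    entity_vec = model.wv["Lake_Michigan"]  # embedding of the identifier

Because the identifier occurs in exactly the contexts where the entity is mentioned, its learned vector captures the entity's typical contexts directly.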
Task: Entity Typing
Dataset: FIGMENT
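Entity typing is multi-label: one entity can have several fine-grained types. A common setup, sketched below, scores each type independently with a sigmoid over the entity representation; the layer sizes and type count are assumptions for illustration.

    import torch
    import torch.nn as nn

    class TypingModel(nn.Module):
        def __init__(self, rep_dim, num_types=102, hidden=300):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.Linear(rep_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, num_types),
            )

        def forward(self, entity_rep):
            # entity_rep: concatenation of the character-, word- and
            # entity-level representations of one entity.
            return torch.sigmoid(self.scorer(entity_rep))
    # Trained with binary cross-entropy, since types are not mutually exclusive.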
Evaluation Metrics
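The metric details on this slide did not survive extraction. As a generic illustration, micro-averaged F1 over predicted (entity, type) pairs, a standard measure for multi-label entity typing, can be computed as:

    def micro_f1(gold, pred):
        # gold, pred: dicts mapping entity -> set of true/predicted types.
        tp = sum(len(gold[e] & pred.get(e, set())) for e in gold)
        n_pred = sum(len(pred.get(e, set())) for e in gold)
        n_gold = sum(len(gold[e]) for e in gold)
        precision = tp / n_pred if n_pred else 0.0
        recall = tp / n_gold if n_gold else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)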
Models compared:
baseline
Character-level
Word-level
Word + Character
Entity-level
Entity + Word + Character