Fine-Grained Temporal Relation Extraction
Siddharth Vashishtha Benjamin Van Durme Aaron Steven White
University of Rochester Johns Hopkins University University of Rochester
Data and code available at: http://decomp.io
Consider the narrative: At 3pm, a boy broke his neighbor’s window. He was running away, when the neighbor rushed out to confront him. His parents were called but couldn’t arrive for two hours because they were still at work.
Each predicate in this input document denotes some event.
Two components are crucial:
1. Relations between events
2. Durations of individual events
Outline: Background, Methodology, Model, Results, Model Analysis, Conclusion
A standard approach: pairwise categorical temporal relation extraction based on Allen relations (Allen, 1983) (Pustejovsky et al., 2003; Styler IV et al., 2014; Minard et al., 2016).
For example: X takes place before Y; X overlaps with Y; X finishes Y.
This approach underlies annotated corpora such as TimeBank (Pustejovsky et al., 2003), TimeBank-Dense (Cassidy et al., 2014), and Richer Event Description (O’Gorman et al., 2016), the TempEval shared tasks (Verhagen et al., 2007, 2010; UzZaman et al., 2013), and related annotation efforts (Fokkens et al., 2013).
Extraction systems built on these resources range from feature-based classifiers (Mani et al., 2006; Bethard, 2013; Lin et al., 2015) and hybrid or sieve-based methods (D’Souza and Ng, 2013; Chambers et al., 2014; Mirza and Tonelli, 2016) to structured learning (Leeuwenberg and Moens, 2017; Ning et al., 2017, 2018) and neural models (Tourille et al., 2017; Cheng and Miyao, 2017; Leeuwenberg and Moens, 2018; Dligach et al., 2017).
A separate line of work predicts the durations of individual events (Pan et al., 2007; Gusev et al., 2011; Williams and Katz, 2012).
These resources also annotate time expressions, e.g.:
<TIMEX TYPE="TIME"> twelve o’clock noon </TIMEX> <TIMEX TYPE="DATE"> fiscal 1989’s fourth quarter </TIMEX>
However, such approaches have been used to create relative timelines from the categorical temporal relations (Leeuwenberg and Moens, 2018).
We propose a temporal relation representation that puts event duration front and center.
Example: Sam broke the window and ran away.
Each event is placed on a bounded reference-interval (0–100) and annotated with a start-point and an end-point; in this example, broke spans roughly [5, 20] and ran roughly [25, 60].
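Real-valued start- and end-points subsume the categorical Allen-style relations from the background section: a coarse category can be read off any two intervals. A minimal sketch (not the authors' code; the interval values echo the broke/ran example above):

```python
# Sketch: deriving a coarse categorical relation from two events'
# real-valued (start, end) points on a bounded reference interval.
# The categories below are a simplified Allen-style inventory.

def allen_relation(e1, e2):
    """Map two (start, end) intervals to a coarse categorical relation."""
    b1, e1_end = e1
    b2, e2_end = e2
    if e1_end < b2:
        return "before"
    if e2_end < b1:
        return "after"
    if b1 == b2 and e1_end == e2_end:
        return "equal"
    if b1 <= b2 and e2_end <= e1_end:
        return "contains"
    if b2 <= b1 and e1_end <= e2_end:
        return "is_contained"
    return "overlaps"

# Slider values for "Sam broke the window and ran away" (0-100 interval)
broke = (5, 20)
ran = (25, 60)
print(allen_relation(broke, ran))  # -> "before"
```

The real-valued representation is strictly richer: it also records how long each event lasts and how far apart the events are, which the categorical labels discard.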
We develop a crowdsourced annotation protocol to extract fine-grained temporal relations, building on decompositional semantics annotation methods (White et al., 2016; Zhang et al., 2017).
Annotation scale: ~30k / 70k
Event Durations: annotators select each event's likely duration from ordinal categories (minutes, hours, weeks, years, etc.).
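Durations are thus annotated on an ordinal scale rather than as exact times. As an illustrative sketch, a duration in seconds could be bucketed into such categories; the category inventory and boundaries below are assumptions for illustration, not the annotation guidelines:

```python
# Hedged sketch: bucketing a duration (in seconds) into ordinal
# duration classes. Boundaries are illustrative assumptions.
import bisect

CATEGORIES = ["instantaneous", "seconds", "minutes", "hours", "days",
              "weeks", "months", "years", "decades", "centuries", "forever"]
# Upper bound in seconds for each non-final category
BOUNDS = [1, 60, 3600, 86400, 604800, 2629800, 31557600,
          315576000, 3155760000, 31557600000]

def duration_class(seconds):
    """Return the ordinal category whose range contains `seconds`."""
    return CATEGORIES[bisect.bisect_left(BOUNDS, seconds)]

print(duration_class(90))      # -> "minutes"
print(duration_class(100000))  # -> "days"
```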
Event Relations: annotators place sliders for each pair of events (e1, e2). Characteristic configurations:
High Priority: “Try googling it or type it into youtube you might get lucky.”
High Containment: “Both Tina and Vicky are excellent. I will definitely refer my friends and family.”
High Equality: “I go Disco dancing and …”
Goal: model the pairwise fine-grained temporal relations and durations by automatically building featural representations of each predicate, its duration, and its relation.
1. Event representation
2. Duration representation
3. Relation representation
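The duration and relation representations rely on attention over the sentence (the attention weights are analyzed later in the talk). A minimal, generic sketch of dot-product attention pooling; the vectors are toy values, not the model's learned contextual embeddings:

```python
# Sketch of attention pooling over token vectors: score each token
# against a query, softmax the scores, return the weighted sum.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden, query):
    """Return (pooled vector, attention weights) for a list of token
    vectors `hidden` and a query vector of the same dimension."""
    scores = [sum(h_d * q_d for h_d, q_d in zip(h, query)) for h in hidden]
    weights = softmax(scores)
    dim = len(hidden[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden))
              for d in range(dim)]
    return pooled, weights

# Toy 2-d token "embeddings" for a three-token sentence
hidden = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]
pooled, weights = attention_pool(hidden, query)
```

Inspecting the resulting weights per token is what makes the later duration-attention and relation-attention analysis possible.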
Running example: What to feed my dog after gastroenteritis? My dog has been sick for about 3 days now.
The event, duration, and relation representations are each computed over this example and then combined in the full architecture.
A transfer learning approach on TimeBank-Dense: features from our model are used to predict standard categorical temporal relations.
Event-event relation F1 scores: 0.566, 0.529, 0.519, 0.494
Our transfer learning approach beats most systems on TimeBank-Dense (event-event relations).
On actual data: beginning point: 0.28; duration: -0.097
Although the model does well on pairwise predictions, it struggles to generate the entire document timeline.
We analyze the duration-attention and relation-attention weights.
Words denoting temporal units (minutes, hour, etc.) have the highest mean duration attention weights.
Connectives (e.g., and) and bearers of tense information (i.e., lexical verbs and auxiliaries) have the highest mean relation attention weights.
Conclusion (recap):
Introduction
Background
Methodology: A new approach
Model
Results
Model Analysis: duration attention concentrates on temporal-unit words (minutes, year, week, etc.); relation attention concentrates on tense information (present tense, past tense)
Artificial Intelligence-Volume 1, pages 151–157. Morgan Kaufmann Publishers Inc.
Artificial Intelligence-Volume 1, pages 528–531. Morgan Kaufmann Publishers Inc.
Temporal Logic, pages 238–264. Springer.
Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus linguistics, volume 2003, page 40. Lancaster, UK.
Lin, Guergana Savova, et al. 2014. Temporal annotation in the clinical domain. Transactions of the Association for Computational Linguistics, 2:143.
Transactions of the Association for Computational Linguistics, 2:273–284.
Tempeval temporal relation identification. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 75–80. Association for Computational Linguistics.
5th International Workshop on Semantic Evaluation, pages 57–62. Association for Computational Linguistics.
Tempeval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 1–9.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 501–506.
causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016), pages 47–56.
event-event relation corpus. In Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with Association for Computational Linguistics 2016 (LAW-X 2016), pages 1–6.
Representation, pages 11–20.
Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 753–760. Association for Computational Linguistics.
Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 10–14.
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 918–927.
COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 64–75.
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2278–2288.
approach for detecting narrative containers. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 224–230.
Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 1–6
the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1237–1246.
746–751.
Proceedings of the 22nd National Conference on Artificial Intelligence. Volume 2, pages 1659–1662. AAAI Press.
learn the duration of events. In Proceedings of the Ninth International Conference on Computational Semantics, pages 145–154. Association for Computational Linguistics.
Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 223–227. Association for Computational Linguistics.
In IWCS 2017, 12th International Conference on Computational Semantics (Short papers).
language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60.
Heuristic to extract the pivot predicate from a sentence: find the root predicate of the sentence; if it governs a CCOMP, CSUBJ, or XCOMP, follow that dependency to the next predicate, repeating until reaching a predicate that doesn't govern a CCOMP, CSUBJ, or XCOMP.
Sentence: “Has anyone considered that perhaps George Bush just wanted to fly jets?”
Fig 3: An example of our heuristic to find the pivot predicate.
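The heuristic above can be sketched as follows; the Token class and the toy parse are illustrative, not the authors' implementation:

```python
# Sketch of the pivot-predicate heuristic: from the root predicate,
# follow ccomp/csubj/xcomp dependencies downward until reaching a
# predicate that governs none of them.

CLAUSAL_RELS = {"ccomp", "csubj", "xcomp"}

class Token:
    def __init__(self, text):
        self.text = text
        self.children = []  # (relation, Token) pairs

    def add(self, rel, child):
        self.children.append((rel, child))
        return child

def find_pivot(root):
    """Return the pivot predicate reached from the root predicate."""
    current = root
    while True:
        clausal = [c for rel, c in current.children if rel in CLAUSAL_RELS]
        if not clausal:
            return current
        current = clausal[0]  # follow the clausal dependency downward

# Toy parse of: "Has anyone considered that perhaps George Bush
# just wanted to fly jets?"  (considered -ccomp-> wanted -xcomp-> fly)
considered = Token("considered")
wanted = considered.add("ccomp", Token("wanted"))
fly = wanted.add("xcomp", Token("fly"))

print(find_pivot(considered).text)  # -> "fly"
```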
Multiple checks to detect potentially bad annotations, e.g., implausible start-point/end-point spans (Span: 53 vs. Span: 10).
Relations: average Spearman rank correlation between slider positions: 0.665 (95% CI = [0.661, 0.669])
Durations: average absolute difference in duration rank: 2.24 scale points (95% CI = [2.21, 2.25])
likely (24.6%) and rank difference 2 as a distant third (15.4%).
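The two agreement statistics above can be computed as sketched below; the annotations here are made-up toy values, not the dataset figures:

```python
# Sketch of the inter-annotator agreement statistics: Spearman rank
# correlation between slider positions, and mean absolute difference
# in ordinal duration rank. Pure-stdlib implementation.

def rank(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the ranks."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Two annotators' slider positions for the same event pairs (toy data)
a = [5, 20, 25, 60]
b = [10, 18, 30, 55]
print(round(spearman(a, b), 3))  # -> 1.0 (identical ordering)

# Ordinal duration ranks from two annotators (toy data)
d1, d2 = [3, 5, 2], [4, 5, 1]
print(sum(abs(x - y) for x, y in zip(d1, d2)) / len(d1))
```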
Fig: Normalization of slider values (a toy example with three annotators: A, B, and C)
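The figure describes normalizing slider values across annotators, who use the scale differently. As a simplified stand-in for the actual procedure, per-annotator min-max normalization removes such scale-use differences; annotators A, B, and C below are toy examples:

```python
# Illustrative per-annotator normalization of slider values to [0, 1].
# A simplified stand-in, not the paper's normalization procedure.

def minmax_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

annotators = {
    "A": [10, 50, 90],  # uses the whole scale
    "B": [40, 50, 60],  # compresses toward the middle
    "C": [0, 25, 50],   # uses only the lower half
}
normalized = {k: minmax_normalize(v) for k, v in annotators.items()}
print(normalized["B"])  # -> [0.0, 0.5, 1.0]
```

All three annotators map to the same normalized sequence here, illustrating how normalization makes differently-scaled judgments comparable.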
We compare it with the rotated space of actual slider positions: 0.19 for PRIORITY, 0.23 for CONTAINMENT, and 0.17 for EQUALITY.