
SemEval 2014 Task-3: Cross-Level Semantic Similarity (MultiJEDI, ERC 259234)



  1. SemEval 2014 Task-3: Cross-Level Semantic Similarity (MultiJEDI, ERC 259234)

  2. Semantic Similarity

  3. Semantic Similarity: mostly focused on similar types of lexical items

  4. Semantic Similarity: what if we have different types of inputs?

  5. CLSS: Cross-Level Semantic Similarity A new type of similarity task

  10. CLSS: Comparison Types - Paragraph to Sentence, Sentence to Phrase, Phrase to Word, Word to Sense
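All four comparison types reduce to the same interface: score a pair of texts that differ in granularity. As a hedged illustration only (not a method used by the task or its participants), a bag-of-words cosine accepts inputs of any length:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The same function scores a paragraph-to-sentence pair or a
# phrase-to-word pair; only the inputs' lengths differ.
print(cosine_similarity(
    "the committee approved the new budget after a long debate",
    "the budget was approved",
))
```

Real systems go well beyond surface overlap, but a uniform signature like this is what lets one system cover all four comparison types.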

  11. Task Data: 4000 pairs in total, divided into a training set and a test set

  12. Task Data: a wide range of domains and text styles

  13. word-to-sense pairs Word to Sense

  17. Rating Scale

  18. Crafting an idealized similarity distribution

  19.–27. Crafting an idealized similarity distribution (figure build: pairs are drawn between the larger side and the smaller side so that target ratings span the full 0–4 scale)

  28. Test and Training data IAA: Krippendorff’s α for Training (all), Training (unadjudicated), Test (all), and Test (unadjudicated), across Paragraph-Sentence, Sentence-Phrase, Phrase-Word, and Word-Sense
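Agreement here is reported as Krippendorff's α. A minimal sketch for interval-scale ratings with complete annotations and at least two ratings per item (the task's actual computation may differ, e.g. in handling missing or unadjudicated ratings):

```python
from itertools import permutations

def krippendorff_alpha_interval(units):
    """Krippendorff's alpha for interval data, complete annotations only.

    `units` is a list of items; each item is the list of ratings the
    annotators gave it. Minimal sketch: assumes at least two ratings
    per item and no missing values.
    """
    values = [v for unit in units for v in unit]
    n = len(values)
    # Observed disagreement: squared differences within each item,
    # each item weighted by 1 / (number of ratings - 1).
    d_o = sum(
        sum((a - b) ** 2 for a, b in permutations(unit, 2)) / (len(unit) - 1)
        for unit in units
    ) / n
    # Expected disagreement: squared differences over all rating pairs.
    d_e = sum((a - b) ** 2 for a, b in permutations(values, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e else 1.0

print(krippendorff_alpha_interval([[3, 3], [0, 1], [4, 4], [2, 3]]))
```

Perfect agreement yields α = 1, chance-level agreement yields α ≈ 0, and systematic disagreement drives α negative.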

  29. The annotation procedure produces a balanced rating distribution

  31. Experimental Setup. Baselines: "The quick brown fox" • "The brown fox was quick"; "The quick brown fox" • "The brown foxes were quick". Evaluation Measure:
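The fox pairs illustrate why pure string overlap is a weak baseline: morphological variation ("fox" vs. "foxes") breaks exact matching. A sketch of a longest-common-substring (LCS) baseline; normalizing by the longer string's length is an assumption here, not necessarily the task's exact formula:

```python
def lcs_similarity(s1: str, s2: str) -> float:
    """Longest common substring length, normalized by the longer string.

    Sketch of a string-overlap baseline; the task's exact
    normalization may differ.
    """
    best = 0
    # Dynamic programming over character positions, keeping one row.
    prev = [0] * (len(s2) + 1)
    for i in range(1, len(s1) + 1):
        curr = [0] * (len(s2) + 1)
        for j in range(1, len(s2) + 1):
            if s1[i - 1] == s2[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best / max(len(s1), len(s2)) if s1 and s2 else 0.0

# Morphological variation ("fox" vs "foxes") lowers a pure string score.
print(lcs_similarity("The quick brown fox", "The brown fox was quick"))
print(lcs_similarity("The quick brown fox", "The brown foxes were quick"))
```

The second pair scores lower purely because "foxes" lengthens the string without extending the match.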

  32. Number of participants: Paragraph-Sentence, Sentence-Phrase, Phrase-Word, Word-Sense

  33. Top 5 Systems and Baselines (chart: Gold, LCS Baseline, GST Baseline, SemantiKLUE run1, UNAL-NLP run2, ECNU run1, SimCompass run1, Meerkat Mafia pw*; by paragraph-sentence, sentence-phrase, phrase-word, word-sense)
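These charts compare system output against the gold standard; SemEval similarity tasks conventionally rank systems by correlation with the gold scores. A minimal Pearson correlation sketch with made-up numbers (not actual task results):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between system scores and gold ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

gold = [0, 1, 2, 3, 4]               # gold ratings on the 0-4 scale
system = [0.5, 0.9, 2.1, 2.8, 3.9]   # hypothetical system scores
print(pearson(system, gold))
```

A correlation measure rewards getting the ranking of pairs right even when a system's absolute scores are shifted or rescaled.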

  35. Where do the baselines stand? (chart: LCS Baseline, GST Baseline, SemantiKLUE run1, UNAL-NLP run2, ECNU run1, SimCompass run1, Meerkat Mafia pw*; by paragraph-sentence, sentence-phrase, phrase-word, word-sense)

  38. Correlation per genre: paragraph-to-sentence

  41. Correlation per genre: phrase-to-word

  43. What makes the task difficult?

  44. Handling OOV words and novel usages

  45. Dealing with social media text

  46. CLSS: Cross-Level Semantic Similarity - similarity of different types of lexical items; a high-quality dataset of 4000 pairs for four comparison types; 38 systems from 19 teams

  47. Thank you! MultiJEDI ERC 259234 David Jurgens Mohammad Taher Pilehvar Roberto Navigli
