
Dynamically Time-Capped Possibilistic Testing of SubClassOf Axioms Against RDF Data to Enrich Schemas (PowerPoint presentation)



1. Dynamically Time-Capped Possibilistic Testing of SubClassOf Axioms Against RDF Data to Enrich Schemas
Andrea G. B. Tettamanzi, Catherine Faron-Zucker, and Fabien Gandon
Univ. Nice Sophia Antipolis, CNRS, Inria, I3S, UMR 7271, France
K-Cap 2015, Palisades, NY
Outline: Introduction, Principles, Possibilistic Scoring, Candidate Axiom Testing, Subsumption Axiom Testing, Experiments, Conclusion

2. Introduction: Ontology Learning
Top-down construction of ontologies has limitations: it is aprioristic and dogmatic, it does not scale well, and it does not lend itself to a collaborative effort.
Bottom-up, grass-roots approach to ontology and KB creation: start from RDF facts and learn OWL 2 axioms.
Recent contributions towards OWL 2 ontology learning:
- FOIL-like algorithms for learning concept definitions
- statistical schema induction via association rule mining
- light-weight schema enrichment (DL-Learner framework)
All these methods apply and extend ILP techniques.

3. Introduction: Ontology Validation and Axiom Scoring
Need for evaluating and validating ontologies:
- general methodological investigations and surveys
- tools like OOPS! for detecting pitfalls
- integrity constraint validation
Ontology learning and validation rely on axiom scoring. We have recently proposed a possibilistic scoring heuristic [A. Tettamanzi, C. Faron-Zucker, and F. Gandon. "Testing OWL Axioms against RDF Facts: A possibilistic approach", EKAW 2014]. It is computationally heavy, but there is evidence that testing time tends to be inversely proportional to the score.
Research Question: can time capping alleviate the computation of the heuristic without giving up the precision of the scores?

4. Content of an Axiom
Definition (Content of Axiom φ): we define content(φ) as the finite set of formulas, testable against an RDF dataset K, constructed by grounding the set-theoretic formulas that express the direct OWL 2 semantics of φ.
Example: φ = dbo:LaunchPad ⊑ dbo:Infrastructure
∀x ∈ ∆^I, x ∈ dbo:LaunchPad^I ⇒ x ∈ dbo:Infrastructure^I
content(φ) = { dbo:LaunchPad(r) ⇒ dbo:Infrastructure(r) : r is a resource occurring in DBpedia }
By construction, for all ψ ∈ content(φ), φ ⊨ ψ.

5. Confirmation and Counterexample of an Axiom
Given ψ ∈ content(φ) and an RDF dataset K, there are three cases:
1. K ⊨ ψ: ψ is a confirmation of φ;
2. K ⊨ ¬ψ: ψ is a counterexample of φ;
3. K ⊭ ψ and K ⊭ ¬ψ: ψ is neither of the above.
Selective confirmation: a ψ favoring φ rather than ¬φ.
φ = Raven ⊑ Black: ψ = a black raven (vs. a green apple).
Idea: restrict content(φ) to just those ψ which can be counterexamples of φ; leave out all ψ which would be trivial confirmations of φ.
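To make the three cases and the idea of selective confirmation concrete, here is a minimal sketch (not from the slides) in which explicit positive and "provably negative" class extensions stand in for the entailment tests K ⊨ ψ and K ⊨ ¬ψ; the sets and individual names are illustrative assumptions.

```python
# Toy illustration (assumption: a tiny in-memory KB, not the paper's SPARQL-based test).
# A grounded formula psi = C(a) => D(a) is classified against explicit positive and
# "provably negative" class extensions standing in for K |= psi and K |= ~psi.

def classify(a, C_ext, D_ext, not_D_ext):
    """Return how psi = C(a) => D(a) counts for the axiom C subClassOf D."""
    if a not in C_ext:
        # C(a) does not hold: psi is trivially satisfied, the kind of trivial
        # confirmation that selective confirmation deliberately leaves out.
        return "trivial"
    if a in D_ext:
        return "confirmation"      # K |= D(a), hence K |= psi
    if a in not_D_ext:
        return "counterexample"    # K |= ~D(a), hence K |= ~psi
    return "neither"               # open world: D(a) neither entailed nor refuted

ravens = {"raven1", "raven2"}
black_things = {"raven1", "coal"}
provably_not_black = {"raven2"}    # e.g. raven2 is asserted to belong to a class disjoint with Black

for x in ["raven1", "raven2", "apple1"]:
    print(x, classify(x, ravens, black_things, provably_not_black))
# raven1 -> confirmation, raven2 -> counterexample, apple1 -> trivial (the "green apple")
```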

6. Support, Confirmation, and Counterexample of an Axiom
Definition: given an axiom φ, let
u_φ = ‖content(φ)‖ (the support of φ),
u_φ⁺ = the number of confirmations of φ,
u_φ⁻ = the number of counterexamples of φ.
Some properties:
- u_φ⁺ + u_φ⁻ ≤ u_φ (there may be ψ such that K ⊭ ψ and K ⊭ ¬ψ);
- u_φ⁺ = u_¬φ⁻ (confirmations of φ are counterexamples of ¬φ);
- u_φ⁻ = u_¬φ⁺ (counterexamples of φ are confirmations of ¬φ);
- u_φ = u_¬φ (φ and ¬φ have the same support).
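As a quick numeric illustration (the counts are made up, not taken from the slides): if u_φ = 100 with 96 confirmations and 1 counterexample, 3 formulas remain undecided, and the counts for ¬φ are obtained by swapping confirmations and counterexamples.

```python
u, u_plus, u_minus = 100, 96, 1        # illustrative counts for phi
neither = u - u_plus - u_minus          # formulas neither entailed nor refuted by K
print(neither)                          # 3
# For the negated axiom: same support, confirmations and counterexamples swap roles.
u_neg, u_neg_plus, u_neg_minus = u, u_minus, u_plus
print(u_neg, u_neg_plus, u_neg_minus)   # 100 1 96
```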

7. Possibility Theory
Definition (Possibility Distribution): π : Ω → [0, 1].
Definition (Possibility and Necessity Measures):
Π(A) = max_{ω ∈ A} π(ω);
N(A) = 1 − Π(Ā) = min_{ω ∈ Ā} {1 − π(ω)}.
For all subsets A ⊆ Ω:
1. Π(∅) = N(∅) = 0, Π(Ω) = N(Ω) = 1;
2. Π(A) = 1 − N(Ā) (duality);
3. N(A) > 0 implies Π(A) = 1; Π(A) < 1 implies N(A) = 0.
In case of complete ignorance about A, Π(A) = Π(Ā) = 1.
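A minimal sketch of the two measures on a finite frame Ω, assuming the possibility distribution is given as a Python dict; the three-outcome weather frame is purely illustrative.

```python
# Possibility and necessity measures over a finite frame, directly following
# Pi(A) = max_{w in A} pi(w) and N(A) = 1 - Pi(complement of A).

pi = {"sunny": 1.0, "cloudy": 0.7, "rainy": 0.3}   # illustrative, normalized (max = 1)
omega = set(pi)

def Pi(A):
    return max((pi[w] for w in A), default=0.0)

def N(A):
    return 1.0 - Pi(omega - set(A))

A = {"sunny", "cloudy"}
print(Pi(A), N(A))          # 1.0 0.7  -> N(A) > 0 indeed forces Pi(A) = 1
print(Pi(omega), N(omega))  # 1.0 1.0
print(Pi(set()), N(set()))  # 0.0 0.0
```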

8. Possibility and Necessity of an Axiom
Π(φ) = 1 − √(1 − ((u_φ − u_φ⁻) / u_φ)²)
N(φ) = √(1 − ((u_φ − u_φ⁺) / u_φ)²) if Π(φ) = 1, and N(φ) = 0 otherwise.
[Plots: possibility as a function of the number of counterexamples, and necessity as a function of the number of confirmations.]
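A small sketch of the two scoring formulas, assuming the counts u_φ, u_φ⁺, and u_φ⁻ have already been obtained; the function names and the handling of the u_φ = 0 edge case are assumptions of this sketch.

```python
import math

def possibility(u, u_minus):
    """Pi(phi) = 1 - sqrt(1 - ((u - u_minus)/u)^2); equals 1 when there are no counterexamples."""
    if u == 0:
        return 1.0   # no testable content: treat as complete ignorance (assumption)
    return 1.0 - math.sqrt(1.0 - ((u - u_minus) / u) ** 2)

def necessity(u, u_plus, u_minus):
    """N(phi) as defined on the slide: positive only when Pi(phi) = 1, i.e. no counterexamples."""
    if u == 0 or possibility(u, u_minus) < 1.0:
        return 0.0
    return math.sqrt(1.0 - ((u - u_plus) / u) ** 2)

# e.g. 100 tested formulas, 96 confirmations, no counterexamples:
print(possibility(100, 0), necessity(100, 96, 0))   # 1.0 and about 0.999
# one counterexample is enough to pull the possibility below 1:
print(possibility(100, 1))                          # about 0.86
```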

9. Acceptance/Rejection Index
Combination of the possibility and necessity of an axiom:
Definition: ARI(φ) = N(φ) − N(¬φ) = N(φ) + Π(φ) − 1.
−1 ≤ ARI(φ) ≤ 1 for every axiom φ;
ARI(φ) < 0 suggests rejection of φ (Π(φ) < 1);
ARI(φ) > 0 suggests acceptance of φ (N(φ) > 0);
ARI(φ) ≈ 0 reflects ignorance about the status of φ.
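Continuing the sketch above, ARI(φ) then reduces to a one-liner on top of the illustrative possibility/necessity helpers:

```python
def ari(u, u_plus, u_minus):
    """ARI(phi) = N(phi) + Pi(phi) - 1, always in [-1, 1]."""
    return necessity(u, u_plus, u_minus) + possibility(u, u_minus) - 1.0

print(ari(100, 96, 0))   # close to +1: strong evidence for accepting phi
print(ari(100, 0, 40))   # -0.8: counterexamples suggest rejecting phi
```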

10. OWL 2 → SPARQL
To test axioms, we define a mapping Q(E, x) from OWL 2 expressions to SPARQL graph patterns such that
SELECT DISTINCT ?x WHERE { Q(E, ?x) }
returns [Q(E, x)], the set of all known instances of class expression E, and
ASK { Q(E, a) }
checks whether E(a) is in the RDF base.
For an atomic concept A (a valid IRI), Q(A, ?x) = ?x a A.
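As an illustration of the atomic case, a sketch that runs both query forms against the public DBpedia endpoint with the SPARQLWrapper library; the endpoint URL, the library choice, the LIMIT, and the resource IRI used in the ASK are assumptions, not part of the slides.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Q(A, ?x) = "?x a A" for an atomic class A; here A = dbo:LaunchPad from the earlier example.
pattern = "?x a <http://dbpedia.org/ontology/LaunchPad> ."

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")   # assumed public endpoint
endpoint.setReturnFormat(JSON)

# [Q(A, x)]: all known instances of A (LIMIT added only to keep the example cheap).
endpoint.setQuery(f"SELECT DISTINCT ?x WHERE {{ {pattern} }} LIMIT 10")
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["x"]["value"])

# ASK { Q(A, a) }: is a particular resource a known instance of A?
# The resource IRI below is hypothetical, chosen only to show the query shape.
endpoint.setQuery("ASK { <http://dbpedia.org/resource/Some_Launch_Pad> a "
                  "<http://dbpedia.org/ontology/LaunchPad> }")
print(endpoint.query().convert()["boolean"])
```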

11. Concept Negation: Q(¬C, ?x)
Problem: open-world hypothesis, but there is no ¬ in RDF!
We approximate an open-world semantics as follows:
Q(¬C, ?x) = { ?x a ?dc . FILTER NOT EXISTS { Q(C, ?z) ?z a ?dc . } }   (1)
For an atomic class expression A, this becomes
Q(¬A, ?x) = { ?x a ?dc . FILTER NOT EXISTS { ?z a A . ?z a ?dc . } }   (2)
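A sketch of how pattern (2) could be generated for an atomic class; the helper names and the Python string templating are assumptions, while the emitted graph pattern is the one above.

```python
def q_atomic(class_iri: str, var: str = "?x") -> str:
    """Q(A, ?x) = '?x a A' for an atomic class A."""
    return f"{var} a <{class_iri}> ."

def q_neg_atomic(class_iri: str, var: str = "?x") -> str:
    """Q(not A, ?x): x has some type ?dc shared by no known instance of A (pattern (2))."""
    return (f"{var} a ?dc . "
            f"FILTER NOT EXISTS {{ ?z a <{class_iri}> . ?z a ?dc . }}")

print(q_neg_atomic("http://dbpedia.org/ontology/Infrastructure"))
```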

12. SubClassOf(C D) Axioms
To test SubClassOf axioms, we must define their logical content based on their OWL 2 semantics:
(C ⊑ D)^I ≡ ∀x, x ∈ C^I ⇒ x ∈ D^I, i.e., C^I ⊆ D^I.
Therefore, following the principle of selective confirmation,
u_{C⊑D} = ‖{ D(a) : K ⊨ C(a) }‖,
because, if C(a) holds, C(a) ⇒ D(a) ≡ ¬C(a) ∨ D(a) ≡ ⊥ ∨ D(a) ≡ D(a).

13. Support, Confirmations and Counterexamples of C ⊑ D
u_{C⊑D} can be computed by
SELECT (count(DISTINCT ?x) AS ?u) WHERE { Q(C, ?x) }
As for the computational definition of u⁺_{C⊑D} and u⁻_{C⊑D}:
- confirmations: a such that a ∈ [Q(C, x)] and a ∈ [Q(D, x)];
- counterexamples: a such that a ∈ [Q(C, x)] and a ∈ [Q(¬D, x)].
Therefore, u⁺_{C⊑D} can be computed by
SELECT (count(DISTINCT ?x) AS ?numConfirmations) WHERE { Q(C, ?x) Q(D, ?x) }
and u⁻_{C⊑D} can be computed by
SELECT (count(DISTINCT ?x) AS ?numCounterexamples) WHERE { Q(C, ?x) Q(¬D, ?x) }
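Putting the pieces together for the running example dbo:LaunchPad ⊑ dbo:Infrastructure, a sketch that builds the three counting queries, reusing the illustrative q_atomic and q_neg_atomic helpers from the previous sketch:

```python
C = "http://dbpedia.org/ontology/LaunchPad"
D = "http://dbpedia.org/ontology/Infrastructure"

support_query = (
    "SELECT (count(DISTINCT ?x) AS ?u) WHERE { " + q_atomic(C) + " }")
confirmations_query = (
    "SELECT (count(DISTINCT ?x) AS ?numConfirmations) WHERE { "
    + q_atomic(C) + " " + q_atomic(D) + " }")
counterexamples_query = (
    "SELECT (count(DISTINCT ?x) AS ?numCounterexamples) WHERE { "
    + q_atomic(C) + " " + q_neg_atomic(D) + " }")

for q in (support_query, confirmations_query, counterexamples_query):
    print(q)
```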

14. Testing a SubClassOf Axiom (plain version, without time cap)
Input: φ, an axiom of the form SubClassOf(C D);
Output: Π(φ), N(φ), confirmations, counterexamples.
1: compute u_φ using the corresponding SPARQL query;
2: compute u_φ⁺ using the corresponding SPARQL query;
3: if 0 < u_φ⁺ ≤ 100 then
4:   query a list of confirmations;
5: if u_φ⁺ < u_φ then
6:   compute u_φ⁻ using the corresponding SPARQL query;
7:   if 0 < u_φ⁻ ≤ 100 then
8:     query a list of counterexamples;
9: else
10:   u_φ⁻ ← 0;
11: compute Π(φ) and N(φ) based on u_φ, u_φ⁺, and u_φ⁻.
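A sketch of this plain testing procedure in Python for atomic C and D; run_count and run_list are assumed caller-supplied SPARQL executors (e.g. thin SPARQLWrapper wrappers), and the 100-item cap on listed confirmations and counterexamples is the one in the pseudocode.

```python
import math

def test_subclassof_plain(C, D, run_count, run_list, max_listed=100):
    """Plain (non-time-capped) test of SubClassOf(C D) for atomic classes C, D,
    following the pseudocode above. run_count(q) -> int and run_list(q) -> list
    are caller-supplied query executors; they, and the inline query building,
    are assumptions of this sketch."""
    qC, qD = f"?x a <{C}> .", f"?x a <{D}> ."
    q_not_D = f"?x a ?dc . FILTER NOT EXISTS {{ ?z a <{D}> . ?z a ?dc . }}"

    u = run_count(f"SELECT (count(DISTINCT ?x) AS ?u) WHERE {{ {qC} }}")            # step 1
    u_plus = run_count(f"SELECT (count(DISTINCT ?x) AS ?u) WHERE {{ {qC} {qD} }}")  # step 2

    confirmations, counterexamples = [], []
    if 0 < u_plus <= max_listed:                                                    # steps 3-4
        confirmations = run_list(f"SELECT DISTINCT ?x WHERE {{ {qC} {qD} }}")
    if u_plus < u:                                                                  # steps 5-8
        u_minus = run_count(f"SELECT (count(DISTINCT ?x) AS ?u) WHERE {{ {qC} {q_not_D} }}")
        if 0 < u_minus <= max_listed:
            counterexamples = run_list(f"SELECT DISTINCT ?x WHERE {{ {qC} {q_not_D} }}")
    else:                                                                           # steps 9-10
        u_minus = 0

    # step 11: possibility and necessity as defined on slide 8
    pi = 1.0 if u == 0 else 1.0 - math.sqrt(1.0 - ((u - u_minus) / u) ** 2)
    n = 0.0 if (u == 0 or pi < 1.0) else math.sqrt(1.0 - ((u - u_plus) / u) ** 2)
    return pi, n, confirmations, counterexamples
```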

15. Comparison with a Probability-Based Score
[Scatter plot: Bühmann and Lehmann's probability-based score vs. the Acceptance/Rejection Index.]

16. Scalable Axiom Testing
Empirically, testing time grows quickly as the score of the tested axiom decreases, roughly as
T(φ) = O((1 + ARI(φ))⁻¹) or O(exp(−ARI(φ))).
[Scatter plot: elapsed time (min) vs. Acceptance/Rejection Index.]
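This observed relation is what motivates time capping: tests that run very long are almost certainly headed for a low ARI. The fragment below is only a generic illustration of capping a test's wall-clock time; the slides shown here do not detail the paper's dynamic choice of the cap, so the fixed cap and the timeout handling are assumptions.

```python
import concurrent.futures
import time

def run_with_time_cap(test_fn, time_cap_s):
    """Return test_fn()'s result, or None if it takes longer than time_cap_s seconds.
    Generic sketch only: the worker thread is abandoned rather than killed, so a real
    implementation would instead cancel the underlying SPARQL request."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(test_fn)
    try:
        return future.result(timeout=time_cap_s)
    except concurrent.futures.TimeoutError:
        return None          # caller decides how to treat a timed-out axiom test
    finally:
        pool.shutdown(wait=False)

def slow_test():
    time.sleep(2)
    return "done"

print(run_with_time_cap(slow_test, 0.5))   # None: the cap was hit
print(run_with_time_cap(slow_test, 5.0))   # 'done'
```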
