

1. Mining Implications from Lattices of Closed Trees. José L. Balcázar, Albert Bifet and Antoni Lozano, Universitat Politècnica de Catalunya. Extraction et Gestion des Connaissances (EGC'2008), Sophia Antipolis, France, 2008.

2. Introduction. Problem: given a dataset D of rooted, unlabelled and unordered trees, find a "basis": a set of rules that are sufficient to infer all the rules that hold in the dataset D. [Figure: example implications between trees of the dataset D.]

3. Introduction. Set of rules: A → Γ_D(A). Antecedents are obtained through a computation akin to a hypergraph transversal; consequents follow from an application of the closure operator. [Figure: Galois lattice with nodes 1, 2, 3, 12, 13, 23, 123.]

4. Introduction. Set of rules: A → Γ_D(A). [Figure: the same lattice annotated with the extracted rules between closed trees.]

5. Trees. Our trees are: rooted, unlabeled, unordered. Our subtrees are: induced, top-down. [Figure: two different ordered trees that are the same unordered tree.]
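
As a small illustration of the last point (not part of the original slides), the sketch below canonicalises a rooted unordered tree, represented as a nested tuple of child subtrees, by sorting children recursively; two ordered trees denote the same unordered tree exactly when their canonical forms coincide. The representation and the function name are choices made here.

```python
# Illustrative sketch (assumed representation, not from the talk):
# a rooted unordered tree is a tuple of child subtrees; a leaf is ().

def canonical(tree):
    """Canonical form of a rooted unordered tree: sort children recursively."""
    return tuple(sorted(canonical(child) for child in tree))

# Two different ordered trees, but the same unordered tree:
left = ((), ((),))    # root -> [leaf, node with one leaf child]
right = (((),), ())   # root -> [node with one leaf child, leaf]
assert canonical(left) == canonical(right)
```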

6. Deterministic association rules. Logical implications are the traditional means of representing knowledge in formal AI systems. In the field of data mining they are known as association rules. Deterministic association rules are implications with 100% confidence. An advantage of deterministic association rules is that they can be studied in purely logical terms with propositional Horn logic.

      M    a  b  c  d        Rule
      m1   1  1  0  1        a → b, d
      m2   0  1  1  1        d → b
      m3   0  1  0  1        a, b → d
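
The following minimal sketch (an illustration added here, not from the slides) checks the 100%-confidence rules of the table above directly on the three models, viewed as transactions:

```python
# Check deterministic (100%-confidence) association rules on the example table.

transactions = [
    {"a", "b", "d"},   # m1 = 1 1 0 1
    {"b", "c", "d"},   # m2 = 0 1 1 1
    {"b", "d"},        # m3 = 0 1 0 1
]

def holds(antecedent, consequent):
    """True iff every transaction containing the antecedent also contains the consequent."""
    rows = [t for t in transactions if antecedent <= t]
    return bool(rows) and all(consequent <= t for t in rows)

assert holds({"a"}, {"b", "d"})   # a -> b, d
assert holds({"d"}, {"b"})        # d -> b
assert holds({"a", "b"}, {"d"})   # a, b -> d
assert not holds({"b"}, {"a"})    # b -> a has confidence below 100%
```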

7. Propositional Horn Logic. Assume a finite number of variables, V = { a, b, c, d }. A clause is Horn iff it contains at most one positive literal; for instance ¬a ∨ ¬b ∨ d is the Horn clause a, b → d. A model is a complete truth assignment from variables to { 0, 1 }, e.g. m(a) = 0, m(b) = 1, m(c) = 1, ... Given a set of models M, the Horn theory of M corresponds to the conjunction of all Horn clauses satisfied by all models from M.

      M    a  b  c  d        Rule            Horn clauses
      m1   1  1  0  1        a → b, d        (¬a ∨ b) ∧ (¬a ∨ d)
      m2   0  1  1  1        d → b           ¬d ∨ b
      m3   0  1  0  1        a, b → d        ¬a ∨ ¬b ∨ d

8. Propositional Horn Logic. Theorem: given a set of models M, there is exactly one minimal Horn theory containing it. Semantically, it contains all the models that are intersections of models of M. This is sometimes called the empirical Horn approximation. We propose a closure-operator-based translation of the set of tree rules into a specific propositional theory.
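
A sketch of the semantic side of the theorem (added here for illustration): the models of the empirical Horn approximation are obtained by closing the given models under intersection. On the example table no new model appears, since m1 ∩ m2 already equals m3.

```python
# Close a set of models (as sets of true variables) under pairwise intersection.

models = [
    frozenset({"a", "b", "d"}),   # m1
    frozenset({"b", "c", "d"}),   # m2
    frozenset({"b", "d"}),        # m3
]

def horn_closure(models):
    """Smallest superset of `models` closed under pairwise intersection."""
    closed = set(models)
    changed = True
    while changed:
        changed = False
        for x in list(closed):
            for y in list(closed):
                if (x & y) not in closed:
                    closed.add(x & y)
                    changed = True
    return closed

assert horn_closure(models) == set(models)   # m1 ∩ m2 = {b, d} = m3, nothing new
```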

9. Closure Operator. Let D be the finite input dataset of trees and T the (infinite) set of all trees. Definition: we define the following Galois connection pair. For finite A ⊆ D, σ(A) is the set of subtrees (in T) common to all trees of A: σ(A) = { t ∈ T | ∀ t′ ∈ A ( t ⪯ t′ ) }. For finite B ⊂ T, τ_D(B) is the set of trees of D that are supertrees of all trees of B: τ_D(B) = { t′ ∈ D | ∀ t ∈ B ( t ⪯ t′ ) }. Closure operator: the composition Γ_D = σ ∘ τ_D is a closure operator.
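
A minimal sketch of this Galois connection (an illustration added here; the slides define it for trees). The subtree relation is abstracted into a predicate `leq`, and a finite `universe` stands in for the infinite set T; the toy instantiation below uses itemsets ordered by inclusion instead of trees.

```python
# Galois connection and closure operator, parameterised by a subpattern test.

def sigma(A, universe, leq):
    """sigma(A): patterns of the universe that are subpatterns of every element of A."""
    return {t for t in universe if all(leq(t, t2) for t2 in A)}

def tau(B, dataset, leq):
    """tau_D(B): elements of the dataset D that are superpatterns of every pattern in B."""
    return {t2 for t2 in dataset if all(leq(t, t2) for t in B)}

def gamma(B, dataset, universe, leq):
    """Closure operator Gamma_D = sigma o tau_D."""
    return sigma(tau(B, dataset, leq), universe, leq)

# Toy instantiation with itemsets ordered by inclusion (stand-in for trees).
leq = lambda s, t: s <= t
dataset = [frozenset("abd"), frozenset("bcd"), frozenset("bd")]
universe = [frozenset(s) for s in ("", "a", "b", "c", "d", "ab", "bd", "abd")]
closed = gamma({frozenset("a")}, dataset, universe, leq)
assert frozenset("bd") in closed   # every dataset element containing a also contains b and d
```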

10. Galois lattice of closed sets of trees. [Figure: lattice with closed sets labelled 1, 2, 3, 12, 13, 23, 123.]

11. Model transformation. Intuition: one propositional variable v_t is assigned to each possible subtree t. A set of trees A corresponds in a natural way to a model m_A. We impose on m_A the constraint that if m_A(v_t) = 1 for a variable v_t, then m_A(v_t′) = 1 for all variables v_t′ such that v_t′ represents a subtree of the tree represented by v_t: R_0 = { v_t → v_t′ | t′ ⪯ t, t ∈ U, t′ ∈ U }.
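
A sketch of how R_0 can be materialised over a finite universe U of patterns (added here for illustration; the predicate `leq` again stands in for the subtree relation, and the toy example uses itemsets):

```python
# R_0 = { v_t -> v_t' | t' <= t, t in U, t' in U }: a pattern implies all its subpatterns.

def build_r0(universe, leq):
    return {(t, t_sub) for t in universe for t_sub in universe
            if t_sub != t and leq(t_sub, t)}

leq = lambda s, t: s <= t
universe = [frozenset("b"), frozenset("d"), frozenset("bd")]
r0 = build_r0(universe, leq)
assert (frozenset("bd"), frozenset("b")) in r0   # v_{bd} -> v_{b}
assert (frozenset("bd"), frozenset("d")) in r0   # v_{bd} -> v_{d}
```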

12. Model transformation. Theorem: the following propositional formulas are logically equivalent: (1) the conjunction of all the Horn formulas that are satisfied by all the models m_t for t ∈ D; (2) the conjunction of R_0 and all the propositional translations of the formulas in R′_D, where R′_D = { A → t | Γ_D(A) = C, t ∈ C }; (3) the conjunction of R_0 and all the propositional translations of the formulas in a subset of R′_D obtained from transversals of the hypergraph of differences between the nodes of the lattice.
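
For concreteness, a naive enumeration of R′_D is sketched below (an illustration added here; it simply closes every small antecedent set, whereas the talk keeps only antecedents coming from transversals of the hypergraph of differences, which is not implemented here). The itemset example again stands in for trees.

```python
from itertools import combinations

def closure(A, dataset, universe, leq):
    """Gamma_D(A): common subpatterns of all dataset elements containing every pattern in A."""
    supertrees = [d for d in dataset if all(leq(t, d) for t in A)]
    return {t for t in universe if all(leq(t, d) for d in supertrees)}

def rules_rprime(dataset, universe, leq, max_antecedent=2):
    """Naive R'_D = { A -> t | Gamma_D(A) = C, t in C }, for antecedents up to a given size."""
    rules = []
    for k in range(1, max_antecedent + 1):
        for A in combinations(universe, k):
            C = closure(A, dataset, universe, leq)
            for t in C:
                if t not in A:
                    rules.append((set(A), t))
    return rules

leq = lambda s, t: s <= t
dataset = [frozenset("abd"), frozenset("bcd"), frozenset("bd")]
universe = [frozenset("a"), frozenset("b"), frozenset("d")]
print(rules_rprime(dataset, universe, leq))   # among others: a -> b, a -> d, d -> b, a,b -> d
```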

13. Association Rule Computation Example. [Figure: lattice of closed trees with nodes 1, 2, 3, 12, 13, 23, 123.]

14. Association Rule Computation Example. [Figure: the same lattice at the next step of the rule computation.]

15. Association Rule Computation Example. [Figure: the same lattice, with the extracted rule (→) highlighted.]

16. Implicit rules. Implicit Rule: given three trees t1, t2, t3, we say that t1 ∧ t2 → t3 is an implicit Horn rule (abbreviated: an implicit rule) if for every tree t it holds that t1 ⪯ t ∧ t2 ⪯ t ↔ t3 ⪯ t. Trees t1 and t2 have implicit rules if t1 ∧ t2 → t is an implicit rule for some t. [Figure: an example of an implicit rule; a small checking sketch follows the examples below.]

17. Implicit rules. [Figure: another example of an implicit rule t1 ∧ t2 → t3; the definition from the previous slide applies.]

18. Implicit rules. NOT an implicit rule. [Figure: two trees and a candidate consequent that do not form an implicit rule.]

19. Implicit rules. NOT an implicit rule: the figure shows a supertree of the antecedents that is NOT a supertree of the consequent.

20. Implicit rules. NOT an implicit rule. [Figure: another counterexample to the implicit-rule condition.]
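
To make the definition concrete, here is a brute-force checker (added for illustration, not from the slides): it tests the bi-implication on a finite set of candidate supertrees only, whereas the actual definition quantifies over all trees; the toy example again uses itemsets in place of trees.

```python
# Brute-force test of the implicit-rule condition on a finite candidate universe.

def is_implicit(t1, t2, t3, candidates, leq):
    """True iff (t1 <= t and t2 <= t) <-> (t3 <= t) for every candidate supertree t."""
    return all((leq(t1, t) and leq(t2, t)) == leq(t3, t) for t in candidates)

leq = lambda s, t: s <= t
candidates = [frozenset(s) for s in ("", "a", "b", "ab", "abc")]
assert is_implicit(frozenset("a"), frozenset("b"), frozenset("ab"), candidates, leq)
assert not is_implicit(frozenset("a"), frozenset("b"), frozenset("abc"), candidates, leq)
```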

21. Implicit Rules. Theorem: all trees a, b such that a ⪯ b have implicit rules. Theorem: suppose that b has only one component. Then a and b have implicit rules if and only if a has a maximum component which is a subtree of the component of b, that is, a_i ⪯ a_n ⪯ b_1 for all i < n. [Figure: the corresponding implicit rule, built from the components a_1, ..., a_{n-1}, a_n of a and the single component b_1 of b.]

22. Experimental Validation: CSLOGS. [Plot: number of rules, number of rules that are not implicit, and number of detected rules (0–800) against support (5,000–30,000) on the CSLOGS dataset.]

23. Summary. Conclusions: a way of extracting high-confidence association rules from datasets consisting of unlabeled trees, in which antecedents are obtained through a computation akin to a hypergraph transversal and consequents follow from an application of the closure operator; and the detection of some cases of implicit rules, i.e. rules that always hold, independently of the dataset.
