Equational Logic

(LMCS, p. 133) III.1 Equational Logic. A language L of algebras (or algebraic structures) consists of a set F of function symbols f, g, h, a set C of constant symbols c, d, e, and a set of variables x, y, z.


  1. (LMCS, p. 149) III.31 Semantics. Suppose L is a language of algebras. For A an L-algebra and s ≈ t an L-equation we write A ⊨ s ≈ t, and say A satisfies s ≈ t, or s ≈ t holds in A, if s^A = t^A, that is, if s and t define the same term function on A.

  2. (LMCS, p. 149) III.32 If A is an L-algebra and S is a set of L-equations, then we write A ⊨ S, and say A satisfies S, or S holds in A, if A ⊨ s ≈ t for every equation s ≈ t in S.

  3. (LMCS, p. 149) III.33 If S is a set of L-equations and s ≈ t is an L-equation, then we write S ⊨ s ≈ t, and say s ≈ t is a consequence of S, or s ≈ t follows from S, if A ⊨ s ≈ t whenever A ⊨ S. For satisfies we also use the word models, and we say A is a model of an equation or of a set of equations.

  4. (LMCS, p. 149) III.34 Laws of Binary Algebras. A binary algebra A = (A, ·) is associative if it satisfies the associative law x · (y · z) ≈ (x · y) · z; in that case it is also called a semigroup. It is commutative if it satisfies the commutative law x · y ≈ y · x, and it is idempotent if it satisfies the idempotent law x · x ≈ x.

  5. (LMCS, p. 150) III.35 The idempotent law is easy to check: one looks down the main diagonal of the operation table to see that x · x always has the value x:

  ·  a  b  c
  a  a
  b     b
  c        c

(only the diagonal entries are shown).

  6. (LMCS, p. 150) III.36 The commutative law is also easy to check: look at the operation table to see if it is symmetric about the main diagonal, i.e., each pair of cells mirrored across the diagonal holds the same value:

  ·  a  b  c
  a        d
  b
  c  d

(only one symmetric pair of entries is shown).

  7. (LMCS, p. 151) III.37 Example. The binary algebra A given by

  ·  a  b
  a  a  a
  b  a  b

is idempotent, commutative, and associative:

  A ⊨ x · x ≈ x
  A ⊨ x · y ≈ y · x
  A ⊨ x · (y · z) ≈ (x · y) · z.

The first two properties follow by the preceding remarks and diagrams. To check the associative law we construct an evaluation table for the terms x · (y · z) and (x · y) · z to show the term functions are equal:

  8. (LMCS, p. 151) III.38 From the binary algebra given by

  ·  a  b
  a  a  a
  b  a  b

we have the following table, with s the term x · (y · z) and t the term (x · y) · z:

       x  y  z   y·z   x·(y·z)   x·y   (x·y)·z
  1.   a  a  a    a       a       a       a
  2.   a  a  b    a       a       a       a
  3.   a  b  a    a       a       a       a
  4.   a  b  b    b       a       a       a
  5.   b  a  a    a       a       a       a
  6.   b  a  b    a       a       a       a
  7.   b  b  a    a       a       b       a
  8.   b  b  b    b       b       b       b

Since the columns for x · (y · z) and (x · y) · z are the same, the associative law holds.
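A finite evaluation table like this can also be checked mechanically by enumerating all triples. A minimal sketch in Python (the dict encoding of the operation table is our own choice, not the book's):

```python
from itertools import product

# Operation table of the two-element algebra: a dict from pairs to values.
op = {('a', 'a'): 'a', ('a', 'b'): 'a',
      ('b', 'a'): 'a', ('b', 'b'): 'b'}

def dot(u, v):
    return op[(u, v)]

# One row per assignment to x, y, z; the last two entries are the
# term functions x·(y·z) and (x·y)·z.
rows = [(x, y, z, dot(x, dot(y, z)), dot(dot(x, y), z))
        for x, y, z in product('ab', repeat=3)]

# The associative law holds iff the two columns agree on every row.
associative = all(lhs == rhs for *_, lhs, rhs in rows)
print(associative)  # True
```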

  9. (LMCS, pp. 151-152) III.39 Example. The binary algebra A given by

  ·  a  b
  a  b  a
  b  b  b

is not idempotent, not commutative, and not associative. To see the failure of the associative law we use an evaluation table, with s the term x · (y · z) and t the term (x · y) · z:

       x  y  z   y·z   x·(y·z)   x·y   (x·y)·z
  1.   a  a  a    b       a       b       b
  2.   a  a  b    a       b       b       b
  3.   a  b  a    b       a       a       b
  4.   a  b  b    b       a       a       a
  5.   b  a  a    b       b       b       b
  6.   b  a  b    a       b       b       b
  7.   b  b  a    b       b       b       b
  8.   b  b  b    b       b       b       b

Indeed, associativity fails precisely on lines 1 and 3.

  10. (LMCS, pp. 151-152) III.40 However, this binary algebra

  ·  a  b
  a  b  a
  b  b  b

does satisfy an interesting equation, namely

  A ⊨ (x · y) · z ≈ (x · z) · y.

We can see this by using an evaluation table, with s the term (x · y) · z and t the term (x · z) · y:

     x  y  z   x·y   (x·y)·z   x·z   (x·z)·y
     a  a  a    b       b       b       b
     a  a  b    b       b       a       b
     a  b  a    a       b       b       b
     a  b  b    a       a       a       a
     b  a  a    b       b       b       b
     b  a  b    b       b       b       b
     b  b  a    b       b       b       b
     b  b  b    b       b       b       b

  11. (LMCS, p. 153) III.41 Many interesting classes of algebras are defined by equations. The text gives examples of: • Rings • Boolean Algebras • Semigroups • Monoids • Groups You will NOT be required to memorize the equations defining these classes. They are for reference only.

  12. (LMCS, p. 154) III.42 Rings. The language L_R is {+, ·, −, 0, 1}. R is the following set of equations in this language:

  R1. x + 0 ≈ x                          additive identity
  R2. x + (−x) ≈ 0                       additive inverse
  R3. x + y ≈ y + x                      + is commutative
  R4. x + (y + z) ≈ (x + y) + z          + is associative
  R5. x · 1 ≈ x                          right mult. identity
  R6. 1 · x ≈ x                          left mult. identity
  R7. x · (y · z) ≈ (x · y) · z          · is associative
  R8. x · (y + z) ≈ (x · y) + (x · z)    left distributive
  R9. (x + y) · z ≈ (x · z) + (y · z)    right distributive.

All algebras R = (R, +, ·, −, 0, 1) that satisfy R are called rings. R is called a set of axioms or a set of defining equations for rings.

  13. (LMCS, pp. 154-155) III.43 Boolean Algebras. We choose BA to be the following set of equations in the language L_BA:

  B1.  x ∨ y ≈ y ∨ x                      commutative
  B2.  x ∧ y ≈ y ∧ x                      commutative
  B3.  x ∨ (y ∨ z) ≈ (x ∨ y) ∨ z          associative
  B4.  x ∧ (y ∧ z) ≈ (x ∧ y) ∧ z          associative
  B5.  x ∧ (x ∨ y) ≈ x                    absorption
  B6.  x ∨ (x ∧ y) ≈ x                    absorption
  B7.  x ∧ (y ∨ z) ≈ (x ∧ y) ∨ (x ∧ z)    distributive
  B8.  x ∨ x′ ≈ 1
  B9.  x ∧ x′ ≈ 0
  B10. x ∨ 1 ≈ 1
  B11. x ∧ 0 ≈ 0.

All algebras B = (B, ∨, ∧, ′, 0, 1) that satisfy BA are called Boolean algebras. BA is called a set of axioms or a set of defining equations for Boolean algebras.

  14. (LMCS, p. 156) III.44 Semigroups. The language L_SG = {·} consists of a single binary operation. SG has only one equation, the associative law:

  SG1: (x · y) · z ≈ x · (y · z).

Models of SG are called semigroups. SG axiomatizes or defines the class of semigroups.

  15. (LMCS, pp. 156-157) III.45 Monoids. L_M is {·, 1}, where · is binary and 1 is a constant symbol. M consists of:

  MO1: x · 1 ≈ x
  MO2: 1 · x ≈ x
  MO3: (x · y) · z ≈ x · (y · z).

Any algebra A that satisfies M is called a monoid. M is a set of axioms or defining equations for monoids.

  16. (LMCS, pp. 157-158) III.46 Groups. L_G = {·, ⁻¹, 1}, where · is binary, ⁻¹ is unary, and 1 is a constant symbol. G is the set of equations:

  G1: x · 1 ≈ x
  G2: x · x⁻¹ ≈ 1
  G3: (x · y) · z ≈ x · (y · z).

There is also an additive notation for groups, namely the language {+, −, 0}; it is usually reserved for groups that are commutative:

  G1′: x + 0 ≈ x
  G2′: x + (−x) ≈ 0
  G3′: (x + y) + z ≈ x + (y + z)
  G4′: x + y ≈ y + x.

  17. (LMCS, pp. 158-159) III.47 Three Very Basic Properties of Equations (≈ behaves like an equivalence relation):

  • A ⊨ s ≈ s
  • A ⊨ s ≈ t implies A ⊨ t ≈ s
  • A ⊨ s1 ≈ s2 and A ⊨ s2 ≈ s3 implies A ⊨ s1 ≈ s3.

  18. (LMCS, pp. 158-159) III.48 Similar results hold for consequences. (Recall the definition of S ⊨ s ≈ t from slide III.33, namely that any algebra that satisfies S also satisfies s ≈ t.)

  • S ⊨ s ≈ s
  • S ⊨ s ≈ t implies S ⊨ t ≈ s
  • S ⊨ s1 ≈ s2 and S ⊨ s2 ≈ s3 implies S ⊨ s1 ≈ s3.

  19. (LMCS, p. 161) III.49 Arguments that are Valid in A. Given an algebra A, an argument

  s1 ≈ t1, ..., sn ≈ tn ∴ s ≈ t

is valid in A (or correct in A) provided either

  • some equation si ≈ ti does not hold in A, or
  • s ≈ t holds in A,

i.e., if all the premisses hold in A, then so does the conclusion.

  20. (LMCS, p. 163) III.50 Let A be the two-element Boolean algebra. To check the validity of the argument

  x ∨ y ≈ y ∨ x,  x ∧ (y ∨ x) ≈ x  ∴  x ∨ x′ ≈ x ∧ x′

we form an evaluation table (s1 ≈ t1 and s2 ≈ t2 are the premisses, s ≈ t the conclusion):

  x  y   x∨y  y∨x   x∧(y∨x)  x   x∨x′  x∧x′
  0  0    0    0       0     0    1     0
  0  1    1    1       0     0    1     0
  1  0    1    1       1     1    1     0
  1  1    1    1       1     1    1     0

The premisses hold in A but the conclusion does not, so this is not a valid argument in A.
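The same mechanical check works for the two-element Boolean algebra. A sketch, where encoding ∨ as max, ∧ as min, and ′ as 1 − x on {0, 1} is our own illustrative choice:

```python
from itertools import product

join = max                # x ∨ y on {0, 1}
meet = min                # x ∧ y on {0, 1}
comp = lambda v: 1 - v    # x′ on {0, 1}

# Do both premisses hold for every assignment to x, y?
premisses_ok = all(join(x, y) == join(y, x) and meet(x, join(y, x)) == x
                   for x, y in product((0, 1), repeat=2))
# Does the conclusion x ∨ x′ ≈ x ∧ x′ hold for every x?
conclusion_ok = all(join(x, comp(x)) == meet(x, comp(x)) for x in (0, 1))

print(premisses_ok)   # True: the premisses hold in A
print(conclusion_ok)  # False: the conclusion fails, so the argument is not valid in A
```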

  21. (LMCS, p. 163) III.51 Valid Equational Arguments. An argument

  s1 ≈ t1, ..., sn ≈ tn ∴ s ≈ t

is valid (or correct) provided it is valid in every algebra A. Of course this becomes impossible to check using evaluation tables, because there are infinitely many algebras to examine.

  22. (LMCS, p. 163) III.52 So how do we ever verify that an equational argument is valid? There are two ways. One is to use a proof system that we will soon study. The other is to study abstract algebra to learn special methods that can aid in a semantic analysis of validity.

  23. (LMCS, p. 164) III.53 Refuting an Equational Argument. An equational argument

  s1 ≈ t1, ..., sn ≈ tn ∴ s ≈ t

is not valid iff one can find an algebra A such that

  • all the premisses hold in A,
  • but the conclusion does not hold.

Such an A is called a counterexample to the argument.

  24. (LMCS, p. 164) III.54 One-Element Algebras We can never use a one–element algebra to refute an equational argument because in a one–element algebra all equations are true (there is only one value for the term functions to take). So to find a counterexample to an equational argument one must look to algebras with more than one element.

  25. (LMCS, p. 164) III.55 Example. Show that the argument x · y ≈ y · x ∴ x · x ≈ x is not valid. We need to find a binary algebra that is commutative but not idempotent. The following two-element binary algebra does the job:

  ·  a  b
  a  a  a
  b  a  a
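A proposed counterexample like this can be verified in a few lines; a sketch, using the same hypothetical dict encoding of operation tables as before:

```python
from itertools import product

# Operation table of the proposed counterexample: every product equals a.
op = {('a', 'a'): 'a', ('a', 'b'): 'a',
      ('b', 'a'): 'a', ('b', 'b'): 'a'}
dot = lambda u, v: op[(u, v)]

# The premiss x·y ≈ y·x holds, but the conclusion x·x ≈ x fails (at x = b).
commutative = all(dot(x, y) == dot(y, x) for x, y in product('ab', repeat=2))
idempotent = all(dot(x, x) == x for x in 'ab')
print(commutative, idempotent)  # True False
```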

  26. (LMCS, pp. 164-165) III.56 Example. Show that the argument fffx ≈ fx ∴ ffx ≈ fx is not valid. The following two-element monounary algebra provides a counterexample:

  f(a) = b,  f(b) = a.

Such examples we usually discover by drawing pictures: here, two points a and b with f-arrows a → b and b → a.

  27. (LMCS, p. 168) III.57 What is Substitution? The word substitution encompasses two quite distinct kinds of substitution. One is uniform substitution for variables. For example, from

  x + y ≈ y + x    (*)

we can deduce

  (x · y) + (z + z) ≈ (z + z) + (x · y)

by substituting in (*) as follows: the term x · y for every occurrence of x, and the term z + z for every occurrence of y.

  28. (LMCS, p. 168) III.58 The other use of substitution is to substitute one term for another . For example from x + y ≈ y + x we can also deduce ( u · ( x + y )) + ( z + z ) ≈ ( u · ( y + x )) + ( z + z ) Notice that we replaced the underlined term on the left with the underlined term on the right.

  29. (LMCS, p. 168) III.59 Since these two kinds of substitution are so different we will use the word substitution only for the first kind, that is, for the uniform substitution for variables; and call the second kind of substitution replacement .

  30. (LMCS, p. 168) III.60 Substitution. Given a term s(x1, ..., xn) and terms t1, ..., tn, the expression s(t1, ..., tn) denotes the result of simultaneously substituting ti for xi in s. The notation we use for the substitution is

  [ x1 ← t1
    ...
    xn ← tn ]

  31. (LMCS, p. 168) III.61 Example. Let s(x, y) be x + y. Applying the substitution

  [ x ← x · z
    y ← x + y ]

to s(x, y) gives s(x · z, x + y), which is (x · z) + (x + y).
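Uniform substitution is easy to implement once terms are given a concrete encoding. Here is a minimal sketch using nested tuples (op, arg1, ..., argn) with variables as strings; this encoding is our own choice for illustration, not the book's:

```python
def substitute(term, subst):
    """Simultaneously replace each variable by its image under subst."""
    if isinstance(term, str):                 # a variable (or constant symbol)
        return subst.get(term, term)
    return (term[0], *(substitute(a, subst) for a in term[1:]))

s = ('+', 'x', 'y')                           # s(x, y) = x + y
theta = {'x': ('*', 'x', 'z'),                # x ← x · z
         'y': ('+', 'x', 'y')}                # y ← x + y
result = substitute(s, theta)
print(result)  # ('+', ('*', 'x', 'z'), ('+', 'x', 'y')), i.e. (x·z) + (x+y)
```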

  32. (LMCS, pp. 168-169) III.62 We can illustrate the substitution procedure with a picture: the term tree of s(x, y) = x + y, with leaves x and y, becomes the tree of s(x · z, x + y) = (x · z) + (x + y), with the subtrees for x · z and x + y hanging where the leaves x and y were. As you can see, substitution makes the tree grow downwards from the bottom. Substitution can only increase the tree size.

  33. (LMCS, p. 169) III.63 Substitution Theorems

  A ⊨ p(x1, ..., xn) ≈ q(x1, ..., xn) implies A ⊨ p(t1, ..., tn) ≈ q(t1, ..., tn).

  S ⊨ p(x1, ..., xn) ≈ q(x1, ..., xn) implies S ⊨ p(t1, ..., tn) ≈ q(t1, ..., tn).

  34. (LMCS, pp. 171-172) III.64 Replacement. Starting with a term p(⋯ s ⋯) with an occurrence of a subterm s in it, we replace s with another term t and obtain p(⋯ t ⋯). Example: replacing the second occurrence (from the left) of x + y in

  ((x + y) · y) + (y + (x + y))

with the term u · v gives the term

  ((x + y) · y) + (y + (u · v)).

  35. (LMCS, pp. 171-172) III.65 We can visualize this by the use of trees: in the tree for ((x + y) · y) + (y + (x + y)), the subtree for the second occurrence of x + y is replaced by the tree for u · v, giving the tree for ((x + y) · y) + (y + (u · v)). Replacement has the effect of replacing one part of the tree. The replacement part can be larger or smaller than the original, so replacement can increase or decrease the size of the tree.

  36. (LMCS, pp. 173-174) III.66 Replacement Theorems

  A ⊨ s ≈ t implies A ⊨ p(⋯ s ⋯) ≈ p(⋯ t ⋯).

  S ⊨ s ≈ t implies S ⊨ p(⋯ s ⋯) ≈ p(⋯ t ⋯).

  37. (LMCS, p. 175) III.67 The Syntactic Viewpoint: Birkhoff's Rules

  Reflexive:     infer s ≈ s.
                 Example: x + y ≈ x + y.
  Symmetric:     from s ≈ t infer t ≈ s.
                 Example: from x ≈ x · x infer x · x ≈ x.
  Transitive:    from r ≈ s and s ≈ t infer r ≈ t.
                 Example: from x ≈ x · x and x · x ≈ 1 infer x ≈ 1.
  Substitution:  from r(x1, ..., xn) ≈ s(x1, ..., xn) infer r(t1, ..., tn) ≈ s(t1, ..., tn).
                 Example: from x · 1 ≈ x infer (x · x) · 1 ≈ x · x.
  Replacement:   from s ≈ t infer r(⋯ s ⋯) ≈ r(⋯ t ⋯).
                 Example: from x · x ≈ x infer (x · x) + x ≈ x + x.

  38. (LMCS, p. 175) III.68 Derivation of Equations. A derivation of an equation s ≈ t from S is a sequence s1 ≈ t1, ..., sn ≈ tn of equations such that each equation is either (1) a member of S, or (2) the result of applying a rule of inference to previous members of the sequence, and the last equation is s ≈ t. We write S ⊢ s ≈ t if such a derivation exists.

  39. (LMCS, p. 176) III.69 Example. A derivation to witness fffx ≈ fy ⊢ fx ≈ fy is given by

  1. fffx ≈ fy    given
  2. fffx ≈ fx    subs 1
  3. fx ≈ fffx    symm 2
  4. fx ≈ fy      trans 1,3

  40. (LMCS, p. 176) III.70 Example. Show fgx ≈ x, ggx ≈ x ⊢ fx ≈ gx.

  1. fgx ≈ x      given
  2. ggx ≈ x      given
  3. fggx ≈ gx    subs 1
  4. fggx ≈ fx    repl 2
  5. fx ≈ fggx    symm 4
  6. fx ≈ gx      trans 3,5

  41. (LMCS, pp. 176-177) III.71 Example. Show R ⊢ x · 0 ≈ 0.

  1.  x + 0 ≈ x                                            given (R1)
  2.  0 + 0 ≈ 0                                            subs 1
  3.  x · (0 + 0) ≈ x · 0                                  repl 2
  4.  x · (y + z) ≈ x · y + x · z                          given (R8)
  5.  x · (0 + 0) ≈ x · 0 + x · 0                          subs 4
  6.  x · 0 + x · 0 ≈ x · (0 + 0)                          symm 5
  7.  x · 0 + x · 0 ≈ x · 0                                trans 3,6
  8.  (x · 0 + x · 0) + (−(x · 0)) ≈ x · 0 + (−(x · 0))    repl 7
  9.  x + (−x) ≈ 0                                         given (R2)
  10. x · 0 + (−(x · 0)) ≈ 0                               subs 9
  11. (x · 0 + x · 0) + (−(x · 0)) ≈ 0                     trans 8,10
  12. x + (y + z) ≈ (x + y) + z                            given (R4)
  13. x · 0 + (x · 0 + (−(x · 0))) ≈ (x · 0 + x · 0) + (−(x · 0))   subs 12
  14. x · 0 + (x · 0 + (−(x · 0))) ≈ 0                     trans 11,13
  15. x · 0 + (−(x · 0)) ≈ 0                               subs 9
  16. x · 0 + (x · 0 + (−(x · 0))) ≈ (x · 0) + 0           repl 15
  17. x · 0 + 0 ≈ x · 0                                    subs 1
  18. x · 0 + (x · 0 + (−(x · 0))) ≈ x · 0                 trans 16,17
  19. x · 0 ≈ x · 0 + (x · 0 + (−(x · 0)))                 symm 18
  20. x · 0 ≈ 0                                            trans 14,19

  42. (LMCS, pp. 183-185) III.72 Soundness and Completeness. Birkhoff's Rules are

  • sound: S ⊢ s ≈ t implies S ⊨ s ≈ t
  • complete: S ⊨ s ≈ t implies S ⊢ s ≈ t

Propositional logic has many competing proof systems. In equational logic we prefer Birkhoff's rules.

  43. (LMCS, pp. 183-185) III.73 Determining Validity. In propositional logic we have (often very slow) methods to check the validity of an argument. Equational logic has no such algorithm! This does not mean that we have not yet found an algorithm, but rather that no such algorithm exists.

  44. (LMCS, pp. 185-186) III.74 Summary of Our Methods for Analyzing Arguments. An equational argument

  s1 ≈ t1, ..., sn ≈ tn ∴ s ≈ t

is valid iff there is a derivation of the conclusion from the premisses, that is, iff

  s1 ≈ t1, ..., sn ≈ tn ⊢ s ≈ t.

Thus we look for a derivation to show an argument is valid.

  45. (LMCS, pp. 185-186) III.75 An equational argument s1 ≈ t1, ..., sn ≈ tn ∴ s ≈ t is invalid iff there is a counterexample A, that is, iff some A makes the premisses true and the conclusion false. Thus we can show an argument is not valid by finding a counterexample.

  46. (LMCS, p. 190) III.76 Unification. One of the most popular and powerful tools of automated theorem proving is an algorithm to find most general unifiers. Let s(x1, ..., xn) and s′(x1, ..., xn) be two terms. A unifier of s and s′ is a substitution

  [ x1 ← t1
    ...
    xn ← tn ]

such that s(t1, ..., tn) = s′(t1, ..., tn). After the substitution has been carried out, the two terms have become the same term. If a unifier can be found for s and s′, we say they can be unified, or that they are unifiable.

  47. (LMCS, p. 190) III.77 Example. Let

  s(x, y) = (x + y) · x
  t(x, y) = (y + x) · y.

There are many unifiers of s and t, e.g.,

  [ x ← 1, y ← 1 ]  and  [ x ← u · u, y ← u · u ].

However the substitution

  [ x ← y, y ← y ]

is a unifier that is more general than the preceding two examples.

  48. (LMCS, p. 191) III.78 Most General Unifiers. If two terms s, t have a unifier μ such that every unifier of s, t is an instance of μ, then we say μ is a most general unifier of s, t.

Theorem (Robinson, 1965)

  • If two terms are unifiable then they have a most general unifier.
  • The most general unifier of two terms is (essentially) unique.
  • There is an algorithm to determine if two terms are unifiable, and if so the algorithm produces the most general unifier.

  49. (LMCS, p. 192) III.79 Critical Subterms. The critical subterms of s, t are the first subterms where s, t disagree. Example: for the terms

  s = (x · y) + ((x + y) · x)
  t = (x · y) + ((y · (x + y)) · y)

the critical subterms are

  s′ = x + y
  t′ = y · (x + y).

  50. (LMCS, pp. 192-193) III.80 Critical Subterm Condition (CSC). The critical subterm condition (CSC) is satisfied by s and t if their critical subterms s′ and t′ consist of:

  • a variable, say x, and
  • another term, say r, that has no occurrence of x in it.

  51. (LMCS, pp. 192-193) III.81 Unification Algorithm:

  let μ be the identity substitution
  WHILE s ≠ t
      find the critical subterms s′, t′
      if the CSC fails, return NOT UNIFIABLE
      else with {s′, t′} = {x, r}
          apply (x ← r) to both s and t
          apply (x ← r) to μ
  ENDWHILE
  return μ

  52. (LMCS, p. 193) III.82 In a possible single step of the unification algorithm when the CSC holds, the critical subterms are a variable x and a term r, and applying (x ← r) replaces every occurrence of x in both s and t by a copy of r. In this single step the substitution μ changes to (x ← r)μ.

  53. (LMCS, p. 195) III.83 Example. Apply the unification algorithm to x + (y · x) and (y · y) + z. The critical subterms are x and y · y, so we apply (x ← y · y), giving

  (y · y) + (y · (y · y))  and  (y · y) + z.

Now the critical subterms are y · (y · y) and z, so we apply (z ← y · (y · y)), after which both terms equal (y · y) + (y · (y · y)). Thus

  μ = [ x ← y · y
        z ← y · (y · y) ]
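The algorithm of III.81 can be sketched directly on the tuple encoding of terms used earlier. This is our own illustrative implementation, not the book's code; it returns the unifier as a dict, or None when the CSC fails:

```python
def subst1(term, x, r):
    """Apply the one-variable substitution (x ← r) to a term."""
    if isinstance(term, str):
        return r if term == x else term
    return (term[0], *(subst1(a, x, r) for a in term[1:]))

def occurs(x, term):
    if isinstance(term, str):
        return term == x
    return any(occurs(x, a) for a in term[1:])

def critical(s, t):
    """The first pair of subterms where s and t disagree (None if s == t)."""
    if s == t:
        return None
    if isinstance(s, str) or isinstance(t, str) or s[0] != t[0]:
        return (s, t)
    for a, b in zip(s[1:], t[1:]):
        pair = critical(a, b)
        if pair is not None:
            return pair
    return None

def unify(s, t):
    """Return a most general unifier of s and t as a dict, or None."""
    mu = {}
    while s != t:
        sp, tp = critical(s, t)
        # CSC: one critical subterm is a variable x, the other a term r
        # with no occurrence of x in it.
        if isinstance(sp, str) and not occurs(sp, tp):
            x, r = sp, tp
        elif isinstance(tp, str) and not occurs(tp, sp):
            x, r = tp, sp
        else:
            return None                       # NOT UNIFIABLE
        s, t = subst1(s, x, r), subst1(t, x, r)
        mu = {v: subst1(u, x, r) for v, u in mu.items()}
        mu[x] = r
    return mu

# The worked example: unify x + (y·x) with (y·y) + z.
s = ('+', 'x', ('*', 'y', 'x'))
t = ('+', ('*', 'y', 'y'), 'z')
print(unify(s, t))  # {'x': ('*', 'y', 'y'), 'z': ('*', 'y', ('*', 'y', 'y'))}
```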

  54. (LMCS, pp. 198-199) III.84 Composition of Substitutions. Given two substitutions σ, σ′, the composition of the two is defined by

  (σ′σ)(t) = σ′(σ(t)).

Example:

  σ = [ x ← x + y      σ′ = [ x ← y + x
        y ← x · y ]           y ← x + x ]

leads to

  σ′σ = [ x ← (y + x) + (x + x)
          y ← (y + x) · (x + x) ]
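Composition can be computed by pushing the outer substitution through the images of the inner one. A sketch on the same tuple encoding (our own choice, for illustration):

```python
def substitute(term, subst):
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0], *(substitute(a, subst) for a in term[1:]))

def compose(sigma2, sigma1):
    """(σ'σ)(t) = σ'(σ(t)): apply sigma1 first, then sigma2."""
    out = {x: substitute(t, sigma2) for x, t in sigma1.items()}
    for x, t in sigma2.items():       # variables moved only by the outer σ'
        out.setdefault(x, t)
    return out

sigma = {'x': ('+', 'x', 'y'), 'y': ('*', 'x', 'y')}     # σ
sigmap = {'x': ('+', 'y', 'x'), 'y': ('+', 'x', 'x')}    # σ'
print(compose(sigmap, sigma))
# x ← (y+x) + (x+x) and y ← (y+x) · (x+x), matching the slide
```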

  55. (LMCS, p. 208) III.85 Term Rewrite Systems (TRS's). A term rewrite (rule) is an expression s → t, sometimes called a directed equation, where s and t are terms. A term rewrite system, abbreviated TRS, is a set R of term rewrite rules. Example: a simple system R of three term rewrite rules used for monoids is

  R = { (x · y) · z → x · (y · z)
        1 · x → x
        x · 1 → x }

  56. (LMCS, p. 208) III.86 Elementary Rewrites. An elementary rewrite obtained from s → t is a rewrite of the form

  r(⋯ σs ⋯) → r(⋯ σt ⋯),

where σ is a substitution, r a term, and one occurrence of σs on the left has been replaced by σt on the right. Example:

  1 · (x · (y · z)) → x · (y · z)

is an elementary rewrite obtained from 1 · x → x by using the substitution [ x ← x · (y · z) ].

  57. (LMCS, pp. 208-209) III.87 If R is a set of term rewrite rules then s →_R t means s → t is an elementary rewrite of some term rewrite p → q in R. We write s →⁺_R t if there is a finite sequence of elementary rewrites

  s = t1 →_R t2 →_R ⋯ →_R tn = t.

The notation s →*_R t means s = t or s →⁺_R t holds.

  58. (LMCS, pp. 208-209) III.88 Example. Using the monoid rules R we have

  ((x · 1) · (1 · y)) · (z · 1) →⁺_R x · (y · z),

since

  ((x · 1) · (1 · y)) · (z · 1) →_R (x · (1 · y)) · (z · 1)
  (x · (1 · y)) · (z · 1) →_R (x · y) · (z · 1)
  (x · y) · (z · 1) →_R (x · y) · z
  (x · y) · z →_R x · (y · z)
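A rewrite engine for a TRS like R needs pattern matching of rule left-hand sides against subterms. Below is a small sketch on the tuple encoding; the leftmost-outermost strategy and the encoding are our own choices for illustration (note it can take a different rewrite path than the one shown above, but reaches the same terminal term):

```python
# Monoid rules as (lhs, rhs) patterns; 'x', 'y', 'z' are pattern variables
# and '1' is the constant symbol.
RULES = [(('*', ('*', 'x', 'y'), 'z'), ('*', 'x', ('*', 'y', 'z'))),
         (('*', '1', 'x'), 'x'),
         (('*', 'x', '1'), 'x')]

def match(pat, term, env):
    """Extend env so that pat instantiated by env equals term, else None."""
    if isinstance(pat, str):
        if pat == '1':                        # constant: must match literally
            return env if term == '1' else None
        if pat in env:
            return env if env[pat] == term else None
        return {**env, pat: term}
    if isinstance(term, str) or pat[0] != term[0]:
        return None
    for p, t in zip(pat[1:], term[1:]):
        env = match(p, t, env)
        if env is None:
            return None
    return env

def instantiate(term, env):
    if isinstance(term, str):
        return env.get(term, term)
    return (term[0], *(instantiate(a, env) for a in term[1:]))

def rewrite_once(term):
    """One elementary rewrite, trying the root first, then subterms."""
    for lhs, rhs in RULES:
        env = match(lhs, term, {})
        if env is not None:
            return instantiate(rhs, env)
    if not isinstance(term, str):
        for i in range(1, len(term)):
            r = rewrite_once(term[i])
            if r is not None:
                return term[:i] + (r,) + term[i + 1:]
    return None

def normal_form(term):
    while (step := rewrite_once(term)) is not None:
        term = step
    return term

# ((x·1)·(1·y))·(z·1) rewrites to x·(y·z), as in the example.
t = ('*', ('*', ('*', 'x', '1'), ('*', '1', 'y')), ('*', 'z', '1'))
print(normal_form(t))  # ('*', 'x', ('*', 'y', 'z'))
```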

  59. (LMCS, p. 209) III.89 Terminating TRS's

  • A TRS R is terminating if every sequence of elementary rewrites s1 →_R s2 →_R ⋯ eventually stops.
  • A term s is terminal if no elementary rewrite s →_R t is possible.

  60. (LMCS, p. 209) III.90 Example. The monoid example R is terminating. The terminal terms are either 1 or right-associated products of variables of the form x · (y · (z ⋯ w)). For example we have the terminating sequence

  ((x · y) · u) · v →_R (x · y) · (u · v) →_R x · (y · (u · v))

  61. (LMCS, p. 209) III.91 Two Necessary Conditions for Termination

  • No rule of R is of the form x → t. Otherwise this rule could be applied to any term s, so no term would be terminal.
  • If s → t is in R, then the variables of t are also variables of s. Otherwise a substitution into s → t would give an elementary rewrite of the form s → t′ where t′ has s as a subterm, and this would permit an infinitely long sequence of rewrites.

  62. (LMCS, p. 210) III.92 The Length of a Term. |t| is the length of t (the number of symbols in the prefix form of t). |t|_x is the x-length of t, the number of times the variable x occurs in t. Thus for t = (x · y) + (x · z) we have

  |t| = 7
  |t|_x = 2

  63. (LMCS, p. 210) III.93 Theorem. Let R be a term rewrite system with the property that for each s → t ∈ R we have

  • |s| > |t|, and
  • |s|_x ≥ |t|_x for every variable x.

Then R is a terminating TRS. (In this situation all of the elementary rewrites s →_R t are length-reducing.)
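Both length conditions are easy to check mechanically for a finite rule set. A sketch, again on the tuple encoding with variables as letter strings (our own conventions for illustration):

```python
def length(term):
    """|t|: number of symbols in the prefix form of t."""
    if isinstance(term, str):
        return 1
    return 1 + sum(length(a) for a in term[1:])

def var_count(term, x):
    """|t|_x: number of occurrences of the variable x in t."""
    if isinstance(term, str):
        return int(term == x)
    return sum(var_count(a, x) for a in term[1:])

def variables(term):
    if isinstance(term, str):
        return {term} if term.isalpha() else set()   # '0', '1' are constants
    return set().union(*(variables(a) for a in term[1:]))

def length_reducing(rules):
    """The sufficient condition of III.93 for a terminating TRS."""
    return all(length(s) > length(t) and
               all(var_count(s, x) >= var_count(t, x)
                   for x in variables(s) | variables(t))
               for s, t in rules)

# t = (x·y) + (x·z) has |t| = 7 and |t|_x = 2, as in III.92.
t = ('+', ('*', 'x', 'y'), ('*', 'x', 'z'))
print(length(t), var_count(t, 'x'))                # 7 2
# R = { x·x → 0 } satisfies the condition; associativity alone does not
# (it preserves length, so the theorem does not apply to it).
print(length_reducing([(('*', 'x', 'x'), '0')]))   # True
```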

  64. (LMCS, p. 211) III.94 Applications. The following TRS's are terminating:

  • R = { x · x → 0 }
  • R = { x · 1 → x, 1 · x → x }
  • R = { x ∨ x′ → x }
  • R = { x ∨ (x ∧ y) → x ∧ y }
  • R = { x″ ∨ x′ → x ∨ x′ }
  • R = { fgx → gfx, fggx → fx }

  65. (LMCS, p. 211) III.95 Normal Form TRS's. A normal form TRS is a uniquely terminating TRS. For such a TRS the terminal form of any given term s is called the normal form of s, written n_R(s). A normal form TRS R is a normal form TRS for a set E of equations if

  E ⊨ s ≈ t iff n_R(s) = n_R(t).

  66. (LMCS, p. 211) III.96 Example. R = { ffx → x } is a normal form TRS. The normal form of fffx is fx. Example:

  R = { (x · y) · z → x · (y · z)
        x · 1 → x
        1 · x → x }

is a normal form TRS (for monoids). The normal form of ((x · 1) · (1 · y)) · z is x · (y · z).

  67. (LMCS, pp. 212-213) III.97 A Normal Form TRS for Groups (Knuth and Bendix, 1967)

  E:  1 · x ≈ x
      x⁻¹ · x ≈ 1
      (x · y) · z ≈ x · (y · z)

  R:  1⁻¹ → 1
      x · 1 → x
      1 · x → x
      (x⁻¹)⁻¹ → x
      x⁻¹ · x → 1
      x · x⁻¹ → 1
      x⁻¹ · (x · y) → y
      x · (x⁻¹ · y) → y
      (x · y)⁻¹ → y⁻¹ · x⁻¹
      (x · y) · z → x · (y · z)

In this case it is not immediate that the system R is terminating, much less a normal form TRS.

  68. (LMCS, p. 213) III.98 Converting R into an equational theory. Let E(R) = { s ≈ t : s → t ∈ R }.

Proposition. If R is a normal form TRS, then it is a normal form TRS for E(R). Thus, for example, once we show that R = { (x · y) · z → x · (y · z) } is a normal form TRS, then it follows that it is a normal form TRS for semigroups.

  69. (LMCS, p. 215) III.99 Critical Pairs. Given

  s1 → t1
  s2 → t2.

Rename the variables in one of them (if necessary) so that s1 and s2 have no variables in common. Choose an occurrence of a nonvariable subterm s1′ of s1 such that s1′ and s2 are unifiable. Find the most general unifier μ of s1′ and s2. Two rewrites of μs1 give a critical pair.

  70. (LMCS, p. 216) III.100 Given two rewrite rules with disjoint variables

  s1 → t1
  s2 → t2,

find μ, the most general unifier of s1 and s2. Then apply it to both rules:

  μs1 → μt1
  μs2 → μt2
