Learning efficient logic programs - Andrew Cropper & Stephen Muggleton - PowerPoint PPT Presentation


SLIDE 1

Learning efficient logic programs

Andrew Cropper & Stephen Muggleton

SLIDE 2

Input            Output
[s,h,e,e,p]      e
[a,l,p,a,c,a]    a
[c,h,i,c,k,e,n]  ?

SLIDE 3

Input            Output
[s,h,e,e,p]      e
[a,l,p,a,c,a]    a
[c,h,i,c,k,e,n]  c

SLIDE 4

%% metagol
f(A,B):-head(A,B),tail(A,C),element(C,B).
f(A,B):-tail(A,C),f(C,B).

SLIDE 5

%% alternative
f(A,B):-mergesort(A,C),f1(C,B).
f1(A,B):-head(A,B),tail(A,C),head(C,B).
f1(A,B):-tail(A,C),f1(C,B).
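The difference between the two learned programs is asymptotic: the first rescans the tail for every element (quadratic), while the alternative sorts once and then only compares neighbours (O(n log n)). A Python sketch of the two strategies, for illustration only:

```python
# Python analogues (for illustration only) of the two learned programs for
# "find the letter that occurs twice".

def f_metagol(xs):
    # f(A,B):-head(A,B),tail(A,C),element(C,B).
    # f(A,B):-tail(A,C),f(C,B).
    # Rescans every suffix: quadratic in the worst case.
    head, tail = xs[0], xs[1:]
    if head in tail:
        return head
    return f_metagol(tail)

def f_sorted(xs):
    # f(A,B):-mergesort(A,C),f1(C,B).
    # After sorting, the duplicate is the first element equal to
    # its successor: O(n log n) overall.
    s = sorted(xs)
    for a, b in zip(s, s[1:]):
        if a == b:
            return a
    return None
```

Both return the same answers on the examples, e.g. `f_metagol(list("sheep"))` and `f_sorted(list("sheep"))` are both `"e"`.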

SLIDE 6

Input

  • examples E
  • background knowledge B
  • cost : Program × Example → N


SLIDE 7

Idea

  1. Learn any program H
  2. Repeat while possible:
     a. Learn program H' where max_cost(H',E) < max_cost(H,E)
     b. H = H'
  3. Return H
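The loop above can be sketched in Python. Both `toy_learn` and `toy_max_cost` below are hypothetical stand-ins for the real learner and cost function, used only to show the control flow:

```python
# A minimal sketch of the iterative-descent loop. learn(examples, bound)
# stands in for the learner: it returns some program consistent with the
# examples whose max cost is below the bound (None = no bound), or None
# if no such program exists.

def iterative_descent(learn, max_cost, examples):
    h = learn(examples, None)                   # 1. learn any program H
    while True:                                 # 2. repeat while possible:
        h2 = learn(examples, max_cost(h, examples))
        if h2 is None:                          #    no H' with a smaller max cost
            return h                            # 3. return H
        h = h2                                  #    H = H'

# Toy hypothesis space: (name, max cost over the examples) pairs.
CANDIDATES = [("H1", 10), ("H2", 7), ("H3", 3)]

def toy_learn(examples, bound):
    for h in CANDIDATES:
        if bound is None or h[1] < bound:
            return h
    return None

def toy_max_cost(h, examples):
    return h[1]
```

With these toys the loop learns H1 (cost 10), then H2 (cost 7), then H3 (cost 3), and stops when no cheaper program exists.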
SLIDE 8

prove([],P,P).
prove([Atom|Atoms],P1,P2):-
    prove_aux(Atom,P1,P3),
    prove(Atoms,P3,P2).
prove_aux(Atom,P,P):-
    call(Atom).
prove_aux(Atom,P1,P2):-
    metarule(Atom,Body,Subs),
    save(Subs,P1,P3),
    prove(Body,P3,P2).

Metagol


SLIDE 12

prove([],P,P,C,C).
prove([Atom|Atoms],P1,P2,C1,C2):-
    prove_aux(Atom,P1,P3,C1,C3),
    prove(Atoms,P3,P2,C3,C2).
prove_aux(Atom,P,P,C1,C2):-
    pos_cost(Atom,Cost),
    C2 is C1+Cost,
    bound(MaxCost),
    C2 < MaxCost.
prove_aux(Atom,P1,P2,C1,C2):-
    metarule(Atom,Body,Subs),
    save(Subs,P1,P3),
    C3 is C1+1,
    prove(Body,P3,P2,C3,C2).
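The key change from Metagol is the extra cost pair C1/C2: the accumulated cost is threaded through the proof, and a branch fails as soon as it reaches the bound. An illustrative Python version of just that pruning idea (the `RULES` and `LEAF_COST` toys below are hypothetical stand-ins for metarule/3 and pos_cost/2):

```python
# Thread an accumulated proof cost through the search and fail a branch
# as soon as the cost reaches the bound.

def prove(goals, rules, cost_of, max_cost, acc=0):
    """Prove goals left to right; return total cost, or None if pruned."""
    if not goals:
        return acc
    g, rest = goals[0], goals[1:]
    if g in rules:
        # Rule application: one inference (C3 is C1+1), then prove the body.
        return prove(rules[g] + rest, rules, cost_of, max_cost, acc + 1)
    acc += cost_of(g)            # leaf atom: pay its proof cost
    if acc >= max_cost:          # bound(MaxCost), C2 < MaxCost
        return None              # prune: this branch is already too costly
    return prove(rest, rules, cost_of, max_cost, acc)

# Toy example: f is defined by one rule with body [g, h]; leaves cost 2 each.
RULES = {"f": ["g", "h"]}
LEAF_COST = lambda atom: 2
```

Proving `f` costs 1 (the rule) + 2 + 2 = 5, so with a bound of 10 the proof succeeds, while with a bound of 4 the branch is pruned before the second leaf is proved.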

Metaopt


SLIDE 15

Iterative descent

  1. Learn any program H with a minimal number of clauses
  2. Repeat while possible:
     a. Learn program H' where max_cost(H',E) < max_cost(H,E)
     b. H = H'
  3. Return H
SLIDE 16

Metaopt prunes as it learns

SLIDE 17

Tree cost

Positive examples: size of the leftmost successful branch

SLIDE 18

Tree cost

pos_cost(Atom,Cost):-
    statistics(inferences,I1),
    call(Atom),
    statistics(inferences,I2),
    Cost is I2-I1.

Positive examples: size of the leftmost successful branch
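pos_cost/2 measures an atom's cost as the number of inferences SWI-Prolog performs while proving it. A rough Python analogue, where a global call counter plays the role of `statistics(inferences, I)` and `member` is a hypothetical stand-in predicate:

```python
# Measure a goal's cost as the number of "inferences" (here, function
# calls) made while proving it.

CALLS = 0

def member(x, xs):
    global CALLS
    CALLS += 1                      # one "inference" per call
    return bool(xs) and (xs[0] == x or member(x, xs[1:]))

def pos_cost(goal, *args):
    global CALLS
    i1 = CALLS                      # statistics(inferences, I1)
    assert goal(*args)              # call(Atom): the atom must succeed
    i2 = CALLS                      # statistics(inferences, I2)
    return i2 - i1                  # Cost is I2 - I1
```

For example, proving `member("e", list("sheep"))` walks past `s` and `h` before succeeding on the first `e`, so its cost is 3 calls.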

SLIDE 19

Tree cost

Negative examples: size of the finitely-failed SLD-tree

SLIDE 20

Tree cost

neg_cost(Atom,Cost):-
    statistics(inferences,I1),
    \+ call(Atom),
    statistics(inferences,I2),
    Cost is I2-I1.

Negative examples: size of the finitely-failed SLD-tree

SLIDE 21

Tree cost

  • works for logic programs of any arity
  • no user-supplied costs
  • handles backtracking and non-determinism
SLIDE 22

Input            Output
[s,h,e,e,p]      e
[a,l,p,a,c,a]    a
[c,h,i,c,k,e,n]  c

SLIDE 23

f(A,B):-mergesort(A,C),f1(C,B).
f1(A,B):-head(A,B),tail(A,C),head(C,B).
f1(A,B):-tail(A,C),f1(C,B).

Input            Output
[s,h,e,e,p]      e
[a,l,p,a,c,a]    a
[c,h,i,c,k,e,n]  c

SLIDE 24

Convergence: program tree costs

SLIDE 25

Convergence: program running times

SLIDE 26

Performance

SLIDE 27

Performance

SLIDE 28

Resource complexity

(Figure: initial and final states of lists L1 and L2)

SLIDE 29

SLIDE 30

Input                 Output
My name is John.      John
My name is Bill.      Bill
My name is Josh.      Josh
My name is Albert.    Albert
My name is Richard.   Richard

SLIDE 31

SLIDE 32

%% metagol
f(A,B):-tail(A,C),f1(C,B).
f1(A,B):-dropLast(A,C),f2(C,B).
f2(A,B):-dropWhile(A,B,not_uppercase).

SLIDE 33

% metagol unfolded
f(A,B):-
    tail(A,C),
    dropLast(C,D),
    dropWhile(D,B,not_uppercase).

SLIDE 34

% metagolO
f(A,B):-f1(A,C),f4(C,B).
f1(A,B):-f2(A,C),f3(C,B).
f2(A,B):-filter(A,B,is_letter).
f3(A,B):-dropWhile(A,B,is_uppercase).
f4(A,B):-dropWhile(A,B,not_uppercase).

SLIDE 35

% metagolO unfolded
f(A,B):-
    filter(A,C,is_letter),
    dropWhile(C,D,is_uppercase),
    dropWhile(D,B,not_uppercase).

SLIDE 36

% metaopt
f(A,B):-tail(A,C),f1(C,B).
f1(A,B):-f2(A,C),dropLast(C,B).
f2(A,B):-f3(A,C),f3(C,B).
f3(A,B):-tail(A,C),f4(C,B).
f4(A,B):-f5(A,C),f5(C,B).
f5(A,B):-tail(A,C),tail(C,B).

SLIDE 37

% metaopt unfolded
f(A,B):-
    tail(A,C),
    tail(C,D),
    tail(D,E),
    tail(E,F),
    tail(F,G),
    tail(G,H),
    tail(H,I),
    tail(I,J),
    tail(J,K),
    tail(K,L),
    tail(L,M),
    dropLast(M,B).
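Unfolded, the metaopt program is eleven tail/2 calls followed by one dropLast/2: it strips the fixed 11-character prefix "My name is " and the trailing full stop, without scanning the name at all. As a plain Python function:

```python
# Eleven tails drop the fixed prefix "My name is " (11 characters);
# dropLast removes the final full stop.

def f(s):
    return s[11:-1]
```

For example, `f("My name is John.")` returns `"John"`.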


SLIDE 39

Todo

  • Study complexity of Metaopt variants
  • Characterise complexity of learned programs
  • Discover new efficient algorithms