Logic and Knowledge Representation: Reinforcement learning, Inductive Logic Programming, Description Complexity – PowerPoint PPT Presentation (15 June 2018)


SLIDE 1

Logic and Knowledge Representation

Giovanni Sileno (gsileno@enst.fr)
Télécom ParisTech, Paris
Dauphine University

Reinforcement learning, Inductive Logic Programming, Description Complexity
15 June 2018

SLIDE 2

Induction (again) – after Peirce

Induction:
Fact: These beans are from this bag.
Fact: These beans are white.
⇒ Hyp. rule: All the beans from this bag are white.

SLIDE 3

Induction (again) – after Peirce

Induction:
Fact: These beans are from this bag.
Fact: These beans are white.
⇒ Hyp. rule: All the beans from this bag are white.

• Induction enables prediction through the settled model.

SLIDE 4

Induction (again)

3, 4, 6, 8, 12, 14, 18, 20, 24, 30, 32, 38, 42, … ??

Possible models?

SLIDE 5

Induction (again)

3, 4, 6, 8, 12, 14, 18, 20, 24, 30, 32, 38, 42, … ??

Possible models:
• numbers n + 1, with n a prime number → 44, 48, 54, ...

SLIDE 6

Induction (again)

3, 4, 6, 8, 12, 14, 18, 20, 24, 30, 32, 38, 42, … ??

Possible models:
• numbers n + 1, with n a prime number → 44, 48, 54, ...
• numbers n such that for all k with gcd(n, k) = 1 and n > k², n − k² is prime → 48, 54, 60, ...

Look on https://oeis.org for others.

• Further observations enable the correction of the model.
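The first candidate model is easy to check mechanically. A quick sketch (written for this note, not part of the slides) regenerates the observed sequence and its predicted continuation:

```python
# Model: numbers p + 1 with p a prime number. Regenerates the observed
# sequence 3, 4, 6, 8, 12, ... and predicts the continuation 44, 48, 54.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

model = [p + 1 for p in range(2, 60) if is_prime(p)]
print(model)
# → [3, 4, 6, 8, 12, 14, 18, 20, 24, 30, 32, 38, 42, 44, 48, 54, 60]
```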

SLIDE 7

Alien environment problem

• Suppose a robot lands on an unknown planet.
  – in order to accomplish its mission, it has to acquire an operational knowledge of: what (might) occur, what its actions (might) achieve, from its observations!!!

Induction
SLIDE 8

Reinforcement Learning

SLIDE 9

Nim game

• Two-player game
• Each player, in turn, may take as many items as they wish from a single row
• The one who takes the last item loses.

SLIDE 10

Nim game

• How can one learn to win without knowing the rules?
  – recording states encountered during the play
  – updating the value of states with final results (won or lost)
  – selecting actions bringing to winning states

→ a very simple example of reinforcement learning!

SLIDE 11

Example of reinforcement learning algorithm

π*(s) = argmax_a ( R(s, a) + γ V*(δ(s, a)) )

where:
– π*(s): best strategy in state s of the world
– R(s, a): expected immediate reinforcement in performing action a in s
– δ: transition function (a performed in s leads to s')
– γ: discount factor
– V*: expected (long-term) gain from a state s'

SLIDE 12

Example of reinforcement learning algorithm

π*(s) = argmax_a ( R(s, a) + γ V*(δ(s, a)) )

where π*(s) is the best strategy in state s of the world, R(s, a) the expected immediate reinforcement in performing action a in s, δ the transition function (a performed in s leads to s'), γ the discount factor, and V* the expected (long-term) gain from a state s'.

Q(s, a) = R(s, a) + γ V*(δ(s, a))   (utility function: expected gain)

SLIDE 13

Q-learning

Q(s, a) = R(s, a) + γ V*(δ(s, a))
        = R(s, a) + γ max_{a'} Q(s', a')
        = R(s, a) + γ max_{a'} Q(δ(s, a), a')

π*(s) = argmax_a Q(s, a)

SLIDE 14

Q-learning algorithm

Q(s, a) = R(s, a) + γ max_{a'} Q(δ(s, a), a')
π*(s) = argmax_a Q(s, a)

initialize the table Q(s, a) to zero
observe the current state s
repeat:
  choose an action a and execute it
  receive the reward r
  observe the new state s'
  update the table Q(s, a) as: Q(s, a) := r + γ max_{a'} Q(s', a')
  s := s'
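The update loop above is directly implementable. The sketch below runs it on a small hand-made deterministic world; the `transitions`/`rewards` tables, episode structure, and purely random action choice are illustrative assumptions, not from the slides:

```python
import random

random.seed(0)
gamma = 0.9   # discount factor

# Hypothetical 3-state world: action 0 from state 0 leads to state 1,
# action 0 from state 1 reaches the absorbing state 2 with reward 1.
transitions = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 0, (2, 0): 2, (2, 1): 2}
rewards = {sa: 0 for sa in transitions}
rewards[(1, 0)] = 1

Q = {sa: 0.0 for sa in transitions}            # initialize the table Q(s,a) to zero
for _ in range(200):                           # episodes
    s = 0                                      # observe the current state s
    for _ in range(10):                        # repeat
        a = random.choice((0, 1))              # choose an action a and execute it
        r = rewards[(s, a)]                    # receive the reward r
        s2 = transitions[(s, a)]               # observe the new state s'
        Q[(s, a)] = r + gamma * max(Q[(s2, b)] for b in (0, 1))   # update the table
        s = s2                                 # s := s'

# The learned strategy pi*(s) = argmax_a Q(s,a)
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1, 2)}
print(policy)
```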

SLIDE 15

This was about behaviour, but what about knowledge?

Induction as generalization...

SLIDE 16

Version space learning

[Dubois, Vincent; Quafafou, Mohamed (2002). "Concept learning with approximation: Rough version spaces". RSCTC 2002.
Sverdlik, W.; Reynolds, R. G. (1992). "Dynamic version spaces in machine learning". TAI '92.]

• Logical approach to binary classification
• Search on a predefined space of hypotheses: H1 ∨ H2 ∨ ... ∨ Hn
• You don't need to maintain exemplars!

[Figure labels: positive example, counter-example, specific boundary, general boundary, pessimistic, optimistic]
SLIDE 17

Using a version space

represent E
predict from the representation of H whether or not E exemplifies H
if correct then retain H
if incorrect then identify the differences between E and H
use the selected differences to
  – generalize H if it is a positive instance
  – specialize H if it is a negative instance

[Figure labels: candidate elimination algorithm; current example; current hypothesis, taken from a version space]
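The steps above can be sketched for conjunctive hypotheses over attribute-value vectors. This is a simplified candidate-elimination sketch showing only the classic S/G boundary updates; the three-attribute shapes data set is invented for illustration:

```python
# Candidate elimination: maintain a most-specific hypothesis S and a set G
# of most-general hypotheses. '?' matches any value.
WILD = '?'

def matches(h, x):
    return all(hv == WILD or hv == xv for hv, xv in zip(h, x))

def generalize(s, x):
    if s is None:                    # first positive example
        return tuple(x)
    return tuple(sv if sv == xv else WILD for sv, xv in zip(s, x))

def specialize(g, s, x):
    """Minimal specializations of g that exclude negative x, guided by S."""
    out = []
    for i, gv in enumerate(g):
        if gv == WILD and s[i] != WILD and s[i] != x[i]:
            out.append(g[:i] + (s[i],) + g[i + 1:])
    return out

S = None
G = [(WILD, WILD, WILD)]
examples = [(('round', 'red', 'small'), True),
            (('round', 'red', 'large'), True),
            (('square', 'blue', 'small'), False)]

for x, positive in examples:
    if positive:
        S = generalize(S, x)                   # generalize the specific boundary
        G = [g for g in G if matches(g, x)]    # prune generals that miss x
    else:                                      # specialize generals covering x
        G = [h for g in G
               for h in (specialize(g, S, x) if matches(g, x) else [g])]

print(S, G)
```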

SLIDE 18

Machine learning

SLIDE 19

Machine learning

Machine learning is a process that enables artificial systems to improve with experience.

What are the criteria?

SLIDE 20

Machine learning

Machine learning is a process that enables artificial systems to improve with experience.

• Elements of a learning task
  – Items of Experience, i ∈ I
  – Available Actions: a ∈ A
  – Evaluation: v(a, I)
  – Performer System: b : I → A
  – Learning System: L : (i1, a1, v1) ... (in, an, vn) → b

SLIDE 21

Types of learning problems

• batch or offline vs online learning
  training phase and testing vs learning while doing
• complete vs partial vs pointwise feedback
  feedback concerns all vs some vs one performer system
• passive vs active learning
  observation vs experimentation
• a causal or casual setting
  presence or not of side effects: e.g. rain prediction vs behavioural control
• stationary vs non-stationary environment
  evaluation does or does not change in time

SLIDE 22

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

SLIDE 23

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

• Many learning methods are available, but studied and used by different communities!
• A few examples...

SLIDE 24

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

• Method 1: traditional statistics (regression analysis)
  h : R^n → R
  h is a linear function
  squared prediction error
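Method 1 can be made concrete in a few lines: a one-dimensional least-squares fit minimizing the squared prediction error. The four sample points are invented and lie exactly on y = 2x + 1, so the fit is exact:

```python
# One-dimensional least squares: h(x) = w*x + b minimizing the squared
# prediction error, via the closed-form solution.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - w * mx
print(w, b)   # exact fit here: w = 2.0, b = 1.0
```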
SLIDE 25

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

• Method 2: traditional pattern recognition
  h : R^n → {0, 1, …, m}
  h is a discriminant boundary
  right/wrong prediction error
SLIDE 26

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

• Method 3: "symbolic" machine learning
  h : {attribute-value vectors} → {0, 1}
  h is a boolean function (e.g. a decision tree)

SLIDE 27

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

• Method 4: Neural networks
  – h : R^n → R
  – h is a feedforward neural net

SLIDE 28

Learning a function from examples

domain X: descriptions
domain Y: predictions
H: hypothesis space
h: target hypothesis

examples (x1, y1) (x2, y2) .. (xn, yn) → learner → function h ∈ H, h : X → Y

• Method 5: Inductive Logic Programming
  – h : {term structures} → {0, 1}
  – h is a "simple" logic program.

SLIDE 29

Inductive Logic Programming

SLIDE 30

Symbolic induction

Background knowledge. Exemplars of class X:

E1 = square(A) & circle(B) & above(A, B)
E2 = triangle(C) & square(D) & above(C, D)

What is X?
  a group of geometric shapes?
  a group of 2 geometric shapes?
  a group of 2 geometric shapes with a square?

• Induction as least general generalization of exemplars.
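Least general generalization can be computed mechanically (Plotkin's lgg). The sketch below uses its own tuple encoding of atoms (an assumption of this note): same-predicate literals from E1 and E2 are matched and their arguments anti-unified, yielding "a square above something":

```python
from itertools import count

def lgg_conjunctions(e1, e2):
    """Least general generalization of two conjunctions of ground atoms.
    Atoms are (predicate, arg, ...) tuples; each pair of differing
    constants is mapped to a shared fresh variable."""
    table, fresh = {}, count(1)

    def var_for(a, b):
        if a == b:
            return a
        if (a, b) not in table:
            table[(a, b)] = "V%d" % next(fresh)
        return table[(a, b)]

    out = []
    for p in e1:
        for q in e2:
            if p[0] == q[0] and len(p) == len(q):   # same predicate and arity
                out.append((p[0],) + tuple(var_for(a, b)
                                           for a, b in zip(p[1:], q[1:])))
    return out

E1 = [("square", "a"), ("circle", "b"), ("above", "a", "b")]
E2 = [("triangle", "c"), ("square", "d"), ("above", "c", "d")]
result = lgg_conjunctions(E1, E2)
print(result)   # a square, above something
```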

SLIDE 31

Inductive Logic Programming

• Examples:

cute(X) :- dog(X), small(X), fluffy(X).
cute(X) :- cat(X), fluffy(X).

• Generalisation:

cute(X) :- fluffy(X).

SLIDE 32

Inductive Logic Programming

• Examples:

cute(X) :- dog(X), small(X), fluffy(X).
cute(X) :- cat(X), fluffy(X).

• Generalisation:

cute(X) :- fluffy(X).

• Background knowledge:

pet(X) :- cat(X).
pet(X) :- dog(X).
small(X) :- cat(X).

• Generalisation:

cute(X) :- pet(X), small(X), fluffy(X).

SLIDE 33

Inductive Logic Programming

• Examples E are expected to result from background knowledge B and hypothesis H: B ∧ H ⊨ E
• Inverse resolution:
  – from example:

cute(X) :- cat(X), fluffy(X).

  – from knowledge:

pet(X) :- cat(X).
small(X) :- cat(X).

  induce:

cute(X) :- pet(X), small(X), fluffy(X).

SLIDE 34

Explanation-Based Generalization

SLIDE 35

Explanation-based Generalization

telephone(T) :- connected(T), partOf(T, D), dialingDevice(D), emitsSound(T).
connected(X) :- hasWire(X, W), attached(W, wall).
connected(X) :- feature(X, bluetooth).
connected(X) :- feature(X, wifi).
connected(X) :- partOf(X, A), antenna(A), hasProtocol(X, gsm).
dialingDevice(DD) :- rotaryDial(DD).
dialingDevice(DD) :- frequencyDial(DD).
dialingDevice(DD) :- touchScreen(DD), hasSoftware(DD, DS), dialingSoftware(DS).
emitsSound(P) :- hasHP(P).
emitsSound(P) :- feature(P, bluetooth).

SLIDE 36

Explanation-based Generalization

example(myphone, Features) :-
  Features = [silver(myphone), belongs(myphone, jld), partOf(myphone, tc),
              touchScreen(tc), partOf(myphone, a), antenna(a),
              hasSoftware(tc, s1), game(s1), hasSoftware(tc, s2),
              dialingSoftware(s2), feature(myphone, wifi),
              feature(myphone, bluetooth), hasProtocol(myphone, gsm),
              beautiful(myphone)].

• Features activated during the proof:

[ feature(myphone, bluetooth), partOf(myphone, tc), touchScreen(tc), hasSoftware(tc, s2), dialingSoftware(s2), feature(myphone, bluetooth) ]

SLIDE 37

Explanation-based Generalization

• Features activated during the proof:

[ feature(myphone, bluetooth), partOf(myphone, tc), touchScreen(tc), hasSoftware(tc, s2), dialingSoftware(s2), feature(myphone, bluetooth) ]

SLIDE 38

Explanation-based Generalization

• Features activated during the proof:

[ feature(myphone, bluetooth), partOf(myphone, tc), touchScreen(tc), hasSoftware(tc, s2), dialingSoftware(s2), feature(myphone, bluetooth) ]

• From the trace, by generalizing shared constants:

C001(X) :- feature(X, bluetooth), partOf(X, Y), touchScreen(Y), hasSoftware(Y, Z), dialingSoftware(Z).

SLIDE 39

Explanation-based Generalization

• Features activated during the proof:

[ feature(myphone, bluetooth), partOf(myphone, tc), touchScreen(tc), hasSoftware(tc, s2), dialingSoftware(s2), feature(myphone, bluetooth) ]

• From the trace, by generalizing shared constants:

C001(X) :- feature(X, bluetooth), partOf(X, Y), touchScreen(Y), hasSoftware(Y, Z), dialingSoftware(Z).

• By grouping predicates that do not depend on X:

C002(Y) :- touchScreen(Y), hasSoftware(Y, Z), dialingSoftware(Z).

SLIDE 40

Explanation-based Generalization

• Features activated during the proof:

[ feature(myphone, bluetooth), partOf(myphone, tc), touchScreen(tc), hasSoftware(tc, s2), dialingSoftware(s2), feature(myphone, bluetooth) ]

• From the trace, by generalizing shared constants:

C001(X) :- feature(X, bluetooth), partOf(X, Y), touchScreen(Y), hasSoftware(Y, Z), dialingSoftware(Z).

• By grouping predicates that do not depend on X:

C002(Y) :- touchScreen(Y), hasSoftware(Y, Z), dialingSoftware(Z).

• C001 then becomes:

C001(X) :- feature(X, bluetooth), partOf(X, Y), C002(Y).
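The constant-to-variable step of this generalization can be mirrored in code. This sketch hard-codes the mapping of the shared constants (myphone → X, tc → Y, s2 → Z), following the slide rather than deriving the mapping automatically:

```python
# Variabilize the proof trace by replacing the shared constants with
# variables, reproducing the body of the C001 rule from the slide.
trace = [("feature", "myphone", "bluetooth"),
         ("partOf", "myphone", "tc"),
         ("touchScreen", "tc"),
         ("hasSoftware", "tc", "s2"),
         ("dialingSoftware", "s2")]

mapping = {"myphone": "X", "tc": "Y", "s2": "Z"}   # shared constants only
body = [(atom[0],) + tuple(mapping.get(arg, arg) for arg in atom[1:])
        for atom in trace]
print(body)
```

Constants that are not shared links between literals, like bluetooth, stay as they are, which is why the resulting rule still mentions the bluetooth feature explicitly.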

SLIDE 41

Description Complexity

SLIDE 42

Informal definition of Kolmogorov complexity

• The complexity of an object corresponds to the minimal length of a computer program producing this object.

[informal reduction of a presentation by Pierre-Alexandre Murena (Télécom ParisTech)]

SLIDE 43

Informal definition of Kolmogorov complexity

• The complexity of an object corresponds to the minimal length of a computer program producing this object.

• A finite string like "aaa..." is not very complex:  for i=1..n: print a

SLIDE 44

Informal definition of Kolmogorov complexity

• The complexity of an object corresponds to the minimal length of a computer program producing this object.

• A finite string like "aaa..." is not very complex:  for i=1..n: print a

• Is π complex?  π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ...

SLIDE 45

Informal definition of Kolmogorov complexity

• The complexity of an object corresponds to the minimal length of a computer program producing this object.

• A finite string like "aaa..." is not very complex:  for i=1..n: print a

• Is π complex?  π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ...

Kolmogorov complexity is incomputable.

SLIDE 46

Randomness

SLIDE 47

Randomness

• Are both these sequences equally random?

00000000000001111111111111
10010011011000111010110010

SLIDE 48

Randomness and compression

• Are both these sequences equally random?

00000000000001111111111111
10010011011000111010110010

• A finite sequence is said to be random if it is incompressible, i.e. if its shortest description is the sequence itself.
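Kolmogorov complexity itself is incomputable, but a general-purpose compressor gives a crude upper bound on description length. This sketch (the 1000-bit string lengths and the seed are illustrative choices) contrasts a regular sequence like the first one on the slide with a pseudo-random one:

```python
import random
import zlib

random.seed(42)
regular = "0" * 500 + "1" * 500                        # like 000...0111...1
irregular = "".join(random.choice("01") for _ in range(1000))

c_reg = len(zlib.compress(regular.encode()))
c_irr = len(zlib.compress(irregular.encode()))
print(c_reg, c_irr)   # the regular string has a much shorter description
```

zlib only captures one family of regularities, of course; the point of the definition is that a truly random sequence has no description, in any language, shorter than itself.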

SLIDE 49

Deduction

• Deduction generally works from the general to the particular

general premise and particular premise → particular conclusion
  all animals eat, fido is an animal → fido eats.

general premise and less general premises → less general conclusion
  all animals eat, cats are animals → cats eat.

SLIDE 50

Deduction and compression

• Deduction generally works from the general to the particular
• Intuition: A formal system is a compression of the set of theorems it can prove.

SLIDE 51

Deduction and compression

• Deduction generally works from the general to the particular
• Intuition: A formal system is a compression of the set of theorems it can prove.

Understanding is compressing.

SLIDE 52

Minimum Description Length as inductive principle

• The MDL principle states that the best theory to describe observed data is the one which minimizes the sum of the description lengths (in bits) of:
  – the theory description
  – the data encoded from the theory

SLIDE 53

Hofstadter's problems

ABC : ABD :: IJK : x
RST : RSU :: RRSSTT : x
ABC : ABD :: BCA : x
ABC : ABD :: AABABC : x
IJK : IJL :: IJJKKK : x

problems of analogy

SLIDE 54

Hofstadter's problems

ABC : ABD :: IJK : x
RST : RSU :: RRSSTT : x
ABC : ABD :: BCA : x
ABC : ABD :: AABABC : x
IJK : IJL :: IJJKKK : x

problems of analogy

• Let us apply the MDL principle to decide x:
  – we need to settle a description language with a set of operators manipulating strings
  – we interpret the data through the description language
  – we compute the complexity of the hypothetical organizations

SLIDE 55

Hofstadter's problems

ABC : ABD :: IJK : x
RST : RSU :: RRSSTT : x
ABC : ABD :: BCA : x
ABC : ABD :: AABABC : x
IJK : IJL :: IJJKKK : x

problems of analogy

SLIDE 56

Similar problem...

SLIDE 57

Similar problem... ...and many many others.