Learning to Infer Program Sketches
Maxwell Nye, Luke Hewitt, Josh Tenenbaum, Armando Solar-Lezama
PowerPoint PPT Presentation


SLIDE 1

List Processing from IO:
[1, 2, 3, 4, 5] → [2, 4]
[7, 8, 0, 9] → [8, 0]

Natural language + IO → code:
"Consider an array of numbers, find elements in the given array not divisible by two"
[1, 2, 3, 4, 5] → [1, 3, 5]
[7, 8, 0, 9] → [7, 9]

Text Editing from IO:
Max Nye → Nye, M.
Luke Hewitt → Hewitt, L.

Goal: We want to automatically write code from the kinds of specifications humans can easily provide, such as examples or natural language instruction.
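As a concrete illustration of the natural-language task above (this code is ours, not from the slides), the spec "find elements not divisible by two" corresponds to a one-line Python program:

```python
# "Consider an array of numbers, find elements in the given array
#  not divisible by two."
def not_divisible_by_two(xs):
    return [x for x in xs if x % 2 != 0]

assert not_divisible_by_two([1, 2, 3, 4, 5]) == [1, 3, 5]
assert not_divisible_by_two([7, 8, 0, 9]) == [7, 9]
```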

SLIDE 2

How might people solve problems like this?

Goal: Write a program which maps inputs to outputs

Given:
[1, 2, 3, 4, 5] → [2, 4]
[0, 6, 2, 7] → [0, 6, 2]
[5, 10, 5, 1, 8] → [10, 8]

SLIDE 3

How might people solve problems like this?

Goal: Write a program which maps inputs to outputs

Given:
[1, 2, 3, 4, 5] → [2, 4]
[0, 6, 2, 7] → [0, 6, 2]
[5, 10, 5, 1, 8] → [10, 8]

People use a flexible trade-off between pattern recognition and reasoning

SLIDE 4

Easy problem:
Spec:
[1, 2, 3, 4, 5] → [2, 4]
[0, 6, 2, 7] → [0, 6, 2]
[5, 10, 5, 1, 8] → [10, 8]
Solution:

SLIDE 5

Easy problem:
Spec:
[1, 2, 3, 4, 5] → [2, 4]
[0, 6, 2, 7] → [0, 6, 2]
[5, 10, 5, 1, 8] → [10, 8]
Solution:
filter(lambda x: x%2==0, input)

SLIDE 6

Easy problem:
Spec:
[1, 2, 3, 4, 5] → [2, 4]
[0, 6, 2, 7] → [0, 6, 2]
[5, 10, 5, 1, 8] → [10, 8]
Solution:
filter(lambda x: x%2==0, input)

Fast, using pattern recognition
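The slide's solution can be checked mechanically against the spec (a quick sanity check of ours, not part of the talk):

```python
# The synthesized program from the slide: keep the even elements.
solution = lambda xs: list(filter(lambda x: x % 2 == 0, xs))

spec = [([1, 2, 3, 4, 5], [2, 4]),
        ([0, 6, 2, 7], [0, 6, 2]),
        ([5, 10, 5, 1, 8], [10, 8])]
assert all(solution(inp) == out for inp, out in spec)
```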

SLIDE 7

More difficult problem:
Spec:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 3, 2, 1] → [10, 7, 1]
[5, 1, 2, 13, 4] → [1, 13, 4]
Solution:

SLIDE 8

More difficult problem:
Spec:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 3, 2, 1] → [10, 7, 1]
[5, 1, 2, 13, 4] → [1, 13, 4]
Solution:
filter(<SOMETHING>, input)   (Fast, using pattern recognition)

SLIDE 9

More difficult problem:
Spec:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 3, 2, 1] → [10, 7, 1]
[5, 1, 2, 13, 4] → [1, 13, 4]
Solution:
filter(<SOMETHING>, input)   (Fast, using pattern recognition)
filter(lambda x: x%3==1, input)   (Symbolic reasoning)

SLIDE 10

More difficult problem:
Spec:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 3, 2, 1] → [10, 7, 1]
[5, 1, 2, 13, 4] → [1, 13, 4]
Solution:
filter(<SOMETHING>, input)   (Fast, using pattern recognition)
filter(lambda x: x%3==1, input)   (Slow, symbolic reasoning)
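Again, the completed sketch can be verified against the spec (our check, not from the slides):

```python
# The completed sketch from the slide: keep elements congruent to 1 mod 3.
solution = lambda xs: list(filter(lambda x: x % 3 == 1, xs))

spec = [([3, 4, 5, 6, 7], [4, 7]),
        ([10, 8, 7, 3, 2, 1], [10, 7, 1]),
        ([5, 1, 2, 13, 4], [1, 13, 4])]
assert all(solution(inp) == out for inp, out in spec)
```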

SLIDE 11

Very difficult problem:
Spec:
[2, 5, 0, 16, 12] → 0
[4, 23, 11, 9, 25] → 25
[3, 29, 30, 14, 16] → 14

SLIDE 12

Very difficult problem:
Spec:
[2, 5, 0, 16, 12] → 0
[4, 23, 11, 9, 25] → 25
[3, 29, 30, 14, 16] → 14
[1, 7, 6, 9, 5] → 7
[5, 5, 1, 8, 8, 12, 4] → 12
[0, 4, 8, 5, 1] → 0
[3, 7, 2, 9, 1] → 9
[1, 0, 3, 7, 3, 8] → 0

SLIDE 13

Very difficult problem:
Spec:
[2, 5, 0, 16, 12] → 0
[4, 23, 11, 9, 25] → 25
[3, 29, 30, 14, 16] → 14
Solution:
<SOMETHING>
input[input[0]]   (Slow, symbolic reasoning)
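The slide's solution indexes the list by its own first element; it checks out against the spec (our verification, not from the talk):

```python
# The slide's solution: index the list by its own first element.
solution = lambda xs: xs[xs[0]]

spec = [([2, 5, 0, 16, 12], 0),
        ([4, 23, 11, 9, 25], 25),
        ([3, 29, 30, 14, 16], 14)]
assert all(solution(inp) == out for inp, out in spec)
```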

SLIDE 14

Q: How do we model this?
A: Program sketches (Solar-Lezama et al., 2008; Murali et al., 2017)

Pipeline: program specification → program sketch → full program
Pattern recognition (e.g., neural network) maps the specification to a sketch;
symbolic reasoning (e.g., guess and check) completes the sketch into a full program.

Example:
Spec: [3, 4, 5, 6, 7] → [4, 7]; [10, 8, 7, 3, 2, 1] → [10, 7, 1]
Sketch: filter(<HOLE>, input)
Full program: filter(lambda x: x%3==1, input)

Flexible trade-off between pattern recognition and reasoning
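To make the sketch-then-fill pipeline concrete, here is a minimal guess-and-check hole filler (our sketch, not the paper's implementation; the candidate space of modular predicates is an assumption for illustration):

```python
# Sketch: filter(<HOLE>, input). A symbolic enumerator fills the hole by
# trying simple modular predicates and checking each against the examples.
spec = [([3, 4, 5, 6, 7], [4, 7]),
        ([10, 8, 7, 3, 2, 1], [10, 7, 1]),
        ([5, 1, 2, 13, 4], [1, 13, 4])]

def fill_sketch(spec):
    for m in range(2, 5):      # modulus of the candidate predicate
        for r in range(m):     # remainder of the candidate predicate
            if all([x for x in inp if x % m == r] == out for inp, out in spec):
                return "filter(lambda x: x %% %d == %d, input)" % (m, r)
    return None                # no candidate fits the examples

print(fill_sketch(spec))  # filter(lambda x: x % 3 == 1, input)
```

The enumeration is exhaustive but cheap, which is exactly the division of labor the slide describes: the sketch narrows the search so the symbolic step only has to fill one hole.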

SLIDE 15

Our system: SketchAdapt

Pipeline: program specification → program sketch → full program
Neural sketch generator: proposes a sketch, e.g. filter(<HOLE>, input).
Symbolic enumerator: completes the sketch, e.g. filter(lambda x: x%3==1, input).
Neural recognizer: a learned neural network that outputs production probabilities (e.g., .25 .05 .02 .03 .25 .30 ...) to guide the enumerator.

Example specification:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 1] → [10, 7, 1]
[5, 1, 13, 4] → [1, 13, 4]

SLIDE 16

Our system: SketchAdapt

Pipeline: program specification → program sketch → full program
Neural sketch generator: RNN that proposes program sketches (cf. RobustFill), e.g. filter(<HOLE>, input).
Symbolic enumerator: completes the sketch, e.g. filter(lambda x: x%3==1, input).
Neural recognizer: a learned neural network that outputs production probabilities (e.g., .25 .05 .02 .03 .25 .30 ...) to guide the enumerator.

Example specification:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 1] → [10, 7, 1]
[5, 1, 13, 4] → [1, 13, 4]

Devlin et al., 2017; Balog et al., 2016

SLIDE 17

Our system: SketchAdapt

Pipeline: program specification → program sketch → full program
Sketch generator: RNN that proposes program sketches (cf. RobustFill), e.g. filter(<HOLE>, input).
Symbolic synthesizer: enumerator that fills in sketches, guided by the neural recognizer (cf. DeepCoder), e.g. filter(lambda x: x%3==1, input).
Neural recognizer: a learned neural network that outputs production probabilities (e.g., .25 .05 .02 .03 .25 .30 ...).

Example specification:
[3, 4, 5, 6, 7] → [4, 7]
[10, 8, 7, 1] → [10, 7, 1]
[5, 1, 13, 4] → [1, 13, 4]

Devlin et al., 2017; Balog et al., 2016
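The production probabilities on the slide can be read as a search ordering: the enumerator tries higher-probability hole fillers first. A toy version (the probabilities and candidate predicates here are made up for illustration, not the trained model's output):

```python
# Toy recognizer-guided enumeration: try hole fillers for
# filter(<HOLE>, input) in order of production probability, highest first.
candidates = [
    (0.30, "x % 2 == 0", lambda x: x % 2 == 0),
    (0.25, "x % 3 == 1", lambda x: x % 3 == 1),
    (0.25, "x > 0",      lambda x: x > 0),
    (0.05, "x % 2 == 1", lambda x: x % 2 == 1),
]

spec = [([3, 4, 5, 6, 7], [4, 7]),
        ([10, 8, 7, 3, 2, 1], [10, 7, 1])]

def fill_hole(spec):
    # Higher-probability predicates are cheaper, so they are tried first.
    for _, src, pred in sorted(candidates, key=lambda c: -c[0]):
        if all([x for x in inp if pred(x)] == out for inp, out in spec):
            return "filter(lambda x: %s, input)" % src
    return None
```

Here the recognizer does not change what is searched, only the order, which is what lets a good recognizer find solutions after evaluating far fewer candidates.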

SLIDE 18

Results: list processing

[Figure: List Processing, length-3 test programs. x-axis: number of candidates evaluated per problem (10^1 to 10^5); y-axis: % of problems solved (20 to 100). Curves: SketchAdapt (ours), Synthesizer only (DeepCoder), Generator only (RobustFill).]

SketchAdapt can recognize familiar problems and generalize to unfamiliar problems.
Comparison: ours vs. pattern recognition only (neural network) vs. reasoning only (symbolic enumeration).
Trained on length 3 programs; tested on length 3 programs.

SLIDE 19

Results: list processing

[Figure, left: List Processing, length-3 test programs. x-axis: number of candidates evaluated per problem (10^1 to 10^5); y-axis: % of problems solved (20 to 100). Curves: SketchAdapt (ours), Synthesizer only (DeepCoder), Generator only (RobustFill).]
[Figure, right: List Processing, length-4 test programs. x-axis: number of candidates evaluated per problem (10^1 to 10^5); y-axis: % of problems solved (10 to 50). Same three curves.]

SketchAdapt can recognize familiar problems and generalize to unfamiliar problems.
Comparison: ours vs. pattern recognition only (neural network) vs. reasoning only (symbolic enumeration).
Trained on length 3 programs; tested on length 3 and length 4 programs.

SLIDE 20

Natural language + IO examples → Code

SLIDE 21

Natural language + IO examples → Code

[Figure: AlgoLisp. x-axis: number of training programs used (2000, 4000, 6000, 8000, 79214 = full dataset); y-axis: % of test programs solved (20 to 100). Curves: Our model, Generator only (RobustFill), Synthesizer only (DeepCoder).]

SketchAdapt requires less data than pure neural approaches.

SLIDE 22

Natural language + IO examples → Code

[Figure: AlgoLisp. x-axis: number of training programs used (2000, 4000, 6000, 8000, 79214 = full dataset); y-axis: % of test programs solved (20 to 100). Curves: Our model, Generator only (RobustFill), Synthesizer only (DeepCoder).]

SketchAdapt requires less data than pure neural approaches, and generalizes to unseen concepts.

SLIDE 23

Come see our poster: Today (Thurs) 06:30 - 09:00 PM @ Pacific Ballroom #182

Final example:
Spec:
[1, 3, -4, 3] → 3
[-3, 0, 2, -1] → 2
[7, -4, -5, 2] → 2
Sketch: Count >0 (Map (HOLE))
Full program: Count >0 (Map +1 input)
Production probabilities over DSL primitives (head, tail, +1, -1, input, sum, ...): .25 .03 .02 .06 .40 .05
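The final program, Count >0 (Map +1 input), reads as "add 1 to each element, then count how many results are positive"; in Python (our translation, not from the slides) it checks out against the spec:

```python
# Count >0 (Map +1 input): increment each element, count positive results.
program = lambda xs: sum(1 for y in xs if y + 1 > 0)

spec = [([1, 3, -4, 3], 3),
        ([-3, 0, 2, -1], 2),
        ([7, -4, -5, 2], 2)]
assert all(program(inp) == out for inp, out in spec)
```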