Boosting Convergence of Timing Closure using Feature Selection in a Learning-driven Approach - PowerPoint PPT Presentation



slide-1
SLIDE 1

Boosting Convergence of Timing Closure using Feature Selection in a Learning-driven Approach

Que Yanghua, Harnhua Ng, Nachiket Kapre yanghua.que@ntu.edu.sg, nachiket@ieee.org

slide-2
SLIDE 2

Claim

2

slide-3
SLIDE 3

Claim

  • Feature Selection helps boost AUC scores for Timing Closure ML models by ~10%

2

slide-4
SLIDE 4

Claim

  • Feature Selection helps boost AUC scores for Timing Closure ML models by ~10%
  • ML models predict timing closure of design by modifying CAD tool parameters — commercial tool InTime, by Plunify Inc.

2

slide-5
SLIDE 5

Claim

  • Feature Selection helps boost AUC scores for Timing Closure ML models by ~10%
  • ML models predict timing closure of design by modifying CAD tool parameters — commercial tool InTime, by Plunify Inc.
  • For Altera Quartus
 — ~80 parameters to 8-22 influential parameters

2

slide-6
SLIDE 6

FPGA CAD Flow

Verilog/VHDL Code FPGA CAD Tool Bitstream (area, delay, power)

3

slide-7
SLIDE 7

FPGA CAD Flow

Verilog/VHDL Code FPGA CAD Tool Bitstream (area, delay, power) CAD parameters

4

slide-8
SLIDE 8

FPGA CAD Flow

Verilog/VHDL Code FPGA CAD Tool Bitstream (area, delay, power) CAD parameters

[Histogram: frequency of occurrence of Total Negative Slack (TNS) across CAD runs for the ecg benchmark]

5

slide-9
SLIDE 9

InTime High-Level View

6

slide-10
SLIDE 10

InTime High-Level View

6

slide-11
SLIDE 11

InTime High-Level View

  • Position: 


— Verified RTL designs expensive to edit
 — For timing closure, use CAD parameters

6

slide-12
SLIDE 12

InTime High-Level View

  • Position: 


— Verified RTL designs expensive to edit
 — For timing closure, use CAD parameters

  • InTime


— free RTL, play with CAD tool parameters
 — Problem: exhaustive search intractable 
 — Solution: use machine learning!

6
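The "exhaustive search intractable" point can be made concrete with quick back-of-the-envelope arithmetic. The parameter and setting counts below are illustrative assumptions (the real Quartus parameter set and its value ranges differ):

```python
# Rough scale of the CAD parameter search space. Assumes ~80 parameters
# with ~3 settings each -- illustrative numbers, not Quartus's exact set.
full_space = 3 ** 80     # exhaustive sweep over all parameters
pruned_space = 3 ** 22   # after pruning to <=22 influential parameters

print(f"full:   {float(full_space):.1e} configurations")
print(f"pruned: {float(pruned_space):.1e} configurations")
```

Even the pruned space is far too large to sweep, which is why InTime samples runs and learns a model instead of enumerating configurations.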

slide-13
SLIDE 13

InTime High-Level View

[FPGA’15 Designer’s Day] Preliminary results on customer designs (limited ability to discuss specifics)
 [FCCM’15 Full] Extended results quantifying ML effects on open-source benchmarks
 [FPGA’16 Short] Case-for “design-specific” learning instead of building a generic model
 [FCCM’16 Short] Classifier accuracy exploration across ML strategies, and hyper-parameter tuning

7

slide-14
SLIDE 14

Outline

  • Brief intro of InTime flow and ML techniques
  • Justifying the approach


— Opportunity for using ML (Slack distribution)
 — The need for running ML (Entropy/Correlation)

  • Review of Feature Selection
  • Experimental results


— Impact of features/run samples
 — ROC curves across designs
 — Comparing vs. FCCM’16 results

8

slide-15
SLIDE 15

Outline

  • Brief intro of InTime flow and ML techniques
  • Justifying the approach


— Opportunity for using ML (Slack distribution)
 — The need for running ML (Entropy/Correlation)

  • Review of Feature Selection
  • Experimental results


— Impact of features/run samples
 — ROC curves across designs
 — Comparing vs. FCCM’16 results

9

slide-16
SLIDE 16

InTime High-Level View

  • Position: 


— Verified RTL designs expensive to edit
 — For timing closure, use CAD parameters

  • InTime


— free RTL, play with CAD tool parameters
 — Problem: exhaustive search intractable 
 — Solution: use machine learning!

10

slide-17
SLIDE 17

How InTime works

  • Simply tabulate results


— record input CAD parameters + timing slack

  • Build a model for predicting [GOOD/BAD]

11
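The tabulate-then-label step above can be sketched in a few lines. This is a minimal illustration, not InTime's actual code; the parameter names, slack values, and the zero-slack GOOD/BAD threshold are all hypothetical:

```python
def label_runs(runs, slack_threshold=0.0):
    """Attach a GOOD/BAD label to each (params, slack) record.
    Assumption: a run is GOOD when its slack meets the threshold."""
    return [(params, "GOOD" if slack >= slack_threshold else "BAD")
            for params, slack in runs]

def good_rate_by_setting(labelled, param):
    """For one CAD parameter, fraction of GOOD runs per setting value."""
    counts = {}
    for params, label in labelled:
        value = params[param]
        good, total = counts.get(value, (0, 0))
        counts[value] = (good + (label == "GOOD"), total + 1)
    return {v: g / t for v, (g, t) in counts.items()}

# Hypothetical run table: CAD parameter settings + resulting slack (ns)
runs = [
    ({"fitter_effort": "high", "seed": 1}, 0.4),
    ({"fitter_effort": "high", "seed": 2}, 0.1),
    ({"fitter_effort": "low",  "seed": 1}, -0.8),
    ({"fitter_effort": "low",  "seed": 2}, -0.2),
]
print(good_rate_by_setting(label_runs(runs), "fitter_effort"))
# {'high': 1.0, 'low': 0.0}
```

A real classifier would combine many such parameters at once, but the per-setting GOOD rate is the raw signal the model learns from.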

slide-18
SLIDE 18

How InTime works

12

slide-19
SLIDE 19

Outline

  • Brief intro of InTime flow and ML techniques
  • Justifying the approach


— Opportunity for using ML (Slack distribution)
 — The need for running ML (Entropy/Correlation)

  • Review of Feature Selection
  • Experimental results


— Impact of features/run samples
 — ROC curves across designs
 — Comparing vs. FCCM’16 results

13

slide-20
SLIDE 20

Q&A

  • Does this really work?
  • What’s the opportunity in timing slack spread?
  • Do we really need machine learning?
  • How unique are the final converged solutions?
  • What is the coverage scope of our tool?

14

slide-21
SLIDE 21

Does this really work?

15

slide-22
SLIDE 22

Results — No Learning

16

slide-23
SLIDE 23

Results — with Learning

17

slide-24
SLIDE 24

18

slide-25
SLIDE 25

What’s the opportunity in timing slack spread?

19

slide-26
SLIDE 26

Parameter Exploration

20

slide-27
SLIDE 27

Do we really need machine learning?

21

slide-28
SLIDE 28

Results (aes)

22

slide-29
SLIDE 29

Results (aes)

best classification

23

slide-30
SLIDE 30

How unique are the final converged solutions?

24

slide-31
SLIDE 31

Dissimilarity

25

slide-32
SLIDE 32

What is the coverage scope of our tool?

26

slide-33
SLIDE 33

Entropy in solutions

27

slide-34
SLIDE 34

So, what’s the bottomline?

28

slide-35
SLIDE 35

29

slide-36
SLIDE 36

Outline

  • Brief intro of InTime flow and ML techniques
  • Justifying the approach


— Opportunity for using ML (Slack distribution)
 — The need for running ML (Entropy/Correlation)

  • Review of Feature Selection
  • Experimental results


— Impact of features/run samples
 — ROC curves across designs
 — Comparing vs. FCCM’16 results

30

slide-37
SLIDE 37

Feature Selection

31

slide-38
SLIDE 38

Feature Selection

31

slide-39
SLIDE 39

Feature Selection

  • Hypothesis: Not all CAD parameters affect timing outcome

31

slide-40
SLIDE 40

Feature Selection

  • Hypothesis: Not all CAD parameters affect timing outcome
  • Can we find the most relevant parameters?

31

slide-41
SLIDE 41

Feature Selection

  • Hypothesis: Not all CAD parameters affect timing outcome
  • Can we find the most relevant parameters?
  • Feature selection: known technique in ML circles
 — avoid noise during classification
 — avoid over-fitting

31

slide-42
SLIDE 42

Feature Selection

  • Hypothesis: Not all CAD parameters affect timing outcome
  • Can we find the most relevant parameters?
  • Feature selection: known technique in ML circles
 — avoid noise during classification
 — avoid over-fitting

31

slide-43
SLIDE 43

Techniques

  • OneR — use frequency of class labels
  • Information.Gain — uses entropy measure
  • Relief — clustering of parameters
  • Ensemble — combination of above…
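The Information.Gain technique above (the entropy-based ranker) can be sketched directly. This is a generic illustration of the measure, not the deck's R implementation; the toy run table and its feature values are hypothetical:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting the runs on one feature."""
    remainder = 0.0
    for v in set(r[feature] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[feature] == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return entropy(labels) - remainder

# Toy run table: (effort, seed) -> GOOD/BAD. Values are hypothetical.
rows = [("high", "a"), ("high", "b"), ("low", "a"), ("low", "b")]
labels = ["GOOD", "GOOD", "BAD", "BAD"]
print(information_gain(rows, labels, 0))  # 1.0: effort fully predictive
print(information_gain(rows, labels, 1))  # 0.0: seed uninformative
```

Ranking all CAD parameters by this score and keeping the top 8-22 is the pruning step the Claim slides describe.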
slide-44
SLIDE 44

Outline

  • Brief intro of InTime flow and ML techniques
  • Justifying the approach


— Opportunity for using ML (Slack distribution)
 — The need for running ML (Entropy/Correlation)

  • Review of Feature Selection
  • Experimental results


— Impact of features/run samples
 — ROC curves across designs
 — Comparing vs. FCCM’16 results

33

slide-45
SLIDE 45

Q&A

  • How effective is feature selection?
  • How long does the learning process take?
  • What is the impact of choosing feature count?

34

slide-46
SLIDE 46

How effective is feature selection?

35

slide-47
SLIDE 47

36

slide-48
SLIDE 48

Classifier method doesn’t matter

37

slide-49
SLIDE 49

Baseline FCCM 2016 result

38

slide-50
SLIDE 50

39

slide-51
SLIDE 51

2-3x reduction in parallel FPGA CAD runs

40

slide-52
SLIDE 52

Outlier — fails to meet timing and quits

41

slide-53
SLIDE 53

How long does it take to learn?

42

slide-54
SLIDE 54

43

slide-55
SLIDE 55

Need at least 20 runs

44

slide-56
SLIDE 56

Need 3 rounds x 30 runs configuration

45

slide-57
SLIDE 57

Better AUC the more we run

46
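The AUC score tracked across these slides has a simple rank-statistic reading: the probability that a randomly chosen GOOD run is scored above a randomly chosen BAD run. A minimal sketch of that computation (illustrative scores, not InTime's evaluation code):

```python
def auc(scores, labels):
    """AUC as a rank statistic: fraction of (positive, negative) pairs
    where the positive scores higher; ties count as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for 2 GOOD (1) and 2 BAD (0) runs
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0: perfect ranking
print(auc([0.9, 0.3, 0.8, 0.2], [1, 1, 0, 0]))  # 0.75: one pair flipped
```

A random classifier sits at 0.5 on this scale, so the ~10% boost claimed for feature selection is movement toward the perfect-ranking end.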

slide-58
SLIDE 58

How do we choose the correct subset of features?

47

slide-59
SLIDE 59

48

slide-60
SLIDE 60

Goldilocks zone

49

slide-61
SLIDE 61

Too many features — large training set

50

slide-62
SLIDE 62

Too few features — more data required for other features

51

slide-63
SLIDE 63

Conclusions

  • Feature Selection helps boost AUC of InTime machine learning by ~10%
  • Key idea — prune the set of Quartus CAD tool parameters to explore to <22
  • Evidence continues to point towards design-specificity

52

slide-64
SLIDE 64

Open-source flow

  • We are open-sourcing our ML routines


— http://bitbucket.org/spinosae/plunify-ml.git
 — README.md contains instructions for installing and running on your machine

  • Requires R (dependencies installed automatically)

53

slide-65
SLIDE 65

Impact of feature count

54

slide-66
SLIDE 66

55

slide-67
SLIDE 67

Goldilocks zone

56

slide-68
SLIDE 68

57

slide-69
SLIDE 69

Information.Gain consistently best

58

slide-70
SLIDE 70

59

slide-71
SLIDE 71

Goldilocks zone

60

slide-72
SLIDE 72

61