Fast, General Parallel Computation for Machine Learning

Robin Elizabeth Yancey and Norm Matloff
University of California at Davis
P2PS Workshop, ICPP 2018

Outline

  • Motivation.
  • Software Alchemy.
  • Theoretical foundations.
  • Empirical investigation.
Motivation

Characteristics of machine learning (ML) algorithms:

  • Big Data: in an n × p (cases × features) dataset, both n AND p are large.
  • Compute-intensive algorithms: sorting, k-NN, matrix inversion, iteration.
  • Not generally embarrassingly parallel (EP). (An exception: Random Forests – grow different trees within different processes.)
  • Memory problems: the computation may not fit on a single machine (esp. in R or on GPUs).

Parallel ML: Desired Properties

  • Simple, easily implementable. (And easily understood by non-techies.)
  • As general in applicability as possible.
Software Alchemy

alchemy: The medieval forerunner of chemistry...concerned particularly with attempts to convert base metals into gold... a seemingly magical process of transformation...

Software Alchemy (cont’d.)

  • “Alchemical”: converts non-EP problems to statistically equivalent EP problems.
  • Developed independently by (Matloff, JSS, 2013) and several others. EP: no programming challenge. :-)
  • Not just Embarrassingly Parallel but also Embarrassingly Simple. :-)
Software Alchemy (cont’d)

  • Break the data into chunks, one chunk per process.
  • Apply the procedure, e.g. neural networks (NNs), to each chunk, using off-the-shelf SERIAL algorithms.
  • In the regression case (continuous response variable), take the final estimate to be the average of the chunked estimates.
  • In the classification case (categorical response variable), do “voting.”
  • If we have some kind of parametric model (incl. NNs), we can average the parameter values across chunks. (A code sketch of the recipe follows.)
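
A minimal sketch of the recipe, in Python with scikit-learn (our own implementation is in the R partools package; the learner, chunk count, and toy data below are purely illustrative):

    # Software Alchemy sketch: q chunks, one serial fit per chunk,
    # final estimate = average of the per-chunk predictions.
    import numpy as np
    from multiprocessing import Pool
    from sklearn.neighbors import KNeighborsRegressor

    def fit_and_predict(args):
        X_chunk, y_chunk, X_test = args
        # Off-the-shelf SERIAL algorithm, applied to one chunk only.
        model = KNeighborsRegressor(n_neighbors=10).fit(X_chunk, y_chunk)
        return model.predict(X_test)

    def sa_predict(X, y, X_test, q=4):
        # Break the (i.i.d.-row) data into q chunks, one per process.
        chunk_idx = np.array_split(np.arange(len(y)), q)
        tasks = [(X[i], y[i], X_test) for i in chunk_idx]
        with Pool(q) as pool:
            chunk_preds = pool.map(fit_and_predict, tasks)
        # Regression case: average the chunked estimates.
        # (Classification case: take a majority vote instead.)
        return np.mean(chunk_preds, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100000, 5))
        y = X @ np.ones(5) + rng.normal(size=100000)
        print(sa_predict(X, y, X_test=rng.normal(size=(5, 5))))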

Theory

  • Theorem: Say the rows of the data matrix are i.i.d. and the output of the procedure is asymptotically normal. Then the Software Alchemy estimator is fully statistically efficient, i.e. it has the same asymptotic variance. (A one-step variance check follows below.)
  • Conditions of the theorem could be relaxed.
  • Can do some informal analysis of speedup (next slide).
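
The variance check, as a sketch only, under the i.i.d. and asymptotic-normality assumptions above (σ² denotes the asymptotic variance constant of the base estimator, and θ̂_k the estimator from chunk k):

    % Each chunk estimator uses n/q i.i.d. rows, so asymptotically
    % Var(\hat\theta_k) \approx \sigma^2 / (n/q) = q \sigma^2 / n.
    % The q chunks are independent, hence for the average \bar\theta:
    \operatorname{Var}(\bar\theta)
      = \operatorname{Var}\Bigl( \frac{1}{q} \sum_{k=1}^{q} \hat\theta_k \Bigr)
      = \frac{1}{q^2} \cdot q \cdot \frac{q \sigma^2}{n}
      = \frac{\sigma^2}{n}
    % -- the same asymptotic variance as the full-data estimator.
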
Theory (cont’d.)

Say the original algorithm has time complexity O(n^c).

  • Then the Software Alchemy time for q processes is O((n/q)^c) = O(n^c / q^c), a speedup of q^c.
  • If c > 1, we get a superlinear speedup! (Worked numbers below.)
  • In fact, even if the chunked computation is done serially, the time is O(q (n/q)^c) = O(n^c / q^(c-1)), a speedup of q^(c-1), a win if c > 1.
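
Concretely, with hypothetical numbers (not from our experiments): take a quadratic-time algorithm, c = 2, and q = 4 chunks.

    % c = 2, q = 4:
    T_{\text{full}} = O(n^2), \qquad
    T_{\text{SA, parallel}} = O\bigl( (n/4)^2 \bigr) = O(n^2 / 16)
    % a speedup of q^c = 4^2 = 16 from only 4 processes (superlinear);
    % even run serially, T = O(4 (n/4)^2) = O(n^2 / 4), a speedup of q^{c-1} = 4.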

Theory (cont’d.)

Although...

  • SA time is technically max_i chunktime_i. If the chunk times have large variance, this may result in a speedup of < q^c.
  • If the number of features p is a substantial fraction of n, the asymptotic convergence may not have quite kicked in yet.
  • If the full algorithm time is not just O(f(n)) but O(g(n, p)), e.g. if we need a p × p matrix inversion, then the speedup is limited.
  • The above analysis ignores overhead time for distributing the data. However, we advocate permanently distributed data anyway (Hadoop, Spark, our partools package).

Other Issues

  • How many chunks? Having too many means the chunks are too small for the asymptotics.
  • Impact of tuning parameters.
  • E.g. in neural nets, the user must choose the number of hidden layers, the number of units per layer, etc. The method of (Feng, 2016) has so many tuning parameters that the paper has a separate table to summarize them.
  • Performance may depend crucially on the settings for those parameters.
  • What if the best tuning parameter settings for the chunks are not the same as the best for the full data?

Empirical Investigation

  • Recommender systems
      • Famous example: predict the rating user i would give to movie j, based on what i has said about other movies, and what ratings j got from other users.
      • Maximum likelihood
      • Matrix factorization
      • k-NN model
  • General ML applications
      • Logistic
      • Neural networks
      • Random forests
      • k-NN
Recommender Systems Datasets

  • MovieLens: user ratings of movies. We used the 1 million- and 20 million-record versions.
  • Book Crossings: book reviews, about 1 million records.
  • Jester: joke reviews, about 6 million records.
  • No optimization of tuning parameters; the focus is just on run time.
  • No data cleaning.
  • Timings on a quad-core machine with hyperthreading.
Prediction Methods

  • MLE: The rating of item j by user i is

    Y_ij = µ + γ′X_i + α_i + β_j + ε_ij,

    where X_i is a vector of covariates for user i (e.g. age), and µ + α_i and µ + β_j are overall means.

  • Nonnegative matrix factorization: Find low-rank matrices W and H such that the matrix A of all the Y_ij, observed or not, is approx. WH. Fill in the missing values from the latter. (A toy sketch follows below.)
  • k-Nearest Neighbor: The k users with ratings patterns closest to that of user i and who have rated item j are collected, and the average of their item-j ratings is computed.

Report: scatter, train and test times, MAPE or prop. correct class.
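
For concreteness, a toy version of the NMF fill-in step (a sketch only: a plain projected-gradient fit on the observed entries, not the solver, rank, or step size used in our experiments):

    # Fill in missing ratings by low-rank factorization A ≈ WH, fitting
    # only the observed entries and keeping the factors nonnegative.
    import numpy as np

    def nmf_fill(A, rank=2, steps=2000, lr=0.01):
        observed = ~np.isnan(A)                    # True where Y_ij was rated
        rng = np.random.default_rng(0)
        W = rng.random((A.shape[0], rank))
        H = rng.random((rank, A.shape[1]))
        for _ in range(steps):
            R = np.where(observed, W @ H - A, 0.0) # residual on rated cells only
            W -= lr * R @ H.T                      # gradient step on W
            H -= lr * W.T @ R                      # gradient step on H
            W, H = np.maximum(W, 0), np.maximum(H, 0)  # enforce nonnegativity
        return W @ H                               # WH fills in the gaps

    A = np.array([[5., 4., np.nan],
                  [4., np.nan, 1.],
                  [np.nan, 1., 2.]])
    print(nmf_fill(A).round(2))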

NMF, MovieLens 20M

chunks   scatter   train.   pred.   mean abs. error
full        --     34.046   0.346   0.649
2        13.49     18.679   0.647   0.647
4        21.86     10.444   1.113   0.656

Table: NMF Model, MovieLens Data, 20M

Approaching linear speedup.

k-NN, Jester Data

# of chunks   time (sec)   mean abs. error
full           259.601     4.79
2               76.440     4.60
4               58.133     4.36
8               81.185     3.89

Table: k-NN Model, Jester Data

Superlinear speedup for 2, 4 chunks. Note improved accuracy, probably due to nonoptimal k in full set.

MLE, Book Crossings

chunks   scatter   train.     pred.   mean abs. error
full        --     1114.155   0.455   2.67
2         5.101     685.757   0.455   2.72
4        11.134     423.018   1.173   2.77
8        10.918     246.668   1.470   2.82

Table: MLE Model, Book Crossings Data

Sublinear speedup due to matrix inversion, but still faster at 8 chunks.

MLE, MovieLens Data

chunks   scatter   train.    pred.   mean abs. error
full        --      99.028   0.267   0.710
2         4.503    100.356   0.317   0.737
4         2.596     73.055   0.469   0.752
8         8.408    100.356   0.483   0.764

Table: MLE Model, MovieLens Data, 1M

Speedup limited due to matrix inversion.

General ML Applications

Methods: logistic regression; neural nets; k-NN; random forests. (A parameter-averaging sketch for the logit case follows the dataset list.)

Datasets:

  • NYC taxi data: trip times, fares, location, etc.
  • Forest cover data: predict the type of ground cover from satellite data.
  • Last.fm: popularity of songs.
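
As promised above, a sketch of the parameter-averaging variant for the logit runs (a scikit-learn stand-in for our actual code; the chunk count and solver settings are illustrative):

    # Chunked logistic regression: fit each chunk with a serial solver,
    # then average the estimated parameters across the q chunks.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def sa_logit(X, y, q=8):
        coefs, intercepts = [], []
        for idx in np.array_split(np.arange(len(y)), q):
            m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            coefs.append(m.coef_.ravel())
            intercepts.append(m.intercept_[0])
        return np.mean(coefs, axis=0), np.mean(intercepts)

    def sa_logit_predict(X, coef, intercept):
        # Predict class 1 iff the averaged linear predictor is positive
        # (binary 0/1 labels assumed).
        return (X @ coef + intercept > 0).astype(int)
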
Logit, NYC Taxi Data

# of chunks   time     prop. correct class.
full          40.641   0.694
2             38.753   0.694
4             23.501   0.694
8             14.320   0.694

Table: Logistic Model, NYC Taxi Data

We have matrix inversion here too, but still get a speedup at 8 threads (and up to 32 on another machine with 16 cores).

NNs, Last.fm Data

# of chunks   time      mean abs. error
full          486.259   221.41
2             325.567   211.94
4             254.306   210.15
8             133.495   221.41

Table: Neural nets, Last.fm data, 5 hidden layers

Sublinear, but still improving at 8 chunks. Better prediction with 2, 4 chunks; tuning thus suboptimal in full case.

k-NN, NYC Taxi Data

# of chunks   time     mean abs. error
full          87.463   456.00
2             48.110   451.08
4             25.75    392.13
8             17.413   424.36

Table: k-NN, NYC Taxi Data

Superlinear speedup at 4 chunks, with better prediction error; k too large in full?

RF, Forest Cover Data

# of chunks   time      prop. correct class.
full          841.884   0.955
2             485.171   0.941
4             236.518   0.919
6             194.803   0.911

Table: Random Forests, Forest Cover Data

As noted, random forests are EP anyway, but the results are still interesting.

GPU Settings

Use of Software Alchemy with GPUs:

  • In a multi-GPU setting, chunking is a natural solution, hence SA.
  • If GPU memory is insufficient, use SA serially. We may still get a speedup (per the earlier slide).

Conclusions, Comments

  • Software Alchemy is extremely simple and statistically valid — same statistical accuracy.
  • We generally got linear or even superlinear speedup on most recommender-system and other ML algorithms.
  • We used our partools package, which is based on a “Leave It There” philosophy: keep an object distributed as long as possible, including as a distributed file. Thus no scatter time is needed.