

SLIDE 1

Reza Zadeh

Advanced Data Science on Spark

@Reza_Zadeh | http://reza-zadeh.com

SLIDE 2

Data Science Problem

Data growing faster than processing speeds
Only solution is to parallelize on large clusters

» Wide use in both enterprises and web industry

How do we program these things?

SLIDE 3

Use a Cluster

Convex optimization
Matrix factorization
Machine learning
Numerical linear algebra
Large graph analysis
Streaming and online algorithms

Following lectures on http://stanford.edu/~rezab/dao

SLIDE 4

Outline

Data Flow Engines and Spark
The Three Dimensions of Machine Learning
Communication Patterns
Advanced Optimization
State of Spark Ecosystem

SLIDE 5

Traditional Network Programming

Message-passing between nodes (e.g., MPI)
Very difficult to do at scale:

» How to split problem across nodes?

  • Must consider network & data locality

» How to deal with failures? (inevitable at scale)
» Even worse: stragglers (node not failed, but slow)
» Ethernet networking not fast
» Have to write programs for each machine

Rarely used in commodity datacenters

SLIDE 6

Disk vs Memory

L1 cache reference: 0.5 ns
L2 cache reference: 7 ns
Mutex lock/unlock: 100 ns
Main memory reference: 100 ns
Disk seek: 10,000,000 ns

SLIDE 7

Network vs Local

Send 2K bytes over 1 Gbps network: 20,000 ns
Read 1 MB sequentially from memory: 250,000 ns
Round trip within same datacenter: 500,000 ns
Read 1 MB sequentially from network: 10,000,000 ns
Read 1 MB sequentially from disk: 30,000,000 ns
Send packet CA -> Netherlands -> CA: 150,000,000 ns

SLIDE 8

Data Flow Models

Restrict the programming interface so that the system can do more automatically
Express jobs as graphs of high-level operators

» System picks how to split each operator into tasks and where to run each task
» Run parts twice for fault recovery

Biggest example: MapReduce

[Diagram: three Map tasks feeding two Reduce tasks]
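As an illustrative sketch (not from the deck), a minimal PySpark word count shows the canonical MapReduce-style job expressed as a graph of high-level operators:

from pyspark import SparkContext

sc = SparkContext()

# "Map" side: split lines into words, emit (word, 1) pairs
# "Reduce" side: sum the counts per word
counts = (sc.parallelize(["to be or not to be"])
            .flatMap(lambda line: line.split())
            .map(lambda w: (w, 1))
            .reduceByKey(lambda a, b: a + b))

print(counts.collect())  # e.g. [('to', 2), ('be', 2), ('or', 1), ('not', 1)]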

SLIDE 9
Example: Iterative Apps

[Diagram: each iteration (iter. 1, iter. 2, ...) reads its input from the file system and writes its output back, paying a file system read and write per pass; likewise, queries 1-3 each re-read the input from the file system to produce results 1-3.]

Commonly spend 90% of time doing I/O

SLIDE 10

MapReduce evolved

MapReduce is great at one-pass computation, but inefficient for multi-pass algorithms
No efficient primitives for data sharing

» State between steps goes to distributed file system
» Slow due to replication & disk storage

SLIDE 11

Verdict

MapReduce algorithms research doesn't go to waste; it just gets sped up and made easier to use.
MapReduce is still useful to study as an algorithmic framework, but silly to use directly.

SLIDE 12

Spark Computing Engine

Extends a programming language with a distributed collection data-structure

» “Resilient distributed datasets” (RDD)

Open source at Apache

» Most active community in big data, with 50+ companies contributing

Clean APIs in Java, Scala, Python
Community: SparkR, being released in 1.4!

SLIDE 13

Key Idea

Resilient Distributed Datasets (RDDs)

» Collections of objects across a cluster with user-controlled partitioning & storage (memory, disk, ...)
» Built via parallel transformations (map, filter, ...)
» The world only lets you make RDDs such that they can be:

Automatically rebuilt on failure

SLIDE 14

Resilient Distributed Datasets (RDDs)

Main idea: Resilient Distributed Datasets

» Immutable collections of objects, spread across cluster
» Statically typed: RDD[T] has objects of type T

val sc = new SparkContext()
val lines = sc.textFile("log.txt") // RDD[String]

// Transform using standard collection operations
val errors = lines.filter(_.startsWith("ERROR"))   // lazily evaluated
val messages = errors.map(_.split('\t')(2))        // lazily evaluated

messages.saveAsTextFile("errors.txt")              // kicks off a computation

SLIDE 15

MLlib: Available algorithms

classification: logistic regression, linear SVM, naïve Bayes, least squares, classification tree
regression: generalized linear models (GLMs), regression tree
collaborative filtering: alternating least squares (ALS), non-negative matrix factorization (NMF)
clustering: k-means||
decomposition: SVD, PCA
optimization: stochastic gradient descent, L-BFGS
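A hedged usage sketch (Spark 1.x MLlib RDD API) showing how training one algorithm from this list looks; the toy data is illustrative:

from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext()

# Two toy labeled points; real data would be an RDD of many examples
points = sc.parallelize([LabeledPoint(1.0, [1.0, 2.0]),
                         LabeledPoint(0.0, [-1.0, -2.0])])

# Train logistic regression with stochastic gradient descent
model = LogisticRegressionWithSGD.train(points, iterations=10)
print(model.predict([1.0, 2.0]))  # expected: 1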
SLIDE 16

The Three Dimensions

SLIDE 17

ML Objectives

Almost all machine learning objectives are optimized using this update:

w is a vector of dimension d; we're trying to find the best w via optimization
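The update itself appeared as a figure on the slide; a hedged reconstruction, consistent with the plain gradient step (w -= gradient) used in the code later in the deck:

w^{(t+1)} = w^{(t)} - \alpha \sum_{i=1}^{n} \nabla_w \ell\big(w^{(t)}; x_i, y_i\big)

where \alpha is a step size (the later code effectively uses \alpha = 1) and \ell is the per-example loss.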

SLIDE 18

Scaling

1) Data size
2) Number of models
3) Model size

SLIDE 19

Logistic Regression

Goal: find best line separating two sets of points

[Figure: scatter of + and - points, with a random initial line and the target separating line]

SLIDE 20

Data Scaling

import numpy
from math import exp

# readPoint, D, and iterations are defined elsewhere in the talk's setup
data = spark.textFile(...).map(readPoint).cache()

w = numpy.random.rand(D)

for i in range(iterations):
    gradient = data.map(lambda p:
        (1 / (1 + exp(-p.y * w.dot(p.x)))) * p.y * p.x
    ).reduce(lambda a, b: a + b)
    w -= gradient

print "Final w: %s" % w

SLIDE 21

Separable Updates

Can be generalized for
» Unconstrained optimization
» Smooth or non-smooth
» LBFGS, Conjugate Gradient, Accelerated Gradient methods, ...

SLIDE 22

Logistic Regression Results

[Chart: running time (s) vs. number of iterations (1, 5, 10, 20, 30) for Hadoop and Spark]

Hadoop: 110 s / iteration
Spark: 80 s first iteration, 1 s further iterations

100 GB of data on 50 m1.xlarge EC2 machines


SLIDE 23

Behavior with Less RAM

% of working set in memory:  0%    25%   50%   75%   100%
Iteration time (s):          68.8  58.1  40.7  29.7  11.5

SLIDE 24

Lots of little models

Training lots of little models is embarrassingly parallel
Most of the work should be handled by the data flow paradigm; ML pipelines does this (see the sketch below)
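A minimal sketch of the pattern, assuming a hypothetical train_and_score helper; the next slide's hyper-parameter tuning is the typical use case:

from pyspark import SparkContext

def train_and_score(reg):
    # Hypothetical stand-in: train a small model with this regularization
    # value and return (setting, validation score)
    return (reg, 1.0 / (1.0 + reg))

sc = SparkContext()
params = [0.001, 0.01, 0.1, 1.0, 10.0]

# Embarrassingly parallel: one independent task per hyper-parameter setting
results = sc.parallelize(params).map(train_and_score).collect()
print(max(results, key=lambda rs: rs[1]))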

SLIDE 25

Hyper-parameter Tuning

SLIDE 26

Model Scaling

Linear models only need to compute the dot product of each example with the model
Use a BlockMatrix to store the data, and use joins to compute the dot products
Coming in 1.5

SLIDE 27

Model Scaling

Data joined with model (weight):
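The figure here is lost; below is a hedged toy reconstruction of the join-based dot product in PySpark, assuming a hypothetical (featureIndex, (exampleId, value)) layout for the data and (featureIndex, weight) for the model:

from pyspark import SparkContext

sc = SparkContext()

# Two sparse examples, stored feature-by-feature
data = sc.parallelize([(0, ("x1", 2.0)), (1, ("x1", 3.0)),
                       (0, ("x2", 1.0)), (2, ("x2", 4.0))])
# Model weights, keyed the same way
model = sc.parallelize([(0, 0.5), (1, -1.0), (2, 0.25)])

# Join on featureIndex, multiply value by weight, then sum per example
dots = (data.join(model)
            .map(lambda kv: (kv[1][0][0], kv[1][0][1] * kv[1][1]))
            .reduceByKey(lambda a, b: a + b))

print(sorted(dots.collect()))  # [('x1', -2.0), ('x2', 1.5)]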

SLIDE 28

Life of a Spark Program

SLIDE 29

Life of a Spark Program

1) Create some input RDDs from external data or parallelize a collection in your driver program.
2) Lazily transform them to define new RDDs using transformations like filter() or map()
3) Ask Spark to cache() any intermediate RDDs that will need to be reused.
4) Launch actions such as count() and collect() to kick off a parallel computation, which is then optimized and executed by Spark.
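A minimal PySpark walkthrough of the four steps, with illustrative data:

from pyspark import SparkContext

sc = SparkContext()

# 1) Create an input RDD by parallelizing a driver-side collection
lines = sc.parallelize(["ERROR disk full", "INFO ok", "ERROR timeout"])

# 2) Lazily define a new RDD with a transformation; nothing runs yet
errors = lines.filter(lambda l: l.startswith("ERROR"))

# 3) Cache the intermediate RDD, since we reuse it below
errors.cache()

# 4) Actions kick off the optimized parallel computation
print(errors.count())    # 2
print(errors.collect())  # ['ERROR disk full', 'ERROR timeout']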

SLIDE 30

Example Transformations

map(), flatMap(), filter(), mapPartitions(), mapPartitionsWithIndex(), sample(), union(), intersection(), distinct(), groupByKey(), reduceByKey(), sortByKey(), join(), cogroup(), cartesian(), pipe(), coalesce(), repartition(), partitionBy(), ...

SLIDE 31

Example Actions

reduce(), collect(), count(), first(), take(), takeSample(), takeOrdered(), saveAsTextFile(), saveAsSequenceFile(), saveAsObjectFile(), countByKey(), foreach(), saveToCassandra(), ...

SLIDE 32

Communication Patterns

None: Map, Filter (embarrassingly parallel)
All-to-one: reduce
One-to-all: broadcast
All-to-all: reduceByKey, groupByKey, join
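One toy operation per pattern, as a hedged PySpark sketch:

from pyspark import SparkContext

sc = SparkContext()
nums = sc.parallelize([1, 2, 3, 4])

squares = nums.map(lambda x: x * x)           # none: embarrassingly parallel
total = nums.reduce(lambda a, b: a + b)       # all-to-one: 10 on the driver
factor = sc.broadcast(10)                     # one-to-all: sent to every node
scaled = nums.map(lambda x: x * factor.value)
by_parity = (nums.map(lambda x: (x % 2, x))   # all-to-all: shuffle by key
                 .reduceByKey(lambda a, b: a + b))
print(by_parity.collect())                    # e.g. [(0, 6), (1, 4)]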

SLIDE 33

Communication Patterns

SLIDE 34

Shipping code to the cluster

SLIDE 35

RDD à Stages à Tasks

rdd1.join(rdd2).groupBy(…).filter(…)

RDD Objects: build operator DAG
DAG Scheduler: split the DAG into stages of tasks; submit each stage as ready
Task Scheduler: launch tasks (one TaskSet per stage) via the cluster manager; retry failed or straggling tasks
Worker: execute tasks; store and serve blocks (Block manager, Threads)

SLIDE 36

Example Stages

[Figure: RDDs A-F connected by groupBy, map, join, and filter operators, split into Stage 1, Stage 2, and Stage 3; legend: RDD, cached partition, lost partition]

SLIDE 37

Talking to Cluster Manager

Manager can be:
» YARN
» Mesos
» Spark Standalone

SLIDE 38

Shuffling (everyday)

SLIDE 39

How would you do a reduceByKey on a cluster? Sort! Decades of research have given us algorithms such as TimSort.

SLIDE 40

Shuffle

groupByKey, sortByKey, reduceByKey = Sort: use advances in single-machine memory-disk sorting for all-to-all communication

SLIDE 41

Sorting

Distribute Timsort, which is already well-adapted to respecting disk vs memory
Sample points to find good boundaries
Each machine sorts locally and builds an index
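A toy sketch of the sample-then-sort idea (this is not Spark's actual sortByKey implementation; sizes and boundaries are illustrative):

import bisect
from pyspark import SparkContext

sc = SparkContext()
pairs = sc.parallelize([(5, "e"), (3, "c"), (8, "h"),
                        (1, "a"), (9, "i"), (2, "b")])

num_parts = 2
# Sample some keys to pick range-partition boundaries
sample = sorted(pairs.keys().takeSample(False, 4))
bounds = [sample[i * len(sample) // num_parts] for i in range(1, num_parts)]

# Partition by key range, then sort each partition locally
result = (pairs.partitionBy(num_parts, lambda k: bisect.bisect(bounds, k))
               .mapPartitions(sorted))
print(result.collect())  # keys come back globally sorted across partitions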

SLIDE 42

Sorting (shuffle)

Distributed TimSort

SLIDE 43

Example Join

SLIDE 44

Broadcasting

SLIDE 45

Broadcasting

Often needed to propagate the current guess for optimization variables to all machines

The exact wrong way to do it is "one machine feeds all"; use bit-torrent instead
Needs log(p) rounds of communication (e.g., about 10 rounds for p = 1024 workers)

SLIDE 46

Bit-torrent Broadcast

SLIDE 47

Broadcast Rules

Create with SparkContext.broadcast(initialVal)
Access with .value inside tasks (first task on each node to use it fetches the value)
Cannot be modified after creation
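A minimal usage sketch following these rules, with illustrative data:

from pyspark import SparkContext

sc = SparkContext()

# Create once on the driver
lookup = sc.broadcast({"a": 1, "b": 2})

# Read it inside tasks via .value; never modify it
rdd = sc.parallelize(["a", "b", "a"])
print(rdd.map(lambda k: lookup.value[k]).collect())  # [1, 2, 1]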

SLIDE 48

Replicated Join

SLIDE 49

Optimization Example: Gradient Descent

SLIDE 50

Logistic Regression

Already saw this with data scaling. Need to optimize it with broadcast.

SLIDE 51

Model Broadcast: LR

SLIDE 52

Model Broadcast: LR

Call sc.broadcast
Use via .value
Rebroadcast with sc.broadcast
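Putting the three rules together on the earlier logistic regression (a hedged sketch; data, w, iterations, and exp are as in the data-scaling slide):

for i in range(iterations):
    w_br = sc.broadcast(w)        # (re)broadcast the current weights
    gradient = data.map(lambda p:
        (1 / (1 + exp(-p.y * w_br.value.dot(p.x)))) * p.y * p.x
    ).reduce(lambda a, b: a + b)
    w -= gradient                 # update on the driver; rebroadcast next loop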

SLIDE 53

Separable Updates

Can be generalized for
» Unconstrained optimization
» Smooth or non-smooth
» LBFGS, Conjugate Gradient, Accelerated Gradient methods, ...

SLIDE 54

State of the Spark ecosystem

SLIDE 55

Spark Community

Most active open source community in big data: 200+ developers, 50+ companies contributing

[Bar chart: contributors in the past year for Spark, Storm, and Giraph]

SLIDE 56

Project Activity

[Bar charts: commits and lines of code changed for MapReduce, YARN, HDFS, Storm, and Spark]

Activity in past 6 months

SLIDE 57

Continuing Growth

[Chart: contributors per month to Spark (source: ohloh.net)]

SLIDE 58

Conclusions

SLIDE 59

Spark and Research

Spark has all its roots in research, so we hope to keep incorporating new ideas!

SLIDE 60

Conclusion

Data flow engines are becoming an important platform for numerical algorithms
While early models like MapReduce were inefficient, new ones like Spark close this gap
More info: spark.apache.org