CS 744: Resilient Distributed Datasets, Shivaram Venkataraman, Fall 2020



SLIDE 1

CS 744: Resilient Distributed Datasets

Shivaram Venkataraman Fall 2020

SLIDE 2

ADMINISTRIVIA

  • Assignment 1: Due Sep 21, Monday at 10pm!
  • Assignment 2: ML will be released Sep 22
  • Final project details: Next week

  • Posted some submission notes to Piazza!
SLIDE 3

MOTIVATION: Programmability

Most real applications require multiple MR steps
  – Google indexing pipeline: 21 steps
  – Analytics queries (e.g. sessions, top K): 2-5 steps
  – Iterative algorithms (e.g. PageRank): 10's of steps
Multi-step jobs create spaghetti code
  – 21 MR steps → 21 mapper and reducer classes

SLIDE 4

MOTIVATION: Performance

MR only provides one pass of computation
  – Must write out data to file system in-between
Expensive for apps that need to reuse data
  – Multi-step algorithms (e.g. PageRank)
  – Interactive data mining

Latency: the time for an MR run has a lower bound, the time to read the input from disk and write the output back to disk.
SLIDE 5

Programmability

#include "mapreduce/mapreduce.h"

// User's map function
class SplitWords: public Mapper {
 public:
  virtual void Map(const MapInput& input) {
    const string& text = input.value();
    const int n = text.size();
    for (int i = 0; i < n; ) {
      // Skip past leading whitespace
      while (i < n && isspace(text[i]))
        i++;
      // Find word end
      int start = i;
      while (i < n && !isspace(text[i]))
        i++;
      if (start < i)
        Emit(text.substr(start, i - start), "1");
    }
  }
};
REGISTER_MAPPER(SplitWords);

// User's reduce function
class Sum: public Reducer {
 public:
  virtual void Reduce(ReduceInput* input) {
    // Iterate over all entries with the
    // same key and add the values
    int64 value = 0;
    while (!input->done()) {
      value += StringToInt(input->value());
      input->NextValue();
    }
    // Emit sum for input->key()
    Emit(IntToString(value));
  }
};
REGISTER_REDUCER(Sum);

int main(int argc, char** argv) {
  ParseCommandLineFlags(argc, argv);
  MapReduceSpecification spec;
  for (int i = 1; i < argc; i++) {
    MapReduceInput* in = spec.add_input();
    in->set_format("text");
    in->set_filepattern(argv[i]);
    in->set_mapper_class("SplitWords");
  }
  // Specify the output files
  MapReduceOutput* out = spec.output();
  out->set_filebase("/gfs/test/freq");
  out->set_num_tasks(100);
  out->set_format("text");
  out->set_reducer_class("Sum");
  // Do partial sums within map
  out->set_combiner_class("Sum");
  // Tuning parameters
  spec.set_machines(2000);
  spec.set_map_megabytes(100);
  spec.set_reduce_megabytes(100);
  // Now run it
  MapReduceResult result;
  if (!MapReduce(spec, &result)) abort();
  return 0;
}

Google MapReduce WordCount
SLIDE 6

APACHE Spark Programmability

val file = spark.textFile("hdfs://...")
val counts = file.flatMap(line => line.split(" "))
                 .map(word => (word, 1))
                 .reduceByKey(_ + _)
counts.save("out.txt")

Notes: far fewer lines of code than the MapReduce version; functions are written inline; you operate on distributed collections as if they were local collections.
SLIDE 7

APACHE Spark

Programmability: clean, functional API
  – Parallel transformations on collections
  – 5-10x less code than MR
  – Available in Scala, Java, Python and R
Performance
  – In-memory computing primitives
  – Optimization across operators

slide-8
SLIDE 8

Spark Concepts

Resilient distributed datasets (RDDs)
  – Immutable, partitioned collections of objects
  – May be cached in memory for fast reuse
Operations on RDDs
  – Transformations (build RDDs)
  – Actions (compute results)
Restricted shared variables
  – Broadcast, accumulators

Notes: once an RDD is created, its contents cannot change; Spark tracks how each RDD was derived using lineage records.
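A toy model in plain Python (not the Spark API; `ToyRDD` and its methods are invented for illustration) shows the split between transformations, which build new immutable datasets, and actions, which compute results:

```python
# Toy model of RDD concepts in plain Python (NOT the Spark API):
# an "RDD" is an immutable set of partitions.

class ToyRDD:
    def __init__(self, partitions):
        self.partitions = [tuple(p) for p in partitions]  # immutable

    # Transformations build a NEW ToyRDD; the original never changes.
    def map(self, f):
        return ToyRDD([[f(x) for x in p] for p in self.partitions])

    def filter(self, pred):
        return ToyRDD([[x for x in p if pred(x)] for p in self.partitions])

    # Actions compute a concrete result.
    def collect(self):
        return [x for p in self.partitions for x in p]

    def count(self):
        return sum(len(p) for p in self.partitions)

base = ToyRDD([[1, 2, 3], [4, 5, 6]])         # 2 partitions
doubled = base.map(lambda x: 2 * x)           # transformation
evens = doubled.filter(lambda x: x % 4 == 0)  # transformation
print(evens.collect())                        # action -> [4, 8, 12]
print(base.collect())                         # base is unchanged
```

Immutability is what makes lineage-based recovery safe: since no one can mutate `base`, re-running the recorded transformations always reproduces the same result.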

SLIDE 9

Example: Log Mining

Find error messages present in log files interactively (Example: HTTP server logs)

lines = spark.textFile("hdfs://...")
errors = lines.filter(_.startsWith("ERROR"))
messages = errors.map(_.split('\t')(2))
messages.cache()

messages.filter(_.contains("foo")).count

(Figure: the driver sends tasks to workers holding Block 1, Block 2, Block 3 and collects results; lines is the base RDD, errors and messages are transformed RDDs, count is an action.)

Notes: messages is an RDD[String]; cache() asks Spark to keep it in memory after it is first computed, so repeated queries do not re-read the file.
SLIDE 10

Example: Log Mining

Find error messages present in log files interactively (Example: HTTP server logs)

lines = spark.textFile("hdfs://...")
errors = lines.filter(_.startsWith("ERROR"))
messages = errors.map(_.split('\t')(2))
messages.cache()

messages.filter(_.contains("foo")).count
messages.filter(_.contains("bar")).count
. . .

(Figure: workers now answer the repeated queries from Cache 1, Cache 2, Cache 3.)

Result: full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data)
Result: search 1 TB data in 5-7 sec (vs 170 sec for on-disk data)

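The cached-query pattern above can be mimicked in plain Python over a toy in-memory log (the log lines here are made up; real Spark would distribute the work across workers):

```python
# Plain-Python analogue of the log-mining pipeline (toy data, not Spark).
logs = [
    "INFO\t10:01\tstartup",
    "ERROR\t10:02\tfoo failed",
    "ERROR\t10:03\tbar timeout",
    "ERROR\t10:04\tfoo retry",
]

lines = logs
errors = [l for l in lines if l.startswith("ERROR")]  # filter
messages = [l.split("\t")[2] for l in errors]         # map

# In Spark, messages.cache() keeps this result in cluster memory so the
# two queries below reuse it instead of re-reading and re-parsing the file.
foo_count = sum("foo" in m for m in messages)
bar_count = sum("bar" in m for m in messages)
print(foo_count, bar_count)  # -> 2 1
```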
SLIDE 11

Fault Recovery

messages = textFile(...).filter(_.startsWith("ERROR"))
                        .map(_.split('\t')(2))

(Lineage graph: HDFS File → filter (func = _.contains(...)) → Filtered RDD → map (func = _.split(...)) → Mapped RDD)

Notes: these are "narrow" dependencies, so a lost partition of the cached messages RDD can be rebuilt by re-running the filter and map functions on just the corresponding partition of the HDFS file.
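A minimal sketch of lineage-based recovery over a toy two-partition input (plain Python, not Spark internals; `read_partition`, `lineage`, and `recompute` are invented names):

```python
# Sketch of lineage-based recovery: each RDD remembers the operations
# that produced it, so a lost partition is rebuilt by replaying them
# on the corresponding input partition from stable storage.

def read_partition(i):
    # Stand-in for reading HDFS block i.
    blocks = [["ERROR\tx\ta", "INFO\tx\tb"], ["ERROR\tx\tc"]]
    return blocks[i]

lineage = [
    ("filter", lambda l: l.startswith("ERROR")),
    ("map", lambda l: l.split("\t")[2]),
]

def recompute(partition_id):
    data = read_partition(partition_id)  # go back to stable storage
    for op, fn in lineage:               # replay the recorded lineage
        if op == "filter":
            data = [x for x in data if fn(x)]
        else:
            data = [fn(x) for x in data]
    return data

# Partition 0 was "lost"; rebuild only that partition, not the whole job.
print(recompute(0))  # -> ['a']
```

Because the dependencies are narrow, recovery touches one input partition; a wide dependency would force recomputing pieces of every parent partition.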

SLIDE 12

Other RDD Operations

Transformations (define a new RDD)
  map, filter, sample, groupByKey, reduceByKey, cogroup, flatMap, union, join, cross, mapValues, ...
Actions (output a result)
  collect, reduce, take, fold, count, saveAsTextFile, saveAsHadoopFile, ...

SLIDE 13

DEPENDENCIES

(Board diagram: narrow vs. wide dependencies; a wide dependency requires a shuffle, much as MapReduce writes map output to intermediate files.)
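The two kinds of dependency can be simulated over plain Python partitions (toy sketch; `shuffle` is an invented helper, not a Spark API):

```python
# Narrow vs. wide dependencies, simulated with plain Python partitions.
parts = [[("a", 1), ("b", 1)], [("a", 1), ("c", 1)]]

# Narrow (map-like): each output partition depends on exactly ONE
# parent partition, so it is computed in place with no data movement.
mapped = [[(k, v * 10) for k, v in p] for p in parts]

# Wide (groupByKey-like): an output partition needs records from ALL
# parent partitions, so records must first be shuffled by key.
def shuffle(partitions, n_out):
    out = [dict() for _ in range(n_out)]
    for p in partitions:
        for k, v in p:
            out[hash(k) % n_out].setdefault(k, []).append(v)
    return out

grouped = shuffle(mapped, 2)
merged = {k: vs for d in grouped for k, vs in d.items()}
print(merged)
```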

SLIDE 14

Job Scheduler (1)

Captures RDD dependency graph Pipelines functions into “stages”

(DAG figure: RDDs A-G connected by map, union, join, and groupBy, pipelined into Stage 1, Stage 2, Stage 3; shaded boxes = cached partitions.)

Notes: narrow transformations (e.g. filter, map) are pipelined into a single task per partition.
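Pipelining can be illustrated in plain Python: fusing filter and map into one pass per partition, which is roughly what a stage does for a chain of narrow dependencies (toy data, invented names):

```python
# Pipelining narrow transformations: rather than materializing one
# intermediate collection per operator, fuse filter + map into a
# single pass over the partition, as a stage's task would.

partition = ["ERROR a", "INFO b", "ERROR c"]

# Unfused: two passes, one intermediate collection.
errs = [l for l in partition if l.startswith("ERROR")]
msgs = [l.split(" ")[1] for l in errs]

# Fused into one task: a single pass, no intermediate collection.
fused = [l.split(" ")[1] for l in partition if l.startswith("ERROR")]

print(msgs, fused)  # both -> ['a', 'c']
```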
SLIDE 15

Job Scheduler (2)

Cache-aware for data reuse, locality Partitioning-aware to avoid shuffles

(Same DAG figure: Stages 1-3 over RDDs A-G; shaded boxes = cached partitions, whose stages can be skipped.)

Notes: locality-aware scheduling is also used in MR; if the scheduler (driver) itself fails, the job aborts.
SLIDE 16

CHECKPOINTING

rdd = sc.parallelize(1 to 100, 2).map(x => 2*x)
rdd.checkpoint()
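A rough sketch of what checkpointing buys, in plain Python (this only imitates the idea; it is not how Spark persists RDDs):

```python
# Toy sketch of checkpointing: writing the computed data to stable
# storage lets later failures reload it instead of replaying the
# whole lineage from the original source.
import json
import os
import tempfile

data = [2 * x for x in range(1, 101)]  # like parallelize(1 to 100).map(2*x)

ckpt = os.path.join(tempfile.mkdtemp(), "rdd.ckpt")
with open(ckpt, "w") as f:             # rdd.checkpoint() analogue
    json.dump(data, f)

# After a failure: reload from the checkpoint, no recomputation.
with open(ckpt) as f:
    recovered = json.load(f)
print(recovered[:3])  # -> [2, 4, 6]
```

Checkpointing trades extra I/O now for a shorter recovery path later, which pays off when the lineage chain is long (e.g. many iterations of PageRank).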
SLIDE 17

SuMMARY

Spark: Generalize MR programming model
Support in-memory computations with RDDs
Job Scheduler: Pipelining, locality-aware

Notes: reduce returns its result at the driver; Spark's RDD API is designed to mimic the Scala collections API, so reduce in Spark behaves like Scala's.
SLIDE 18

DISCUSSION

https://forms.gle/4JDXfpRuVaXmQHxD8

SLIDE 19

" '

""T

aq

.

paomptatim

faction

( like

, Count)

Ds -

Driver ( H5t3t6)

③I

= 15

/

↳ bottleneck

⇐ ID

p

partitions

rkeM .

woyapoiffffmh

you tear we : d bytes .

← reduction dothi?:p×dqtes=Driv#

in

at

D-doi-hgredraBykeyq.dk

→ neg::#

Decor'I÷gamxID→Y:L

, #arena

  • paikmpy.ci
. ) D

tin

:%÷rdeW

"

peuilnkpm

" ppp

. seek
slide-20
SLIDE 20

When would reduction trees be better than using `reduce` in Spark? When would they not be?

Notes: reduction trees add overhead from doing the work in stages (scheduling, task creation, and shuffle overheads), so when the compute and the data transmitted are small, treeReduce may be slower than a plain reduce.
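The trade-off can be simulated in plain Python with the board example's partial sums (toy sketch; `tree_reduce` is an invented helper, not Spark's `treeReduce`):

```python
# Driver-side reduce vs. a reduction tree, simulated in plain Python.
# With plain reduce, every partition's partial result goes straight to
# the driver; with a tree, partials are combined in intermediate rounds
# so no single node receives all p results at once.

partials = [1, 5, 3, 6]       # one partial result per partition

# Plain reduce: the driver receives all 4 values and combines them.
driver_total = sum(partials)  # -> 15

# Tree reduce: combine pairs each round; the final node sees only 2.
def tree_reduce(vals):
    while len(vals) > 1:
        vals = [sum(vals[i:i + 2]) for i in range(0, len(vals), 2)]
    return vals[0]

print(driver_total, tree_reduce(partials))  # -> 15 15
```

With 4 tiny values the tree's extra rounds are pure overhead; the tree wins only when p × d is large enough that a single receiver is the bottleneck.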
SLIDE 21

Notes (Q&A): a "device is full" error usually means a worker's local disk is full, since shuffle and spill data go to local disk. dstat is a Python tool that reports CPU, network, and disk utilization. The Spark web UI, when enabled, shows some job info.
SLIDE 22

NEXT STEPS

Next week: Resource Management

  • Mesos,YARN
  • DRF

Assignment 1 is due soon!

Review form question: when is MR better than Spark, and vice versa? Spark is better for multiple passes over data when the data fits in memory; the trade-off also depends on the frequency of failures and on memory vs. disk speed.