
slide-1
SLIDE 1

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

http://cs246.stanford.edu

slide-2
SLIDE 2

 TAs:

  • Bahman Bahmani
  • Juthika Dabholkar
  • Pierre Kreitmann
  • Lu Li
  • Aditya Ramesh

 Office hours:

  • Jure: Tuesdays 9-10am, Gates 418
  • See course website for TA office hours

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 2

slide-3
SLIDE 3

 Course website:

http://cs246.stanford.edu

  • Lecture slides (at least 6h before the lecture)
  • Announcements, homeworks, solutions
  • Readings!

 Readings: Book "Mining of Massive Datasets"

  • by Anand Rajaraman and Jeffrey D. Ullman
  • Free online: http://i.stanford.edu/~ullman/mmds.html

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 3

slide-4
SLIDE 4

 4 longer homeworks: 40%

  • Theoretical and programming questions
  • All homeworks (even if empty) must be handed in
  • Assignments take time. Start early!
  • How to submit?
  • Paper: Box outside the class and in the Gates east wing
  • We will grade on paper!
  • You should also submit electronic copy:
  • 1 PDF/ZIP file (writeups, experimental results, code)
  • Submission website: http://cs246.stanford.edu/submit/
  • SCPD: Only submit electronic copy & send us email
  • 7 late days for the quarter:
  • Max 5 late days per assignment

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 4

slide-5
SLIDE 5

 Short weekly quizzes: 20%

  • Short e-quizzes on Gradiance (see course website!)
  • First quiz is already online
  • You have 7 days to complete it. No late days!

 Final exam: 40%

  • March 19 at 8:30am

 It’s going to be fun and hard work 

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 5

slide-6
SLIDE 6

 Homework schedule:

  Date   Out   In
  1/11   HW1
  1/25   HW2   HW1
  2/8    HW3   HW2
  2/22   HW4   HW3
  3/7          HW4

 No class:

  • 1/16: Martin Luther King Jr. Day
  • 2/20: Presidents' Day

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 6

slide-7
SLIDE 7

 Recitation sessions:

  • Review of probability and statistics
  • Installing and working with Hadoop
  • We prepared a virtual machine with Hadoop preinstalled
  • HW0 helps you write your first Hadoop program
  • See course website!
  • We will announce the dates later
  • Sessions will be recorded

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 7

slide-8
SLIDE 8

 Algorithms (CS161)

  • Dynamic programming, basic data structures

 Basic probability (CS109 or Stat116)

  • Moments, typical distributions, MLE, …

 Programming (CS107 or CS145)

  • Your choice, but C++/Java will be very useful

 We provide some background, but

the class will be fast paced

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 8

slide-9
SLIDE 9

 CS345a (Data Mining) was split into 2 courses

  • CS246: Mining massive datasets:
  • Methods/algorithms oriented course
  • Homeworks (theory & programming)
  • No class project
  • CS341: Project in mining massive datasets:
  • Project oriented class
  • Lectures/readings related to the project
  • Unlimited access to Amazon EC2 cluster
  • We intend to keep the class small
  • Taking CS246 is essentially a prerequisite

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 9

slide-10
SLIDE 10

 For questions/clarifications use Piazza!

  • If you don’t have an @stanford.edu email address, email us and we will register you

 To communicate with the course staff use

  • cs246-win1112-staff@lists.stanford.edu

 We will post announcements to

  • cs246-win1112-all@lists.stanford.edu
  • If you are not registered or auditing send us email

and we will subscribe you!

 You are welcome to sit-in & audit the class

  • Send us email saying that you will be auditing

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 10

slide-11
SLIDE 11

 Much of the course will be devoted to

ways to do data mining on the Web:

  • Mining to discover things about the Web
  • E.g., PageRank, finding spam sites
  • Mining data from the Web itself
  • E.g., analysis of click streams, similar products at

Amazon, making recommendations

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 11

slide-12
SLIDE 12

 Much of the course will be devoted to

large scale computing for data mining

 Challenges:

  • How to distribute computation?
  • Distributed/parallel programming is hard

 Map-reduce addresses all of the above

  • Google’s computational/data manipulation model
  • Elegant way to work with big data

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 12

slide-13
SLIDE 13

 High-dimensional data:

  • Locality Sensitive Hashing
  • Dimensionality reduction
  • Clustering

 The data is a graph:

  • Link Analysis: PageRank, Hubs & Authorities

 Machine Learning:

  • k-NN, Perceptron, SVM, Decision Trees

 Data is infinite:

  • Mining data streams

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 13

slide-14
SLIDE 14

 Applications:

  • Association Rules
  • Recommender systems
  • Advertising on the Web
  • Web spam detection

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 14

slide-15
SLIDE 15
slide-16
SLIDE 16

 Discovery of patterns and models that are:

  • Valid: hold on new data with some certainty
  • Useful: should be possible to act on the item
  • Unexpected: non-obvious to the system
  • Understandable: humans should be able to

interpret the pattern

 Subsidiary issues:

  • Data cleansing: detection of bogus data
  • Visualization: something better than MBs of output
  • Warehousing of data (for retrieval)

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 16

slide-17
SLIDE 17

 Predictive Methods

  • Use some variables to predict unknown or future values of other variables

 Descriptive Methods

  • Find human-interpretable patterns that

describe the data

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 17

slide-18
SLIDE 18

 Scalability
 Dimensionality
 Complex and Heterogeneous Data
 Data Quality
 Data Ownership and Distribution
 Privacy Preservation
 Streaming Data

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 18

slide-19
SLIDE 19

 Overlaps with:

  • Databases: Large-scale (non-main-memory) data
  • Machine learning: Complex methods, small data
  • Statistics: Models

 Different cultures:

  • To a DB person, data mining

is an extreme form of analytic processing – queries that examine large amounts of data

  • Result is the query answer
  • To a statistician, data-mining is

the inference of models

  • Result is the parameters of the model

[Diagram: Data Mining at the overlap of Machine Learning/Pattern Recognition, Statistics/AI, and Database systems]

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 19

slide-20
SLIDE 20

 A big data-mining risk is that you will

“discover” patterns that are meaningless.

 Bonferroni’s principle: (roughly) if you look in

more places for interesting patterns than your amount of data will support, you are bound to find crap

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 20

slide-21
SLIDE 21

 Joseph Rhine was a parapsychologist in the

1950’s who hypothesized that some people had Extra-Sensory Perception

 He devised an experiment where subjects

were asked to guess 10 hidden cards – red or blue

 He discovered that almost 1 in 1000 had ESP –

they were able to get all 10 right!

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 21

slide-22
SLIDE 22

 He told these people they had ESP and called

them in for another test of the same type

 Alas, he discovered that almost all of them

had lost their ESP

 What did he conclude?

 He concluded that you shouldn’t tell people

they have ESP; it causes them to lose it 

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 22

slide-23
SLIDE 23
slide-24
SLIDE 24

[Diagram: a single machine (CPU, Memory, Disk): the setting for Machine Learning, Statistics, and “Classical” Data Mining]

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 24

slide-25
SLIDE 25

 20+ billion web pages x 20KB = 400+ TB

 1 computer reads 30-35 MB/sec from disk

  • ~4 months to read the web

 ~1,000 hard drives to store the web

 Takes even more to do something useful with the data!
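A quick back-of-the-envelope check of the read-time claim (assuming a sustained 35 MB/sec read rate):

\[
\frac{400\,\mathrm{TB}}{35\,\mathrm{MB/s}} = \frac{4\times10^{14}\,\mathrm{B}}{3.5\times10^{7}\,\mathrm{B/s}} \approx 1.1\times10^{7}\,\mathrm{s} \approx 130\ \mathrm{days} \approx 4\ \mathrm{months}
\]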

 Standard architecture is emerging:

  • Cluster of commodity Linux nodes
  • Gigabit ethernet interconnect

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 25

slide-26
SLIDE 26

[Diagram: racks of commodity nodes (CPU, memory, disk) connected by switches]

  • Each rack contains 16-64 nodes
  • 1 Gbps between any pair of nodes in a rack
  • 2-10 Gbps backbone between racks
  • In Aug 2006 Google had ~450,000 machines

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 26

slide-27
SLIDE 27

 Large-scale computing for data mining

problems on commodity hardware

 Challenges:

  • How do you distribute computation?
  • How can we make it easy to write distributed

programs?

  • Machines fail:
  • One server may stay up 3 years (1,000 days)
  • If you have 1,000 servers, expect to lose 1/day
  • In Aug 2006 Google had ~450,000 machines

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 27

slide-28
SLIDE 28

 Idea:

  • Bring computation close to the data
  • Store files multiple times for reliability

 Map-reduce addresses these problems

  • Google’s computational/data manipulation model
  • Elegant way to work with big data
  • Storage Infrastructure – File system
  • Google: GFS
  • Hadoop: HDFS
  • Programming model
  • Map-Reduce

1/9/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 28

slide-29
SLIDE 29

 Problem

  • If nodes fail, how to store data persistently?

 Answer

  • Distributed File System:
  • Provides global file namespace
  • Google GFS; Hadoop HDFS;

 Typical usage pattern

  • Huge files (100s of GB to TB)
  • Data is rarely updated in place
  • Reads and appends are common

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 29

slide-30
SLIDE 30

 Chunk Servers

  • File is split into contiguous chunks
  • Typically each chunk is 16-64MB
  • Each chunk replicated (usually 2x or 3x)
  • Try to keep replicas in different racks

 Master node

  • a.k.a. Name Nodes in Hadoop’s HDFS
  • Stores metadata
  • Might be replicated

 Client library for file access

  • Talks to master to find chunk servers
  • Connects directly to chunkservers to access data

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 30

slide-31
SLIDE 31

 Reliable distributed file system

 Data kept in “chunks” spread across machines

 Each chunk replicated on different machines

  • Seamless recovery from disk or machine failure

[Diagram: chunks C0, C1, C2, C3, C5, D0, D1 replicated across Chunk servers 1, 2, 3, ..., N]

Bring computation directly to the data!

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 31

slide-32
SLIDE 32

Warm-up task:

 We have a huge text document

 Count the number of times each distinct word appears in the file

 Sample application:

  • Analyze web server logs to find popular URLs

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 32

slide-33
SLIDE 33

Case 1:

  • File too large for memory, but all <word, count> pairs fit in memory

Case 2: Count occurrences of words:

  • words(doc.txt) | sort | uniq -c
  • where words takes a file and outputs the words in it, one per line

 Captures the essence of MapReduce

  • Great thing is it is naturally parallelizable

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 33

slide-34
SLIDE 34

 Sequentially read a lot of data

 Map:

  • Extract something you care about

 Group by key: Sort and Shuffle

 Reduce:

  • Aggregate, summarize, filter or transform

 Write the result

Outline stays the same, map and reduce change to fit the problem

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 34

slide-35
SLIDE 35

[Diagram: map tasks turn input key-value pairs into intermediate key-value pairs]

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 35

slide-36
SLIDE 36

[Diagram: intermediate key-value pairs are grouped by key into key-value groups, then reduce tasks produce output key-value pairs]

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 36

slide-37
SLIDE 37

 Input: a set of key/value pairs

 Programmer specifies two methods:

  • Map(k, v) → <k’, v’>*
  • Takes a key value pair and outputs a set of key value pairs
  • E.g., key is the filename, value is a single line in the file
  • There is one Map call for every (k,v) pair
  • Reduce(k’, <v’>*) → <k’, v’’>*
  • All values v’ with same key k’ are reduced

together and processed in v’ order

  • There is one Reduce function call per unique key k’

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 37

slide-38
SLIDE 38

Big document (input; read sequentially):

"The crew of the space shuttle Endeavor recently returned to Earth as ambassadors, harbingers of a new era of space exploration. Scientists at NASA are saying that the recent assembly of the Dextre bot is the first step in a long-term space-based man/machine partnership. 'The work we're doing now -- the robotics we're doing -- is what we're going to need to do to build any work station or habitat structure on the moon or Mars,' said Allard Beutel."

MAP (provided by the programmer): reads input and produces a set of (key, value) pairs

(the, 1) (crew, 1) (of, 1) (the, 1) (space, 1) (shuttle, 1) (Endeavor, 1) (recently, 1) …

Group by key: collect all pairs with the same key

(crew, 1) (crew, 1) (space, 1) (the, 1) (the, 1) (the, 1) (shuttle, 1) (recently, 1) …

Reduce (provided by the programmer): collect all values belonging to the key and output

(crew, 2) (space, 1) (the, 3) (shuttle, 1) (recently, 1) …

Only sequential reads of the data.

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 38

slide-39
SLIDE 39

map(key, value):
  // key: document name; value: text of the document
  for each word w in value:
    emit(w, 1)

reduce(key, values):
  // key: a word; values: an iterator over counts
  result = 0
  for each count v in values:
    result += v
  emit(key, result)
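A minimal runnable Python sketch of the same word count, simulating the map, group-by-key (shuffle), and reduce phases in memory; the helper names (map_fn, reduce_fn, run_mapreduce) are illustrative, not the Hadoop API:

from collections import defaultdict

def map_fn(key, value):
    # key: document name; value: text of the document
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    # key: a word; values: list of counts
    yield (key, sum(values))

def run_mapreduce(inputs, mapper, reducer):
    groups = defaultdict(list)          # group by key ("shuffle"), in memory
    for k, v in inputs:
        for k2, v2 in mapper(k, v):
            groups[k2].append(v2)
    out = []
    for k2, vs in groups.items():
        out.extend(reducer(k2, vs))
    return out

print(run_mapreduce([("doc.txt", "the crew of the space shuttle")], map_fn, reduce_fn))
# [('the', 2), ('crew', 1), ('of', 1), ('space', 1), ('shuttle', 1)]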

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 39

slide-40
SLIDE 40

Map-Reduce environment takes care of:

 Partitioning the input data  Scheduling the program’s execution across a

set of machines

 Handling machine failures  Managing required inter-machine

communication

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 40

slide-41
SLIDE 41

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 41

[Diagram: the word-count pipeline on a big document: MAP reads input and produces a set of key-value pairs; Group by key collects all pairs with the same key; Reduce collects all values belonging to the key and outputs the result]

slide-42
SLIDE 42

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 43

slide-43
SLIDE 43

 Input and final output are stored on a

distributed file system:

  • Scheduler tries to schedule map tasks “close” to

physical storage location of input data

 Intermediate results are stored on the local FS of map and reduce workers

 Output is often input to another map

reduce task

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 44

slide-44
SLIDE 44

 Master data structures:

  • Task status: (idle, in-progress, completed)
  • Idle tasks get scheduled as workers become

available

  • When a map task completes, it sends the master

the location and sizes of its R intermediate files,

  • one for each reducer
  • Master pushes this info to reducers

 Master pings workers periodically

to detect failures

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 45

slide-45
SLIDE 45

 Map worker failure

  • Map tasks completed or in-progress at worker are

reset to idle

  • Reduce workers are notified when task is

rescheduled on another worker

 Reduce worker failure

  • Only in-progress tasks are reset to idle

 Master failure

  • MapReduce task is aborted and client is notified

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 46

slide-46
SLIDE 46

 M map tasks, R reduce tasks

 Rule of thumb:

  • Make M and R much larger than the number of

nodes in cluster

  • One DFS chunk per map is common
  • Improves dynamic load balancing and speeds

recovery from worker failure

 Usually R is smaller than M

  • because output is spread across R files

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 47

slide-47
SLIDE 47

 Fine granularity tasks: map tasks >> machines

  • Minimizes time for fault recovery
  • Can pipeline shuffling with map execution
  • Better dynamic load balancing

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 48

slide-48
SLIDE 48

 Problem

  • Slow workers significantly lengthen the job

completion time:

  • Other jobs on the machine
  • Bad disks
  • Weird things

 Solution

  • Near end of phase, spawn backup copies of tasks
  • Whichever one finishes first “wins”

 Effect

  • Dramatically shortens job completion time

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 49

slide-49
SLIDE 49

 Often a map task will produce many pairs of

the form (k,v1), (k,v2), … for the same key k

  • E.g., popular words in the Word Count example

 Can save network time by

pre-aggregating values at the mapper:

  • combine(k, list(v1)) → v2
  • Combiner is usually same

as the reduce function

 Works only if reduce

function is commutative and associative
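A minimal sketch of combining in the word-count setting; here the combiner is just the reduce function applied to each mapper's local output (illustrative Python, not Hadoop's Combiner interface):

from collections import defaultdict

def reduce_fn(key, values):
    # Summation is commutative and associative, so it is safe to use as a combiner.
    yield (key, sum(values))

def combine_local(mapper_output):
    # Pre-aggregate (k, v) pairs on the mapper before they are sent over the network.
    groups = defaultdict(list)
    for k, v in mapper_output:
        groups[k].append(v)
    for k, vs in groups.items():
        yield from reduce_fn(k, vs)

# list(combine_local([("the", 1), ("the", 1), ("crew", 1)])) -> [("the", 2), ("crew", 1)]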

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 50

slide-50
SLIDE 50

 Inputs to map tasks are created by contiguous

splits of input file

 Reduce needs to ensure that records with the

same intermediate key end up at the same worker

 System uses a default partition function:

  • hash(key) mod R

 Sometimes useful to override:

  • E.g., hash(hostname(URL)) mod R ensures URLs

from a host end up in the same output file
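A small sketch of the default partitioner and the host-based override (urlparse is from the Python standard library; R and the function names are illustrative):

from urllib.parse import urlparse

R = 4  # number of reduce tasks

def default_partition(key):
    # System default: spread keys across the R reducers.
    return hash(key) % R

def host_partition(url):
    # Override: all URLs from the same host go to the same reducer,
    # so they end up in the same output file.
    return hash(urlparse(url).netloc) % R

# host_partition("http://cs246.stanford.edu/submit/") == host_partition("http://cs246.stanford.edu/")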

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 51

slide-51
SLIDE 51

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 52

slide-52
SLIDE 52

 Suppose we have a large web corpus

 Look at the metadata file

  • Lines of the form (URL, size, date, …)

 For each host, find the total number of bytes

  • i.e., the sum of the page sizes for all URLs from

that host
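A hedged map-reduce sketch of the per-host byte totals; the tab-separated (URL, size, date, ...) layout of the metadata lines is an assumption for illustration:

from urllib.parse import urlparse

def map_fn(line_no, line):
    # line assumed to look like: "<URL>\t<size>\t<date>\t..."
    url, size = line.split("\t")[:2]
    yield (urlparse(url).netloc, int(size))

def reduce_fn(host, sizes):
    # Total number of bytes over all URLs from this host.
    yield (host, sum(sizes))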

 Other examples:

  • Link analysis and graph processing
  • Machine Learning algorithms

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 53

slide-53
SLIDE 53

 Statistical machine translation:

  • Need to count number of times every 5-word

sequence occurs in a large corpus of documents

 Very easy with MapReduce:

  • Map:
  • Extract (5-word sequence, count) from document
  • Reduce:
  • Combine counts
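A minimal sketch of the 5-word-sequence count; it reuses the word-count skeleton with a 5-word tuple as the key:

def map_fn(doc_id, text):
    words = text.split()
    for i in range(len(words) - 4):
        yield (tuple(words[i:i + 5]), 1)   # one pair per 5-word sequence

def reduce_fn(sequence, counts):
    yield (sequence, sum(counts))          # combine counts for this sequence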

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 54

slide-54
SLIDE 54

 Compute the natural join R(A,B) ⋈ S(B,C)

 R and S are each stored in files

 Tuples are pairs (a,b) or (b,c)

1/9/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 55

  R:          S:           R ⋈ S:
  A   B       B   C        A   C
  a1  b1      b2  c1       a3  c1
  a2  b1      b2  c2       a3  c2
  a3  b2      b3  c3       a4  c3
  a4  b3

slide-55
SLIDE 55

 Use a hash function h from B-values to 1...k

 A Map process turns:

  • Each input tuple R(a,b) into key-value pair (b,(a,R))
  • Each input tuple S(b,c) into (b,(c,S))

 Map processes send each key-value pair with

key b to Reduce process h(b).

  • Hadoop does this automatically; just tell it what k is.

 Each Reduce process matches all the pairs

(b,(a,R)) with all (b,(c,S)) and outputs (a,b,c).
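A minimal Python sketch of this tagging-and-matching join (a local simulation; the "R"/"S" tags and function names are illustrative):

def map_fn(relation, tuples):
    # relation is "R" with tuples (a, b), or "S" with tuples (b, c); key everything by b.
    for t in tuples:
        if relation == "R":
            a, b = t
            yield (b, ("R", a))
        else:
            b, c = t
            yield (b, ("S", c))

def reduce_fn(b, tagged_values):
    # Match every a from R with every c from S that share this b, output (a, b, c).
    a_vals = [x for tag, x in tagged_values if tag == "R"]
    c_vals = [x for tag, x in tagged_values if tag == "S"]
    for a in a_vals:
        for c in c_vals:
            yield (a, b, c)

# With R = [("a3","b2"), ("a4","b3")] and S = [("b2","c1"), ("b2","c2"), ("b3","c3")],
# the reducers for b2 and b3 emit (a3,b2,c1), (a3,b2,c2), (a4,b3,c3).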

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 56

slide-56
SLIDE 56

1. Communication cost = total I/O of all processes.

2. Elapsed communication cost = max of I/O along any path.

3. (Elapsed) computation costs analogous, but count only running time of processes.

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 57

slide-57
SLIDE 57

 For a map-reduce algorithm:

  • Communication cost = input file size + 2 × (sum of

the sizes of all files passed from Map processes to Reduce processes) + the sum of the output sizes of the Reduce processes.

  • Elapsed communication cost is the sum of the

largest input + output for any map process, plus the same for any reduce process
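The first bullet above, written out as a formula:

\[
\text{communication cost} = |\text{input file}| + 2\sum_i |\text{Map-to-Reduce file}_i| + \sum_j |\text{Reduce output}_j|
\]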

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 58

slide-58
SLIDE 58

 Either the I/O (communication) or processing

(computation) cost dominates

  • Ignore one or the other

 Total costs tell what you pay in rent from your

friendly neighborhood cloud

 Elapsed costs are wall-clock time using

parallelism

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 59

slide-59
SLIDE 59

 Total communication cost = O(|R|+|S|+|R ⋈ S|)

 Elapsed communication cost = O(s)

  • We’re going to pick k and the number of Map processes so the I/O limit s is respected
  • We put a limit s on the amount of input or output that any one process can have. s could be:
  • What fits in main memory
  • What fits on local disk

 With proper indexes, computation cost is linear in the input + output size

  • So computation costs are like comm. costs

1/9/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 60

slide-60
SLIDE 60

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 61

slide-61
SLIDE 61

 Google

  • Not available outside Google

 Hadoop

  • An open-source implementation in Java
  • Uses HDFS for stable storage
  • Download: http://lucene.apache.org/hadoop/

 Aster Data

  • Cluster-optimized SQL Database that also

implements MapReduce

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 62

slide-62
SLIDE 62

 Ability to rent computing by the hour

  • Additional services e.g., persistent storage

 Amazon’s “Elastic Compute Cloud” (EC2)

 Aster Data and Hadoop can both be run on EC2

 For CS341 (offered next quarter) Amazon will

provide free access for the class

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 63

slide-63
SLIDE 63

 Jeffrey Dean and Sanjay Ghemawat:

MapReduce: Simplified Data Processing on Large Clusters

  • http://labs.google.com/papers/mapreduce.html

 Sanjay Ghemawat, Howard Gobioff, and Shun-

Tak Leung: The Google File System

  • http://labs.google.com/papers/gfs.html

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 64

slide-64
SLIDE 64

 Hadoop Wiki

  • Introduction
  • http://wiki.apache.org/lucene-hadoop/
  • Getting Started
  • http://wiki.apache.org/lucene-hadoop/GettingStartedWithHadoop

  • Map/Reduce Overview
  • http://wiki.apache.org/lucene-hadoop/HadoopMapReduce
  • http://wiki.apache.org/lucene-hadoop/HadoopMapRedClasses

  • Eclipse Environment
  • http://wiki.apache.org/lucene-hadoop/EclipseEnvironment

 Javadoc

  • http://lucene.apache.org/hadoop/docs/api/

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 65

slide-65
SLIDE 65

 Releases from Apache download mirrors

  • http://www.apache.org/dyn/closer.cgi/lucene/hadoop/

 Nightly builds of source

  • http://people.apache.org/dist/lucene/hadoop/nightly/

 Source code from subversion

  • http://lucene.apache.org/hadoop/version_control.html

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 66

slide-66
SLIDE 66

 Programming model inspired by functional language

primitives

 Partitioning/shuffling similar to many large-scale sorting

systems

  • NOW-Sort ['97]

 Re-execution for fault tolerance

  • BAD-FS ['04] and TACC ['97]

 Locality optimization has parallels with Active

Disks/Diamond work

  • Active Disks ['01], Diamond ['04]

 Backup tasks similar to Eager Scheduling in Charlotte

system

  • Charlotte ['96]

 Dynamic load balancing solves similar problem as River's

distributed queues

  • River ['99]

1/8/2012 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 67