M. Carmen Ruiz, Diego Pérez and Damas Gruska. CPA 2017: The 39th Communicating Process Architectures conference, Malta, 20-23 August.



SLIDE 1
  • M. Carmen Ruiz, Diego Pérez and Damas Gruska

CPA 2017: The 39th Communicating Process Architectures, Malta, 20-23 August

SLIDE 2

Outline

1. Motivation
2. Our work
3. Formal Modelling of Map/Reduce
4. Performance Evaluation
5. Validation
6. Performance-Cost tradeoff
7. Conclusions
SLIDE 3

Motivation

SLIDE 4

Motivation

  • Such data provide social scientists with the opportunity to conduct a wide variety of research analyses, and have proven to be of great interest.

SLIDE 5

Motivation

  • However, performing a longitudinal analysis of these huge datasets becomes a Big-Data problem, since the data are produced continuously all around the world.

SLIDE 6

Motivation

This fact hampers data harvesting, storage and analysis using traditional tools or processing infrastructures, so new processing paradigms and computational environments have arisen. One of the main contributions in this area has been the Map/Reduce paradigm and its open-source implementation, Hadoop.

SLIDE 7

Motivation

SLIDE 8

Motivation

  • Moreover, Cloud computing provides several features that are of interest in conjunction with Hadoop, such as high availability and distributed environment provisioning.

  • Simultaneously, the growing adoption of the Cloud computing paradigm, thanks to its benefits in terms of storage, computing power and flexibility, offers the possibility of handling these massive amounts of data at a reasonable cost.

SLIDE 9

Motivation

  • Hadoop requires a distributed environment (a cluster or virtual cluster) in order to run any Hadoop application.

  • The number of resources dedicated to this task (the number of virtual machines in the Hadoop virtual cluster) determines the application performance.

SLIDE 10

Motivation

  • The Cloud pay-per-use model must be taken into account in order to minimise cost, since the number of virtual machines hired for a certain study drives the operational expenses.

SLIDE 11

Our work

We present a formalization of the Map/Reduce paradigm which is used to evaluate performance parameters and make a trade–off analysis on the number of workers versus processing time and resource cost.

  • Timed Process Algebra BTC
  • BAL Tool
SLIDE 12

Map/Reduce Paradigm

The basis of Map/Reduce is to split the input data into chunks that are distributed to the worker nodes, where they are processed. Later on, the results are combined and collected.
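The split / map / shuffle / reduce flow described above can be sketched in a few lines of plain Python (a toy word-count example to illustrate the paradigm, not the Hadoop API):

```python
from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs for one input record (here: word counting).
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Group all emitted values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine all values emitted for one key.
    return key, sum(values)

def map_reduce(records, n_workers=3):
    # Split the input into one chunk per worker, map each chunk,
    # then shuffle and reduce the combined results.
    chunks = [records[i::n_workers] for i in range(n_workers)]
    pairs = [p for chunk in chunks for rec in chunk for p in map_phase(rec)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

print(map_reduce(["a b", "b c", "a a"]))  # {'a': 3, 'b': 2, 'c': 1}
```

In Hadoop the chunks live in a distributed file system and the map/reduce tasks run on separate nodes, but the data flow is the same.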

SLIDE 13

Map/Reduce Paradigm

SLIDE 14

BTC (Bounded True Concurrency)

  • Timed algebra
  • It takes into account that the available resources in a system

must be shared by all the processes. This evolves into two types of delays:

  • Delays related to the synchronization of processes
  • Delays related to the allocation of resources
  • True Concurrency → (a|b) ≠ (a.b + b.a)
  • Homogeneous / Heterogeneous Resources
  • Preemptable / Non-preemptable Resources
SLIDE 15

BTC (Bounded True Concurrency)

Syntax: P ::= stop | a.P | <b, α>.P | P ⊕ P | P ||A P | recX.P

Types of actions:

  • Timed actions (ActT)
  • Untimed actions (ActU)
  • Special actions (ActS)

[[P]]Z,N

N = {N1, N2, ..., Nm}: number of resources of each type
Z = {Z1, Z2, ..., Zm}, where Zi = {b1, b2, ..., bi}: actions which need resources of type i

SLIDE 16

BAL TOOL

SLIDE 17

BAL TOOL

[Architecture diagram: Specification Wizard → system specification file → Syntax Analyser (BTC syntax; reports syntax errors) → Graph Generator (BTC operational semantics) → Performance Evaluator (branch-and-bound techniques, DBL scheme, parallel and grid computing) → Results]

SLIDE 18

Formal Modelling of Map/Reduce

[[sys_Map_Red]]Z,N ≡ [[BLOCK || BLOCK || ... || BLOCK || OVERLAP || SYN_CLEANUP || SYN_SETUP]] {act_worker}, {n}

BLOCK ≡ SETUP . MAP . REDUCE . CLEAN
SETUP ≡ <setup, tS> . synR . synRR
MAP ≡ <act_worker> . synS . <recordReader, tRr> . <map, tM> . <act_worker> . synSS
REDUCE ≡ <act_worker> . <shuffle, tSh> . <sort, tSrt> . <reduce, tR> . <output, tOpt> . <act_worker>
CLEAN ≡ synC . synCC . <clean, tC>
OVERLAP ≡ synS . ... . synS . synSS . ... . synSS
SYN_CLEANUP ≡ synC . ... . synC . synCC . ... . synCC
SYN_SETUP ≡ synR . ... . synR . synRR . ... . synRR

BTC Specification

SLIDE 19

Formal Modelling of Map/Reduce

PRECONDITIONS OF EACH TRANSITION

Main Task   Sub-Task        Parameter
Setup       Setup           tS
Map         Record Reader   tRr
            Map             tM
Reduce      Shuffle         tSh
            Sort            tSrt
            Reduce          tR
            Output          tOpt
Clean Up    Clean Up        tC

SLIDE 20

Performance Evaluation

  • The focus lies on studying the Hadoop framework to obtain the utmost performance with the minimum number of resources, or at minimum cost.

  • A temporal analysis of the Hadoop behaviour has been performed.

  • The results show the performance in terms of the number of resources needed.

  • The results of this analysis allow:
  • users to know the configuration that best suits their requirements;
  • Cloud providers to establish their service catalogue.
SLIDE 21

Performance Evaluation

  • The number of resources needed depends mainly on the type of application and the volume of data to be processed.

  • In order to carry out the performance evaluation, we chose a concrete application:

the H.265 encoding Hadoop application

SLIDE 22

H.265 encoding

  • The most widely used encoding standard nowadays is H.264. However, its successor, known as H.265 (or HEVC), has been shown to improve on H.264 and represents the future of video encoding.

  • This application exploits the HEVC encoder within a Hadoop application, distributing the processing of video chunks across multiple computing resources.

  • In order to evaluate the performance of this application, the encoded video sequence used has been “BasketballDrill” (832x480).

SLIDE 23

H.265 encoding

  • Time that each block needs to execute the phases and sub-phases that make up the Map/Reduce model:

Main Task   Sub-Task        Parameter   Time (ms)
Setup       Setup           tS          125
Map         Record Reader   tRr         506
            Map             tM          64044
Reduce      Shuffle         tSh         16187
            Sort            tSrt        500
            Reduce          tR          75
            Output          tOpt        65
Clean Up    Clean Up        tC          125
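These per-block times suggest a first-order estimate of total job time: blocks are processed in waves of at most one block per worker. This is a deliberately simplified back-of-the-envelope sketch (it ignores the overlap and synchronisation delays that the BTC model does capture), and the function name `estimate_ms` is ours:

```python
import math

# Per-block phase times in ms, taken from the H.265 measurements above.
T_SETUP, T_RR, T_MAP = 125, 506, 64044
T_SHUFFLE, T_SORT, T_REDUCE, T_OUTPUT, T_CLEAN = 16187, 500, 75, 65, 125

def estimate_ms(blocks, workers):
    # Blocks are processed in ceil(blocks / workers) waves; each wave
    # pays the full map + reduce pipeline for one block per worker.
    per_block = T_RR + T_MAP + T_SHUFFLE + T_SORT + T_REDUCE + T_OUTPUT
    waves = math.ceil(blocks / workers)
    return T_SETUP + waves * per_block + T_CLEAN
```

More workers mean fewer waves and a shorter estimate, until the number of workers reaches the number of blocks, after which adding workers buys nothing. This is the qualitative behaviour the measured tables below confirm; the BAL tool's analysis over the BTC model is what produces the actual figures.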

SLIDE 24

Performance Evaluation

Inputs:

  • BTC model of the Map/Reduce behaviour
  • The application to be studied
  • The volume of data to be processed

The application information is included into the model by:

  • replacing the input parameters in the Map/Reduce model;
  • providing the number of data blocks;
  • establishing the number of workers (variable n).

SLIDE 25

Performance Evaluation

BAL tool

  • Checks the syntax of the specification
  • Builds the transition graph
  • Carries out the performance analysis

The result is the time that the application takes to analyse this number of chunks.

SLIDE 26

Performance Evaluation

Data has been obtained for different configurations (1 master VM + # worker VMs):

Workers   Execution Time   Improvement
2         10m 51s          -
3         07m 20s          32.41%
4         05m 25s          26.14%
5         04m 35s          15.38%
6         03m 47s          17.45%
7         03m 30s          7.49%
8         02m 43s          22.38%
9         02m 28s          9.20%
10        02m 26s          1.35%
11        02m 26s          0%
12        02m 26s          0%
13        02m 26s          0%
14        02m 26s          0%
15        02m 26s          0%
16        01m 21s          44.52%
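The Improvement column is the relative time reduction with respect to the previous configuration, which can be checked with a couple of one-liners (the helper names are ours):

```python
def to_seconds(mmss):
    # "10m 51s" -> 651
    minutes, seconds = mmss.replace("s", "").split("m")
    return int(minutes) * 60 + int(seconds)

def improvement(prev_s, curr_s):
    # Relative time reduction (%) versus the previous configuration.
    return round(100 * (prev_s - curr_s) / prev_s, 2)

print(improvement(to_seconds("10m 51s"), to_seconds("07m 20s")))  # 32.41
```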

SLIDE 27
Validation

Cloud infrastructure deployed at the UCLM:

  • 2 Xeon E5462 CPUs (4 cores each)
  • 32 GB of main memory
  • 60 GB of storage
  • NFS
  • Gigabit Ethernet network
  • OS: CentOS 6.2 Linux
  • OpenNebula
  • Virtualization software: KVM
  • Headnode: +1 TB of storage shared between compute nodes

Real observation vs. formal model

SLIDE 28

SentiStrength tool for Hadoop

  • The SentiStrength tool conducts longitudinal analysis of social media data.

  • This application is used by the COSMOS project, whose objective is to translate the underlying social observation and analysis mechanisms into an embedded research tool that supports the development and execution of social media research analysis.

  • For this study, the application has performed the sentiment analysis of ≈ 100 million tweets (≈ 15 GB of plain text), which have been split into 300 blocks of equal size.

SLIDE 29

Performance-Cost Tradeoff

  • Since the virtual infrastructure used to validate the formal model follows the specifications of Amazon EC2 m1.small instances, the hiring costs stated by Amazon have been considered.

  • We use the “Simple Monthly Calculator” tool, which provides an automated hiring cost in terms of the number of resources, the time required and the location. Amazon EC2 evaluates the cost in one-hour time slots.

SLIDE 30

Performance-Cost Tradeoff

  • Time and cost required to perform a longitudinal sentiment analysis of 100 million tweets (divided into 300 blocks) within the Amazon EC2 Cloud. Data has been obtained for different configurations (1 master VM + # worker VMs):

Workers   Execution Time   Price
2         7h 15m 02s       $43.92
3         4h 50m 05s       $36.60
4         3h 37m 33s       $36.60
5         2h 54m 03s       $32.94
6         2h 25m 15s       $38.43
7         2h 04m 45s       $43.92
8         1h 49m 02s       $32.94
9         1h 36m 59s       $36.60
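The prices above are consistent with every VM (the master plus the workers) being billed per started hour at an implied rate of about $1.83 per VM-hour. That rate is inferred from the table itself, not an official Amazon figure, and `hiring_cost` is our own helper:

```python
import math

RATE_PER_VM_HOUR = 1.83  # implied by the table above; not an official EC2 price

def hiring_cost(workers, h, m, s):
    # Amazon EC2 bills each started one-hour slot, for every VM
    # in the cluster (1 master + `workers` worker VMs).
    hours_billed = math.ceil(h + (m + s / 60) / 60)
    return round((workers + 1) * hours_billed * RATE_PER_VM_HOUR, 2)

print(hiring_cost(2, 7, 15, 2))  # 43.92
```

This explains why the cost is not monotone in the number of workers: shrinking the execution time only pays off when it crosses an hour boundary, which is exactly the performance-cost tradeoff being studied.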

SLIDE 31

Conclusions

  • We have developed a formal model that allows users and service managers to evaluate cost and performance in terms of the deployment strategy, or to choose the best deployment strategy in terms of the expected cost/performance, with the objective of better Cloud resource hiring under the user's time and cost restrictions.

SLIDE 32

Conclusions

In detail:

  • A process-algebraic formalisation of the Map/Reduce paradigm.
  • A model of the behaviour of the SentiStrength application for the Hadoop module.
  • Performance evaluation from the process-algebraic formalisation.
  • Validation of the results against an actual implementation.
  • Use of the results to reduce operational expenses on IT resources by helping to choose the optimal resource hiring.

SLIDE 33
  • M. Carmen Ruiz, Diego Pérez and Damas Gruska

CPA 2017: The 39th Communicating Process Architectures, Malta, 20-23 August