Sandeep Palur and Ajay Anthony (A20302187) (A20306352) - PowerPoint PPT Presentation




SLIDE 1

  • Sandeep Palur and Ajay Anthony (A20302187) (A20306352)

SLIDE 2

  • Introduction to CloudKon
  • CloudKon Architecture
  • CloudKon Reloaded Architecture
  • CloudKon Reloaded Improvements
  • Benchmarking results
  • Conclusion
  • Contributions
  • Demo

SLIDE 3

 CloudKon is a compact, lightweight, scalable, distributed task execution framework.

 It is built on the following Amazon components:

  • EC2
  • SQS
  • DynamoDB

 Major Components in CloudKon:

  • Client
  • Server
  • Global Request Queue (SQS)
  • Client Response Queue (SQS)

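The queue topology above (one shared global request queue, one response queue per client) can be sketched in plain Java. This is a minimal in-memory illustration with made-up names: `BlockingQueue` stands in for Amazon SQS, which CloudKon would access through the AWS SDK.

```java
import java.util.concurrent.*;

// Minimal in-memory sketch of CloudKon's queue topology (hypothetical names):
// one shared global request queue plus one response queue per client.
// BlockingQueue stands in for Amazon SQS here.
public class QueueTopology {
    static final BlockingQueue<String> globalRequestQueue = new LinkedBlockingQueue<>();
    static final ConcurrentMap<String, BlockingQueue<String>> clientResponseQueues =
            new ConcurrentHashMap<>();

    // A client registers its response queue and submits a task to the global queue.
    static void submit(String clientId, String task) {
        clientResponseQueues.putIfAbsent(clientId, new LinkedBlockingQueue<String>());
        globalRequestQueue.add(clientId + ":" + task); // tag the task with its client's queue
    }

    public static void main(String[] args) throws InterruptedException {
        submit("client-1", "sleep 0");
        String msg = globalRequestQueue.take();       // a server-side worker pulls the task
        String[] parts = msg.split(":", 2);
        clientResponseQueues.get(parts[0]).add("done:" + parts[1]); // route result back
        System.out.println(clientResponseQueues.get("client-1").take());
        // prints: done:sleep 0
    }
}
```

Tagging each task with its client's queue is what lets any server instance answer any client without central coordination.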
SLIDE 4

SLIDE 5


 1. Improved concurrency
 2. Bundled Response
 3. Efficient Monitoring

SLIDE 6

SLIDE 7

 Server

  • Worker Thread (WT):
  • 1. Pulls task bundles from the global request queue.
  • 2. Creates task threads in optimal concurrency mode.

  • Task Thread (TT):
  • 1. Deletes the task from the global request queue.
  • 2. Checks for duplication with DynamoDB.
  • 3. Executes the task and puts the response back into the
client-specific array in the Buffer.
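The Worker Thread / Task Thread split can be sketched as follows. An in-memory concurrent set stands in for the DynamoDB duplication check; the bundle format and all names are illustrative, not CloudKon's actual API.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the server-side Worker Thread (WT) / Task Thread (TT) split.
// A concurrent set stands in for the DynamoDB duplication check.
public class ServerSketch {
    static final Set<String> executed =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
    static final ConcurrentMap<String, List<String>> buffer = new ConcurrentHashMap<>();

    // TT: check for duplication, execute, put the response into the client's array
    static void runTask(String client, String taskId) {
        if (!executed.add(taskId)) return; // duplicate: this task was already run
        buffer.putIfAbsent(client, Collections.synchronizedList(new ArrayList<String>()));
        buffer.get(client).add("result-of-" + taskId);
    }

    // WT: pull one task bundle and launch a task thread per task
    static void handleBundle(final String client, List<String> bundle)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(bundle.size());
        for (final String taskId : bundle) {
            pool.execute(new Runnable() { public void run() { runTask(client, taskId); } });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        handleBundle("client-1", Arrays.asList("t1", "t2", "t1")); // "t1" arrives twice
        System.out.println(buffer.get("client-1").size()); // prints: 2 (duplicate filtered)
    }
}
```

The atomic `Set.add` mirrors what a conditional write to DynamoDB provides: exactly one task thread wins the right to execute a given task.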

SLIDE 8
  • Buffer (BUF):
  • 1. A concurrent hash map.
  • Key: client response queue link.
  • Value: ArrayList of task responses.

  • Send Response Thread (SRT):
  • 1. Pulls message bundles from the buffer.
  • 2. Sends bundled responses to clients.

  • Monitor Thread (MT):
  • 1. Attaches an object to each task thread.
  • 2. Tracks utilization using the object's reference.
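The Buffer (BUF) and Send Response Thread (SRT) described above can be sketched like this; the map mirrors the key/value layout on the slide, while the queue names and the `|` message format are made up for illustration.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of BUF (concurrent hash map: client response queue -> task responses)
// and one SRT pass that drains each list into a single bundled message.
public class ResponseBundler {
    static final ConcurrentMap<String, List<String>> buffer = new ConcurrentHashMap<>();

    // Task threads append responses to their client's slot in the buffer.
    static void addResponse(String clientQueue, String response) {
        buffer.putIfAbsent(clientQueue, Collections.synchronizedList(new ArrayList<String>()));
        buffer.get(clientQueue).add(response);
    }

    // SRT pass: remove each client's list and send it as one bundled message.
    static Map<String, String> drainOnce() {
        Map<String, String> bundles = new HashMap<>();
        for (String clientQueue : buffer.keySet()) {
            List<String> pending = buffer.remove(clientQueue);
            if (pending != null && !pending.isEmpty()) {
                bundles.put(clientQueue, join(pending)); // one message, not pending.size()
            }
        }
        return bundles;
    }

    static String join(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) { if (sb.length() > 0) sb.append('|'); sb.append(p); }
        return sb.toString();
    }

    public static void main(String[] args) {
        addResponse("q-client-1", "r1");
        addResponse("q-client-1", "r2");
        System.out.println(drainOnce().get("q-client-1")); // prints: r1|r2
    }
}
```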

SLIDE 9

 Client

  • Worker Thread (WT):
  • 1. Creates the client response queue.
  • 2. Submits tasks to the global request queue.
  • 3. Pulls messages from its response queue.
  • 4. Creates task threads using maximum concurrency mode.

  • Task Thread (TT):
  • 1. Deletes the message from the response queue.
  • 2. Adds the message to a concurrent ArrayList.
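The client side described above can be sketched as follows; the `|` bundle format and all names are hypothetical stand-ins for the SQS messages CloudKon actually exchanges.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the client: the Worker Thread (WT) pulls a bundled message from the
// client's own response queue, and a Task Thread (TT) unpacks it into a shared
// concurrent list.
public class ClientSketch {
    static final BlockingQueue<String> responseQueue = new LinkedBlockingQueue<>();
    static final List<String> results = new CopyOnWriteArrayList<>();

    // TT: split a bundled message back into individual task responses
    static List<String> unbundle(String bundle) {
        return Arrays.asList(bundle.split("\\|"));
    }

    public static void main(String[] args) throws Exception {
        responseQueue.add("r1|r2|r3");              // a bundled response arrives from a server
        final String bundle = responseQueue.take(); // WT pulls it; take() also removes it
        ExecutorService pool = Executors.newCachedThreadPool(); // maximum-concurrency mode
        pool.execute(new Runnable() {               // TT adds responses to the concurrent list
            public void run() { results.addAll(unbundle(bundle)); }
        });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(results.size()); // prints: 3
    }
}
```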

SLIDE 10
  • 1. Improved concurrency:
  • All tasks are processed concurrently.
  • Reduces latency.
  • Increases throughput.

  • 2. Bundled Response:
  • Reduces network overhead.
  • Utilizes network bandwidth more effectively, reducing the probability of network latency.

  • 3. Efficient Monitoring:
  • Reduces network overhead.
  • Reduces contention by 1/n, where n = number of workers.
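The "reduces contention by 1/n" point can be illustrated as follows: rather than n worker threads contending on one shared counter, the monitor attaches a separate counter object to each worker, so every counter has a single writer and the monitor merely reads them. This is a hypothetical sketch, not CloudKon's code.

```java
import java.util.concurrent.atomic.AtomicLong;

// Per-worker counters: each worker writes only its own AtomicLong, so there is
// no cross-thread write contention; the monitor aggregates by reading.
public class PerWorkerCounters {
    static long runWorkers(int n, final int increments) throws InterruptedException {
        AtomicLong[] perWorker = new AtomicLong[n];
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            perWorker[i] = new AtomicLong();
            final AtomicLong mine = perWorker[i]; // this worker's private counter
            workers[i] = new Thread(new Runnable() {
                public void run() { for (int k = 0; k < increments; k++) mine.incrementAndGet(); }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        long total = 0;
        for (AtomicLong c : perWorker) total += c.get(); // monitor-side aggregation
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(4, 1000)); // prints: 4000
    }
}
```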

SLIDE 11

 Test-bed:

  • Experiments were run on Amazon EC2 instances in Amazon's us-east-1 datacenter.
  • Instance type: m1.large.
  • All instances run Linux with JRE 1.7 installed.
  • Each instance runs both a client and a server.
  • 2 client threads and 4 worker threads run on each instance.
  • Each instance submits 16000 tasks (8000/thread).
  • Tasks: sleep 0, 16, 128
SLIDE 12

 Scripts and programs developed specifically for benchmarking:

 1. Shell scripts (Bash): throughput, latency, and file transfer from EC2 instances.
 2. Parallel-SSH: for parallel execution on EC2.
 3. EC2 CLI (Command Line Interface): for instance startup and termination, getting IP addresses, etc.
 4. AWS CLI (Command Line Interface): mainly for dynamic provisioning for SQS operations and dynamic EC2 instance startup.

SLIDE 13

 Throughput:

  • sleep 0 tasks
SLIDE 14

 Throughput Comparison:

SLIDE 15

 Comparison of Sleeps for Throughput:

  • sleep 0 tasks
SLIDE 16

 Efficiency:

  • Homogeneous workloads.
  • (Charts: efficiency (percentage) vs. number of instances (100, 200, 300) for sleep 128 and sleep 16 tasks.)

SLIDE 17

 Consistency:

  • sleep 16 tasks
SLIDE 18

 Utilization:

  • sleep 100 tasks
  • (Charts: utilization vs. time (seconds) for 4 nodes and 8 nodes.)

SLIDE 19

 The evaluation of CloudKon shows that it is highly scalable and achieves stable performance over different scales.

 CloudKon achieves up to 87% efficiency.

 CloudKon outperformed other systems such as Sparrow and MATRIX in terms of throughput at scales of 128 instances or more.

SLIDE 20

 Throughput and efficiency experiments for sleep (0, 1, 16, 128) tasks at scales of (1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024) instances.

 Our code was used for the throughput and efficiency benchmarking experiments in the CloudKon paper submitted to CCGRID 2014.

SLIDE 21

SLIDE 22

Questions??
