Analysis of Scheduling Algorithms on a Parallel Programming Context

SLIDE 1

Outline Introduction Parallel Computing Cluster Computing Grid Computing Final

Analysis of Scheduling Algorithms on a Parallel Programming Context

Peter N. Nyumu Supervisor: Alfredo Goldman

IME - USP

November 17, 2009

Peter N. Nyumu Supervisor: Alfredo Goldman Analysis of Scheduling Algorithms on a Parallel Programming Context

SLIDE 2

Outline

◮ Introduction
  ◮ Categories of scheduling algorithms
  ◮ Scheduling in operating systems
◮ Parallel Computing
◮ Cluster Computing
◮ Grid Computing
◮ Final

SLIDE 3

Motivation

◮ Understand the scheduling of tasks in a grid environment

SLIDE 4

Categories of scheduling algorithms

Divided in two:

◮ Uniprocessor scheduling algorithms - only one processor is involved;
◮ Multiprocessor scheduling algorithms - more than one processor is available to execute the jobs.

SLIDE 5

Scheduling in Operating Systems

◮ The scheduler is concerned mainly with:
  ◮ CPU utilization - the capacity to keep the CPU as busy as possible, as long as there are jobs to process.
  ◮ Throughput - a measure of work in terms of the number of processes completed per time unit.
  ◮ Turnaround time - the interval from the submission of a process to its completion.
  ◮ Waiting time - the sum of the periods spent waiting in the ready queue.
  ◮ Response time - the time from the submission of a request until the first response is produced.

SLIDE 6

Scheduling in Operating Systems

◮ The best optimization occurs when it is possible to maximize CPU utilization and throughput while minimizing turnaround time, waiting time, and response time.

SLIDE 7

Operating System Algorithms

◮ Round-Robin algorithm
◮ Weighted Round Robin algorithm - adds a weight factor
◮ Deficit Round Robin algorithm - handles packets of different sizes
◮ Elastic Round Robin algorithm - shares resources among multiple requests
◮ Fair Queuing algorithm - max-min criterion
◮ FCFS
◮ Shortest Job First
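To make the contrast between FCFS and Shortest Job First concrete, here is a minimal sketch with hypothetical burst times, assuming a single processor and all jobs arriving at time 0:

```python
def avg_waiting_time(burst_times):
    """Average waiting time when jobs run in the given order on a
    single processor (all jobs are assumed to arrive at time 0)."""
    waited, elapsed = 0, 0
    for burst in burst_times:
        waited += elapsed          # this job waited for every earlier job
        elapsed += burst
    return waited / len(burst_times)

jobs = [24, 3, 3]                       # hypothetical CPU bursts
fcfs = avg_waiting_time(jobs)           # arrival order: 17.0
sjf = avg_waiting_time(sorted(jobs))    # shortest job first: 3.0
```

Sorting by burst length (SJF) minimizes the average waiting time, which is why it appears alongside FCFS in the list above.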

SLIDE 8

Operating System Algorithms

◮ Self-Clocked Fair Queuing algorithm - virtual clock
◮ Start-time Fair Queuing algorithm - schedules in increasing order of the start tag
◮ Weighted Fair Queuing algorithm - supports different bandwidths
◮ Earliest Deadline First scheduling - the task with the earliest deadline has the highest priority
◮ Least Laxity First scheduling - gives higher priority to the least laxity (the least flexibility to schedule)
◮ Maximum Urgency First algorithm - combines fixed and dynamic priority scheduling
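As an illustration of the Earliest Deadline First rule above, here is a minimal non-preemptive sketch (the task set is hypothetical, and real EDF implementations are usually preemptive):

```python
import heapq

def edf_schedule(tasks):
    """Non-preemptive Earliest Deadline First on one processor.
    tasks: list of (release, burst, deadline) tuples.
    Returns (execution order, whether every deadline was met)."""
    pending = sorted(tasks, key=lambda t: t[0])   # by release time
    ready, order, time, i, met = [], [], 0, 0, True
    while i < len(pending) or ready:
        # move every task released by now into the ready heap,
        # keyed by deadline so the earliest deadline pops first
        while i < len(pending) and pending[i][0] <= time:
            release, burst, deadline = pending[i]
            heapq.heappush(ready, (deadline, release, burst))
            i += 1
        if not ready:
            time = pending[i][0]                  # idle until next release
            continue
        deadline, release, burst = heapq.heappop(ready)
        time += burst
        met = met and time <= deadline
        order.append((release, burst, deadline))
    return order, met

order, met = edf_schedule([(0, 2, 10), (0, 1, 3), (1, 1, 5)])
# order: [(0, 1, 3), (1, 1, 5), (0, 2, 10)], all deadlines met
```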

SLIDE 9

Operating System Algorithms

◮ Poorly documented algorithms:
  ◮ Foreground-background
  ◮ Gang scheduling
  ◮ Highest Response Ratio Next
  ◮ Lottery Scheduling - a probabilistic scheduling algorithm

SLIDE 11

Brief look at parallel computing

◮ In parallel computing, the scheduling heuristics can be grouped into two categories: on-line mode and batch-mode heuristics.
◮ Parallel computing relies on multithreading for efficient execution, and thus on how the threads are scheduled among the processors.
◮ Two models used on parallel machines are Cilk and Kaapi; both of them use a work-stealing algorithm for execution.
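Cilk's and Kaapi's actual schedulers are concurrent; purely to illustrate the idea, here is a hypothetical single-threaded simulation of work stealing, where each worker serves its own deque LIFO and steals FIFO from a random victim when idle:

```python
import collections
import random

def work_stealing_sim(tasks, num_workers, seed=0):
    """Toy simulation of work stealing (not Cilk's or Kaapi's code).
    Each worker owns a deque: it pops work from its own end and,
    when empty, steals from the opposite end of another worker."""
    rng = random.Random(seed)
    deques = [collections.deque() for _ in range(num_workers)]
    for i, task in enumerate(tasks):          # seed work round-robin
        deques[i % num_workers].append(task)
    completed = [[] for _ in range(num_workers)]
    remaining = len(tasks)
    while remaining:
        for w in range(num_workers):
            if deques[w]:
                task = deques[w].pop()        # own work: LIFO
            else:
                victims = [v for v in range(num_workers)
                           if v != w and deques[v]]
                if not victims:
                    continue                  # nothing to steal right now
                task = deques[rng.choice(victims)].popleft()  # steal: FIFO
            completed[w].append(task)
            remaining -= 1
    return completed

done = work_stealing_sim(list(range(10)), num_workers=3)
# every task is executed exactly once across the workers
```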

SLIDE 12

Brief look at cluster computing

◮ Cluster computing is best characterized as the integration of a number of off-the-shelf commodity computers and resources, integrated through hardware, networks, and software, to behave as a single computer.
◮ The application model best suited for the cluster environment is the Parallel Tasks (PT) model.

SLIDE 13

Taxonomy of Grid Computing

◮ Local vs. Global
◮ Static vs. Dynamic
◮ Optimal vs. Suboptimal
◮ Approximate vs. Heuristic
◮ Distributed vs. Centralized
◮ Cooperative vs. Non-cooperative

SLIDE 14

Master-Slave Applications in Grid Computing

◮ The master is responsible for decomposing the problem into small tasks, as well as for gathering the partial results in order to produce the final result of the computation.
◮ The slave processes execute a very simple cycle: receive a message from the master with the next task, process the task, and send the result back to the master.
◮ In evaluating a master-slave application, two performance measures of particular interest are speedup and efficiency.
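Using the usual definitions (speedup is serial time over parallel time; efficiency is speedup per processor), with hypothetical timings:

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial one."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, num_slaves):
    """Speedup divided by the number of processors; ideally close to 1."""
    return speedup(t_serial, t_parallel) / num_slaves

# hypothetical timings: 100 s serially, 30 s on 4 slaves
s = speedup(100.0, 30.0)        # about 3.33
e = efficiency(100.0, 30.0, 4)  # about 0.83
```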

SLIDE 17

Master-Slave Applications in Grid Computing

◮ The most important objective functions are: minimizing the makespan (the total execution time), minimizing the maximum response time (the difference between completion time and release time), and minimizing the sum of all response times.
◮ There are also on-line scheduling problems, where the release times and sizes of incoming tasks are not known in advance.
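The three objective functions can be computed from a finished schedule as follows (the (release, completion) pairs below are hypothetical):

```python
def schedule_metrics(jobs):
    """jobs: list of (release_time, completion_time) pairs.
    Returns (makespan, max response time, sum of response times)."""
    makespan = max(completion for _, completion in jobs)
    responses = [completion - release for release, completion in jobs]
    return makespan, max(responses), sum(responses)

# hypothetical schedule: three tasks released at times 0, 2, and 5
metrics = schedule_metrics([(0, 4), (2, 7), (5, 6)])
# (7, 5, 10): makespan 7, max response 5, total response 10
```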

SLIDE 19

Master-Slave Applications in Grid Computing

◮ How can the OS algorithms be adapted to the master/slave context?
◮ For example:
  ◮ Round Robin (RR) is the simplest algorithm. It simply sends a task to each slave one by one, according to a prescribed ordering. This ordering first chooses the slave with the smallest wi + ci, then the slave with the second smallest value, and so on.
◮ No need for preemption
◮ Other algorithms
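The prescribed RR ordering can be sketched as follows. The slide ranks slave i by wi + ci; here, as an assumption, w is read as the slave's processing cost and c as its communication cost, and the slave names are hypothetical:

```python
from itertools import cycle

def rr_order(slaves):
    """slaves: list of (name, w, c) tuples, where w and c are taken
    (as an assumption) to be processing and communication costs.
    Returns names sorted by w + c, smallest first."""
    return [name for name, w, c in sorted(slaves, key=lambda s: s[1] + s[2])]

def assign(tasks, slaves):
    """Send each task to the next slave in the prescribed cyclic order."""
    order = cycle(rr_order(slaves))
    return [(task, next(order)) for task in tasks]

slaves = [("s1", 3, 2), ("s2", 1, 1), ("s3", 2, 2)]
plan = assign(["t1", "t2", "t3", "t4"], slaves)
# [('t1', 's2'), ('t2', 's3'), ('t3', 's1'), ('t4', 's2')]
```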

SLIDE 23

Master-Slave Applications in Grid Computing

◮ The problem: MS | on-line; ri; wj; cj | Cmax
◮ Scheduling different-size tasks on a homogeneous platform, reduced to a master and two identical slaves, without paying any cost for the communications from the master and without any release times, is already an NP-hard problem.

SLIDE 25

Master-Slave Applications in Grid Computing

◮ The OAR scheduler
◮ Uses priority queues with FCFS
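OAR's internals aside, the combination the slide names (priority queues with FCFS within each priority level) can be sketched as follows. This is a hypothetical illustration, not OAR's actual code:

```python
import heapq
from itertools import count

class PriorityFCFSQueue:
    """Jobs are ordered by priority first; jobs of equal priority are
    served FCFS, using a monotonically increasing arrival counter."""

    def __init__(self):
        self._heap = []
        self._arrival = count()

    def submit(self, job, priority=0):
        # lower number = higher priority; the counter breaks ties FCFS
        heapq.heappush(self._heap, (priority, next(self._arrival), job))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = PriorityFCFSQueue()
q.submit("low", priority=1)
q.submit("first", priority=0)
q.submit("second", priority=0)
# next_job() yields "first", then "second", then "low"
```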

SLIDE 27

Master-Slave Applications in Grid Computing

◮ This work will continue; simulating the algorithms in on-line, off-line, or batch mode is the next step.
◮ Check the results after two years...

SLIDE 29

Questions