On the Computational Complexity of Periodic Scheduling PhD defense - PowerPoint PPT Presentation


SLIDE 1

On the Computational Complexity of Periodic Scheduling

PhD defense, Thomas Rothvoß

SLIDES 2-11

Real-time Scheduling

Given: (synchronous) tasks τ1, . . . , τn with τi = (c(τi), d(τi), p(τi)), where c(τi) is the running time, d(τi) the (relative) deadline and p(τi) the period.

W.l.o.g.: Task τi releases a job of length c(τi) at time z · p(τi) with absolute deadline z · p(τi) + d(τi) (z ∈ N0).

Utilization: u(τi) = c(τi)/p(τi)

Settings:

◮ static priorities ↔ dynamic priorities
◮ single-processor ↔ multi-processor
◮ preemptive scheduling ↔ non-preemptive

Implicit deadlines: d(τi) = p(τi). Constrained deadlines: d(τi) ≤ p(τi).

SLIDES 12-15

Example: Implicit deadlines & static priorities

c(τ1) = 1, d(τ1) = p(τ1) = 2
c(τ2) = 2, d(τ2) = p(τ2) = 5

[Figure: timeline from 0 to 10 showing the resulting schedule, with the jobs of τ1 and τ2 and their release times marked]

Theorem (Liu & Layland ’73)

Optimal priorities: assign each task τi the priority 1/p(τi) (Rate-monotonic schedule).
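The schedule in the example above can be reproduced by a short simulation. A minimal sketch (not from the thesis), assuming unit-length time slots and implicit deadlines; all names are illustrative:

```python
# Sketch: simulate the example's preemptive rate-monotonic schedule
# in unit time slots over one hyperperiod lcm(2, 5) = 10.
tasks = [  # (c, d, p) with d = p (implicit deadlines)
    (1, 2, 2),  # tau_1
    (2, 5, 5),  # tau_2
]

def rm_simulate(tasks, horizon):
    """Return the task index run in each slot; raise on a deadline miss."""
    # Rate-monotonic: smaller period = higher priority.
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][2])
    remaining = [0] * len(tasks)  # unfinished work per task
    schedule = []
    for t in range(horizon):
        for i, (c, d, p) in enumerate(tasks):
            if t % p == 0:            # new job released at multiples of p
                if remaining[i] > 0:  # previous job ran past its deadline
                    raise RuntimeError(f"task {i} missed a deadline at {t}")
                remaining[i] = c
        runner = next((i for i in order if remaining[i] > 0), None)
        if runner is not None:
            remaining[runner] -= 1
        schedule.append(runner)
    return schedule

sched = rm_simulate(tasks, 10)  # completes without a deadline miss
```

Running it yields the alternating pattern from the slide, with an idle slot at time 9.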

SLIDE 16

Feasibility test for implicit-deadline tasks

Theorem (Lehoczky et al. ’89)

If p(τ1) ≤ . . . ≤ p(τn), then the response time r(τi) in a Rate-monotonic schedule is the smallest non-negative value s.t.

c(τi) + Σ_{j<i} ⌈r(τi)/p(τj)⌉ · c(τj) ≤ r(τi)

1 machine suffices ⇔ ∀i : r(τi) ≤ p(τi).
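The smallest such value can be found by the standard fixed-point iteration r ← c(τi) + Σ_{j<i} ⌈r/p(τj)⌉ · c(τj). A sketch, assuming integer task parameters; the function name is illustrative:

```python
import math

def response_time(tasks, i):
    """Smallest r >= 0 with c_i + sum_{j<i} ceil(r / p_j) * c_j <= r.

    tasks: list of (c, p) pairs sorted by non-decreasing period.
    Returns None once r exceeds p_i (task i then misses its deadline).
    """
    c_i, p_i = tasks[i]
    r = c_i  # the response time is at least the task's own running time
    while True:
        demand = c_i + sum(math.ceil(r / p_j) * c_j for c_j, p_j in tasks[:i])
        if demand == r:   # fixed point reached: r satisfies the condition
            return r
        if demand > p_i:  # r(tau_i) > p(tau_i)
            return None
        r = demand

# Example task set from the previous slides: tau_1 = (1, 2), tau_2 = (2, 5).
assert response_time([(1, 2), (2, 5)], 0) == 1
assert response_time([(1, 2), (2, 5)], 1) == 4
```

Both response times are at most the respective periods, so one machine suffices for the example.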

SLIDES 17-20

Simultaneous Diophantine Approximation (SDA)

Given:

◮ α1, . . . , αn ∈ Q
◮ bound N ∈ N
◮ error bound ε > 0

Decide: ∃Q ∈ {1, . . . , N} : max_{i=1,...,n} min_{z∈Z} |αi − z/Q| ≤ ε/Q

⇔ ∃Q ∈ {1, . . . , N} : max_{i=1,...,n} |⌈Qαi⌋ − Qαi| ≤ ε   (⌈·⌋ = rounding to the nearest integer)

◮ NP-hard [Lagarias ’85]
◮ Gap version NP-hard [Rössner & Seifert ’96, Chen & Meng ’07]
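For intuition, the decision problem can be checked by exhaustive search over Q. This is exponential in the encoding length of N, so it says nothing about the hardness results; a sketch with exact rational arithmetic:

```python
from fractions import Fraction

def sda_decide(alphas, N, eps):
    """Brute-force SDA: return some Q in {1,...,N} with
    max_i |round(Q * alpha_i) - Q * alpha_i| <= eps, or None."""
    for Q in range(1, N + 1):
        # distance of Q*alpha_i to the nearest integer, exactly
        err = max(abs(Q * a - round(Q * a)) for a in alphas)
        if err <= eps:
            return Q
    return None

# Q = 6 makes both 6 * 1/3 and 6 * 1/2 integral:
assert sda_decide([Fraction(1, 3), Fraction(1, 2)], 6, Fraction(0)) == 6
# No Q <= 2 brings 1/3 within 1/4 of an integer multiple:
assert sda_decide([Fraction(1, 3)], 2, Fraction(1, 4)) is None
```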

SLIDES 21-22

Simultaneous Diophantine Approximation (2)

Theorem (Rössner & Seifert ’96, Chen & Meng ’07)

Given α1, . . . , αn, N, ε > 0 it is NP-hard to distinguish

◮ ∃Q ∈ {1, . . . , N} : max_{i=1,...,n} |⌈Qαi⌋ − Qαi| ≤ ε
◮ ∄Q ∈ {1, . . . , n^{O(1)/log log n} · N} : max_{i=1,...,n} |⌈Qαi⌋ − Qαi| ≤ n^{O(1)/log log n} · ε

Theorem (Eisenbrand & R. - APPROX’09)

Given α1, . . . , αn, N, ε > 0 it is NP-hard to distinguish

◮ ∃Q ∈ {N/2, . . . , N} : max_{i=1,...,n} |⌈Qαi⌋ − Qαi| ≤ ε
◮ ∄Q ∈ {1, . . . , 2^{n^{O(1)}} · N} : max_{i=1,...,n} |⌈Qαi⌋ − Qαi| ≤ n^{O(1)/log log n} · ε

even if ε ≤ (1/2)^{n^{O(1)}}.

SLIDES 23-24

Directed Diophantine Approximation (DDA)

Theorem (Eisenbrand & R. - APPROX’09)

Given α1, . . . , αn, N, ε > 0 it is NP-hard to distinguish

◮ ∃Q ∈ {N/2, . . . , N} : max_{i=1,...,n} |⌈Qαi⌉ − Qαi| ≤ ε
◮ ∄Q ∈ {1, . . . , n^{O(1)/log log n} · N} : max_{i=1,...,n} |⌈Qαi⌉ − Qαi| ≤ 2^{n^{O(1)}} · ε

even if ε ≤ (1/2)^{n^{O(1)}}.

Theorem (Eisenbrand & R. - SODA’10)

Given α1, . . . , αn, w1, . . . , wn ≥ 0, N, ε > 0 it is NP-hard to distinguish

◮ ∃Q ∈ [N/2, N] : Σ_{i=1}^n wi (Qαi − ⌊Qαi⌋) ≤ ε
◮ ∄Q ∈ [1, n^{O(1)/log log n} · N] : Σ_{i=1}^n wi (Qαi − ⌊Qαi⌋) ≤ 2^{n^{O(1)}} · ε

even if ε ≤ (1/2)^{n^{O(1)}}.

SLIDE 25

Hardness of Response Time Computation

Theorem (Eisenbrand & R. - RTSS’08)

Computing response times for implicit-deadline tasks w.r.t. a Rate-monotonic schedule, i.e. solving

min { r ≥ 0 | c(τn) + Σ_{i=1}^{n−1} ⌈r/p(τi)⌉ · c(τi) ≤ r }   (p(τ1) ≤ . . . ≤ p(τn))

is NP-hard (even to approximate within a factor of n^{O(1)/log log n}).

◮ Reduction from Directed Diophantine Approximation

SLIDES 26-28

Mixing Set

min c_s · s + c^T y
  s + a_i · y_i ≥ b_i   ∀i = 1, . . . , n
  s ∈ R≥0, y ∈ Z^n

◮ Complexity status? [Conforti, Di Summa & Wolsey ’08]

Theorem (Eisenbrand & R. - APPROX’09)

Solving Mixing Set is NP-hard.

1. Model Directed Diophantine Approximation (almost) as Mixing Set
2. Simulate missing constraint with Lagrangian relaxation
SLIDES 29-32

Testing EDF-schedulability

Setting: Constrained-deadline tasks, i.e. d(τi) ≤ p(τi)

Theorem (Baruah, Mok & Rosier ’90)

The Earliest-Deadline-First schedule is feasible iff

∀Q ≥ 0 :  Σ_{i=1}^n (⌊(Q − d(τi))/p(τi)⌋ + 1) · c(τi) ≤ Q,   where DBF(τi, Q) := (⌊(Q − d(τi))/p(τi)⌋ + 1) · c(τi)

[Figure: DBF(τi, Q) as a staircase in Q, jumping by c(τi) at Q = d(τi), d(τi) + p(τi), d(τi) + 2 · p(τi), . . .]

◮ Complexity status is “Problem 2” in Open Problems in Real-Time Scheduling [Baruah & Pruhs ’09]

Theorem (Eisenbrand & R. - SODA’10)

Testing EDF-schedulability is coNP-hard.
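For small integer instances the Baruah-Mok-Rosier condition can be evaluated directly: DBF only jumps at absolute deadlines, and (as a simplifying assumption) it suffices to check those up to the hyperperiod when the utilization is at most 1. A sketch with hypothetical numbers:

```python
import math

def dbf(task, Q):
    """Demand bound function: (floor((Q - d) / p) + 1) jobs of length c."""
    c, d, p = task
    return ((Q - d) // p + 1) * c if Q >= d else 0

def edf_feasible(tasks):
    """Check sum_i DBF(tau_i, Q) <= Q for all Q >= 0, integer parameters."""
    if sum(c / p for c, d, p in tasks) > 1:
        return False  # utilization above 1 can never be feasible
    H = math.lcm(*(p for _, _, p in tasks))  # hyperperiod
    # DBF only changes at absolute deadlines d + z * p.
    deadlines = {d + z * p
                 for c, d, p in tasks for z in range((H - d) // p + 1)}
    return all(sum(dbf(t, Q) for t in tasks) <= Q for Q in deadlines)

# Hypothetical constrained-deadline task sets (c, d, p):
assert edf_feasible([(1, 2, 3), (1, 3, 4)])
assert not edf_feasible([(2, 2, 4), (2, 3, 4)])  # demand 4 > Q at Q = 3
```

The hardness result above shows that no such test can stay polynomial in the input encoding, since the hyperperiod can be exponentially large.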

SLIDES 33-35

Reduction (1)

Given: DDA instance α1, . . . , αn, w1, . . . , wn, N ∈ N, ε > 0

Define: Task set S = {τ0, . . . , τn} s.t.

◮ Yes: ∃Q ∈ [N/2, N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ ε
  ⇒ S not EDF-schedulable (∃Q ≥ 0 : DBF(S, Q) > β · Q)
◮ No: ∄Q ∈ [N/2, 3N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ 3ε
  ⇒ S EDF-schedulable (∀Q ≥ 0 : DBF(S, Q) ≤ β · Q)

Task system: τi = (c(τi), d(τi), p(τi)) := (wi, 1/αi, 1/αi) ∀i = 1, . . . , n,   U := Σ_{i=1}^n c(τi)/p(τi)
SLIDES 36-43

Reduction (2)

Q · U − Σ_{i=1}^n (⌊(Q − d(τi))/p(τi)⌋ + 1) · c(τi)
  = Σ_{i=1}^n (Q/p(τi) − ⌊Q/p(τi)⌋) · c(τi)
  = Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋)

⇒ DBF({τ1, . . . , τn}, Q)/Q ≈ U ⇔ approximation error Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) small

[Figure: DBF({τ1, . . . , τn}, Q)/Q plotted against Q; it tends to U and reaches U at Q = scm(p(τ1), . . . , p(τn)); the interval [N/2, N] and a candidate Q∗ are marked]
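The chain of equalities above can be spot-checked numerically for the task system τi = (wi, 1/αi, 1/αi) with exact rational arithmetic. The data below is hypothetical; the sketch verifies only the identity, not the reduction itself:

```python
import math
from fractions import Fraction as F

# Hypothetical DDA data: rational alphas and non-negative weights.
alphas = [F(1, 3), F(2, 5), F(3, 7)]
weights = [F(1), F(2), F(1)]

# Task system from the reduction: tau_i = (w_i, 1/alpha_i, 1/alpha_i).
tasks = [(w, 1 / a, 1 / a) for w, a in zip(weights, alphas)]
U = sum(c / p for c, d, p in tasks)  # utilization

def dbf(task, Q):
    c, d, p = task
    return ((Q - d) // p + 1) * c if Q >= d else 0

# Q*U - DBF({tau_1,...,tau_n}, Q) equals the weighted rounding error.
for Q in range(1, 50):
    lhs = Q * U - sum(dbf(t, Q) for t in tasks)
    rhs = sum(w * (Q * a - math.floor(Q * a))
              for w, a in zip(weights, alphas))
    assert lhs == rhs
```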

SLIDE 44

Reduction (3)

Add a special task τ0 = (c(τ0), d(τ0), p(τ0)) := (3ε, N/2, ∞)

[Figure: DBF(τ0, Q)/Q plotted against Q, with N/2 and N marked]

SLIDES 45-47

Reduction (4)

Threshold: β := U + ε/N

[Figure: DBF(S, Q)/Q plotted against Q, with N/2 and N marked and horizontal lines at U − ε/N, U and β = U + ε/N]

SLIDES 48-51

Algorithms for Multi-processor Scheduling

Setting: Implicit deadlines (d(τi) = p(τi)), multi-processor

Known results:

◮ NP-hard (generalization of Bin Packing)
◮ asymptotic 1.75-apx [Burchard et al. ’95]

Theorem (Eisenbrand & R. - ICALP’08)

In time O_ε(1) · n^{(1/ε)^{O(1)}} one can find an assignment to APX ≤ (1 + ε) · OPT + O(1) machines such that the RM-schedule is feasible if the machines are sped up by a factor of 1 + ε (resource augmentation).

1. Rounding, clustering & merging small tasks
2. Use relaxed feasibility notion
3. Dynamic programming
SLIDE 52

Algorithms for Multi-processor Scheduling (2)

Theorem

Let k ∈ N be an arbitrary parameter. In time O(n³) one can find an assignment of implicit-deadline tasks to APX ≤ (3/2 + 1/k) · OPT + 9k machines (→ asymptotic 3/2-apx).

1. Create graph G = (S, E) with tasks as nodes and edges (τ1, τ2) ∈ E :⇔ {τ1, τ2} RM-schedulable (on 1 machine)
2. Define suitable edge weights
3. Min-cost matching ⇒ good assignment
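Step 1 needs a test for whether a pair of implicit-deadline tasks fits on one machine under RM; the response-time condition from the feasibility slide earlier gives such a test. A sketch with illustrative names, tasks given as (c, p) pairs:

```python
import math
from itertools import combinations

def rm_pair_schedulable(t1, t2):
    """Response-time test for two implicit-deadline tasks (c, p)."""
    (c1, p1), (c2, p2) = sorted((t1, t2), key=lambda t: t[1])  # by period
    if c1 > p1:
        return False  # the high-priority task alone misses its deadline
    # Low-priority task: iterate r <- c2 + ceil(r / p1) * c1.
    r = c2
    while r <= p2:
        nxt = c2 + math.ceil(r / p1) * c1
        if nxt == r:       # fixed point within the period: schedulable
            return True
        r = nxt
    return False           # response time exceeds p2

def build_graph(tasks):
    """Edges (i, j) whenever {tau_i, tau_j} fit on one machine under RM."""
    return [(i, j) for i, j in combinations(range(len(tasks)), 2)
            if rm_pair_schedulable(tasks[i], tasks[j])]

edges = build_graph([(1, 2), (2, 5), (2, 3)])  # only tau_0 and tau_1 pair up
```

The algorithm on this slide then puts weights on these edges and computes a min-cost matching; that part is omitted here.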
SLIDES 53-57

Algorithms for Multi-processor Scheduling (3)

P = {x ∈ {0, 1}^n | {τi | xi = 1} RM-schedulable}

λx = 1 if a machine is packed with {τi | xi = 1}, 0 otherwise

Column-based LP (or configuration LP):

min 1^T λ
  Σ_{x∈P} λx · x ≥ 1
  λ ≥ 0

◮ For Bin Packing: OPT − OPT_f ≤ O(log² n) [Karmarkar & Karp ’82]

Theorem

1.33 ≈ 4/3 ≤ sup_{instances} OPT/OPT_f ≤ 1 + ln(2) ≈ 1.69
SLIDE 58

Algorithms for Multi-processor Scheduling (4)

Theorem (Eisenbrand & R. - ICALP’08)

For all ε > 0, there is no polynomial-time algorithm with APX ≤ OPT + n^{1−ε} unless P = NP (⇒ no AFPTAS).

◮ Cloning of 3-Partition instances

SLIDES 59-60

Algorithms for Multi-processor Scheduling (5)

◮ FFMP is a simple First-Fit-like heuristic with running time O(n log n)

Theorem (Karrenbauer & R. - ESA’09)

For n tasks S = {τ1, . . . , τn} with u(τi) ∈ [0, 1] uniformly at random, E[FFMP(S)] ≤ E[u(S)] + O(n^{3/4} (log n)^{3/8}) (average utilization → 100%).

◮ Reduce to known results from the average-case analysis of Bin Packing

SLIDES 61-64

Dynamic vs. static priorities

◮ Setting: constrained-deadline tasks, i.e. d(τi) ≤ p(τi)
◮ Worst-case speedup factor

f := sup_{instances} (processor speed needed for a static-priority schedule) / (processor speed needed for an arbitrary schedule) ≥ 1

◮ Known: 3/2 ≤ f ≤ 2 [Baruah & Burns ’08]

Theorem (Davis, R., Baruah & Burns - RT Systems ’09)

f = 1/Ω ≈ 1.76, where Ω ≈ 0.567 is the unique positive real root of x = ln(1/x).
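The constant Ω can be computed numerically, e.g. by bisection on x + ln x = 0, which is equivalent to x = ln(1/x). A short sketch:

```python
import math

def omega(tol=1e-12):
    """Unique positive root of x = ln(1/x), found by bisection.

    g(x) = x + ln(x) is increasing on (0, 1) with g(0+) = -inf and
    g(1) = 1 > 0, so the root lies in (0, 1).
    """
    lo, hi = 1e-9, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid + math.log(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

W = omega()
assert abs(W - 0.567) < 1e-3       # Omega ≈ 0.567
assert abs(1 / W - 1.76) < 5e-3    # speedup factor f = 1/Omega ≈ 1.76
```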

SLIDE 65

The end. Thanks for your attention.