SLIDE 1 On the Computational Complexity
PhD defense Thomas Rothvoß
SLIDE 11 Real-time Scheduling
Given: (synchronous) tasks τ1, …, τn with τi = (c(τi), d(τi), p(τi)), where c(τi) is the running time, d(τi) the (relative) deadline, and p(τi) the period.
W.l.o.g.: task τi releases a job of length c(τi) at z · p(τi) with absolute deadline z · p(τi) + d(τi) (z ∈ N0).
Utilization: u(τi) = c(τi)/p(τi)
Settings:
◮ static priorities ↔ dynamic priorities
◮ single-processor ↔ multi-processor
◮ preemptive scheduling ↔ non-preemptive
Implicit deadlines: d(τi) = p(τi); constrained deadlines: d(τi) ≤ p(τi)
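A small illustration of the task model above (the function names are mine; the example task values are those used on the next slides):

```python
from fractions import Fraction

def jobs(c, d, p, horizon):
    """Jobs released by a synchronous task (c, d, p) up to `horizon`:
    the z-th job arrives at z*p with absolute deadline z*p + d."""
    out, z = [], 0
    while z * p < horizon:
        out.append((z * p, c, z * p + d))  # (release, length, abs. deadline)
        z += 1
    return out

def utilization(tasks):
    """Total utilization: sum of u(tau_i) = c(tau_i)/p(tau_i)."""
    return sum(Fraction(c, p) for (c, d, p) in tasks)

tasks = [(1, 2, 2), (2, 5, 5)]   # the example from the next slide
print(utilization(tasks))        # 9/10
print(jobs(1, 2, 2, 10))         # 5 jobs in [0, 10)
```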
SLIDE 12 Example: Implicit deadlines & static priorities
c(τ1) = 1, d(τ1) = p(τ1) = 2;  c(τ2) = 2, d(τ2) = p(τ2) = 5
[Figure: schedule of τ1 and τ2 on the timeline 0, 1, …, 10]

SLIDE 13 Example: Implicit deadlines & static priorities
Theorem (Liu & Layland ’73)
Optimal priorities: 1/p(τi) for τi (Rate-monotonic schedule)
c(τ1) = 1, d(τ1) = p(τ1) = 2;  c(τ2) = 2, d(τ2) = p(τ2) = 5
[Figure: Rate-monotonic schedule on the timeline 0, 1, …, 10]
SLIDE 16 Feasibility test for implicit-deadline tasks
Theorem (Lehoczky et al. ’89)
If p(τ1) ≤ … ≤ p(τn), then the response time r(τi) in a Rate-monotonic schedule is the smallest non-negative value r with
c(τi) + Σ_{j<i} ⌈r/p(τj)⌉ · c(τj) = r.
1 machine suffices ⇔ ∀i : r(τi) ≤ p(τi).
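The fixed point in the theorem can be computed by the standard response-time iteration. A minimal sketch (pseudopolynomial, which is consistent with the NP-hardness result later in the talk; the function name is mine):

```python
def response_time(tasks, i, limit=10**6):
    """Smallest r >= 0 with r = c_i + sum_{j<i} ceil(r/p_j)*c_j,
    found by fixed-point iteration.  `tasks` is a list of (c, d, p)
    sorted by period, i.e. index order = Rate-monotonic priority order."""
    c_i, d_i, p_i = tasks[i]
    r = c_i
    while r <= limit:
        # integer ceil(r/p) via (r + p - 1) // p
        nxt = c_i + sum(((r + p - 1) // p) * c for (c, d, p) in tasks[:i])
        if nxt == r:
            return r
        r = nxt
    return None  # no fixed point found below the limit

tasks = [(1, 2, 2), (2, 5, 5)]
print([response_time(tasks, i) for i in range(2)])  # [1, 4]
```

Here r(τ1) = 1 ≤ 2 and r(τ2) = 4 ≤ 5, so by the theorem one machine suffices for the example task set.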
SLIDE 20 Simultaneous Diophantine Approximation (SDA)
Given:
◮ α1, …, αn ∈ Q
◮ bound N ∈ N
◮ error bound ε > 0
Decide: ∃Q ∈ {1, …, N} : max_{i=1,…,n} |αi − pi/Q| ≤ ε/Q for some p1, …, pn ∈ Z
⇔ ∃Q ∈ {1, …, N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ ε   (⌈·⌋ = nearest integer)
◮ NP-hard [Lagarias ’85]
◮ Gap version NP-hard [Rössner & Seifert ’96, Chen & Meng ’07]
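A brute-force check of the SDA decision problem, taking ⌈·⌋ as Python's `round` (ties aside). The loop is exponential in the bit length of N, consistent with NP-hardness; the example values are mine:

```python
from fractions import Fraction

def sda(alphas, N, eps):
    """Return the smallest Q in {1,...,N} with
    max_i |round(Q*alpha_i) - Q*alpha_i| <= eps, or None.
    Exact arithmetic via Fraction; brute force over all Q."""
    for Q in range(1, N + 1):
        err = max(abs(round(Q * a) - Q * a) for a in alphas)
        if err <= eps:
            return Q
    return None

alphas = [Fraction(1, 3), Fraction(2, 5)]
print(sda(alphas, 20, Fraction(1, 10)))  # 15: both 15*(1/3) and 15*(2/5) are integral
```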
SLIDE 22 Simultaneous Diophantine Approximation (2)
Theorem (Rössner & Seifert ’96, Chen & Meng ’07)
Given α1, …, αn, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ {1, …, N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ ε
◮ ∄Q ∈ {1, …, n^{O(1)/log log n} · N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ n^{O(1)/log log n} · ε

Theorem (Eisenbrand & R. - APPROX’09)
Given α1, …, αn, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ {N/2, …, N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ ε
◮ ∄Q ∈ {1, …, 2^{n^{O(1)}} · N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ n^{O(1)/log log n} · ε
even if ε ≤ (1/2)^{n^{O(1)}}.
SLIDE 24 Directed Diophantine Approximation (DDA)
Theorem (Eisenbrand & R. - APPROX’09)
Given α1, …, αn, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ {N/2, …, N} : max_{i=1,…,n} |⌈Qαi⌉ − Qαi| ≤ ε
◮ ∄Q ∈ {1, …, n^{O(1)/log log n} · N} : max_{i=1,…,n} |⌈Qαi⌉ − Qαi| ≤ 2^{n^{O(1)}} · ε
even if ε ≤ (1/2)^{n^{O(1)}}.

Theorem (Eisenbrand & R. - SODA’10)
Given α1, …, αn, w1, …, wn ≥ 0, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ [N/2, N] : Σ_{i=1}^n wi (Qαi − ⌊Qαi⌋) ≤ ε
◮ ∄Q ∈ [1, n^{O(1)/log log n} · N] : Σ_{i=1}^n wi (Qαi − ⌊Qαi⌋) ≤ 2^{n^{O(1)}} · ε
even if ε ≤ (1/2)^{n^{O(1)}}.
SLIDE 25 Hardness of Response Time Computation
Theorem (Eisenbrand & R. - RTSS’08)
Computing response times for implicit-deadline tasks w.r.t. a Rate-monotonic schedule, i.e. solving
min { r ≥ 0 : c(τn) + Σ_{i=1}^{n−1} ⌈r/p(τi)⌉ · c(τi) ≤ r }   (p(τ1) ≤ … ≤ p(τn)),
is NP-hard (even to approximate within a factor of n^{O(1)/log log n}).
◮ Reduction from Directed Diophantine Approximation
SLIDE 28 Mixing Set
min c_s · s + c^T y
s.t. s + ai · yi ≥ bi  ∀i = 1, …, n
     s ∈ R≥0, y ∈ Z^n
◮ Complexity status? [Conforti, Di Summa & Wolsey ’08]
Theorem (Eisenbrand & R. - APPROX’09)
Solving Mixing Set is NP-hard.
1. Model Directed Diophantine Approximation (almost) as Mixing Set
2. Simulate the missing constraint with a Lagrangian relaxation
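For intuition, tiny Mixing Set instances can be solved by brute force: with integer data, ai > 0 and nonnegative costs, the best y for a fixed s is yi = ⌈(bi − s)/ai⌉, and the objective is piecewise linear in s with breakpoints s = bi − ai·z, so it suffices to evaluate those (exponentially many in general, matching the NP-hardness). This is only an illustrative sketch under the stated assumptions, not the reduction from the slide:

```python
def solve_mixing_set(cs, c, a, b):
    """min cs*s + sum_i c[i]*y[i]  s.t.  s + a[i]*y[i] >= b[i], s >= 0, y integer.
    Assumes integer data with a[i] > 0, cs >= 0, c[i] >= 0."""
    # candidate s values: 0 and all positive breakpoints b[i] - a[i]*z
    cand = {0}
    for ai, bi in zip(a, b):
        cand.update(bi - ai * z for z in range(bi // ai + 1) if bi - ai * z > 0)
    best = None
    for s in cand:
        ys = [-((s - bi) // ai) for ai, bi in zip(a, b)]  # ceil((b_i - s)/a_i)
        val = cs * s + sum(ci * yi for ci, yi in zip(c, ys))
        if best is None or val < best[0]:
            best = (val, s, ys)
    return best  # (objective, s, y)

print(solve_mixing_set(1, [1, 1], [2, 3], [5, 7]))  # optimum 5 at s = 1, y = [2, 2]
```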
SLIDE 32 Testing EDF-schedulability
Setting: constrained-deadline tasks, i.e. d(τi) ≤ p(τi)
Theorem (Baruah, Mok & Rosier ’90)
The Earliest-Deadline-First schedule is feasible iff
∀Q ≥ 0 : Σ_{i=1}^n (⌊(Q − d(τi))/p(τi)⌋ + 1) · c(τi) ≤ Q,
where the i-th summand is denoted DBF(τi, Q).
◮ Complexity status is “Problem 2” in Open Problems in Real-Time Scheduling [Baruah & Pruhs ’09]
Theorem (Eisenbrand & R. - SODA’10)
Testing EDF-schedulability is coNP-hard.
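The demand bound function and the Baruah-Mok-Rosier test are easy to evaluate for small integer instances (deciding the test for all relevant Q is what is coNP-hard). A sketch with function names of my choosing, checking Q only up to an explicit horizon:

```python
def dbf(task, Q):
    """Demand bound function of one task: jobs with release and absolute
    deadline inside [0, Q] demand (floor((Q - d)/p) + 1) * c units; 0 if Q < d."""
    c, d, p = task
    return ((Q - d) // p + 1) * c if Q >= d else 0

def edf_feasible(tasks, horizon):
    """Check sum_i DBF(tau_i, Q) <= Q for all integer Q <= horizon.
    With integer parameters the demand only changes at integer points."""
    return all(sum(dbf(t, Q) for t in tasks) <= Q for Q in range(horizon + 1))

tasks = [(1, 2, 2), (2, 5, 5)]   # constrained deadlines: d <= p
print(edf_feasible(tasks, 10))   # True
```

Here the horizon 10 is the hyperperiod of the example; for utilization below 1 a bounded test interval always exists, but it can be exponentially large in the input size.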
SLIDE 35 Reduction (1)
Given: DDA instance α1, …, αn, w1, …, wn, N ∈ N, ε > 0
Define: task set S = {τ0, …, τn} s.t.
◮ Yes: ∃Q ∈ [N/2, N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ ε
  ⇒ S not EDF-schedulable (∃Q ≥ 0 : DBF(S, Q) > β · Q)
◮ No: ∄Q ∈ [N/2, 3N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ 3ε
  ⇒ S EDF-schedulable (∀Q ≥ 0 : DBF(S, Q) ≤ β · Q)
Task system: τi = (c(τi), d(τi), p(τi)) := (wi, 1/αi, 1/αi);  U := Σ_{i=1}^n c(τi)/p(τi)
SLIDE 43 Reduction (2)
Q · U − Σ_{i=1}^n DBF(τi, Q) = Σ_{i=1}^n c(τi) · (Q/p(τi) − ⌊Q/p(τi)⌋) = Σ_{i=1}^n wi (Qαi − ⌊Qαi⌋)
⇒ DBF({τ1, …, τn}, Q) ≈ Q · U exactly for those Q where Σ_{i=1}^n wi (Qαi − ⌊Qαi⌋) is small
[Figure: DBF({τ1, …, τn}, Q)/Q as a function of Q, tending to U and reaching it at Q = scm(p(τ1), …, p(τn)); the interval [N/2, N] and a point Q∗ are marked]
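The chain of equalities can be checked numerically, assuming the task system τi := (wi, 1/αi, 1/αi) (so d = p = 1/αi and u(τi) = wi·αi), which makes the identity exact; the concrete αi, wi below are my own test values:

```python
import math
from fractions import Fraction

alphas = [Fraction(2, 7), Fraction(3, 11)]
ws     = [Fraction(1, 2), Fraction(1, 3)]

def demand(c, d, p, Q):
    """DBF(tau_i, Q) = (floor((Q - d)/p) + 1) * c for Q >= d, else 0."""
    return ((Q - d) // p + 1) * c if Q >= d else 0

U = sum(w * a for w, a in zip(ws, alphas))  # u(tau_i) = w_i * alpha_i
for Q in range(1, 50):
    lhs = Q * U - sum(demand(w, 1 / a, 1 / a, Q) for w, a in zip(ws, alphas))
    rhs = sum(w * (Q * a - math.floor(Q * a)) for w, a in zip(ws, alphas))
    assert lhs == rhs
print("identity holds for Q = 1..49")
```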
SLIDE 44 Reduction (3)
Add a special task τ0 = (c(τ0), d(τ0), p(τ0)) := (3ε, N/2, ∞)
[Figure: DBF(τ0, Q)/Q as a function of Q, with N/2 and N marked]
SLIDE 47 Reduction (4)
[Figure: DBF(S, Q)/Q as a function of Q, with N/2 and N marked; horizontal lines at U − ε/N, U, and the threshold β := U + ε/N]
SLIDE 51 Algorithms for Multi-processor Scheduling
Setting: implicit deadlines (d(τi) = p(τi)), multi-processor
Known results:
◮ NP-hard (generalization of Bin Packing)
◮ asymptotic 1.75-apx [Burchard et al. ’95]
Theorem (Eisenbrand & R. - ICALP’08)
In time O_ε(1) · n^{(1/ε)^{O(1)}} one can find an assignment to APX ≤ (1 + ε) · OPT + O(1) machines such that the RM-schedule is feasible if the machines are sped up by a factor of 1 + ε (resource augmentation).
1. Rounding, clustering & merging small tasks
2. Use a relaxed feasibility notion
3. Dynamic programming
SLIDE 52 Algorithms for Multi-processor Scheduling (2)
Theorem
Let k ∈ N be an arbitrary parameter. In time O(n^3) one can find an assignment of implicit-deadline tasks to APX ≤ (3/2 + 1/k) · OPT + O(k) machines (→ asymptotic 3/2-apx).
1. Create graph G = (S, E) with tasks as nodes and edges (τ1, τ2) ∈ E :⇔ {τ1, τ2} RM-schedulable (on 1 machine)
2. Define suitable edge weights
3. Min-cost matching ⇒ good assignment
SLIDE 57 Algorithms for Multi-processor Scheduling (3)
P = {x ∈ {0, 1}^n | {τi | xi = 1} RM-schedulable}
λx = number of machines packed with {τi | xi = 1}
Column-based LP (or configuration LP):
min 1^T λ  s.t.  Σ_{x∈P} λx · x ≥ 1,  λ ≥ 0
◮ For Bin Packing: OPT − OPT_f ≤ O(log² n) [Karmarkar & Karp ’82]
Theorem
1.33 ≈ 4/3 ≤ sup_instances OPT/OPT_f
SLIDE 58 Algorithms for Multi-processor Scheduling (4)
Theorem (Eisenbrand & R. - ICALP’08)
For all ε > 0, there is no polynomial-time algorithm with APX ≤ OPT + n^{1−ε} unless P = NP (⇒ no AFPTAS).
◮ Cloning of 3-Partition instances
SLIDE 60 Algorithms for Multi-processor Scheduling (5)
◮ FFMP is a simple First-Fit-like heuristic with running time O(n log n)
Theorem (Karrenbauer & R. - ESA’09)
For n tasks S = {τ1, …, τn} with u(τi) ∈ [0, 1] uniformly at random, E[FFMP(S)] ≤ E[u(S)] + O(n^{3/4} (log n)^{3/8}) (average utilization → 100%).
◮ Reduce to known results from the average-case analysis of Bin Packing
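The slides do not spell out FFMP's admission condition, so the sketch below only conveys the first-fit structure; it sorts by the fractional part of log2 of the period (my reading of "Matching Periods") and substitutes a plain utilization test for FFMP's actual RM-admission condition:

```python
import math

def ffmp_sketch(tasks):
    """First-Fit-style heuristic sketch: sort tasks (c, d, p) by the
    fractional part of log2(p), then place each task on the first
    machine whose admission test passes.  Here the test is simply
    'total utilization <= 1', a stand-in for FFMP's real condition."""
    order = sorted(tasks, key=lambda t: math.log2(t[2]) % 1.0)
    machines = []  # each machine is a list of tasks
    for t in order:
        u = t[0] / t[2]
        for m in machines:
            if sum(c / p for (c, d, p) in m) + u <= 1:
                m.append(t)
                break
        else:
            machines.append([t])
    return machines

tasks = [(1, 2, 2), (2, 5, 5), (1, 4, 4), (3, 8, 8)]
print(len(ffmp_sketch(tasks)))  # 2 machines for total utilization 1.525
```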
SLIDE 64 Dynamic vs. static priorities
◮ Setting: constrained-deadline tasks, i.e. d(τi) ≤ p(τi)
◮ Worst-case speedup factor
  f := sup_instances (min. processor speed for a feasible static-priority schedule / min. processor speed for an arbitrary feasible schedule) ≥ 1
◮ Known: 3/2 ≤ f ≤ 2 [Baruah & Burns ’08]
Theorem (Davis, R., Baruah & Burns - RT Systems ’09)
f = 1/Ω ≈ 1.76, where Ω ≈ 0.567 is the unique positive real root of x = ln(1/x).
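The constant Ω can be checked numerically: x = ln(1/x) is equivalent to x + ln x = 0, and g(x) = x + ln x is strictly increasing on (0, 1], so bisection suffices:

```python
import math

# Bisect g(x) = x + ln(x) on [0.1, 1]: g(0.1) < 0 < g(1), g increasing.
lo, hi = 0.1, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mid + math.log(mid) < 0:
        lo = mid
    else:
        hi = mid
omega = (lo + hi) / 2
print(round(omega, 3), round(1 / omega, 2))  # 0.567 1.76
```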
SLIDE 65
The end Thanks for your attention