Worst-case Bounds and Optimized Cache on Mth Request Cache Insertion Policies under Elastic Conditions
Niklas Carlsson, Linköping University Derek Eager, University of Saskatchewan
- Proc. IFIP Performance, Toulouse, France, Dec. 2018.
Motivation
- Elastic resource demands: use of third-party operated Content Distribution Networks (CDNs) and cloud-based content delivery platforms is expected to increase as new content providers enter the market
- Problem: an individual content provider wants to minimize its delivery costs, under the assumptions that its resource demands are elastic and it pays for the resources it uses
- Focus: cache on Mth request insertion policies when using a Time-to-Live (TTL) based eviction policy
- A new insertion is made only when the object is likely to be requested again soon
- Elastic services allow fine granularities for computation and storage
Contributions: within this context, we
- derive worst-case bounds for the optimal cost and competitive cost ratios of different classes of cache on Mth request cache insertion policies,
- derive explicit average cost expressions and bounds under arbitrary inter-request distributions,
- derive explicit expressions and bounds for short-tailed (deterministic, Erlang, and exponential) and heavy-tailed (Pareto) inter-request distributions, and
- present numeric and trace-based evaluations of the relative cost performance of the policies.
Our results show that a window-based cache on 2nd request policy (using a single threshold parameter optimized to minimize the worst-case costs) provides good average performance across the different distributions and the full parameter ranges of each considered distribution.
System model
- Cache on Mth request insertion policies when using a Time-to-Live (TTL) based eviction policy
- For each request to an object that is not in the cache, the system must, in an online fashion, decide whether the object should be inserted
- Two cost components: storage close to the end-user (normalized storage cost 1 per time unit) and backhaul bandwidth (remote bandwidth cost R per remote fetch)
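The two-component cost model can be written down directly (a minimal sketch; the function name is illustrative, not from the talk):

```python
# Sketch of the slide's two-component cost model: each remote fetch
# costs R, and local storage costs 1 per time unit while cached.

def delivery_cost(num_remote_fetches, total_time_cached, R):
    """Total delivery cost = backhaul bandwidth cost + storage cost."""
    bandwidth_cost = num_remote_fetches * R
    storage_cost = total_time_cached * 1.0  # storage rate normalized to 1
    return bandwidth_cost + storage_cost

# Example: 3 remote fetches at R = 5, plus 4 time units of caching.
print(delivery_cost(3, 4.0, 5.0))  # 3*5 + 4 = 19.0
```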
Example (timeline for one object; requests with inter-request times a3, a4; R marks a remote fetch on a miss, T a TTL storage period):
- Always on 1st (T): insert into the cache on every miss
- Always on 2nd (T): count requests (cnt) and insert on the 2nd request
- Single-window on 2nd (T): insert on the 2nd request only if it arrives within window T of the previous request; otherwise restart counting
- Dual-window on 2nd (W ≤ T), here W = T/2: as above, but with a separate window parameter W for the insertion decision
- Single-window on 3rd (T): insert on the 3rd request, with the same window rule
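These policies can be sketched as a small simulator. This is my own sketch under stated assumptions (not spelled out on this slide): the TTL is refreshed on every hit, the full TTL period is charged at insertion, and the window applies between counted misses.

```python
def policy_cost(arrivals, R, T, M, window=None):
    """Total cost of a 'cache on Mth request' policy for one object.

    Misses cost R; storage costs 1 per time unit while cached (TTL T,
    assumed refreshed on every hit). The object is inserted on its Mth
    counted miss; with `window` set, the miss counter resets whenever
    the gap since the previous miss exceeds the window.
    """
    cost, cached_until, cnt, last = 0.0, float("-inf"), 0, None
    for t in sorted(arrivals):
        if t < cached_until:                 # hit: extend the TTL
            cost += (t + T) - cached_until   # extra storage time
            cached_until = t + T
        else:                                # miss: remote fetch
            cost += R
            if window is not None and last is not None and t - last > window:
                cnt = 0                      # window expired: restart counting
            cnt += 1
            last = t
            if cnt >= M:                     # Mth request: insert with TTL T
                cost += T
                cached_until = t + T
                cnt = 0
    return cost

trace = [0.0, 5.0, 6.0]
print(policy_cost(trace, R=4.0, T=3.0, M=1))              # Always on 1st -> 15.0
print(policy_cost(trace, R=4.0, T=3.0, M=2))              # Always on 2nd -> 12.0
print(policy_cost(trace, R=4.0, T=3.0, M=2, window=2.0))  # Window on 2nd -> 15.0
```

On this toy trace, Always on 2nd avoids the wasted insertion at t = 0, while the window policy pays a third miss because the long gap resets its counter.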
“Oracle” policy: keep in cache until (at least) the next inter-request arrival i whenever a_i < R; otherwise, do not cache.
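In code, the oracle's cost over a request sequence is straightforward (a sketch; the function name and the convention that the first request always costs R are mine):

```python
# The "oracle" knows each inter-request time a_i in advance: cache until
# the next request when a_i < R (storage cost a_i), otherwise fetch
# remotely (bandwidth cost R).

def oracle_cost(inter_request_times, R):
    # R for the initial fetch, then min(a_i, R) per subsequent request
    return R + sum(min(a, R) for a in inter_request_times)

print(oracle_cost([1.0, 6.0, 2.0], R=4.0))  # 4 + 1 + 4 + 2 = 11.0
```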
Worst-case competitive ratio of Always on 1st (T): given an arbitrary worst-case request sequence, compare the online cost against the oracle.
Case T ≤ R: … [derivation steps omitted in the slides] …
The resulting bound is monotonically decreasing in the range 0 ≤ T ≤ R, and is tight when T = R (where it equals 2); the worst case is achieved with requests spaced T+ apart. A similar approach handles the case R ≤ T.
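The omitted steps for the case T ≤ R can plausibly be reconstructed as follows (my sketch, consistent with the stated conclusions, though the talk's exact derivation may differ). With requests spaced just over T apart, every request is a miss for Always on 1st, which pays R + T per request (a fetch plus a fully wasted TTL), while the oracle pays min(a_i, R) ≈ T per request:

```latex
\mathrm{ratio}(T) \;=\; \frac{R + T}{\min(T^{+},\, R)} \;\approx\; \frac{R + T}{T} \;=\; 1 + \frac{R}{T}, \qquad 0 < T \le R,
```

which is monotonically decreasing in T and equals 2 at T = R.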
Policy              Parameters   Optimal choice   Tight bound
Always 1st          T            T = R            2
Always Mth          T            T = R            M+1
Single-window Mth   T            T = R            M+1
Dual-window 2nd     W, T         W = T = R        3
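The tight bound in the first row can be checked numerically with the adversarial spacing just over T (a sketch under the same cost model: each miss costs R plus a wasted TTL of T, while the oracle pays min(a_i, R)):

```python
# Numeric check of the tight bound for Always on 1st with T = R:
# requests spaced T + eps apart always miss (the TTL just expired).

R = 5.0
T = R            # optimal choice from the table
eps = 1e-6
a = T + eps      # adversarial inter-request spacing

online_per_req = R + T        # fetch + full (wasted) TTL of storage
oracle_per_req = min(a, R)    # oracle fetches remotely, since a > R
print(online_per_req / oracle_per_req)  # -> 2.0
```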
We will see that window-based policies are good on average (across different distributions and distribution parameters).
Average cost analysis. Recall the “oracle” policy: keep in cache until (at least) the next inter-request arrival i whenever a_i < R; otherwise, do not cache.
Given the rate of new requests, each request costs a_i (storage until the next request) when cached and R (remote fetch) otherwise, so the oracle's average cost per request is E[min(a_i, R)].
… [derivation steps omitted in the slides] …
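For exponential inter-request times with rate λ, E[min(a_i, R)] has the standard closed form (1 − e^(−λR))/λ. A quick Monte Carlo sanity check of that identity (my own sketch, not the slides' derivation):

```python
import math
import random

lam, R = 0.5, 4.0
random.seed(1)

# Monte Carlo estimate of E[min(a_i, R)] for exponential inter-request times
n = 200_000
est = sum(min(random.expovariate(lam), R) for _ in range(n)) / n

closed_form = (1 - math.exp(-lam * R)) / lam  # E[min(X, R)] for X ~ Exp(lam)
print(est, closed_form)  # the two values should be close
```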
[Figure: average cost under short-tailed (exponential, Erlang, deterministic) and heavy-tailed (Pareto) inter-request distributions]
“Static baseline” policy: either “always remote” or “always local”, whichever is cheaper.
[Figure: short-tailed distributions — exponential, Erlang, deterministic]
For these short-tailed cases, the optimal static baseline is online optimal!
… in fact, for Pareto the optimal static baseline can be far from optimal
[Timeline figures for the worst-case analysis: the “no extension” case vs. the “extension” case]
Larger M reduces cost at low request rates, but at an increased peak cost (at somewhat higher rates):
- Always on Mth asymptotes at M/(M+1)
- Window on 2nd peaks at (1.052, 1.588)
- Static peaks at (1, 1.582)
… and inter-request times become increasingly deterministic (rightmost figure).
[Figure panels: Erlang k=2, Erlang k=4, deterministic — increasingly deterministic inter-request times]
Costs can be large when α → 1 (and t_m is small); e.g., note the large peak cost ratio in the leftmost figure. Window-based policies do well across distributions, suggesting that single-window on 2nd with T = R is a good choice.
[Figure panels: Pareto with α = 1.1, α = 1.25, α = 2]
[Figure panels: Pareto α = 1.25, exponential, Erlang k = 4]
… choosing whichever is better (always local or always remote) for each individual video ...
… highlighting the importance of selective insertions.
[Figure: videos grouped by popularity — top (more than 20 views), middle (4-20 views), tail (1-3 views)]
Conclusions
- Worst-case bounds for the optimal cost and competitive cost ratios of cache on Mth request insertion policies
- Average cost expressions and bounds; static policies can be bad when inter-request times are heavy tailed or request rates are not known
- Numeric and trace-based evaluations reveal insights into the relative cost performance of the policies
- A window-based cache on 2nd request policy using a single threshold optimized to minimize worst-case costs provides good average performance; this matters since the request rates of individual file objects typically are not known and can change quickly ...
Niklas Carlsson (niklas.carlsson@liu.se)