Asymptotically Exact TTL-Approximations of the Cache Replacement Algorithms LRU(m) and h-LRU - PowerPoint PPT Presentation



SLIDE 1

Asymptotically Exact TTL-Approximations of the Cache Replacement Algorithms LRU(m) and h-LRU

Nicolas Gast¹, Benny Van Houdt²

ITC 2016, September 13-15, Würzburg, Germany

¹Inria  ²University of Antwerp

Nicolas Gast – 1 / 24

SLIDE 2

Caches are everywhere

User/Application ↔ cache (fast) ↔ Data source (slow). Examples: processors, databases, CDNs.

Nicolas Gast – 2 / 24

SLIDE 3

Caching policies

Popularity-oblivious policies
◮ Cache-replacement policies³ (LRU, RANDOM)
◮ TTL caches⁴

Popularity-aware policies / learning
◮ LFU and variants⁵
◮ Optimal policies for networks of caches⁶

³Started with [King 1971, Gelenbe 1973]
⁴e.g., [Fofack et al. 2013, Berger et al. 2014]
⁵Optimizing TTL Caches under Heavy-Tailed Demands (Ferragut et al. 2016)
⁶Adaptive Caching Networks with Optimality Guarantees (Ioannidis and Yeh, 2016)

Nicolas Gast – 3 / 24


SLIDE 5

Contributions (and Outline)

1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion

Nicolas Gast – 4 / 24

SLIDE 6

Outline

1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion

Nicolas Gast – 5 / 24

SLIDE 7

The two policies generalize the LRU policy

LRU: on a hit, do nothing; on a miss, evict the LRU (least-recently-used) item.

[Cache diagram: example with a stream of requests.]

(Note: similar to RANDOM, FIFO. Assumption: all objects have the same size.)

Nicolas Gast – 6 / 24
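The eviction rule can be sketched in a few lines. This is a hedged illustration, not the paper's code; the class and trace names are made up, and a hit also refreshes the item's position, which corresponds to the timer reset in the TTL view later in the talk.

```python
from collections import OrderedDict

class LRUCache:
    """LRU replacement: on a miss, insert the item and evict the
    least-recently-used one if the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # ordered from LRU (front) to MRU (back)

    def request(self, key):
        """Process one request; return True on a hit, False on a miss."""
        if key in self.items:
            self.items.move_to_end(key)  # refresh recency on a hit
            return True
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the LRU item
        self.items[key] = True
        return False

cache = LRUCache(2)
trace = ["a", "b", "a", "c", "b"]
hits = [cache.request(k) for k in trace]  # "c" evicts "b", so the last request misses
```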


SLIDE 15

The LRU(m) and h-LRU policies

LRU(m)⁷: on a hit, exchange the requested item with the LRU item of the next list.

h-LRU⁸: on a hit, copy the requested item into the next list (and evict that list's LRU item).

[Diagram: chains of LRU lists; the leading lists form a virtual cache.]

⁷Variant of RAND(m) of [G., Van Houdt 2015]
⁸Introduced as k-LRU in [Martina et al. 2014]

Nicolas Gast – 7 / 24
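The h-LRU behaviour can be sketched as follows. This is an illustrative simplification, not the paper's code: only the highest list containing an item is tracked (the same simplification the TTL model uses later), only the last list is treated as the physical cache, and the `HLRU` name and list sizes are made up.

```python
from collections import OrderedDict

class HLRU:
    """h-LRU: a chain of h LRU lists; only the last list holds actual data.
    On a miss everywhere, insert into list 1; on a hit in list i, copy the
    item to the MRU position of list i+1 (or refresh it if i is the last)."""
    def __init__(self, sizes):
        self.lists = [OrderedDict() for _ in sizes]
        self.sizes = sizes

    def _insert(self, i, key):
        lst = self.lists[i]
        if key in lst:
            lst.move_to_end(key)        # already present: refresh recency
            return
        if len(lst) >= self.sizes[i]:
            lst.popitem(last=False)     # evict the LRU item of list i
        lst[key] = True

    def request(self, key):
        """Return True iff key is in the last (physical) list."""
        hit = key in self.lists[-1]
        where = [i for i, lst in enumerate(self.lists) if key in lst]
        if not where:
            self._insert(0, key)        # full miss: enter the first list
        else:
            i = max(where)              # highest list containing the item
            self.lists[i].move_to_end(key)
            self._insert(min(i + 1, len(self.lists) - 1), key)
        return hit

cache = HLRU([1, 1])                    # 2-LRU with one slot per list
hits = [cache.request("a") for _ in range(3)]
```

The third request is the first hit: the item must climb through the virtual list before it reaches the physical cache, which is exactly the filtering effect the talk attributes to these policies.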


SLIDE 23

Outline

1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion

Nicolas Gast – 8 / 24


SLIDE 25

In this talk: performance analysis and comparison

Qualitatively: less popular vs. popular items; it takes time to adapt.

Quantitatively: related work covers variants⁹ or less accurate approximations¹⁰. We present TTL approximations for MAP arrivals (in this talk: IRM).

⁹RAND(m) in [G., Van Houdt 2015], for which a product-form solution exists.
¹⁰Heuristic for h-LRU [Martina et al. 2014]

Nicolas Gast – 9 / 24


SLIDE 28

Pure LRU: the Che approximation

Cache: an object is evicted T time units after its timer was started (a miss starts a new timer, a hit resets it). If the requests for object k form a Poisson process of intensity λ_k, then object k is in the cache with probability

π_k(T) = 1 − e^{−λ_k T}   (TTL)

and T satisfies the fixed-point equation

∑_k π_k(T) = cache size.

Nicolas Gast – 10 / 24
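Since the expected occupancy ∑_k π_k(T) is increasing in T, this fixed point can be found by bisection. Below is a sketch under IRM with Zipf(0.8) popularities; the function name and all parameter values are illustrative, not from the paper.

```python
import numpy as np

# Che approximation for pure LRU under IRM:
# solve sum_k (1 - exp(-lambda_k * T)) == cache_size for T by bisection.

def che_characteristic_time(lam, cache_size, tol=1e-10):
    """Find T such that the expected occupancy equals cache_size."""
    occ = lambda T: np.sum(1.0 - np.exp(-lam * T))
    lo, hi = 0.0, 1.0
    while occ(hi) < cache_size:      # grow the bracket until it covers the root
        hi *= 2.0
    while hi - lo > tol * hi:        # occ is increasing, so bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occ(mid) < cache_size else (lo, mid)
    return 0.5 * (lo + hi)

n = 1000
lam = 1.0 / np.arange(1, n + 1) ** 0.8   # Zipf(0.8) popularities
lam /= lam.sum()                          # normalize to total rate 1
T = che_characteristic_time(lam, cache_size=100)
hit_rate = float(np.sum(lam * (1.0 - np.exp(-lam * T))))  # overall hit probability
```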


SLIDE 30

The TTL-approximation for LRU(m)

Caches 1, 2, 3: eviction from list i occurs T_i time units after the timer was started (a miss starts a new timer, a hit resets it). If the requests for object k form a Poisson process of intensity λ_k, object k is in list i with probability

π_{k,i}(T₁, …, T_h) ∝ ∏_{j=1}^{i} (e^{λ_k T_j} − 1),

where T₁, …, T_h satisfy, for each list i,

∑_k π_{k,i}(T₁, …, T_h) = size of list i.

Nicolas Gast – 11 / 24
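Given timers T₁, …, T_h, these stationary probabilities can be evaluated directly by normalizing over the states i = 0, …, h, where i = 0 means "not in any list" (weight 1). This is an illustrative sketch with made-up rates and timers, omitting the fixed-point solve for the T_i:

```python
import numpy as np

# TTL approximation for LRU(m): object k sits in list i with probability
# proportional to prod_{j<=i} (e^{lam_k T_j} - 1); state i = 0 (weight 1)
# means the object is not cached.

def lru_m_occupancy(lam, T):
    """Return pi[k, i] for i = 0..h (column 0: not cached)."""
    lam = np.asarray(lam, dtype=float)[:, None]   # shape (n, 1)
    factors = np.expm1(lam * np.asarray(T))       # e^{lam_k T_j} - 1, shape (n, h)
    weights = np.cumprod(factors, axis=1)         # products over j = 1..i
    weights = np.hstack([np.ones((lam.shape[0], 1)), weights])
    return weights / weights.sum(axis=1, keepdims=True)

lam = np.array([0.5, 0.3, 0.2])                   # illustrative popularities
pi = lru_m_occupancy(lam, T=[1.0, 2.0])           # illustrative timers
list_sizes = pi[:, 1:].sum(axis=0)                # expected items per list
```

As expected from the formula, the most popular object is the most likely to occupy the deepest list.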

SLIDE 31

The TTL-approximation for h-LRU

[Diagram: Cache 1 → Cache 2 → Cache 3, with copies on hits.] First idea: track the set of lists in which an object is [Martina et al. 14]. Problem: the number of states is 2^h.

Nicolas Gast – 12 / 24

SLIDE 32

The TTL-approximation for h-LRU

Caches 1, 2, 3: eviction after T′₁, T′₂, T′₃ (a miss starts a new timer, a hit resets it). Solution: change the model to track only the greatest ID of the lists in which the item appears, assuming T₁ ≤ T₂ ≤ … ≤ T_h. The resulting TTL model can be solved exactly (see paper): once T₁, …, T_k have been computed, T_{k+1} satisfies a fixed-point equation.

Nicolas Gast – 12 / 24

SLIDE 33

Outline

1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion

Nicolas Gast – 13 / 24


SLIDE 35

Is the approximation accurate?

Example (10-LRU, with a cache size n/10 and a Zipf popularity):

            Simulation    Our approximation     [Martina et al. 14]
n = 1000    0.51506       0.51552 (+0.088%)     0.50796 (-1.380%)
n = 10000   0.56124       0.56130 (+0.012%)     0.55447 (-1.206%)

Numerically, TTL approximations have proven to be very accurate [Dan and Towsley 1990, Martina et al. 14, Che 2002]. Theoretical guarantees exist for LRU [Fricker et al. 12]. We prove that our approximation is asymptotically exact.

Nicolas Gast – 14 / 24


SLIDE 37

Asymptotic exactness of the approximation

[Figure: probability in cache vs. number of requests (up to 10000), comparing simulation, the TTL approximation, and the ODE approximation, for 1 list (200) and 4 lists (50/50/50/50). Popularities of objects change every 2000 steps.]

We develop an ODE approximation; we show that it is accurate; this ODE has the same fixed point as the TTL approximation.

Nicolas Gast – 15 / 24

SLIDE 38

Convergence result and idea of the proof

Idea of the proof: we study the empirical distribution of the request dates, and use stochastic approximation to prove convergence to an infinite-dimensional deterministic ODE.

Nicolas Gast – 16 / 24

SLIDE 39

Outline

1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion

Nicolas Gast – 17 / 24

SLIDE 40

Qualitative remarks

Less popular vs. popular items.

In general, adding more lists:
◮ Improves the steady-state performanceᵃ,
◮ Decreases the response time.

ᵃThis is not true in full generality, even for IRM. The same counter-example as in [G., Van Houdt 2015] works.

Nicolas Gast – 18 / 24


SLIDE 42

Quantitative remark 1: on synthetic traces, LRU(m,m) and 2-LRU perform similarly

LRU(m,m): exchange. 2-LRU: copy.

[Figure: hit probability vs. cache size (50-200); Zipf α = 0.8, n = 1000, correlation in IRTs; curves for LRU, 2-LRU, and LRU(m,m) under Hypo2 and Hypo10 arrivals.]

Nicolas Gast – 19 / 24

SLIDE 43

Quantitative remark 2: LRU is insensitive to correlations between request times

[Figure: hit probability vs. lag-1 autocorrelation ρ₁ (0.05 to 0.45); Zipf α = 0.8, n = 1000, m = 100; curves for LRU, 2-LRU, LRU(m,m), LRU(m/2,m/2).]

Nicolas Gast – 20 / 24



SLIDE 46

Quantitative remark 3: we verified on a web trace¹¹ that having a virtual list seems to improve performance.

[Figure: hit probability relative to LRU vs. cache size m (64 to 32768); curves for LRU, LRU(m/2,m/2), LRU(m/3,m/3,m/3), LRU(m,m), LRU(m,m/2,m/2).]

¹¹[Bianchi et al. 2013]

Nicolas Gast – 21 / 24

SLIDE 47

Outline

1. Two cache replacement policies
2. Performance analysis via TTL approximation
3. Asymptotic exactness of the approximation
4. Comparison between LRU, LRU(m) and h-LRU
5. Conclusion

Nicolas Gast – 22 / 24

SLIDE 48

Conclusion

We characterize list-based cache replacement policies and provide TTL approximations:
◮ New or improved approximations
◮ Exact for large caches

Theoretical interest:
◮ Prove the equivalence between TTL caches and cache replacement policies
◮ Show that these approximations work for MAP arrivals

Practical applications:
◮ Comparison of LRU(m) and h-LRU
◮ Our results can be used to tune such algorithms

Nicolas Gast – 23 / 24

SLIDE 49

Questions or comments?

http://mescal.imag.fr/membres/nicolas.gast
nicolas.gast@inria.fr

Supported by the EU project QUANTICOL (http://www.quanticol.eu)

Nicolas Gast – 24 / 24

SLIDE 50

Hyperexponential

Inter-request times are hyperexponential: with probability z/(1+z) the rate is z, and with probability 1/(1+z) the rate is 1/z, so the mean inter-request time is 1. The squared coefficient of variation is

CV² = (z/(1+z)) · 2/z² + (1/(1+z)) · 2z² − 1.

[Diagram: two-phase arrival process with a correlation parameter q and transition rates qz/(1+z), q/(1+z), z − qz/(1+z), 1/z − q/(1+z).]

Nicolas Gast – 1 / 1
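A quick numerical check of this backup slide, under my reading of the garbled formula (the mixture of rates z and 1/z with probabilities z/(1+z) and 1/(1+z) is an assumption): the mean inter-request time is 1 for every z, and z tunes the variability.

```python
# Moments of the hyperexponential mixture: rate z with probability z/(1+z),
# rate 1/z with probability 1/(1+z). An Exp(rate r) variable has mean 1/r
# and second moment 2/r^2.

def hyperexp_moments(z):
    """Return (mean, squared coefficient of variation) of the mixture."""
    p = z / (1 + z)
    mean = p / z + (1 - p) * z
    m2 = p * 2 / z**2 + (1 - p) * 2 * z**2
    return mean, m2 / mean**2 - 1

# z = 1 degenerates to a plain exponential (mean 1, CV^2 = 1);
# larger z gives more variable inter-request times.
moments = {z: hyperexp_moments(z) for z in (1.0, 2.0, 10.0)}
```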