  1. Scaling Your Cache & Caching at Scale Alex Miller @puredanger

  2. Mission • Why does caching work? • What’s hard about caching? • How do we make choices as we design a caching architecture? • How do we test a cache for performance?

  3. What is caching?

  4. Lots of data

  5. Memory Hierarchy (clock cycles to access):

      Register        1
      L1 cache        3
      L2 cache        15
      RAM             200
      Disk            10,000,000
      Remote disk     1,000,000,000

  6. Facts of Life. The hierarchy runs from fast, small, and expensive (registers) down to slow, big, and cheap (remote disk): Register → L1 Cache → L2 Cache → Main Memory → Local Disk → Remote Disk.

  7. Caching to the rescue!

  8. Temporal Locality (illustration): an access stream replayed against an empty cache; nothing has been seen before, so hits start at 0%.

  9. Temporal Locality (illustration, continued): as the stream revisits recently used items, the cache serves them and the hit rate climbs to 65%.

  10. Non-uniform distribution (chart): web page hits ordered by rank, plotting pageviews per rank against the cumulative % of total hits; a small number of top-ranked pages account for most views.

  11. Temporal locality + Non-uniform distribution

  12. Worked example: 17,000 pageviews, average load = 250 ms. Cache the 17 hottest pages, covering 80% of views, with a cached page load of 10 ms. New average load = 58 ms: you trade memory for a latency reduction.
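
The new average follows directly from the hit rate; checking the slide's arithmetic:

    new avg = 0.8 × 10 ms + 0.2 × 250 ms = 8 ms + 50 ms = 58 ms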

  13. The hidden benefit: caching reduces database load. (Chart: requests absorbed by memory vs. the database, with the line of over-provisioning marked.)

  14. A brief aside... • What is Ehcache? • What is Terracotta?

  15. Ehcache Example

      CacheManager manager = new CacheManager();
      Ehcache cache = manager.getEhcache("employees");
      cache.put(new Element(employee.getId(), employee));
      Element element = cache.get(employee.getId());

      <cache name="employees"
             maxElementsInMemory="1000"
             memoryStoreEvictionPolicy="LRU"
             eternal="false"
             timeToIdleSeconds="600"
             timeToLiveSeconds="3600"
             overflowToDisk="false"/>
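
A common way to use this API is the cache-aside pattern: try the cache first and fall back to the data source on a miss. A minimal sketch; employeeDao is a hypothetical backing store, not part of the slides:

    // Cache-aside read: serve from cache on a hit, otherwise load and populate.
    Element hit = cache.get(id);                   // id of the employee we want
    Employee result;
    if (hit != null) {
        result = (Employee) hit.getObjectValue();  // cache hit
    } else {
        result = employeeDao.findById(id);         // assumed data-source call
        cache.put(new Element(id, result));        // make it a hit next time
    }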

  16. Terracotta (diagram): app nodes on either side connect to a central pair of Terracotta servers.

  17. But things are not always so simple...

  18. Pain of Large Data Sets • How do I choose which elements stay in memory and which go to disk? • How do I choose which elements to evict when I have too many? • How do I balance cache size against other memory uses?

  19. Eviction When cache memory is full, what do I do? • Delete - Evict elements • Overflow to disk - Move to slower, bigger storage • Delete local - But keep remote data

  20. Eviction in Ehcache. Evict with a “Least Recently Used” policy:

      <cache name="employees"
             maxElementsInMemory="1000"
             memoryStoreEvictionPolicy="LRU"
             eternal="false"
             timeToIdleSeconds="600"
             timeToLiveSeconds="3600"
             overflowToDisk="false"/>
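
To make the LRU policy concrete, here is a minimal in-memory LRU cache built on java.util.LinkedHashMap's access ordering. It illustrates the eviction rule only and is not Ehcache's implementation:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Evicts the least-recently-used entry once maxEntries is exceeded.
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        LruCache(int maxEntries) {
            super(16, 0.75f, true);     // accessOrder=true: get() refreshes recency
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // drop the least recently used entry
        }
    }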

  21. Spill to Disk in Ehcache

      <diskStore path="java.io.tmpdir"/>

      <cache name="employees"
             maxElementsInMemory="1000"
             memoryStoreEvictionPolicy="LRU"
             eternal="false"
             timeToIdleSeconds="600"
             timeToLiveSeconds="3600"
             overflowToDisk="true"
             maxElementsOnDisk="1000000"
             diskExpiryThreadIntervalSeconds="120"
             diskSpoolBufferSizeMB="30"/>

  22. Terracotta Clustering. Terracotta configuration:

      <terracottaConfig url="server1:9510,server2:9510"/>

      <cache name="employees"
             maxElementsInMemory="1000"
             memoryStoreEvictionPolicy="LRU"
             eternal="false"
             timeToIdleSeconds="600"
             timeToLiveSeconds="3600"
             overflowToDisk="false">
        <terracotta/>
      </cache>

  23. Pain of Stale Data • How tolerant am I of seeing values changed on the underlying data source? • How tolerant am I of seeing values changed by another node?

  24. Expiration (timeline, ticks 0 through 9): TTI=4 expires an entry once it has gone 4 time units without access; TTL=4 expires it 4 time units after creation regardless of access.
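
The two clocks combine with OR: an entry expires when either limit is exceeded. A sketch of that check, mirroring the semantics rather than Ehcache internals (zero meaning "no limit" follows Ehcache's convention):

    // Expired when idle past TTI or alive past TTL; 0 disables a limit.
    boolean isExpired(long now, long createdAt, long lastAccessedAt,
                      long ttiMillis, long ttlMillis) {
        boolean idledOut = ttiMillis > 0 && now - lastAccessedAt > ttiMillis;
        boolean livedOut = ttlMillis > 0 && now - createdAt > ttlMillis;
        return idledOut || livedOut;
    }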

  25. TTI and TTL in Ehcache

      <cache name="employees"
             maxElementsInMemory="1000"
             memoryStoreEvictionPolicy="LRU"
             eternal="false"
             timeToIdleSeconds="600"
             timeToLiveSeconds="3600"
             overflowToDisk="false"/>
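
The same settings can also be applied programmatically. A sketch using the classic net.sf.ehcache.Cache constructor (name, max in-memory elements, overflow flag, eternal flag, TTL seconds, TTI seconds):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;

    // Programmatic equivalent of the XML above.
    CacheManager manager = new CacheManager();
    Cache employees = new Cache("employees", 1000,
                                false,  // overflowToDisk
                                false,  // eternal
                                3600,   // timeToLiveSeconds
                                600);   // timeToIdleSeconds
    manager.addCache(employees);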

  26. Replication in Ehcache

      <cacheManagerPeerProviderFactory
          class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
          properties="hostName=fully_qualified_hostname_or_ip,
                      peerDiscovery=automatic,
                      multicastGroupAddress=230.0.0.1,
                      multicastGroupPort=4446,
                      timeToLive=32"/>

      <cache name="employees" ...>
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="replicateAsynchronously=true,
                        replicatePuts=true,
                        replicatePutsViaCopy=false,
                        replicateUpdates=true,
                        replicateUpdatesViaCopy=true,
                        replicateRemovals=true,
                        asynchronousReplicationIntervalMillis=1000"/>
      </cache>

  27. Terracotta Clustering. Still use TTI and TTL to manage stale data between the cache and the data source. Coherent by default, but reads can be relaxed with coherentReads="false".

  28. Pain of Loading • How do I pre-load the cache on startup? • How do I avoid re-loading the data on every node?
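
The next slides answer this declaratively; the blunt alternative is an explicit warm-up pass before a node takes traffic. A minimal sketch, with employeeDao again a hypothetical data source:

    // Warm the cache from the data source at startup.
    for (Employee e : employeeDao.findAll()) {     // employeeDao is assumed
        cache.put(new Element(e.getId(), e));
    }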

  29. Persistent Disk Store

      <diskStore path="java.io.tmpdir"/>

      <cache name="employees"
             maxElementsInMemory="1000"
             memoryStoreEvictionPolicy="LRU"
             eternal="false"
             timeToIdleSeconds="600"
             timeToLiveSeconds="3600"
             overflowToDisk="true"
             maxElementsOnDisk="1000000"
             diskExpiryThreadIntervalSeconds="120"
             diskSpoolBufferSizeMB="30"
             diskPersistent="true"/>

  30. Bootstrap Cache Loader. Bootstrap a new cache node from a peer:

      <bootstrapCacheLoaderFactory
          class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
          properties="bootstrapAsynchronously=true,
                      maximumChunkSizeBytes=5000000"
          propertySeparator=","/>

      On startup, a background thread pulls the existing cache data from another peer.

  31. Terracotta Persistence. Nothing needed beyond setting up Terracotta clustering. Terracotta automatically bootstraps the cache key set on startup and cache values on demand.

  32. Pain of Duplication • How do I get failover capability while avoiding excessive duplication of data?

  33. Partitioning + Terracotta Virtual Memory • Each node (mostly) holds data it has seen • Use load balancer to get app-level partitioning • Use fine-grained locking to get concurrency • Use memory flush/fault to handle memory overflow and availability • Use causal ordering to guarantee coherency

  34. Scaling Your Cache

  35. Scalability Continuum (chart): deployment options arranged by scale, from 1 JVM up to lots of JVMs. Ehcache OSS, Ehcache disk store, Ehcache RMI, Ehcache DX, Ehcache EX and FX, and Terracotta FX are compared on causal ordering (yes/no), number of JVMs, runtime, and management and control, with scale increasing to the right.

  36. Caching at Scale

  37. Know Your Use Case • Is your data partitioned (sessions) or not (reference data)? • Do you have a hot set or uniform access distribution? • Do you have a very large data set? • Do you have a high write rate (50%)? • How much data consistency do you need?

  38. Types of caches

      Name                     Communication           Advantage
      Broadcast invalidation   multicast               low latency
      Replicated               multicast               offloads db
      Datagrid                 point-to-point          scalable
      Distributed              2-tier point-to-point   all of the above

  39. Common Data Patterns

      I/O pattern        Locality   Hot set   Rate of change
      Catalog/customer   low        low       low
      Inventory          high       high      high
      Conversations      high       high      low

      • Catalogs/customers: warm all the data into cache; high TTL
      • Inventory: fine-grained locking; write-behind to DB
      • Conversations: sticky load balancer; disconnect conversations from DB

  40. Build a Test • As realistic as possible • Use real data (or good fake data) • Verify test does what you think • Ideal test run is 15-20 minutes

  41. Cache Warming

  42. Cache Warming • Explicitly record cache warming or loading as a testing phase • Possibly multiple warming phases

  43. Lots o’ Knobs

  44. Things to Change • Cache size • Read / write / other mix • Key distribution • Hot set • Key / value size and structure • # of nodes

  45. Lots o’ Gauges

  46. Things to Measure • Application throughput (TPS) • Application latency
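
Throughput numbers like TPS fall out of a simple timed loop. A minimal harness sketch, where doCacheOp() is a hypothetical stand-in for the read/write mix under test:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    class ThroughputProbe {
        static void doCacheOp() { /* the cache operation mix under test (assumed) */ }

        public static void main(String[] args) throws InterruptedException {
            int threads = 16;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            AtomicLong ops = new AtomicLong();
            long start = System.nanoTime();
            for (int i = 0; i < threads; i++) {
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        doCacheOp();
                        ops.incrementAndGet();
                    }
                });
            }
            TimeUnit.MINUTES.sleep(15);   // matches the 15-20 minute run guideline
            pool.shutdownNow();
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("TPS: %.0f%n", ops.get() / seconds);
        }
    }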

  47. Benchmark and Tune • Create a baseline • Run and modify parameters • Test, observe, hypothesize, verify • Keep a run log

  48. Bottleneck Analysis

  49. Pushing It
     • If CPUs are not all busy: can you push more load? Waiting for I/O or resources?
     • If CPUs are all busy: latency analysis

  50. I/O Waiting
     • Database: connection pooling, database tuning, lazy connections
     • Remote services

  51. Locking and Concurrency (diagram): threads performing get and put operations against a 16-entry key/value store, each acquiring one of a set of locks.

  52. Locking and Concurrency (diagram, second frame): the same threads and locks with a different mix of gets and puts in flight.
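
The diagrams show lock striping: each key hashes to one of a small set of locks, so operations on different stripes can proceed concurrently. A minimal sketch of the idea, illustrative only and not Ehcache's or Terracotta's actual implementation:

    import java.util.HashMap;
    import java.util.Map;

    // Striped-lock map: keys hash onto one of N locks, each guarding its own
    // segment, so puts/gets on different stripes do not block each other.
    class StripedMap<K, V> {
        private static final int STRIPES = 16;
        private final Object[] locks = new Object[STRIPES];
        private final Map<K, V>[] segments;

        @SuppressWarnings("unchecked")
        StripedMap() {
            segments = new Map[STRIPES];
            for (int i = 0; i < STRIPES; i++) {
                locks[i] = new Object();
                segments[i] = new HashMap<>();
            }
        }

        private int stripe(K key) {
            return (key.hashCode() & 0x7fffffff) % STRIPES;  // non-negative index
        }

        V get(K key) {
            int s = stripe(key);
            synchronized (locks[s]) { return segments[s].get(key); }
        }

        void put(K key, V value) {
            int s = stripe(key);
            synchronized (locks[s]) { segments[s].put(key, value); }
        }
    }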
