
Large-Scale Web Caching and Content Delivery
Jeff Chase
CPS 212: Distributed Information Systems, Fall 2000


Caching for a Better Web
• Performance is a major concern in the Web.
• Proxy caching is the most widely used method to improve Web performance.
  ♦ Duplicate requests to the same document are served from the cache.
  ♦ Hits reduce latency, network utilization, and server load.
  ♦ Misses increase latency (extra hops through the proxy).
[Diagram: clients → proxy cache → Internet → servers; hits are answered at the proxy, misses are forwarded upstream.]
[Source: Geoff Voelker]

Cache Effectiveness
• Previous work has shown that hit rate increases with client population size [Duska et al. 97, Breslau et al. 98].
• However, single proxy caches have practical limits:
  ♦ load, network topology, organizational constraints.
• One technique to scale the client population is to have proxy caches cooperate.
[Source: Geoff Voelker]

Cooperative Web Proxy Caching
• Sharing and/or coordination of cache state among multiple Web proxy cache nodes.
• Effectiveness of proxy cooperation depends on:
  ♦ inter-proxy communication distance;
  ♦ size of the client population served;
  ♦ proxy utilization and load balance.
[Source: Geoff Voelker]

Hierarchical Caches
• Idea: place caches at exchange or switching points in the network, and cache at each level of the hierarchy.
• Resolve misses through the parent.
[Diagram: origin Web site (e.g., U.S. Congress) reached through the INTERNET; a tree of caches with clients at the leaves; “upstream” points toward the origin, “downstream” toward the clients.]

Content-Sharing Among Peers
• Idea: since siblings are “close” in the network, allow them to share their cache contents directly. (A sketch of sibling probing follows this page.)
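The sketch below illustrates the sibling-sharing idea in simplified form: check the local cache, probe the siblings, and fall back to the origin only if everyone misses. It is not from the slides; `Cache`, `fetch`, and `fetch_from_origin` are hypothetical names, and in-process lookups stand in for the UDP query/response messages a real ICP deployment would use.

```python
# A minimal sketch of sibling content-sharing (ICP-like), assuming
# in-process caches stand in for UDP probes. All names are illustrative.

class Cache:
    def __init__(self, name):
        self.name = name
        self.store = {}               # url -> object

    def lookup(self, url):
        return self.store.get(url)

    def insert(self, url, obj):
        self.store[url] = obj

def fetch(url, local, siblings, fetch_from_origin):
    """Serve url: local hit, else first sibling hit, else origin fetch."""
    obj = local.lookup(url)
    if obj is not None:
        return obj, "local hit"
    # Probe the siblings; a real ICP node multicasts the probe and takes
    # the first HIT reply (or waits for all MISS replies).
    for sib in siblings:
        obj = sib.lookup(url)
        if obj is not None:
            local.insert(url, obj)    # copy the object onto the hit path
            return obj, "sibling hit at " + sib.name
    obj = fetch_from_origin(url)      # miss everywhere: extra latency
    local.insert(url, obj)
    return obj, "miss (origin fetch)"

a, b = Cache("A"), Cache("B")
b.insert("http://example.com/", "<html>...</html>")
print(fetch("http://example.com/", a, [b], lambda u: "<html>...</html>")[1])
# -> "sibling hit at B"
```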

Harvest-Style ICP Hierarchies
• Idea: multicast probes within each “family”: pick the first hit response, or wait for all miss responses.
• Examples: Harvest [Schwartz96], Squid (NLANR), NetApp NetCache.
[Diagram: a cache hierarchy below the INTERNET; object requests and responses flow up and down the tree, queries (probes) and query responses flow among siblings; client at the bottom.]

Issues for Cache Hierarchies
• Query traffic: with ICP, query traffic stays within “families” (size n).
  ♦ Inter-sibling ICP traffic (and aggregate overhead) is quadratic in n.
  ♦ Query-handling overhead grows linearly with n.
• Miss latency: an object passes through every cache on the path from origin to client; deeper hierarchies scale better, but impose higher latencies.
• Storage: a recently fetched object is replicated at every level of the tree.
• Effectiveness: interior cache benefits are limited by capacity if objects are not likely to live there long (e.g., under LRU replacement).

Hashing: Cache Array Routing Protocol (CARP)
• Used in Microsoft Proxy Server.
• A hash function maps each URL to one proxy in the array (e.g., “GET www.hotsite.com” hashes to the node responsible for its partition: a-f, g-p, q-u, v-z).
• Advantages:
  1. single-hop request resolution
  2. no redundant caching of objects
  3. allows client-side implementation
  4. no new cache-cache protocols
  5. reconfigurable

Issues for CARP
• No way to exploit network locality at each level; e.g., CARP relies on local browser caches to absorb repeat requests.
• Load balancing: the hash can be balanced and/or weighted with a load factor reflecting the capacity/power of each server.
• Must rebalance on server failures:
  ♦ reassigns (1/n)th of the cached URLs for array size n;
  ♦ URLs from the failed server are evenly distributed among the remaining n-1 servers.
• Miss penalty and cost to compute the hash: in CARP, hash cost is linear in n; hash the URL with each node and pick the “winner”. (A sketch of this scheme follows this page.)

Directory-Based: Summary Cache for ICP
• Idea: each caching server replicates the cache directory (“summary”) of each of its peers (e.g., siblings) [Cao et al., SIGCOMM 98].
• Summary caches at each level of the hierarchy reduce inter-sibling miss queries by 95+%.
• Query a peer only if its local summary indicates a hit.
• To reduce storage overhead for summaries, implement them compactly using Bloom filters. (A minimal Bloom-filter sketch appears after the CARP sketch below.)
  ♦ May yield false hits (e.g., 1%), but never false misses.
  ♦ Each summary is three orders of magnitude smaller than the cache itself, and can be updated by multicasting just the flipped bits.

A Summary-ICP Hierarchy
• e.g., Squid configured to use cache digests.
[Diagram: as in the ICP hierarchy, but a sibling is queried only when its summary predicts a hit; object requests and responses otherwise go straight up the tree.]
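A minimal sketch of the CARP-style hashing described above, assuming the common “highest random weight” formulation: hash the URL against every node and route to the winner. `score` and `route` are hypothetical names, and MD5 is an arbitrary stand-in; the real CARP specification defines its own hash mix and adds per-server load factors for weighting.

```python
import hashlib

def score(url, node):
    # Combined hash of (node, url); the CARP spec defines its own mixing.
    data = (node + "|" + url).encode()
    return int.from_bytes(hashlib.md5(data).digest()[:8], "big")

def route(url, nodes):
    """Route url to the node with the highest combined hash.

    Cost is linear in n (one hash per node). If a node fails, only the
    URLs it "won" move, spread evenly over the remaining n-1 nodes --
    the (1/n)th reassignment noted above."""
    return max(nodes, key=lambda node: score(url, node))

nodes = ["proxy1", "proxy2", "proxy3", "proxy4"]
print(route("www.hotsite.com", nodes))       # single-hop resolution
print(route("www.hotsite.com", nodes[:3]))   # routing after proxy4 fails
```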

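To make the summary idea concrete, here is a minimal Bloom filter sketch. It is not the Summary Cache implementation; the filter size and the SHA-256-based indexing are illustrative choices. It does show the key property claimed above: a summary can report false hits, but never a false miss for a key that was added.

```python
import hashlib

class BloomFilter:
    """Compact set summary: k hash positions are set/tested per key."""
    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)
        # A peer could keep remote summaries fresh by multicasting just
        # the indices of newly flipped bits, as the slide suggests.

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

summary = BloomFilter()
summary.add("http://www.cs.duke.edu/")
assert "http://www.cs.duke.edu/" in summary   # added keys always test positive
# An unrelated URL may still test positive (a false hit costing one wasted
# sibling query), but an added key can never test negative (no false miss).
```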
Issues for Directory-Based Caches
• Servers update their summaries lazily:
  ♦ update when “new” entries exceed some threshold percentage;
  ♦ update delays may yield false hits and/or false misses.
• Other ways to reduce directory size?
  ♦ Vicinity cache [Gadde/Chase/Rabinovich98]
  ♦ Subsetting by popularity [Gadde/Chase/Rabinovich97]
• What are the limits to scalability?
  ♦ If we grow the number of peers?
  ♦ If we grow the cache sizes?

On the Scale and Performance....
• [Wolman/Voelker/.../Levy99] is a key paper in this area over the last few years.
• First negative result in SOSP (?).
• Illustrates tools for evaluating wide-area systems: simulation and analytical modeling.
• Illustrates fundamental limits of caching: the benefits are dictated by reference patterns and the rate at which objects change; forget about capacity, and assume ideal cooperation.
• Ties together previous work in the field: wide-area cooperative caching strategies, analytical models for Web workloads, and the best traces.

A Multi-Organization Trace
• The University of Washington (UW) is a large and diverse client population: approximately 50K people.
• The UW client population contains 200 independent campus organizations:
  ♦ Museums of Art and Natural History;
  ♦ Schools of Medicine, Dentistry, and Nursing;
  ♦ Departments of Computer Science, History, and Music.
• A trace of UW is effectively a simultaneous trace of 200 diverse client organizations.
• Key: clients were tagged with their organization in the trace.
[Source: Geoff Voelker]

UW Trace Characteristics
  Duration:           7 days
  HTTP objects:       18.4 million
  HTTP requests:      82.8 million
  Avg. requests/sec:  137
  Total bytes:        677 GB
  Servers:            244,211
  Clients:            22,984
[Source: Geoff Voelker]

Cooperation Across Organizations
• Treat each UW organization as an independent “company”.
• Evaluate cooperative caching among these organizations: how much Web document reuse is there among them?
  ♦ Place a proxy cache in front of each organization.
  ♦ What is the benefit of cooperative caching among these 200 proxies?
[Source: Geoff Voelker]

Ideal Hit Rates for UW Proxies
• Ideal hit rate: infinite storage; ignore cacheability and expirations.
• Average ideal local hit rate: 43%. (A sketch of this computation follows this page.)
[Source: Geoff Voelker]
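Under infinite storage, the “ideal” hit rate reduces to the fraction of requests whose URL has been requested before. A sketch under that assumption; the (org, url) record format is hypothetical, not the UW trace format:

```python
from collections import defaultdict

def ideal_hit_rates(trace):
    """trace: iterable of (org, url) requests in time order.

    Returns (local, cooperative) ideal hit rates over all requests:
    infinite storage, cacheability and expirations ignored. A request
    is a local hit if its own org has fetched the URL before, and a
    cooperative hit if any org has."""
    seen_by_org = defaultdict(set)
    seen_anywhere = set()
    local_hits = coop_hits = total = 0
    for org, url in trace:
        total += 1
        if url in seen_by_org[org]:
            local_hits += 1
        if url in seen_anywhere:
            coop_hits += 1            # served by some proxy in the group
        seen_by_org[org].add(url)
        seen_anywhere.add(url)
    return local_hits / total, coop_hits / total

trace = [("cs", "/a"), ("music", "/a"), ("cs", "/a"), ("cs", "/b")]
print(ideal_hit_rates(trace))         # (0.25, 0.5): cooperation helps
```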

Ideal Hit Rates for UW Proxies, Continued
• Ideal hit rate: infinite storage; ignore cacheability and expirations.
• The average ideal hit rate increases from 43% (local) to 69% with cooperative caching.
[Source: Geoff Voelker]

Sharing Due to Affiliation
• Explore the benefits of perfect cooperation rather than a particular algorithm.
• UW organizational sharing vs. random organizations: the difference in weighted averages across all orgs is ~5%.
[Source: Geoff Voelker]

Cacheable Hit Rates for UW Proxies
• Cacheable hit rate: same as ideal, but does not ignore cacheability.
• Cacheable hit rates are much lower than ideal (the average is 20%).
• The average cacheable hit rate increases from 20% to 41% with (perfect) cooperative caching.
[Source: Geoff Voelker]

Scaling Cooperative Caching
• Organizations of this size can benefit significantly from cooperative caching.
• But we don’t need cooperative caching to handle the entire UW population:
  ♦ a single proxy (or a small cluster) can handle this entire population;
  ♦ there is no technical reason to use cooperative caching in this environment;
  ♦ in the real world, decisions of proxy placement are often political or geographical.
• How effective is cooperative caching at scales where a single cache cannot be used?
[Source: Geoff Voelker]

Hit Rate vs. Client Population
• The curves are similar to other studies [e.g., Duska97, Breslau98].
• Small organizations: significant increase in hit rate as the client population grows; this is why cooperative caching is effective for UW.
• Large organizations: only a marginal increase in hit rate as the client population grows.
[Source: Geoff Voelker]

In the Paper...
1. Do we believe this? What are some possible sources of error in this tracing/simulation study? What impact might they have?
2. Why are “ideal” hit rates so much higher for the MS trace, while the cacheable hit rates are the same? What is the correlation between sharing and cacheability?
3. Why report byte hit rates as well as object hit rates? Is the difference significant? What does this tell us about reference patterns? (A sketch contrasting the two metrics follows this list.)
4. How can it be that the byte hit rate increases with population, while the bandwidth consumed grows linearly?
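As background for question 3: the object hit rate weights every request equally, while the byte hit rate weights each request by the object’s size, so the two diverge whenever hits skew toward small (or large) objects. A minimal sketch; the trace tuples and sizes are made up for illustration:

```python
def hit_rates(trace):
    """trace: iterable of (url, size_bytes) requests in time order.
    Returns (object hit rate, byte hit rate) for one infinite cache."""
    seen = set()
    hits = total = 0
    hit_bytes = total_bytes = 0
    for url, size in trace:
        total += 1
        total_bytes += size
        if url in seen:
            hits += 1
            hit_bytes += size         # bytes a hit keeps off the wide area
        seen.add(url)
    return hits / total, hit_bytes / total_bytes

# Small popular object vs. one large object fetched once:
trace = [("/icon.gif", 1_000)] * 10 + [("/video.mpg", 1_000_000)]
obj_rate, byte_rate = hit_rates(trace)
print(f"object hit rate {obj_rate:.0%}, byte hit rate {byte_rate:.1%}")
# -> object hit rate 82%, byte hit rate 0.9%
```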
