Interposed Proportional Sharing for a Storage Service Utility




  1. Interposed Proportional Sharing for a Storage Service Utility
Wei Jin, Jasleen Kaur (UNC-Chapel Hill), Jeff Chase (Duke University)

  2. Resource Sharing in Utilities
Clients send request flows to a shared service (aggregation, e.g., a storage array). Benefits: resource efficiency, adaptivity, surge protection, robustness, "pay as you grow", economy of scale.
• Resource sharing offers important benefits.
• But sharing must be "fair" to protect users.
• Shared services often have contractual performance targets for groups of clients or requests: Service Level Agreements or SLAs.

  3. Goals
• Performance isolation
  – Localize the damage from unbudgeted demand surges.
• Differentiated service quality
  – Offer predictable, configurable performance (e.g., mean response time) for stable request streams.
• Non-invasive
  – External control of a "black box" or "black cloud"
  – Generalize to a range of services
  – No changes to service structure or implementation

  4. Interposed Request Scheduling I
A scheduler (e.g., in a router) sits between the clients and the shared service (e.g., a storage array).
  – Intercept and throttle or reorder requests on the path between the clients and the service [e.g., Lumb03].
  – Build the scheduler into network switching components, or into the clients (e.g., servers in a utility data center).
  – Manage request traffic rather than request execution.

  5. Alternative Approaches
• Extend the scheduler for each resource in a service.
  – Cello, Xen, VMware, Resource Containers, etc.
  – Precise but invasive, and the schedulers must coordinate to manage sharing of an aggregate resource (server, array).
• Facade [Lumb03] uses Earliest Deadline First in an interposed request scheduler to meet response time targets.
  – Does not provide isolation, though priority can help.
  – Can admission control make isolation unnecessary?
• SLEDS [Chambliss03] is a per-client network storage controller using leaky-bucket rate throttling.
  – Flows cannot exceed their configured rate even if resources are idle.

  6. Proportional Sharing
• Each flow f is assigned a weight φ_f.
• Allocate resources among active flows in proportion to their weights.
  – Work-conserving: allocate surplus proportionally.
• Fairness
  – Lag is the difference in weighted work done on behalf of a pair of flows.
  – Prove a constant worst-case bound on lag for any pair of flows that are active over any interval.
  – "Use it or lose it": no penalty for consuming surplus resources.
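The proportional, work-conserving allocation above can be sketched in a few lines: capacity is split among the *active* flows in proportion to their weights, so surplus left by inactive flows is redistributed automatically. The weights and capacity below are illustrative, not from the talk.

```python
# Minimal sketch of work-conserving proportional allocation.
# Flows absent from `active` contribute their share as surplus,
# which the remaining flows absorb in proportion to their weights.

def allocate(capacity, weights, active):
    total = sum(weights[f] for f in active)
    return {f: capacity * weights[f] / total for f in active}

weights = {"f": 0.25, "g": 0.75}
both = allocate(100, weights, {"f", "g"})
print(both["f"], both["g"])                 # 25.0 75.0
print(allocate(100, weights, {"f"})["f"])   # f alone gets the full 100.0
```

When both flows are active each gets its configured share; when g goes idle, f absorbs the surplus ("use it or lose it" works in f's favor, with no later penalty).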

  7. Weights as Shares
• Weights define a configured or assured service rate.
  – Adjust weights to meet performance targets.
• Idealize weights as shares of the service's capacity to serve requests.
  – Normalize weights to sum to one.
• For network services, your mileage may vary.
  – Delivered service rate depends on request distribution, cross-talk, hot spots, etc.
  – Premise: behavior is sufficiently regular to adjust weights under feedback control.

  8. Interposed Request Scheduling II
The scheduler issues requests to the shared service (e.g., a storage array) up to a depth D.
  – Dispatch/issue up to D requests or D units of work.
  – Issue requests so as to respect the weights assigned to each flow.
  – Choose D to balance server utilization against tight resource control.
  – Request concurrency is defined/controlled by the server.

  9. Overview
• Background on proportional share scheduling
  – Virtual Clock [Zhang90]
  – Weighted Fair Queuing [Demers89]
  – Start-time Fair Queuing or SFQ [Goyal97]
• New depth-controlled variants for interposed scheduling
  – Why SFQ is not sufficient: concurrency.
  – New algorithm: SFQ(D)
  – Refinement: FSFQ(D)
• Decentralized throttling with Request Windows (RW)
• Proven fairness results and experimental evaluation

  10. A Request Flow
[Figure: requests p_f^0, p_f^1, p_f^2 with costs c_f^0 = 10, c_f^1 = 5, c_f^2 = 10 and arrival times A(p_f^0), A(p_f^1), A(p_f^2) along a timeline.]
Consider a flow f of service requests.
  – Could be packets, CPU demands, I/Os, requests for a service.
  – Each request has a distinct arrival time (serialize arrivals).
  – Each request has a cost: packet length, service duration, etc.

  11. Request Costs
• Can apply to any service if we can estimate the cost of each request.
• Relatively easy to estimate cost for block storage.
• Fairness results are relative to the estimated costs; they are only as accurate as the estimates.

  12. A Flow with a Share
[Figure: arrival and dispatch timelines for requests p_f^0, p_f^1, p_f^2 with costs 10, 5, 10.]
Consider a sequential unit resource: capacity is 1 unit of work per time unit.
  – Suppose flow f has a configured share of 50% (φ_f = 0.5).
  – f is assured T units of service in T/φ_f units of real time.
  – How to implement shares/weights in an interposed request scheduler?

  13. Virtual Clock [Zhang90]
Each arriving request is tagged with a start (eligible) time and a finish time:

  S(p_f^i) = F(p_f^{i-1})
  F(p_f^i) = S(p_f^i) + c_f^i / φ_f

For the example flow (costs 10, 5, 10 and φ_f = 0.5): S(p_f^0) = 0, F(p_f^0) = 20; S(p_f^1) = 20, F(p_f^1) = 30; S(p_f^2) = 30, F(p_f^2) = 50.
View the tags as a virtual clock for each flow. Each request advances the flow's clock by the amount of real time until its next request must be served. If the flow completes work at its configured service rate, then virtual time ≈ real time.
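The tagging rule above is easy to sketch. The `Flow` class and function names are illustrative; the costs and the 50% share follow the slide's example.

```python
# Sketch of Virtual Clock tagging [Zhang90]: each request's start tag is
# the previous finish tag, and the finish tag advances by cost / share.

class Flow:
    def __init__(self, phi):
        self.phi = phi           # configured share of capacity (phi_f)
        self.last_finish = 0.0   # F(p_f^{i-1}), the previous finish tag

def tag(flow, cost):
    """S(p^i) = F(p^{i-1});  F(p^i) = S(p^i) + c^i / phi."""
    start = flow.last_finish
    finish = start + cost / flow.phi
    flow.last_finish = finish
    return start, finish

f = Flow(phi=0.5)
print(tag(f, 10))  # (0.0, 20.0)
print(tag(f, 5))   # (20.0, 30.0)
print(tag(f, 10))  # (30.0, 50.0)
```

The printed tags reproduce the slide's example: a flow with a 50% share needs 2 units of real time per unit of work, so each request stretches the clock by twice its cost.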

  14. Sharing with Virtual Clock
[Figure: two flows, each with φ = 50%, with request tags shown in virtual and real time.]
The Virtual Clock scheduler [Zhang90] orders the requests/packets by their virtual clock tags.
This example:
  – shows two flows, each at φ = 50%
  – assumes both flows are active and backlogged
What if a flow does not consume its configured share?

  15. Virtual Clock is Unfair
[Figure: a flow that goes inactive is penalized unfairly once it becomes active again.]
A scheduler is work-conserving if the resource is never left idle while a request is queued awaiting service.
Virtual Clock is work-conserving, but it is unfair: an active flow is penalized for consuming idle resources. The lag is unbounded; we really want a "use it or lose it" policy.

  16. Weighted Fair Queuing [Demers89]

  S(p_f^i) = max(v(A(p_f^i)), F(p_f^{i-1}))
  F(p_f^i) = S(p_f^i) + c_f^i / φ_f

Define a system virtual time v(t) that advances with the progress of the active flows: dv/dt = C / Σ_i φ_i, summing over the active flows i.
  – Less competition speeds up v(t); more slows it down.
Advance the (lagging) clock of a newly active flow to the system virtual time, so it relinquishes its claim to the resources it left idle.
How to maintain v(t)?
  – Too fast? Reverts to FIFO.
  – Too slow? Reverts to Virtual Clock.

  17. Start-Time Fair Queuing (SFQ)
[Figure: the virtual clock is derived from the active flow; the inactive flow's clock is advanced when it reactivates.]
SFQ derives v(t) from the start tag of the request in service: use the resource itself to drive the global clock.
  – Order requests by start tag [Goyal97].
  – Cheap to compute v(t).
  – Fair even if the capacity (service rate) C varies.
  – The lag between two backlogged flows f and g is bounded by:

  c_f^max / φ_f + c_g^max / φ_g
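A minimal SFQ sketch: requests carry start tags computed with the WFQ rule, a heap orders them by start tag, and v(t) is set from the request being dispatched. The class and method names are illustrative, not from the paper.

```python
import heapq

# Sketch of Start-time Fair Queuing [Goyal97]: order requests by start
# tag; the system virtual time v(t) is the start tag of the request in
# service, so the resource itself drives the global clock.

class SFQ:
    def __init__(self, weights):
        self.phi = dict(weights)                  # flow -> weight phi
        self.finish = {f: 0.0 for f in weights}   # per-flow finish tags
        self.v = 0.0                              # system virtual time
        self.queue, self.seq = [], 0              # heap of (start, seq, flow)

    def arrive(self, flow, cost):
        # S = max(v at arrival, finish tag of the flow's previous request):
        # a newly active flow's clock jumps forward to v(t).
        start = max(self.v, self.finish[flow])
        self.finish[flow] = start + cost / self.phi[flow]
        heapq.heappush(self.queue, (start, self.seq, flow))
        self.seq += 1

    def dispatch(self):
        start, _, flow = heapq.heappop(self.queue)
        self.v = start            # v(t) := start tag of request in service
        return flow

s = SFQ({"f": 0.5, "g": 0.5})
s.arrive("f", 10)   # start tag 0
s.arrive("f", 10)   # start tag 20
s.arrive("g", 10)   # start tag 0
print([s.dispatch() for _ in range(3)])  # ['f', 'g', 'f']
```

Because requests are ordered by start tag, g's first request interleaves ahead of f's second even though it arrived later, keeping the two flows' weighted progress close.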

  18. SFQ for Interposed Scheduling?
An SFQ scheduler interposed in front of a storage service with depth D.
Challenge: concurrency.
  – Up to D requests are "in service" concurrently.
  – SFQ virtual time v(t) is no longer uniquely defined.
  – Direct adaptation: Min-SFQ(D) takes the minimum start tag over the requests in service.

  19. Min-SFQ is Unfair
[Figure: two scenarios with φ_f = 0.25 and φ_g = 0.75 — (1) Green has insufficient concurrency in its request stream; (2) a request burst for Green. Green is active enough to retain its virtual clock, but lags arbitrarily far behind; Purple starves until Green's virtual clock catches up.]
Problem: v(t) advances with the slowest active flow. Clock skew causes the algorithm to degrade to Virtual Clock, which is unfair.

  20. SFQ(D)
SFQ for D issue slots (depth D).
Solution: take v(t) from the clocks of backlogged flows.
  – Take v(t) as the minimum start tag of queued requests awaiting dispatch.
  – (The start tag of the request that will issue next.)
  – Implementation: take v(t) from the last issued request.
  – Equivalent to scheduling the sequence of issue slots with SFQ.
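SFQ(D) can be sketched as an interposed dispatcher that keeps up to D requests outstanding at the shared service and sets v(t) from the start tag of the last issued request. Names, method signatures, and the completion model below are illustrative assumptions, not the paper's implementation.

```python
import heapq

# Sketch of SFQ(D): requests are tagged as in SFQ, but up to D may be
# in service concurrently; v(t) advances on *issue*, taken from the
# start tag of the last issued request.

class SFQD:
    def __init__(self, weights, depth):
        self.phi, self.D = dict(weights), depth
        self.finish = {f: 0.0 for f in weights}
        self.v = 0.0
        self.queue, self.seq = [], 0
        self.outstanding = 0      # requests currently in service

    def arrive(self, flow, cost):
        start = max(self.v, self.finish[flow])
        self.finish[flow] = start + cost / self.phi[flow]
        heapq.heappush(self.queue, (start, self.seq, flow))
        self.seq += 1
        return self._issue()

    def complete(self):
        self.outstanding -= 1     # a completion frees an issue slot
        return self._issue()

    def _issue(self):
        issued = []
        while self.queue and self.outstanding < self.D:
            start, _, flow = heapq.heappop(self.queue)
            self.v = start        # v(t) = start tag of last issued request
            issued.append(flow)
            self.outstanding += 1
        return issued
```

With depth D = 2, the first two arrivals issue immediately; a third waits in the scheduler's queue until a completion frees a slot, which is how the scheduler trades server utilization against control.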

  21. SFQ(D) Lag Bounds
[Figure: a request p_f^0 of cost 10 is dispatched and later completes.]
Apply the SFQ bounds to issued requests: the SFQ lag bounds apply to requests issued under SFQ(D). From this we can derive the lag bound for requests completed under SFQ(D): the lag between two backlogged flows f and g is bounded by

  (D+1) · (c_f^max / φ_f + c_g^max / φ_g)
