  1. Ken Birman, Cornell University. CS5410 Fall 2008.

  2. Last time: standards…
     - We looked mostly at big architectural standards
     - But there are also standard ways to build cloud infrastructure support
     - Today: review many of the things one normally finds in a cloud computing setting, and discuss what role each plays
     - Our goal is not to talk about best implementations yet; we'll do that later
     - Rather, focus on structure, roles, and functionality

  3. A glimpse inside eStuff.com
     - The data center advertises itself to the outside world through one or more IP addresses per location
     - A firewall, usually with network address translation capabilities, sits at the edge; this makes it hard to make TCP connections ("multihoming") or to send UDP packets from the outside to the inside
     - If needed, machines in the "DMZ" (demilitarized zone) can accept incoming TCP or UDP requests and create "tunnels" to the inside
     - "Front-end applications": either a server that builds web pages, or a web service dispatcher, or a PHP interface to a database
     - An internal naming convention and routing infrastructure is needed to deliver sub-requests to the services that will perform them
     - Internally there is often some form of high-speed event notification "message bus", perhaps supporting multicast: pub-sub combined with point-to-point communication technologies like TCP
     - Many services will have some form of load-balancer (LB) to control routing of requests among its replicas
     - A service is often scaled out for performance; this raises issues of replication of the data it uses, if that data changes over time
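The load-balancer role on this slide can be sketched as a minimal round-robin dispatcher. This is illustrative only; the names `LoadBalancer`, `route`, and the replica strings are invented for the sketch, and a real LB would also track load reports and membership changes:

```python
import itertools

class LoadBalancer:
    """Round-robin dispatcher over a service's replica set (illustrative)."""
    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def route(self, request):
        # Pick the next replica in round-robin order; a real LB would
        # also weigh load reports and drop replicas that have failed.
        replica = next(self._cycle)
        return replica, request

lb = LoadBalancer(["service-1", "service-2", "service-3"])
targets = [lb.route(f"req{i}")[0] for i in range(4)]
print(targets)  # cycles: service-1, service-2, service-3, service-1
```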

  4. More components
     - A data center has a physical structure (racks of machines) and a logical structure (the one we just saw)
     - Something must map logical roles to physical machines
     - It must launch the applications needed on them
     - And then monitor them and relaunch if crashes ensue
     - This poses optimization challenges
     - We probably have multiple data centers
     - We must control the external DNS and tell it how to route; the answer could differ for different clients
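The "launch, monitor, relaunch" loop above can be sketched as follows. The `FakeProcess` class and role names are hypothetical stand-ins; a real supervisor would spawn OS processes and poll them:

```python
class FakeProcess:
    """Stand-in for a real OS process (illustrative)."""
    def __init__(self, role):
        self.role = role
        self.alive = True

def supervise(roles, crashed_roles):
    """Map each logical role to a process; relaunch any that crashed."""
    procs = {role: FakeProcess(role) for role in roles}
    # Monitoring pass: mark simulated crashes, then relaunch them.
    for role in crashed_roles:
        procs[role].alive = False
    restarts = 0
    for role, proc in procs.items():
        if not proc.alive:
            procs[role] = FakeProcess(role)   # relaunch the crashed role
            restarts += 1
    return restarts

print(supervise(["web", "inventory", "search"], {"inventory"}))  # 1
```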

  5. More components
     - Our data center has a security infrastructure involving keys, certificates (and storage for them), and permissions
     - Something may need to decide not just where to put services, but also which ones need to be up, and how replicated they should be
     - Since server locations can vary and server group members change, we need to track this information and use it to adapt routing decisions
     - The server instances need a way to be given parameters and environment data

  6. More components
     - Many kinds of events may need to be replicated
       - Parameter or configuration changes that force services to adapt themselves
       - Updates to the data used by the little service groups (which may not be so small…)
       - Major system-wide events, like "we're being attacked!" or "Scotty, take us to Warp four"
     - Leads to what are called event notification infrastructures, also called publish-subscribe systems or event queuing middleware systems
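A publish-subscribe system of the kind named above can be sketched as a tiny in-process topic bus. The `EventBus` class and topic names are invented for this sketch; real middleware adds persistence, ordering guarantees, and multicast fan-out across machines:

```python
from collections import defaultdict

class EventBus:
    """Tiny topic-based publish-subscribe bus (illustrative)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver to every subscriber of the topic, in subscription order.
        for callback in self.subscribers[topic]:
            callback(event)

bus = EventBus()
seen = []
bus.subscribe("config-change", seen.append)
bus.subscribe("config-change", lambda e: seen.append(e.upper()))
bus.publish("config-change", "cache-size=512")
print(seen)  # ['cache-size=512', 'CACHE-SIZE=512']
```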

  7. More components
     - Status monitoring components
       - To detect failures and other big events
       - To help with performance tuning and adaptation
       - To assist in debugging
       - Even for routine load-balancing
     - Load balancers (now that we're on that topic…)
       - Which need to know about loads and membership
       - But which also may need to do deep packet inspection to look for things like session ids
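Inspecting requests for a session id, as the slide suggests, lets a balancer route all requests in one session to the same replica. A minimal sketch, assuming a cookie-borne session id (the header format and replica names are invented):

```python
import hashlib

REPLICAS = ["service-1", "service-2", "service-3"]  # hypothetical replicas

def route_by_session(request_headers, replicas=REPLICAS):
    """Sticky routing: extract the session id from the request and hash
    it, so one session always reaches the same replica (illustrative)."""
    session = request_headers.get("Cookie", "").removeprefix("session=")
    digest = hashlib.sha256(session.encode()).hexdigest()
    return replicas[int(digest, 16) % len(replicas)]

a = route_by_session({"Cookie": "session=abc123"})
b = route_by_session({"Cookie": "session=abc123"})
print(a == b)  # True: the same session maps to the same replica
```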

  8. More, and more, and more…
     - Locking service
       - Helps prevent concurrency conflicts, such as two services trying to create the identical file
     - Global file system
       - Could be as simple as a normal networked file system, or as fancy as Google's GFS
     - Databases
       - Often, these run on clusters with their own scaling solutions…
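The locking service's core job can be sketched as an advisory lock table. This in-memory sketch is illustrative only; a production lock service would itself be replicated and fault-tolerant:

```python
class LockService:
    """In-memory advisory lock table (illustrative sketch)."""
    def __init__(self):
        self.owners = {}

    def acquire(self, name, owner):
        # Only one owner may hold a given named lock at a time;
        # re-acquiring your own lock succeeds.
        if name in self.owners:
            return self.owners[name] == owner
        self.owners[name] = owner
        return True

    def release(self, name, owner):
        if self.owners.get(name) == owner:
            del self.owners[name]

locks = LockService()
print(locks.acquire("/files/report.txt", "service-A"))  # True
print(locks.acquire("/files/report.txt", "service-B"))  # False: held by A
```

With this in place, two services racing to create the identical file would both try to acquire the lock on its name, and only one would win.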

  9. Let's drill down…
     - Suppose one wanted to build an application that
       - Has some sort of "dynamic" state (receives updates)
       - Load-balances queries
       - Is fault-tolerant
     - How would we do this?

  10. Today's prevailing solution
     - [Diagram: clients → middle tier running business logic → back-end shared database system]

  11. Concerns?
     - Potentially slow (especially during failures)
     - Doesn't work well for applications that don't split cleanly between "persistent" state (which can be stored in the database) and "business logic" (which has no persistent state)

  12. Can we do better?
     - What about some form of in-memory database?
       - Could be a true database
       - Or it could be any other form of storage "local" to the business logic tier
     - This eliminates the back-end database
       - More accurately, it replaces the single back-end with a set of local services, one per middle-tier node
     - This is a side-effect of the way that web services are defined: the middle tier must be stateless
     - But how can we build such a thing?

  13. Today's prevailing solution
     - [Diagram: clients → stateless middle tier running business logic, with an in-memory database such as Oracle TimesTen co-resident on the same node]
     - The back-end database is now local to the middle-tier servers: a form of abstraction

  14. Services with in-memory state
     - Really, several cases
     - We showed a stateless middle tier running business logic and talking to an in-memory database
     - But in our datacenter architecture, the stateless tier was "on top" and we might need to implement replicated services of our very own, only some of which are databases or use them
     - So we should perhaps decouple the middle tier and not assume that every server instance has its very own middle-tier partner…

  15. Better picture, same "content"
     - [Diagram: the DMZ and "front-end applications" are the stateless middle tier running the business logic; pub-sub combined with point-to-point communication technologies like TCP connects them, through load balancers (LB), to the services below]
     - The services behind the load balancers are the in-memory database, or the home-brew service, or whatever

  16. More load-spreading steps
     - If every server handles all the associated data…
       - Then if the underlying data changes, every server needs to see every update
     - For example, in an inventory service, the data would be the inventory for a given kind of thing, like a book
       - Updates would occur when the book is sold or restocked
     - Obvious idea: partition the database so that groups of servers handle just a part of the inventory (or whatever)
     - The router needs to be able to extract keys from requests: another need for "deep packet inspection" in routers
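The partitioning idea can be sketched with a hash of the extracted key. The function name and the ISBN-style keys are invented for illustration; real systems often use range partitioning or consistent hashing so partitions can be split and migrated:

```python
import hashlib

def partition_of(key, n_partitions):
    """Map an inventory key (e.g. a book's ISBN) to a partition number
    by hashing it (illustrative)."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_partitions

# Every router applies the same function, so requests about the same
# book always reach the same group of servers.
keys = ["isbn-0131103628", "isbn-0262033844", "isbn-0131103628"]
parts = [partition_of(k, 4) for k in keys]
print(parts[0] == parts[2])  # True: identical keys map to one partition
```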

  17. A RAPS of RACS (Jim Gray)
     - RAPS: A reliable array of partitioned subservices
     - RACS: A reliable array of cloned server processes
     - [Diagram: a RAPS is a set of RACS. A partition map gives pmap "B-C": {x, y, z} — the equivalent replicas x, y, and z. Ken Birman searching for "digital camera": here, y gets picked, perhaps based on load]
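The pmap lookup can be sketched as a two-step choice: find the RACS whose key range covers the request's key, then pick one of its equivalent replicas. The map contents and the random pick are invented for this sketch (a real router would pick based on load, as the slide says):

```python
import random

# Hypothetical partition map: first letter of the key -> RACS replicas.
PMAP = {
    ("a", "a"): ["u", "v"],
    ("b", "c"): ["x", "y", "z"],   # pmap "B-C": {x, y, z}
}

def lookup(pmap, key):
    """Find the RACS covering this key, then pick an equivalent replica."""
    first = key[0].lower()
    for (lo, hi), replicas in pmap.items():
        if lo <= first <= hi:
            return random.choice(replicas)
    raise KeyError(f"no partition covers {key!r}")

replica = lookup(PMAP, "camera")
print(replica in {"x", "y", "z"})  # True: any of the clones may serve it
```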

  18. RAPS of RACS in Data Centers
     - Services are hosted at data centers but accessible system-wide
     - [Diagram: a query source and an update source reach Data center A and Data center B through pmaps]
     - The pmap gives a logical partitioning of services; an "l2P" map takes logical services to a physical server pool, perhaps many to one
     - Operators can control the pmap, the l2P map, and other parameters
     - Large-scale multicast is used to disseminate updates

  19. Partitioning increases challenge
     - Previously, routing to a server was just a question of finding some representative of the service
       - A kind of "anycast"
     - But now, in a service-specific way, we need to
       - Extract the partitioning key (different services will have different notions of what this means!)
       - Figure out who currently handles that key
       - Send it to the right server instance (RAPS)
       - Do so in a way that works even if the RAPS membership is changing when we do it!

  20. Drill down more: dynamicism
     - Talking to a RAPS while its membership changes could be very tricky!
     - [Diagram: p starts our service and is its first member, hence its initial leader. q joins and needs to rendezvous to learn that p is up and is the current leader; q becomes next in rank. Now r joins, and we would say that the "group view" (the membership) is {p, q, r}. If p crashes or just terminates, q takes over and is the new leader; the view is now {q, r}]
     - The client system will probably get "old" mapping data
       - Hence it may try to talk to p when the service is being represented by q or r…
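The group-view story on this slide can be sketched as membership ranked by join order, with the lowest-ranked live member as leader. The class is an illustrative sketch; a real group membership service must agree on views across machines despite failures:

```python
class GroupView:
    """Track a service group's membership, ranked by join order; the
    oldest surviving member is the leader (illustrative sketch)."""
    def __init__(self):
        self.members = []          # join order defines rank

    def join(self, member):
        self.members.append(member)

    def crash(self, member):
        self.members.remove(member)

    @property
    def leader(self):
        return self.members[0] if self.members else None

view = GroupView()
for m in ["p", "q", "r"]:         # p starts the service; q and r join
    view.join(m)
print(view.leader)                # p
view.crash("p")                   # p fails or terminates...
print(view.leader, view.members)  # q ['q', 'r']: q takes over as leader
```

A client holding the old view {p, q, r} may still try to contact p, which is exactly the staleness problem the slide describes.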
