CS533 Modeling and Performance Evaluation of Network and Computer Systems
Selection of Techniques and Metrics (Chapter 3)

Overview
• One or more systems, real or hypothetical
• You want to evaluate their performance
• What technique do you choose?
  – Analytic modeling?
  – Simulation?
  – Measurement?

Outline
• Selecting an Evaluation Technique
• Selecting Performance Metrics
  – Case Study
• Commonly Used Performance Metrics
• Setting Performance Requirements
  – Case Study

Selecting an Evaluation Technique (1 of 4)
• What life-cycle stage is the system in?
  – Measurement only when something already exists
  – If the system is new, analytic modeling and simulation are the only options
• When are results needed? (often, yesterday!)
  – If time is short, analytic modeling is the only choice
  – Simulation and measurement can take about the same time
    • But Murphy's Law strikes measurement more often
• What tools and skills are available?
  – Languages to support simulation
  – Tools to support measurement (ex: packet sniffers, source code to add monitoring hooks)
  – Skills in analytic modeling (ex: queuing theory)

Selecting an Evaluation Technique (2 of 4)
• What level of accuracy is desired?
  – Analytic modeling is coarse (if it turns out to be accurate, even the analysts are surprised!)
  – Simulation has more detail, but may abstract away key system details
  – Measurement may sound real, but the workload, configuration, etc., may still be unrepresentative
  – Accuracy can range from high to none without proper design
• Even with accurate data, you still need to draw the proper conclusions
  – Ex: so the response time is 10.2351 with 90% confidence. So what? What does it mean? (A sketch of computing such an interval follows these slides.)

Selecting an Evaluation Technique (3 of 4)
• What are the alternatives?
  – Trade-offs are easiest to explore with analytic models, moderate with simulations, and most difficult with measurement
  – Ex: QFind – determine the impact (trade-off) of RTT and OS
    • Difficult to measure the RTT trade-off
    • Easy to simulate the RTT trade-off in the network, but not in the OS
• Cost?
  – Measurement is generally the most expensive
  – Analytic modeling is the cheapest (pencil and paper)
  – Simulation is often cheap, but some tools are expensive
    • Traffic generators, network simulators
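The confidence-interval remark above is a natural place for a concrete example. Below is a minimal sketch, not from the slides, of how a 90% confidence interval for mean response time might be computed from repeated measurements; the sample data are hypothetical, and a normal approximation is assumed (a Student-t interval would be more appropriate for very few samples).

    from statistics import NormalDist, mean, stdev

    def confidence_interval(samples, confidence=0.90):
        # Normal-approximation confidence interval for the mean.
        n = len(samples)
        m = mean(samples)
        s = stdev(samples)  # sample standard deviation
        z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.645 for 90%
        half_width = z * s / n ** 0.5
        return m - half_width, m + half_width

    # Hypothetical response-time measurements (seconds).
    times = [10.1, 10.4, 10.2, 10.3, 10.2, 10.1, 10.3, 10.2]
    lo, hi = confidence_interval(times)
    print(f"mean = {mean(times):.4f} s, 90% CI = ({lo:.4f}, {hi:.4f})")

The slide's point stands: the interval alone says nothing about whether the response time is acceptable; that judgment needs the performance requirements discussed later in the deck.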

Selecting an Evaluation Technique (4 of 4)
• Saleability?
  – Much easier to convince people with measurements
  – Most people are skeptical of analytic modeling results, since they are hard to understand
    • Often validated with simulation before use
• Can use two or more techniques
  – Validate one with another
  – Most high-quality performance analysis papers pair an analytic model with simulation or measurement

Summary Table for Evaluation Technique Selection

  Criterion         Modeling    Simulation       Measurement
  1. Stage          Any         Any              Prototype+
  2. Time required  Small       Medium           Varies
  3. Tools          Analysts    Some languages   Instrumentation
  4. Accuracy       Low         Moderate         Varies
  5. Trade-off      Easy        Moderate         Difficult
  6. Cost           Small       Medium           High
  7. Saleability    Low         Medium           High

Outline
• Selecting an Evaluation Technique
• Selecting Performance Metrics
  – Case Study
• Commonly Used Performance Metrics
• Setting Performance Requirements
  – Case Study

Selecting Performance Metrics (1 of 3)

  response time n. An unbounded, random variable … representing the time that
  elapses between the time of sending a message and the time when the error
  diagnostic is received.
      – S. Kelly-Bootle, The Devil's DP Dictionary

[Figure: a request arrives at the system and is either done or not done. If done
correctly, speed metrics apply (time, rate, resource usage). If done with error i,
reliability metrics apply (probability of error, time between errors). If not done
due to event k, availability metrics apply (duration of event, time between events).]

Selecting Performance Metrics (2 of 3)
• May be more than one set of metrics
  – Resources: queue size, CPU utilization, memory use, …
• Individual vs. global metrics
  – May be at odds
  – Increasing individual performance may decrease global performance
    • Ex: response time at the cost of throughput
  – Increasing global performance may not be most fair
    • Ex: throughput of cross traffic
• Performance optimizations of the bottleneck have the most impact (a worked version of the example below follows these slides)
  – Example: response time of a Web request
  – Client processing 1 s, latency 500 ms, server processing 10 s → total is 11.5 s
  – Improve the client by 50%? → 11 s
  – Improve the server by 50%? → 6.5 s

Selecting Performance Metrics (3 of 3)
• The mean is what usually matters
  – But variance matters for some metrics (ex: response time)
• Criteria for selecting a subset of metrics, choose:
  – Low variability – fewer repetitions are needed
  – Non-redundancy – don't use 2 if 1 will do
    • Ex: queue size and delay may provide identical information
  – Completeness – should capture the trade-offs
    • Ex: one disk may be faster but may return more errors, so add a reliability measure
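To make the bottleneck example concrete, here is a minimal sketch reproducing the arithmetic from the slide; the serial-composition assumption and the function name are illustrative, not from the slides.

    def total_response_time(client_s, latency_s, server_s):
        # The three components of the Web request are assumed to be serial.
        return client_s + latency_s + server_s

    base = total_response_time(1.0, 0.5, 10.0)         # 11.5 s
    client_half = total_response_time(0.5, 0.5, 10.0)  # 11.0 s: barely helps
    server_half = total_response_time(1.0, 0.5, 5.0)   # 6.5 s: big win
    print(base, client_half, server_half)

Halving the 10 s server component removes 5 s from the total, while halving the 1 s client component removes only 0.5 s, which is why optimizing the bottleneck matters most.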

Outline
• Selecting an Evaluation Technique
• Selecting Performance Metrics
  – Case Study
• Commonly Used Performance Metrics
• Setting Performance Requirements
  – Case Study

Case Study (1 of 5)
• A computer system of end-hosts sending packets through routers
  – Congestion occurs when the number of packets at a router exceeds its buffering capacity (packets are dropped)
• Goal: compare two congestion control algorithms
• A user sends a block of packets to a destination:
  – A) Some are delivered in order
  – B) Some are delivered out of order
  – C) Some are delivered more than once
  – D) Some are dropped

Case Study (2 of 5)
• For A), straightforward metrics exist:
  1) Response time: delay for an individual packet
  2) Throughput: number of packets per unit time
  3) Processor time per packet at the source
  4) Processor time per packet at the destination
  5) Processor time per packet at the router
• Since large response times can cause extra retransmissions:
  6) Variability in response time, since it too can cause extra retransmissions

Case Study (3 of 5)
• For B), packets cannot be delivered to the user and are often considered dropped:
  7) Probability of out-of-order arrivals
• For C), duplicates consume resources without any use:
  8) Probability of duplicate packets
• For D), loss is undesirable for many reasons:
  9) Probability of lost packets
• Also, excessive loss can cause disconnection:
  10) Probability of disconnect

Case Study (4 of 5)
• Since this is a multi-user system and we want fairness:
  – For throughputs (x₁, x₂, …, xₙ):

      f(x₁, x₂, …, xₙ) = (Σ xᵢ)² / (n Σ xᵢ²)

  – The index is between 0 and 1 (a sketch of computing it follows these slides)
    • If all users get the same throughput, the index is 1
    • If k users get equal throughput and n − k get zero, then the index is k/n

Case Study (5 of 5)
• After a few experiments (pilot tests):
  – Found throughput and delay redundant
    • Higher throughput had higher delay
    • Instead, combine them as power = throughput/delay
  – Found variance in response time redundant with probability of duplication and probability of disconnection
    • Drop variance in response time
• Thus, left with nine metrics
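The fairness formula above is Jain's fairness index. Below is a minimal sketch of it and of the power metric from the last slide; the function names are illustrative.

    def fairness_index(throughputs):
        # Jain's fairness index: (sum of x_i)^2 / (n * sum of x_i^2), in [0, 1].
        n = len(throughputs)
        total = sum(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    def power(throughput, delay):
        # Combines two redundant metrics: higher throughput good, higher delay bad.
        return throughput / delay

    print(fairness_index([5.0, 5.0, 5.0, 5.0]))  # 1.0: all users treated equally
    print(fairness_index([5.0, 5.0, 0.0, 0.0]))  # 0.5: k=2 of n=4 served, i.e. k/n
    print(power(100.0, 0.5))                     # 200.0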

Outline
• Selecting an Evaluation Technique
• Selecting Performance Metrics
  – Case Study
• Commonly Used Performance Metrics
• Setting Performance Requirements
  – Case Study

Commonly Used Performance Metrics
• Response time
  – Turnaround time
  – Reaction time
  – Stretch factor
• Throughput
  – Operations/second
  – Capacity
  – Efficiency
  – Utilization
• Reliability
  – Uptime
  – MTTF

Response Time (1 of 2)
• Interval between a user's request and the system's response

  [Figure: timeline with the user's request followed by the system's response;
  response time is the interval between the two.]

• But this is simplistic, since requests and responses are not instantaneous
  – Users take time to type and the system takes time to format its output

Response Time (2 of 2)

  [Figure: timeline of events: user starts request, user finishes request, system
  starts execution, system starts response, system finishes response, user starts
  next request. Reaction time spans from the end of the request to the start of
  execution; response time 1 ends at the start of the response; response time 2
  ends at the end of the response; think time spans from the end of the response
  to the start of the next request.]

• Can have two measures of response time (a sketch follows these slides)
  – Both are OK, but 2 is preferred if execution is long
• Think time can determine system load

Response Time+
• Turnaround time – time between the submission of a job and the completion of its output
  – For batch job systems
• Reaction time – time between the submission of a request and the beginning of its execution
  – Usually must be measured inside the system, since nothing is externally visible
• Stretch factor – ratio of the response time at a given load to the response time at minimal load
  – Most systems have higher response times as load increases

Throughput (1 of 2)
• Rate at which requests can be serviced by the system (requests per unit time)
  – Batch: jobs per second
  – Interactive: requests per second
  – CPUs:
    • Millions of Instructions Per Second (MIPS)
    • Millions of Floating-Point Operations per Second (MFLOPS)
  – Networks: packets per second or bits per second
  – Transaction processing: Transactions Per Second (TPS)
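A minimal sketch of the two response-time measures and the stretch factor defined above; the timestamps are hypothetical.

    def response_times(request_end, response_start, response_end):
        # Measure 1: request end to start of response.
        # Measure 2: request end to end of response (preferred if execution is long).
        return response_start - request_end, response_end - request_end

    def stretch_factor(time_at_load, time_at_min_load):
        # Ratio of response time under load to response time at minimal load.
        return time_at_load / time_at_min_load

    rt1, rt2 = response_times(request_end=0.0, response_start=1.2, response_end=4.0)
    print(rt1, rt2)                  # 1.2 4.0 (seconds)
    print(stretch_factor(rt2, 2.0))  # 2.0: twice as slow as at minimal load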
