FSU CIS 5930 - Lecture 3: Selecting Evaluation Techniques

Selecting Evaluation Techniques
Andy Wang
CIS 5930-03
Computer Systems Performance Analysis

Decisions to be Made
•Evaluation technique
•Performance metrics
•Performance requirements

Evaluation Techniques
Experimentation isn't always the answer. Alternatives:
•Analytic modeling (queueing theory)
•Simulation
•Experimental measurement
But always verify your conclusions!

Analytic Modeling
•Cheap and quick
•Don't need a working system
•Usually must simplify and make assumptions

Simulation
•Arbitrary level of detail
•Intermediate in cost, effort, and accuracy
•Can get bogged down in model building

Measurement
•Expensive
•Time-consuming
•Difficult to get detail
•But accurate

Selecting Performance Metrics
•Three major performance metrics:
–Time (responsiveness)
–Processing rate (productivity)
–Resource consumption (utilization)
•Error (reliability) metrics:
–Availability (% of time up)
–Mean Time to Failure (MTTF/MTBF), same as mean uptime
•Cost/performance

Response Time
•How quickly does the system produce results?
•Critical for applications such as:
–Time-sharing/interactive systems
–Real-time systems
–Parallel computing

Examples of Response Time
•Time from keystroke to echo on screen
•End-to-end packet delay in networks
•OS bootstrap time
•Leaving Love to getting food in Oglesby – edibility not a factor

Measures of Response Time
•Response time: the request–response interval
–Measured from the end of the request
–Ambiguous: beginning or end of the response?
•Reaction time: end of request to start of processing
•Turnaround time: start of request to end of response

The Stretch Factor
•Response time usually goes up with load
•The stretch factor measures this
[Plot: response time vs. load, comparing a low-stretch curve with a high-stretch curve]

Processing Rate
•How much work is done per unit time?
•Important for:
–Sizing multi-user systems
–Comparing alternative configurations
–Multimedia

Examples of Processing Rate
•Bank transactions per hour
•File-transfer bandwidth
•Aircraft control updates per second
•Jurassic Park customers per day

Measures of Processing Rate
•Throughput: requests per unit time (MIPS, MFLOPS, Mb/s, TPS)
•Nominal capacity: theoretical maximum (bandwidth)
•Knee capacity: where things go bad
•Usable capacity: where response time hits a specified limit
•Efficiency: ratio of usable to nominal capacity

Nominal, Knee, and Usable Capacities
[Plot: throughput vs. load, marking the knee capacity, the usable capacity at the response-time limit, and the nominal capacity]

Resource Consumption
•How much does the work cost?
•Used in:
–Capacity planning
–Identifying bottlenecks
•Also helps to identify the "next" bottleneck

Examples of Resource Consumption
•CPU non-idle time
•Memory usage
•Fraction of network bandwidth needed
•Square feet of beach occupied

Measures of Resource Consumption
•Utilization: U = (1/T) ∫₀ᵀ u(t) dt, where u(t) is the instantaneous resource usage
–Useful for memory, disk, etc.
•If u(t) is always either 0 or 1, this reduces to busy time (or its inverse, idle time)
–Useful for network, CPU, etc.

Error Metrics
•Successful service (speed) – not usually reported as an error
•Incorrect service (reliability)
•No service (availability)

Examples of Error Metrics
•Time to get an answer from Google
•Dropped Internet packets
•ATM downtime
•Wrong answers from the IRS

Measures of Errors
•Reliability: P(error), or Mean Time Between Errors (MTBE)
•Availability:
–Downtime: time when the system is unavailable, may be measured as Mean Time to Repair (MTTR)
–Uptime: inverse of downtime, often given as Mean Time Between Failures (MTBF/MTTF)

Financial Measures
•When buying or specifying, the cost/performance ratio is often useful
•The performance metric chosen should be the one most important for the application

Characterizing Metrics
•Usually necessary to summarize
•Sometimes means are enough
•Variability is usually critical
–A mean I-10 freeway speed of 55 MPH doesn't help plan rush-hour trips

Types of Metrics
•Global: across all users
•Individual: per user
The first helps financial decisions; the second measures satisfaction and the cost of adding users.

Choosing What to Measure
Pick metrics based on:
•Completeness
•(Non-)redundancy
•Variability

Completeness
•Must cover everything relevant to the problem
–Don't want awkward questions from the boss or at conferences!
•Difficult to guess everything a priori
–Often have to add things later

Redundancy
•Some factors are functions of others
•Measurements are expensive
•Look for a minimal set
•Again, often an iterative process

Variability
•Large variance in a measurement makes decisions impossible
•Repeated experiments can reduce variance
–Expensive
–Can only reduce it by a certain amount
•Better to choose low-variance measures to start with

Classes of Metrics: HB
•Higher is Better
[Plot: utility vs. throughput, increasing; "better" lies toward higher throughput]

Classes of Metrics: LB
•Lower is Better
[Plot: utility vs. response time, decreasing; "better" lies toward lower response time]

Classes of Metrics: NB
•Nominal is Best
[Plot: utility vs. free disk space, peaking at a "best" intermediate value]

Setting Performance Requirements
Good requirements must be SMART:
•Specific
•Measurable
•Acceptable
•Realizable
•Thorough

Example: Web Server
•Users care about response time (end of response)
•Network capacity is expensive → want high utilization
•Pages delivered per day matter to advertisers
•Also care about error rate (failed and dropped connections)

Example: Requirements for Web Server
•2 seconds from request to first byte, 5 to last
•Handle 25 simultaneous connections, delivering 100 Kb/s to each
•60% mean utilization, with 95% or higher less than 5% of the time
•<1% of connection attempts rejected or dropped

Is the Web Server SMART?
•Specific: yes
•Measurable: may have trouble with rejected connections
•Acceptable: response time and aggregate bandwidth might not be enough
•Realizable: requires a T3 link; utilization depends on popularity
•Thorough? You
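As a minimal sketch of analytic modeling with queueing theory, the classic M/M/1 queue gives mean response time R = 1/(μ − λ) for arrival rate λ and service rate μ; dividing by the no-load service time 1/μ yields the stretch factor. The function names here are illustrative, not from the lecture:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)


def stretch_factor(arrival_rate, service_rate):
    """Response time divided by the zero-load service time (1 / mu)."""
    return mm1_response_time(arrival_rate, service_rate) * service_rate


# A server handling 100 requests/s, offered 80 requests/s:
r = mm1_response_time(80, 100)   # 0.05 s mean response time
s = stretch_factor(80, 100)      # stretch factor of 5
```

Note how quickly the stretch factor climbs with load: at 80% utilization the average request already takes five times its no-load service time.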
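The utilization measure, the time average of the instantaneous usage u(t), can be approximated from periodic samples; this is a sketch, assuming equally spaced samples:

```python
def utilization(samples, interval):
    """Approximate (1/T) * integral of u(t) dt from sampled usage
    values, where each sample covers `interval` seconds."""
    total_time = len(samples) * interval
    return sum(u * interval for u in samples) / total_time


# Fractional usage (e.g., memory): averages directly.
print(utilization([0.5, 0.75, 0.25, 0.5], 1.0))  # 0.5

# 0/1 usage (e.g., CPU busy/idle): reduces to busy time / total time.
print(utilization([1, 1, 0, 1], 1.0))            # 0.75
```

The second call illustrates the special case from the slides: when u(t) is always 0 or 1, the integral collapses to the fraction of time the resource was busy.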
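The downtime and uptime measures combine into availability in the standard way, A = MTTF / (MTTF + MTTR); a one-function sketch:

```python
def availability(mttf_hours, mttr_hours):
    """Long-run fraction of time the system is up:
    mean time to failure over mean time between failures."""
    return mttf_hours / (mttf_hours + mttr_hours)


# Fails every 999 hours on average, takes 1 hour to repair:
a = availability(999, 1)   # 0.999, i.e., "three nines"
```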
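A requirement is only Measurable if you can actually check it; as a sketch, the example web-server requirements can be encoded as a predicate over measured values. This interprets 60% as a utilization target (since idle capacity wastes money), which is one reading of the slide; the function and parameter names are invented for illustration:

```python
def meets_requirements(first_byte_s, last_byte_s, mean_util,
                       frac_time_util_over_95, drop_rate):
    """Check one set of measurements against the example requirements."""
    return (first_byte_s <= 2.0 and            # first byte within 2 s
            last_byte_s <= 5.0 and             # last byte within 5 s
            mean_util >= 0.60 and              # keep the link busy
            frac_time_util_over_95 < 0.05 and  # but rarely saturated
            drop_rate < 0.01)                  # <1% rejected/dropped


print(meets_requirements(1.5, 4.0, 0.62, 0.03, 0.005))  # True
```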

