Performance Measurements of Networked File Systems

Winfred Byrd and John Moser
[email protected], [email protected]
SOS-Please Conference
CS 736: Advanced Topics in Operating Systems, Fall 2000
Professor: Remzi Arpaci-Dusseau
December 18, 2000

Abstract

The inability of common benchmarks to scale gracefully across systems of varying size and configuration is a major source of the inaccuracy usually seen when comparing varied systems under a common metric. This paper examines an older, self-scaling I/O benchmark whose results can be normalized, using predicted-performance techniques, for the comparative evaluation of a diverse range of stand-alone computer systems. We assert that there is no reason such comparisons need stop at workstations with local disks, and we show that the approach produces a valid evaluation of computers using network file systems built on Sun's NFS protocol.

1. Introduction

The history of performance evaluation is littered with benchmarks unable to stress the larger and more powerful computers that evolved in their aftermath. This inability to scale contributes greatly to inaccuracies when evaluating and comparing varied systems under a common metric. In the discussion that follows, we examine the characteristics a benchmark must have if it is to realistically evaluate modern and future computer systems. We look at what each of these characteristics gains us and why earlier benchmarks that lack them have grown obsolete. We then turn to a proposed benchmark that meets all of our stated criteria, and discuss the changes that were necessary to run it over a distributed file system.

The remainder of this paper is divided into the following sections: Section 2 sets the standards for an ideal benchmark, Section 3 looks at related work, Section 4 addresses our testing procedure, Section 5 presents our results, Section 6 considers future work, and Section 7 draws conclusions from the results.

2. Ideal Benchmarks

When attempting to evaluate systems, several questions naturally present themselves: What are the common shortcomings of existing benchmarks? Is there a benchmark not limited by these shortcomings; or, more generally, what would our ideal benchmark be? Fortunately, the literature is rich with attempts to answer that question. One such proposal holds that an ideal I/O benchmark should:

• Increase understanding of the system. This includes helping designers isolate points of poor performance, as well as instructing users in optimal machine usage under different workloads.

• Be I/O limited. Intuitively, the facet of the system we are attempting to evaluate should be the bottleneck of the system.

• Scale gracefully. We cannot expect comparative information between diverse systems to be useful if our evaluation program is ineffective at measuring systems in the performance range of one or more of the platforms in question. Further, we would like our evaluation to remain useful with the next generation of computers, so scaling to stress more powerful systems is a must. (A sketch of what such scaling looks like in practice appears at the end of this section.)

• Allow a fair comparison across machines.

• Be relevant to many applications. We would like to be able to comparatively evaluate systems under all types of workloads: CAD, office automation, software development, and so on. Though design decisions are arguably much easier when we target a specific application, there is obvious value in very general applicability.

• Be tightly specified. For benchmarks to have meaning, they must be well defined, reproducible, and reported with rigor. What optimizations, such as caching, are allowed? How is the physical machine environment configured? If we are going to hold our results as reliable, these questions become very important. [2]

With this more thorough specification of the required characteristics, we search for our ideal benchmark.
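To make the "scale gracefully" criterion concrete, the listing below sketches the basic move a self-scaling benchmark repeats: hold every workload parameter fixed except one, sweep that one, and measure throughput at each point. This is our own illustrative C, not code from the benchmark under discussion; the file path, file size, and request sizes are assumptions made for the example.

    /*
     * Illustrative sketch only: this is not code from the
     * self-scaling benchmark discussed above.  The file path, file
     * size, and request sizes are assumptions for the example.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define FILE_SIZE (64L * 1024 * 1024)   /* fixed working-set size */
    #define PATH      "testfile.dat"        /* hypothetical test file */

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Time sequential reads through the file with one request size. */
    static double throughput_mb_s(int fd, size_t req)
    {
        char *buf = malloc(req);
        long done = 0;
        double start = now_sec();

        lseek(fd, 0, SEEK_SET);
        while (done < FILE_SIZE) {
            ssize_t n = read(fd, buf, req);
            if (n <= 0)
                break;
            done += n;
        }
        double elapsed = now_sec() - start;
        free(buf);
        return (done / (1024.0 * 1024.0)) / elapsed;
    }

    int main(void)
    {
        int fd = open(PATH, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Sweep one parameter (the request size); a real self-scaling
           run repeats such sweeps for the other workload parameters,
           e.g. file size, read fraction, sequentiality, and process
           count, and grows the file until it overwhelms the cache. */
        for (size_t req = 4096; req <= 1024 * 1024; req *= 2) {
            printf("request %7zu bytes: %.1f MB/s\n",
                   req, throughput_mb_s(fd, req));
        }
        close(fd);
        return 0;
    }

The appeal for graceful scaling is that the interesting workload points are found by measurement rather than fixed in advance, so the same program can stress both a small workstation and a much larger server.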
3. Related Work

In the past, the benchmarking of the various components of computers has invariably trended toward one or two standards, and NFS performance measurement is no exception. The computing industry has seen two distinct methods for judging NFS performance, embodied in two pieces of software: NFSStone/NHFSStone and LADDIS.

3.1. NFSStone/NHFSStone

Barry Shein first published NFSStone at the 1989 USENIX conference in a paper entitled "NFSSTONE - A Network File Server Performance Benchmark". Legato Systems later created NHFSStone, an NFS load-generating program. The two share many features, and while both have been popular means of describing NFS performance, both also have many deficiencies. We will speak in general of NHFSStone, though the remarks apply to NFSStone as well unless otherwise noted. [7]

First, both benchmarks synthetically duplicate an average user workload on top of an average level of NFS resource utilization; a simplified sketch of this style of load generation appears below. While this may indicate how a system performs under a median-level workload, it tells us nothing about performance at the extremes. Additionally, NHFSStone supports only a single client, so any attempt to document the performance of a large-scale network will find no help in it. Without network contention, the main bottleneck of NFS, the network infrastructure itself, cannot be accurately measured. And even if coordination software were written to drive several client copies of NHFSStone, the reported results would lose reliability as additional layers of influence were introduced.

Second, both NHFSStone and NFSStone are sensitive to differences in NFS client implementations. The NFS command and request sequence generated for a given test can vary, sometimes greatly, from client to client, depending on how the vendor chose to implement the NFS protocol. As the client workload increases, more inconsistencies are introduced into the results of the benchmark. NHFSStone attempted to work around these inconsistencies by implementing various algorithms, depending on what
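The sketch promised above shows the general style of load generation such tools use: a single client process issuing file operations against an NFS mount in fixed proportions meant to mimic an average user. This is our own illustration; the mount point, file names, and operation mix are assumptions for the example, not NHFSStone's actual parameters.

    /*
     * Illustrative sketch of NFSStone-style synthetic load: one client
     * process replays a fixed "average" mix of operations against an
     * NFS-mounted directory.  The mount point and the percentages
     * below are assumptions, not NHFSStone's actual parameters.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    #define MNT "/mnt/nfs/bench"   /* hypothetical NFS mount point */
    #define OPS 10000              /* operations per run */

    int main(void)
    {
        char path[256], buf[8192];
        struct stat st;

        memset(buf, 'x', sizeof buf);
        srand(736);                /* fixed seed for reproducibility */

        for (int i = 0; i < OPS; i++) {
            /* Pick one of 100 files, then one operation by weight. */
            snprintf(path, sizeof path, "%s/f%d", MNT, rand() % 100);
            int r = rand() % 100;

            if (r < 50) {                        /* 50%: getattr/lookup */
                stat(path, &st);
            } else if (r < 80) {                 /* 30%: read */
                int fd = open(path, O_RDONLY);
                if (fd >= 0) { read(fd, buf, sizeof buf); close(fd); }
            } else {                             /* 20%: write/create */
                int fd = open(path, O_WRONLY | O_CREAT, 0644);
                if (fd >= 0) { write(fd, buf, sizeof buf); close(fd); }
            }
        }
        return 0;
    }

Two of the deficiencies described above are visible even in this toy: the kernel's NFS client, not the program, decides which RPCs actually reach the server, so the wire-level request stream varies with the client implementation; and because one process generates all the load, the network contention of a large installation never arises.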

