Denali: Lightweight Virtual Machines for Distributed and Networked Applications

Andrew Whitaker, Marianne Shaw, and Steven D. Gribble
The University of Washington
{andrew,mar,gribble}@cs.washington.edu

Abstract

The goal of Denali is to safely execute many independent, untrusted server applications on a single physical machine. This would enable any developer to inject a new service into third-party Internet infrastructure; for example, dynamic content generation code could be introduced into content-delivery networks or caching systems. We believe that virtual machine monitors (VMMs) are ideally suited to this application domain. A VMM provides strong isolation by default, since one virtual machine cannot directly name a resource in another. In addition, VMMs defer the implementation of high-level abstractions to guest OSs, which greatly simplifies the kernel and avoids “layer-below” attacks. The main challenge in using a VMM for this application domain is in scaling the number of concurrent virtual machines that can simultaneously execute on it.

The distinction between Denali and existing VMMs is that we make aggressive use of para-virtualization techniques. Para-virtualization entails selectively modifying the virtual architecture to enhance scalability, performance, and simplicity. By using para-virtualization, we believe Denali will be able to scale up to an order of magnitude more virtual machines than existing VMMs. We have implemented a prototype virtual machine monitor that runs in ring 0 on bare x86 hardware. In addition, we have built a simple guest OS tailored to writing Internet services.

1 Introduction

Improvements in networking and computing technology are pushing application functionality into the wide-area infrastructure.
This computing model has many advantages: services are immediately available to clients without cumbersome software distribution, services are always available and can be accessed from any device, services can be administered centrally, and administration or maintenance can be out-sourced to an infrastructure service provider rather than handled in-house.

Many of today’s services are maintained by large organizations, such as Hotmail. However, the benefits of infrastructure computing should apply just as well to small services. A popular vision that we share is that any individual should be able to inject a new service into the Internet infrastructure for a small fee. As an example, a group of game players could deploy a server to a well-connected point in the Internet for the duration of a multi-player game session. As another example, the owners of a web service that includes dynamically generated content could inject both static and dynamic portions of their site into a content-delivery network.

These scenarios have significant trust implications: infrastructure providers cannot trust consumers’ services, and services generally do not trust each other. Correspondingly, a mechanism must exist to enforce strong isolation between services and the infrastructure, both in the security sense (preventing one service from corrupting another) and in the performance sense (fairly multiplexing physical resources such as CPU, memory, and network bandwidth). The simplest approach to providing this isolation would be to run each service on its own physical machine.
In addition to isolating services from each other, this would also allow each service to choose its own operating system and software. However, dedicating physical machines to services is wasteful, as it eliminates the possibility of statistically multiplexing a machine across many services. It is also not cost-effective, as we believe there will be many services that neither require nor can afford the cost of an entire physical machine.

1.1 Statistically multiplexing services

The benefits of statistically multiplexing services are reinforced by Zipf’s law, which states that the frequency of an event is proportional to x^(-α), where x is the rank of the event compared with all other events. Many studies of web servers, documents, web caches, and other network services have shown that popularity is almost always driven by Zipfian distributions [7]. Based on this, we expect that the popularity distribution of infrastructure services will also be driven by Zipf’s law.

Zipfian distributions have two significant implications (Figure 1). First, most requests go to a small number of popular services. Second, most services are relatively unpopular, but a non-trivial fraction of requests go to these unpopular services.

[Figure 1: Zipfian service popularity distribution. The figure plots the CDF of requests to 10,000 hypothetical services driven by a Zipfian probability distribution with α = 0.75; x-axis: service rank, y-axis: cumulative probability of access. Annotations: 50% of accesses are to the most popular 6% of services (600 of 10,000); 20% of accesses are to the least popular 60% of services (6,000 of 10,000).]

Because the amount of resources that a service requires is typically proportional to the workload it supports, popular services will require significant computational and networking resources.
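The shape of the Zipfian CDF in Figure 1 can be reproduced with a short numerical sketch. This is an illustration, not the authors’ code: it takes the parameters stated in the figure caption (10,000 services, α = 0.75), gives the rank-x service an unnormalized weight of x^(-α) per Zipf’s law, and computes cumulative access shares. The exact percentages depend on how the weights are normalized, so they land near, rather than exactly on, the figure’s 50%/20% annotations.

```python
# Numerical sketch of a Zipfian popularity CDF (illustrative; not from the paper).
# Parameters taken from the Figure 1 caption: 10,000 services, alpha = 0.75.
N = 10_000
ALPHA = 0.75

# Zipf's law: the service with popularity rank x receives traffic
# proportional to x^(-alpha).
weights = [x ** -ALPHA for x in range(1, N + 1)]
total = sum(weights)

def cumulative_share(k: int) -> float:
    """Fraction of all accesses captured by the k most popular services."""
    return sum(weights[:k]) / total

# Most requests concentrate on a few popular services...
print(f"most popular 6% of services (600 of {N}): "
      f"{cumulative_share(600):.1%} of accesses")
# ...yet the unpopular tail still draws a non-trivial fraction.
print(f"least popular 60% of services (6,000 of {N}): "
      f"{1 - cumulative_share(4000):.1%} of accesses")
```

With these parameters the head share comes out in the mid-40% range and the tail share in the low-20% range, which is the paper’s qualitative point: a machine that multiplexes many unpopular services still serves a meaningful slice of aggregate demand.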
In contrast, there will be a large number of services that require scarcely any resources, motivating the desire to multiplex many of them on a single computer for reasons of affordability and manageability.

Fortunately, Moore’s law has resulted in commodity components with enormous processing power, storage, and network bandwidth. A single modern computer can support a large amount of service traffic: recent SPECweb results show that single CPU servers can serve 2,000 HTTP requests per second, or 172 million requests per day. Correspondingly, we believe that if isolation can be enforced without introducing prohibitive overhead, a single computer can host a large number of concurrent services (hundreds, or perhaps thousands) while supporting an aggregate throughput that is comparable to a single-service computer.

1.2 Denali: supporting lightweight protection domains

The Denali project seeks to implement lightweight protection domains that allow many untrusted services to execute inside the network infrastructure. In particular, Denali’s protection domains must have the following properties:

• Strong isolation: arbitrary code executing in a protection domain is prevented from perturbing code executing in another domain, both in terms of security and performance.

• Scales to many

