Survey of State-of-the-art in Inter-VM Communication Mechanisms

Jian Wang

September 27, 2009

Abstract

Advances in virtualization technology have focused mainly on strengthening the isolation barrier between virtual machines (VMs) that are co-resident within a single physical machine. At the same time, a large category of communication-intensive distributed applications and software components exists, such as web services, high performance grid applications, transaction processing, and graphics rendering, that often need to communicate across this isolation barrier with endpoints on co-resident VMs. This report presents a survey of the state-of-the-art research that aims to improve communication between applications on co-located virtual machines. These efforts can be classified under two broad categories: (a) shared-memory approaches that bypass the traditional network communication datapath to improve both the latency and throughput of communication, and (b) improvements to CPU scheduling algorithms at the hypervisor level that address the latency requirements of inter-VM communication. We describe the state-of-the-art approaches in these two categories, compare and contrast their benefits and drawbacks, and outline open research problems in this area.

1 Introduction

Virtual machines (VMs) are rapidly finding their way into data centers, enterprise service platforms, high performance computing (HPC) clusters, and even end-user desktop environments. The primary attraction of VMs is their ability to provide functional and performance isolation across applications and services that share a common hardware platform.
VMs improve system-wide utilization efficiency, enable live migration for load balancing, and lower the overall operational cost of the system.

The hypervisor (also called the virtual machine monitor) is the software entity that enforces isolation across VMs residing within a single physical machine, often in coordination with hardware assists and other trusted software components. For instance, the Xen hypervisor runs at the highest system privilege level and coordinates with a trusted VM called Domain 0 (Dom0) to enforce isolation among unprivileged guest VMs.

Enforcing isolation is an important requirement from the viewpoint of the security of individual software components. At the same time, enforcing isolation can result in significant communication overheads when different software components need to communicate across this barrier to achieve their application objectives. For example, a distributed HPC application may have two processes running in different VMs that need to exchange messages over MPI libraries. Similarly, a web service running in one VM may need to communicate with a database server running in another VM to satisfy a client transaction request. Or a graphics rendering application in one VM may need to communicate with a display engine in another VM. Even routine inter-VM communication, such as file transfers or heartbeat messages, may need to cross this isolation barrier frequently. Different applications have different requirements for communication throughput and latency, depending on their objectives. For example, file transfer and graphics rendering applications tend to require high throughput, while communication latency is more critical for web services and MPI-based applications.
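The throughput/latency distinction drawn above can be made concrete with a small experiment. The sketch below is my own illustration, not part of the survey: it uses a loopback TCP echo server between two local threads as a stand-in for the cross-VM socket path, timing many one-byte round trips (the pattern an RPC or MPI message sees) against one bulk transfer (the pattern a file copy sees). The port number and message sizes are arbitrary choices.

```python
# Illustrative sketch (assumed setup, not from the survey): round-trip latency
# vs. bulk throughput over a loopback TCP socket between two local threads.
import socket
import threading
import time

PORT = 47810  # arbitrary port chosen for this sketch

def echo_server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)  # echo everything back
    srv.close()

ready = threading.Event()
threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
ready.wait()
cli = socket.create_connection(("127.0.0.1", PORT))

# Latency: average over many 1-byte round trips.
N = 200
t0 = time.perf_counter()
for _ in range(N):
    cli.sendall(b"x")
    cli.recv(1)
rtt_us = (time.perf_counter() - t0) / N * 1e6

# Throughput: one large echoed transfer in 32 KiB chunks.
chunk = b"y" * 32768
t0 = time.perf_counter()
sent = 0
for _ in range(64):
    cli.sendall(chunk)
    got = 0
    while got < len(chunk):
        got += len(cli.recv(65536))
    sent += len(chunk)
mb_s = sent / (time.perf_counter() - t0) / 1e6
cli.close()

print(f"avg round-trip: {rtt_us:.0f} us, echo throughput: {mb_s:.0f} MB/s")
```

A latency-bound workload is dominated by the per-message round-trip cost, while a throughput-bound workload is dominated by the per-byte cost; the two numbers above move independently, which is why the survey treats them as separate optimization targets.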
In all the above examples, when the VM endpoints reside on the same physical machine, we would ideally like to minimize communication latency and maximize bandwidth without having to rewrite existing applications or communication libraries.

This report surveys the state-of-the-art research in improving communication performance between co-located virtual machines. The major obstacles to efficient inter-VM communication are as follows.

• Long communication data path: A major source of inter-VM communication overhead is the long data path between co-located VMs. For example, the Xen platform enables applications to transparently communicate across VM boundaries using standard TCP/IP sockets. However, all network traffic from the sender VM to the receiver VM is redirected via Dom0, resulting in a significant performance penalty: packet transmission and reception involve traversal of the TCP/IP network stack and invocation of multiple Xen hypercalls. To overcome this bottleneck, a number of prior works, such as XenSocket [12], IVC [2], XWay [4], MMNet [9], and XenLoop [11], have exploited the inter-domain shared memory facility provided by the Xen hypervisor. Using shared memory for direct packet exchange is much more efficient than traversing the network communication path via Dom0. Each of these approaches offers a different tradeoff between communication performance and application transparency.

• Lack of communication awareness in the CPU scheduler: The CPU scheduler in the hypervisor also has a major influence on the latency of communication between co-located VMs. If the scheduler is unaware of the communication requirements of co-located VMs, it can make suboptimal scheduling decisions that increase inter-VM communication latency. For example, the Xen hypervisor currently ships with two schedulers: the simple earliest deadline first (SEDF) scheduler and the Credit scheduler.
The SEDF scheduler has each VM specify a required time slice within a certain period; a (slice, period) pair represents how much CPU time a domain is guaranteed per period. SEDF preferentially schedules the domain with the earliest deadline, but it requires finely tuned parameters to meet VMs' performance requirements. The Credit scheduler, on the other hand, is a proportional-share scheduler with a load-balancing feature for SMP systems. It is simple but provides reasonable fairness and performance guarantees for CPU-intensive guests. Both the Credit and SEDF schedulers focus on fairly sharing processor resources among all domains, and they perform well for compute-intensive domains. However, neither scheduler provides good performance for I/O-intensive VMs or VMs with extensive inter-VM interactions, because both are agnostic to inter-VM communication requirements.

A number of prior works have attempted to partly address this problem. For example, [1] and [3] have exploited the I/O statistics of the guest operating system to obtain greater knowledge of the VM's internal behavior and to guide
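The two pick-next policies described above can be reduced to a toy sketch. This is my own illustration, not Xen's implementation: the domain names, (slice, period) pairs, deadlines, and credit balances are invented, and the Credit policy is simplified to "favor the domain with the most credits remaining."

```python
# Toy sketch (hypothetical values, not Xen code) of the two scheduling policies.

# SEDF-style: each domain is guaranteed `slice` units of CPU per `period`;
# the scheduler picks the runnable domain with the earliest deadline.
sedf_doms = {
    "dom1": {"slice": 10, "period": 100, "deadline": 40},
    "dom2": {"slice": 20, "period": 100, "deadline": 25},
}
sedf_pick = min(sedf_doms, key=lambda d: sedf_doms[d]["deadline"])

# Credit-style (simplified): proportional share; domains burn credits while
# running, and the scheduler favors whoever has the most credits left.
credit_doms = {"dom1": 300, "dom2": 120}
credit_pick = max(credit_doms, key=credit_doms.get)

print(f"SEDF picks {sedf_pick}, Credit picks {credit_pick}")
```

Note what is missing from both decisions: neither consults whether a domain is blocked waiting on a message from a co-resident VM, which is exactly the communication-agnosticism the text identifies as the source of added inter-VM latency.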
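The benefit of the shared-memory bypass surveyed in the first bullet can be approximated in user space. The sketch below is an analogy under stated assumptions, not Xen code: POSIX shared memory (via Python's multiprocessing.shared_memory) stands in for grant-table pages, and an extra staging buffer stands in for the copy through Dom0's network path; payload size and round count are arbitrary.

```python
# Illustrative analogy (not Xen's grant-table mechanism): a payload relayed
# through an intermediary buffer (two copies, like the Dom0-bridged network
# path) vs. written once into a region both endpoints map (one copy, like the
# shared-memory channels of XenSocket, IVC, XWay, MMNet, and XenLoop).
from multiprocessing import shared_memory
import time

PAYLOAD = b"p" * (1 << 20)  # 1 MiB "packet"
ROUNDS = 200

# Path 1: sender -> relay buffer -> receiver (two copies).
relay = bytearray(len(PAYLOAD))
dst = bytearray(len(PAYLOAD))
t0 = time.perf_counter()
for _ in range(ROUNDS):
    relay[:] = PAYLOAD  # copy into the intermediary
    dst[:] = relay      # copy out to the receiver
two_copy = time.perf_counter() - t0

# Path 2: sender writes directly into a region the receiver also maps.
shm = shared_memory.SharedMemory(create=True, size=len(PAYLOAD))
t0 = time.perf_counter()
for _ in range(ROUNDS):
    shm.buf[:] = PAYLOAD  # single copy into the shared region
one_copy = time.perf_counter() - t0
shm.close()
shm.unlink()

print(f"two-copy relay: {two_copy:.3f}s, shared-memory write: {one_copy:.3f}s")
```

The real systems differ in how much of this they hide from the application (socket-compatible vs. custom APIs), which is the performance/transparency tradeoff the survey compares.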