MTU CS 6461 - Technology Challenges for Virtual Overlay Networks


IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 31, NO. 4, JULY 2001

Technology Challenges for Virtual Overlay Networks

Kenneth P. Birman

Abstract—An emerging generation of mission-critical networked applications is placing demands on the Internet protocol suite that go well beyond the properties it was designed to guarantee. Although the "next generation internet" (NGI) is intended to respond to the need, when we review such applications in light of the expected functionality of the NGI, it becomes apparent that the NGI will be faster but not more robust. We propose a new kind of virtual overlay network (VON) that overcomes this deficiency and can be constructed using only simple extensions of existing network technology. In this paper, we use the restructured electric power grid to illustrate the issues and elaborate on the technical implications of our proposal.

Index Terms—Critical infrastructure, distributed computing, fault-tolerance, next generation internet (NGI), quality of service (QoS), virtual overlay networks (VONs).

Manuscript received July 2, 2000; revised March 27, 2001. This work was supported by DARPA/RADC under Grant F30602-99-1-0532 and ARO/EPRI under Contract WO8333-04. The views, opinions, and findings expressed herein are solely those of the author and do not reflect official positions of the funding agencies. This paper is revised and extended from a less technical treatment entitled "The Next Generation Internet: Unsafe at Any Speed?" which appeared in the August 2000 issue of IEEE Computer. The author is with the Department of Computer Science, Cornell University, Ithaca, NY 14853 USA (e-mail: [email protected]). Publisher Item Identifier S 1083-4427(01)05293-6.

I. INTRODUCTION

The basic premise of this paper is that mission-critical use of the Internet, for example, in support of the restructured electric power grid, emerging medical computing applications, or advanced avionics, will require functionality lacking in the "next generation internet" (NGI). All of these are examples of emerging distributed computing systems being displaced onto network architectures constructed from the same hardware and software components and running the same protocols employed in Internet settings. Each can point to earlier successes with networked technologies, but those successes depended upon special hardware, highly specialized architectures, and dedicated protocols. The challenge is to repeat and surpass these accomplishments with standard components.

Today, faced with what can only be called a revolution in networking connectivity and productivity, it has become imperative to work with off-the-shelf commercial products. Not only are older and less standard approaches unacceptably expensive, but in selecting them a designer denies himself or herself the best available technology: tools for building web interfaces, application builders such as one finds on PCs, management and monitoring infrastructure, plug-and-play connectivity with thousands of powerful software products, and access to other commodity components that offer exciting functionality and economies of scale. This trend, however, creates a daunting challenge for the designer of a mission-critical system, who must demonstrate the safety, reliability, or stability of the application in order to convince the end user that it is safe and appropriate to deploy the new solution.

Traditionally, critical networked applications have exploited physical or logical separation to justify a style of reasoning in which each application is developed independently. For example, a medical computing system might be divided into a medical monitoring network, a medical database and records-keeping system, a billing and paperwork system, a medical library and pharmacy system, and so forth. In hospitals built during the 1980s, each of these subsystems might well have had its own dedicated infrastructure: a real-time network for the monitoring system, a more conventional one for the clinical database system, and so forth. This approach simplified the task confronting the designers because each subproblem was smaller and any interactions between subsystems occurred through well-defined interfaces.

When migrating such systems to a more standard shared network infrastructure, supported by Internet routers and protocols, applications are forced to compete for network bandwidth and switching resources in accordance with the end-to-end philosophy that governs the Internet. Protocols such as TCP are designed to be greedy, aggressively seeking the largest possible share of resources, then backing off when packet loss in the Internet signals that a saturation point has been reached. Since other applications are generally layered over TCP, they are subjected to this behavior.

TCP is a reasonable data transfer protocol for file transfer, email, and even web pages—at least once the web user becomes accustomed to the idiosyncrasies of the web. However, the unpredictable performance and extended delays that the protocol can experience are at odds with any type of "guarantee" that the application might require. Moreover, this behavior of TCP is a consequence of the connectionless, packet-oriented philosophy of the Internet. Thus, to the extent that an application implicitly depends upon isolation or other network "guarantees" for correctness, migration to a shared network—even one disconnected from the public Internet but running standard Internet protocols—has the potential to compromise safety.

Recognizing this problem, a series of reports and studies have suggested that there is a crisis in the software industry [1]. A means for supporting and validating NGI applications is urgently needed [2]. Moreover, the lack of isolation presents serious security concerns [3].

Application designers depend upon isolation to rule out unanticipated interference. The interpretations of "isolation" and "interference," however, vary among applications. For example, some critical applications will require security from intrusion, a property offered by virtual private networks (VPNs). We know how to build VPNs on the Internet, and the NGI will offer even stronger security because of the expected widespread deployment of public key infrastructures (PKI) and the use of security techniques to protect the Internet routing and naming protocols. If this is all that an application requires, it seems completely reasonable to talk about ...
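Editor's note: the greedy probe-and-back-off behavior the introduction attributes to TCP is, in modern terms, additive-increase/multiplicative-decrease (AIMD) congestion control. The minimal Python sketch below is not from the paper; the function name aimd_window and its parameters are illustrative assumptions. It shows why such a flow settles into a sawtooth rather than a steady rate, which is exactly the unpredictability the paper argues is at odds with application-level guarantees.

# Sketch of AIMD, the rule behind the "greedy" TCP behavior described
# above. One iteration corresponds to one round-trip time (RTT).

def aimd_window(loss_events, initial_cwnd=1.0, increase=1.0, decrease=0.5):
    """Trace the congestion window across a sequence of RTTs.

    loss_events: iterable of booleans, True where packet loss occurred.
    Returns the window size observed after each RTT.
    """
    cwnd = initial_cwnd
    trace = []
    for lost in loss_events:
        if lost:
            # Multiplicative decrease: back off sharply when loss
            # signals that the network has reached saturation.
            cwnd = max(1.0, cwnd * decrease)
        else:
            # Additive increase: keep probing for more bandwidth.
            cwnd += increase
        trace.append(cwnd)
    return trace

if __name__ == "__main__":
    # A flow that grows until losses at RTTs 8 and 16, halves its
    # window each time, then probes upward again.
    losses = [i in (8, 16) for i in range(24)]
    print(aimd_window(losses))

Running the sketch shows the window climbing until each loss event, halving, and climbing again. Any application layered over such a flow inherits that variability, which is why the paper treats TCP's behavior as incompatible with the isolation-style guarantees mission-critical systems assume.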

