Data Logistics in Network Computing: The Logistical Session Layer

D. Martin Swany and Rich Wolski
Computer Science Department
University of California, Santa Barbara
swany,rich@cs.ucsb.edu

Abstract

In this paper we present a strategy for optimizing end-to-end TCP/IP throughput over long-haul networks (i.e., those where the product of the bandwidth and the delay is high). Our approach defines a Logistical Session Layer (LSL) that uses intermediate process-level "depots" along the network route from source to sink to implement an end-to-end communication session. Despite the additional processing overhead resulting from TCP/IP protocol stack and Unix kernel boundary traversals at each depot, our experiments show that dramatic end-to-end bandwidth improvements are possible. We also describe the prototype LSL implementation used to generate these results, which requires neither Unix kernel modification nor root access privilege, and discuss its utility in the context of extant TCP/IP tuning methodologies.

1 Introduction

The need for flexible and high-performance access to distributed resources has driven the development of networking since its inception. With the maturing of "The Internet," this community continues to increase its demands for network performance to support a raft of emerging applications, including distributed collaboratoria, full-motion video, and Computational Grid programs.

Traditional models of high-performance computing are evolving hand-in-hand with advanced networking [13]. While distributed computation control and network resource control [14] techniques are currently being developed, we have been studying the use of time-limited, dynamically allocated network buffers [28] as a way of provisioning the communication medium. We term this form of networking Logistical Networking [7] to emphasize the higher-level control of buffer resources it entails.

In this paper, we present a novel approach to optimizing end-to-end TCP/IP performance using Logistical Networking. Our methodology inserts application-level TCP/IP "depots" along the route from source to destination and, despite having to doubly traverse a full TCP/IP protocol stack at each depot, improves bandwidth performance. In addition, we have implemented the communication abstractions necessary to manage each communication without kernel modifications as a set of session-layer semantics over the standard byte-stream semantics supported by TCP/IP sockets. As a result, we term the abstractions we have implemented the Logistical Session Layer (LSL).

LSL improves end-to-end network performance by breaking long-haul TCP/IP connections into shorter TCP segments between depots stationed along the route. Staging data at the session layer in a sequence of depots increases the overhead associated with end-to-end communication: data emanating from the source must be processed twice (ingress and egress) at each depot, increasing the overall protocol processing cost. In this paper, we show that this performance penalty is dramatically overshadowed by the performance improvement that comes from moving TCP end-points closer together. It is counter-intuitive that adding the processor overhead incurred by traversing the protocol stack on an additional machine could actually improve performance. Indeed, for some time the networking community has focused on TCP/IP overhead [9, 21] and examined ways to mitigate it [22, 32, 37].
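The depot mechanism can be pictured with a small sketch. The following C program is a hedged illustration only: a single-connection relay that accepts one upstream TCP connection and copies every byte onto a separate TCP connection to the next hop. The fixed listening port, the command-line next-hop arguments, the buffer size, and the single-connection structure are assumptions made for this example, not details of the LSL prototype.

    /* Hedged sketch of an application-level "depot": it accepts one
     * incoming TCP connection and relays every byte to the next hop.
     * The port number, buffer size, and single-connection structure are
     * assumptions for illustration; the paper's prototype is more general. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define DEPOT_PORT 5000          /* port this depot listens on (assumed)    */
    #define RELAY_BUF  (64 * 1024)   /* staging buffer per read/write (assumed) */

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <next-hop-ip> <next-hop-port>\n", argv[0]);
            return 1;
        }

        /* Ingress side: listen for the upstream source (or previous depot). */
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in in_addr;
        memset(&in_addr, 0, sizeof(in_addr));
        in_addr.sin_family = AF_INET;
        in_addr.sin_addr.s_addr = htonl(INADDR_ANY);
        in_addr.sin_port = htons(DEPOT_PORT);
        if (bind(lfd, (struct sockaddr *)&in_addr, sizeof(in_addr)) < 0 ||
            listen(lfd, 1) < 0) {
            perror("listen");
            return 1;
        }
        int ingress = accept(lfd, NULL, NULL);

        /* Egress side: a separate, shorter TCP connection to the next hop,
         * so each hop's congestion control sees only its own RTT. */
        int egress = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in out_addr;
        memset(&out_addr, 0, sizeof(out_addr));
        out_addr.sin_family = AF_INET;
        out_addr.sin_port = htons((unsigned short)atoi(argv[2]));
        inet_pton(AF_INET, argv[1], &out_addr.sin_addr);
        if (connect(egress, (struct sockaddr *)&out_addr, sizeof(out_addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* Relay loop: every byte crosses the user/kernel boundary twice here,
         * once on the ingress read and once on the egress write. */
        char buf[RELAY_BUF];
        ssize_t n;
        while ((n = read(ingress, buf, sizeof(buf))) > 0) {
            ssize_t off = 0;
            while (off < n) {
                ssize_t w = write(egress, buf + off, (size_t)(n - off));
                if (w <= 0) { perror("write"); goto done; }
                off += w;
            }
        }
    done:
        close(ingress);
        close(egress);
        close(lfd);
        return 0;
    }

Each pass through the relay loop is exactly the per-depot cost described above: an extra kernel entry and exit for every buffer of data, which the measurements in this paper weigh against the benefit of shorter TCP segments.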
Introducing additional protocol processing runs against the current optimization trends in high-performance wide-area networking and computing. However, despite the additional processing overhead that comes from moving the data in and out of the kernel at each depot (including checksumming costs), moving TCP end-points closer together can improve end-to-end performance.

We present this work in the context of recent networking trends that focus on state management in the network fabric itself. While the Internet Protocol suite (as typically implemented) mandates that communication state be managed at the end-points [34], new "stateful" facilities [8, 26] which relax this restriction have been proposed. In this vein, we believe that there are several reasons that intermediate TCP processing helps, rather than hurts, end-to-end bandwidth performance. First, since the round-trip time (RTT) between any two depots is shorter than the end-to-end RTT, LSL allows the inherent TCP congestion-control mechanism to sense the maximally available throughput more quickly. That is, even though the sum of the RTTs between depots may be longer than the end-to-end RTT, because the maximum RTT between any two depots is shorter, the congestion-control mechanisms adapt more rapidly. Secondly, a retransmission that results from a lost packet need not originate at the source; rather, it can be generated from the last depot to forward the data. Finally, recent advances in the processing speed, memory bandwidth, and I/O performance of commonly available processors have lowered protocol processing and data movement costs relative to available network performance. We describe the confluence of these effects more completely in Section 3.

In Section 2, we describe the architecture of a prototype application-layer LSL implementation that we have developed. The advantage of providing a session-layer interface is that applications do not need to employ their own customized buffer management strategies in order to use Logistical Networking to enhance end-to-end network performance. As such, our work not only provides a general methodology for improving deliverable network performance, but also constitutes an important early example of a Grid-enabling network abstraction. At the same time, since our implementation does not require kernel modification, it is portable and easy to deploy.

Finally, in Section 4 we detail the effect of using intermediate TCP depots and LSL on end-to-end bandwidth, independent of end-point buffer settings, both with and without RFC 1323 [20] window scaling. Our results show that, using LSL, an application can gain a substantial end-to-end increase in bandwidth over standard TCP/IP sockets, even if the socket connections have been "tuned" for performance.

2 Architecture

The Logistical Session Layer (LSL) is a "session" layer (layer 5) in terms of the OSI protocol model. The session layer lies above the Transport layer (TCP, in the Internet Protocol
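To make the session-layer idea concrete, the following declarations sketch what such an interface over standard TCP sockets might look like. Every name here (lsl_hop, lsl_session, lsl_open, lsl_send, lsl_recv, lsl_close) and the notion of passing an explicit depot list are assumptions made for this illustration; they are not taken from the LSL prototype's actual API.

    /* Hedged sketch of a session-layer interface over TCP sockets.  All
     * names are hypothetical: they illustrate the kind of byte-stream
     * abstraction a session layer such as LSL could expose to applications,
     * not the actual API of the prototype described in this paper. */
    #include <stddef.h>
    #include <sys/types.h>

    /* One hop on the route: an intermediate depot, or the final sink. */
    struct lsl_hop {
        const char    *host;   /* depot or destination hostname */
        unsigned short port;   /* TCP port the depot listens on  */
    };

    /* Opaque session handle.  Internally it would wrap the TCP socket to
     * the first hop; each depot relays the byte stream to the next hop. */
    typedef struct lsl_session lsl_session;

    /* Open an end-to-end session routed through the given hops; the last
     * entry is the ultimate destination.  Returns NULL on failure. */
    lsl_session *lsl_open(const struct lsl_hop *hops, size_t nhops);

    /* Byte-stream send and receive, with the same semantics an application
     * expects from write() and read() on a connected TCP socket. */
    ssize_t lsl_send(lsl_session *s, const void *buf, size_t len);
    ssize_t lsl_recv(lsl_session *s, void *buf, size_t len);

    /* Tear down the session and every per-hop connection behind it. */
    int lsl_close(lsl_session *s);

Under this sketch, an application builds an ordered list of hops, calls lsl_open once, and then uses lsl_send and lsl_recv exactly as it would write and read on a connected socket, leaving per-depot buffer staging to the layer below.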

