UW-Madison CS 739 - The S/Net’s Linda Kernel
The S/Net's Linda Kernel

NICHOLAS CARRIERO and DAVID GELERNTER
Yale University

Linda is a parallel programming language that differs from other parallel languages in its simplicity and in its support for distributed data structures. The S/Net is a multicomputer, designed and built at AT&T Bell Laboratories, that is based on a fast, word-parallel bus interconnect. We describe the Linda-supporting communication kernel we have implemented on the S/Net. The implementation suggests that Linda's unusual shared-memory-like communication primitives can be made to run well in the absence of physically shared memory; the simplicity of the language and of our implementation's logical structure suggest that similar Linda implementations might readily be constructed on related architectures. We outline the language, and programming methodologies based on distributed data structures; we then describe the implementation, and the performance both of the Linda primitives themselves and of a simple S/Net-Linda matrix-multiplication program designed to exercise them.

Categories and Subject Descriptors: C.2.1 [Computer-Communication Networks]: Network Architecture and Design - network communications; C.2.4 [Computer-Communication Networks]: Distributed Systems - network operating systems; D.3.3 [Programming Languages]: Language Constructs - concurrent programming structures; D.4.4 [Operating Systems]: Communication Management - message sending

General Terms: Languages

Additional Key Words and Phrases: Parallel programming languages

1. INTRODUCTION

A parallel programming language is a language that supports process forking and interprocess communication (in one form or another) in addition to the normal computation and control operations that all programming languages need. Parallel languages are tools for parallel programming, and parallel programming in turn is useful in two ways.
In domains where logically concurrent algorithms are available (numerical problems, system simulation, and AI are three such domains), it is a technique for making programs run faster. On local area networks, it is a method for constructing integrated operating systems and distributed utilities.

Linda [13] consists of four simple operators that, when injected into a host language h, turn h into a parallel programming language. A Linda-based parallel language is in fact a new language, not an old one with added system calls, to the extent that the Linda compiler or preprocessor recognizes the Linda operations, checks and rewrites them on the basis of symbol table information, and can optimize the pattern of kernel calls that result based on its knowledge of constants and loops, among other things. (Most of our programming experiments so far have been conducted in C-Linda, but we have recently implemented a Fortran-Linda preprocessor as well, at the request of Yale's Numerical Analysis group.)

The S/Net [1] is a multicomputer that can also function as the backbone of a local area net. Each S/Net is a collection of not more than sixty-four memory-disjoint computer nodes communicating over a fast, word-parallel broadcast bus.

[Footnote: This work was supported by the National Science Foundation, Grant No. MCS-8303905. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1986 ACM 0734-2071/86/0500-0110 $00.75. ACM Transactions on Computer Systems, Vol. 4, No. 2, May 1986, Pages 110-129.]
We have implemented a Linda-supporting communication kernel on an S/Net at AT&T Bell Laboratories (where the machine was designed and built). This implementation is of interest, we will argue, for two reasons. It demonstrates, first, that Linda's unusually powerful and flexible communication primitives can be made to run well; the language's shared-memory-like semantics can in fact be supported efficiently in the absence of physically shared memory. Second, although Linda and the S/Net are particularly well matched, the simplicity of the language, of the implementation's design, and of the S/Net's logical structure suggests to us that Linda implementations like ours might readily be constructed on similar architectures elsewhere. Such implementations would promise, as ours does, to synthesize some of the advantages of shared-memory programming on the one hand and of unshared-memory network architectures on the other.

We have argued elsewhere that Linda's operators are in many cases substantially more powerful and expressive than comparable ones in other languages; that Linda is often cleaner, simpler, and easier to use; and that the distributed data structures Linda supports, and most other parallel languages forbid, are often the most natural complement to distributed algorithms. In Section 2 we outline the language and briefly rehash some of these arguments. We compare Linda in particular to the remote procedure call protocol, which is of special interest because it has become the most widely discussed interprocess communication technique, and has been implemented, tested, and used (Birrell and Nelson [3]). In Section 3 we describe our general strategy for implementing Linda on bussed networks, and in Section 4 we discuss the S/Net implementation specifically. Section 5 presents performance results, and Section 6, conclusions.

2. LINDA

Processes in Linda communicate through a globally shared collection of ordered tuples called tuple space or TS.
The four operators that Linda provides (1) add tuples to this shared collection, (2) remove tuples, ...
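The tuple-space model introduced above can be sketched in plain Python. Linda itself embeds in a host language such as C via a preprocessor; the class and method names below (TupleSpace, out, in_, rd) are hypothetical stand-ins chosen to mirror Linda's conventional operator names, and this toy simulation illustrates only the blocking, pattern-matching semantics of a shared tuple bag, not the paper's kernel.

```python
import threading

class TupleSpace:
    """Toy in-memory tuple space: a shared bag of ordered tuples."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        # Add a tuple to the shared collection (Linda-style out).
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        # None plays the role of a formal (wildcard) field;
        # every other field must match the actual value exactly.
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                p is None or p == a for p, a in zip(pattern, t)
            ):
                return t
        return None

    def in_(self, *pattern):
        # Remove and return a matching tuple, blocking until one exists
        # (Linda-style in; renamed because `in` is a Python keyword).
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

    def rd(self, *pattern):
        # Return a matching tuple without removing it (Linda-style rd).
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

ts = TupleSpace()
ts.out("count", 1)
assert ts.rd("count", None) == ("count", 1)   # read; tuple stays in TS
assert ts.in_("count", None) == ("count", 1)  # read and remove
```

Because any process can deposit or withdraw tuples by pattern rather than by addressing a specific peer, communication is anonymous and decoupled in time, which is what lets Linda programs build the distributed data structures the paper emphasizes.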