Duke CPS 296.2 - Applying Parallel Computation Algorithm

Applying Parallel Computation Algorithms in the Design of Serial Algorithms

NIMROD MEGIDDO
Tel Aviv University, Tel Aviv, Israel

Abstract. The goal of this paper is to point out that analyses of parallelism in computational problems have practical implications even when multiprocessor machines are not available. This is true because, in many cases, a good parallel algorithm for one problem may turn out to be useful for designing an efficient serial algorithm for another problem. A unified framework for cases like this is presented. Particular cases, which are discussed in this paper, provide motivation for examining parallelism in sorting, selection, minimum-spanning-tree, shortest-route, max-flow, and matrix multiplication problems, as well as in scheduling and locational problems.

Categories and Subject Descriptors: F.1.1 [Computation by Abstract Devices]: Models of Computation--relations among models; F.1.2 [Computation by Abstract Devices]: Modes of Computation--parallelism; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems--computations on discrete structures; geometrical problems and computations; routing and layout; sequencing and scheduling; sorting and searching; G.2.1 [Discrete Mathematics]: Combinatorics--combinatorial algorithms; G.2.2 [Discrete Mathematics]: Graph Theory--network problems; path and circuit problems; trees

General Terms: Algorithms, Theory

Additional Key Words and Phrases: Parallel algorithms, parametric computation, spanning tree, scheduling, min-ratio cycle, algorithms on trees, max-flow-related problems

Parts of this work were done while the author was Visiting Scientist at the National Research Institute for Mathematical Sciences, CSIR, Pretoria, South Africa, and at Northwestern and Carnegie-Mellon Universities. This work was partially supported by the National Science Foundation under grants ECS7909724, ECS8121741, and SOC7905900, and by a grant from Control Data. Author's present address: Department of Computer Science, Stanford University, Stanford, CA 94305.

© 1983 ACM. Journal of the Association for Computing Machinery, Vol. 30, No. 4, October 1983, pp. 852-865.

1. Introduction

Numerous models have been proposed for parallel computation, and it is not clear yet which of them will eventually be supported by technology. Thus some of the models may at present be criticized as being unrealistic, since issues like synchronization and memory conflicts still have to be settled. In this paper, however, we provide a new motivation for studying such models. An early model of parallel computation, considered by Valiant [37], now seems unrealistic since it counts only comparisons as time-consuming operations and also assumes synchronization. Valiant does obtain very nice theoretical results with respect to sorting, merging, and maximum finding in parallel. On the other hand, in this paper we show that the search for good parallel algorithms of Valiant's type is justified on practical grounds regardless of the state of technology for parallel computation.
This is because, in many cases, one may desire to have a good parallel algorithm for one problem in order to design a good serial algorithm for another problem. In this paper we describe a fairly large class of problems in which this phenomenon occurs.

The general idea is as follows. Suppose that a certain problem A is solved in T time units on a P-processor machine. In a serial implementation this amounts to TP time. Within the scope of serial computation we only wish to minimize the product TP. The common examination of parallelism in problem A amounts, essentially, to minimizing T, subject to keeping P at a reasonably low level. Now, suppose that an algorithm for problem A could somehow be applied to designing an algorithm for another problem B, so that a factor like T log P dominates the (serial) time bound for problem B, provided P is not too large. This would certainly justify an examination of parallelism in problem A, since minimizing T log P, subject to P's not being too large, is very close to minimizing T. In a study of parallelism, P in many cases depends on the input length n, so that n^ε ≤ P ≤ n^k for some ε > 0 and an integer k. In other words, we have log P ~ log n in many cases.

In this paper we demonstrate situations like the one mentioned above. We start here with an abstract example. A more concrete example is considered in the next section. Suppose that F(λ) is a monotone function of the real variable λ and problem A is to evaluate F at a given λ. Suppose that problem B is to solve the equation F(λ) = 0. Assume that the variable λ is involved throughout the evaluation of F only in additions, comparisons, and multiplications by constants. Then, as we demonstrate in the subsequent sections, in order to design a good serial algorithm for solving the equation F(λ) = 0, it is desirable to have a good parallel algorithm for evaluating F. Particular cases of the general problem F(λ) = 0, which are discussed later, are minimax or maximin problems and ratio optimization problems. Another important type of problem is finding the maximum or minimum value of a parameter λ for which a certain property holds. We discuss examples of this kind as well.

In the next section we describe a relatively simple example which is aimed at explaining the general idea rather than presenting an optimal algorithm. Yet the subsequent sections present cases in which the method does improve upon existing algorithms and also motivates a more thorough study of parallelism. We claim that almost every combinatorial problem in which numbers are involved should be examined from the parallelism point of view, since this would probably lead to good algorithms for related problems. The idea suggested in [23] has been used for solving problems taken from different
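As a minimal illustration of this abstract setup (it is not an example taken from the paper itself), the Python sketch below keeps every quantity that depends on the unknown root as an affine form a + b*lam, since lam enters only through additions, comparisons, and multiplications by constants. Each comparison that the evaluation of F would make is then settled by one sign test of the monotone F at the crossover point, which also shrinks an interval known to contain the root. The particular choice of F, the bracketing interval, and the names solve_root and terms are illustrative assumptions.

from fractions import Fraction

def solve_root(terms):
    """Find lam_star with F(lam_star) = 0, where
       F(lam) = sum(max(a + b*lam, 0) for (a, b) in terms) - 1
    and every b < 0, so F is continuous and nonincreasing in lam."""

    def F(lam):
        return sum(max(a + b * lam, Fraction(0)) for a, b in terms) - 1

    lo, hi = Fraction(-10**6), Fraction(10**6)   # assumed to bracket the root
    active = []                                  # terms known to be positive at the root

    # Each term max(a + b*lam, 0) hides one comparison "a + b*lam ? 0" whose
    # outcome at the unknown root is settled by one sign test of F at the
    # crossover point lam0 = -a/b, which also shrinks the interval [lo, hi].
    for a, b in terms:
        lam0 = Fraction(-a, b)
        if not (lo < lam0 < hi):
            # the comparison is already decided everywhere on [lo, hi]
            mid = (lo + hi) / 2
            if a + b * mid > 0:
                active.append((a, b))
        elif F(lam0) >= 0:
            lo = lam0                            # root >= lam0, so a + b*root <= 0 there
        else:
            hi = lam0                            # root < lam0, so a + b*root > 0 there
            active.append((a, b))

    # On [lo, hi] all comparisons are resolved, so F coincides with a single
    # affine form A + B*lam; its root is the answer.
    A = sum(a for a, _ in active) - 1
    B = sum(b for _, b in active)
    return Fraction(-A, B)

if __name__ == "__main__":
    terms = [(Fraction(3), Fraction(-1)),
             (Fraction(5), Fraction(-2)),
             (Fraction(2), Fraction(-1))]
    lam_star = solve_root(terms)
    print("lam* =", lam_star)                    # 7/3 for this instance
    print("F(lam*) =", sum(max(a + b * lam_star, Fraction(0)) for a, b in terms) - 1)

In this sketch the sign tests of F play the role of the oracle calls; roughly speaking, the point of the framework described in the paper is that a parallel algorithm for evaluating F lets the comparisons of one parallel step be gathered and resolved together with only about log P such calls, which is where a factor like T log P in the serial bound for problem B comes from.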

