IEEE TRANSACTIONS ON COMPUTERS, VOL. C-36, NO. 1, JANUARY 1987

Static Scheduling of Synchronous Data Flow Programs for Digital Signal Processing

EDWARD ASHFORD LEE, MEMBER, IEEE, AND DAVID G. MESSERSCHMITT, FELLOW, IEEE

Manuscript received August 15, 1985; revised March 17, 1986. This work was supported in part by the National Science Foundation under Grant ECS-8211071, an IBM Fellowship, and a grant from the Shell Development Corporation. The authors are with the Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720. IEEE Log Number 8611442.

Abstract—Large grain data flow (LGDF) programming is natural and convenient for describing digital signal processing (DSP) systems, but its runtime overhead is costly in real-time or cost-sensitive applications. In some situations, designers are not willing to squander computing resources for the sake of programmer convenience. This is particularly true when the target machine is a programmable DSP chip. However, the runtime overhead inherent in most LGDF implementations is not required for most signal processing systems because such systems are mostly synchronous (in the DSP sense). Synchronous data flow (SDF) differs from traditional data flow in that the amount of data produced and consumed by a data flow node is specified a priori for each input and output. This is equivalent to specifying the relative sample rates in signal processing systems. This means that the scheduling of SDF nodes need not be done at runtime, but can be done at compile time (statically), so the runtime overhead evaporates. The sample rates can all be different, which is not true of most current data-driven digital signal processing programming methodologies. Synchronous data flow is closely related to computation graphs, a special case of Petri nets. This self-contained paper develops the theory necessary to statically schedule SDF programs on single or multiple processors. A class of static (compile time) scheduling algorithms is proven valid, and specific algorithms are given for scheduling SDF systems onto single or multiple processors.

Index Terms—Block diagram, computation graphs, data flow, digital signal processing, hard real-time systems, multiprocessing, Petri nets, static scheduling, synchronous data flow.

I. INTRODUCTION

To achieve high performance in a processor specialized for signal processing, the need to depart from the simplicity of von Neumann computer architectures is axiomatic. Yet, in the software realm, deviations from von Neumann programming are often viewed with suspicion. For example, in the design of most successful commercial signal processors today [1]-[5], compromises are made to preserve sequential programming. Two notable exceptions are the Bell Labs DSP family [6], [7] and the NEC data flow chip [8], both of which are programmed with concurrency in mind. For the majority, however, preserving von Neumann programming style is given priority.

This practice has a long and distinguished history. Often, a new non-von Neumann architecture has elaborate hardware and software techniques enabling a programmer to write sequential code irrespective of the parallel nature of the underlying hardware. For example, in machines with multiple function units, such as the CDC 6600 and Cray family, so-called "scoreboarding" hardware resolves conflicts to ensure the integrity of sequential code.
In deeply pipelined machines such as the IBM 360 Model 91, interlocking mechanisms [9] resolve pipeline conflicts. In the M.I.T. Lincoln Labs signal processor [10], specialized associative memories are used to ensure the integrity of data precedences.

The affinity for von Neumann programming is not at all surprising, stemming from familiarity and a proven track record, but the cost is high in the design of specialized digital signal processors. Comparing two pipelined chips that differ radically only in programming methodology, the TI TMS32010 [2] and the Bell Labs DSP20, a faster version of the DSP1 [6], we find that they achieve exactly the same performance on the most basic benchmark, the FIR (finite impulse response) filter. But the Bell Labs chip outperforms the TI chip on the next most basic benchmark, the IIR (infinite impulse response) filter. Surprisingly, close examination reveals that the arithmetic hardware (multiplier and ALU) of the Bell Labs chip is half as fast as that of the TI chip. The performance gain appears to follow from the departure from conventional sequential programming. However, programming the Bell Labs chip is not easy. The code more closely resembles horizontal microcode than assembly language. Programmers invariably adhere to the quaint custom of programming these processors in assembler-level languages, for maximum use of hardware resources. Satisfactory compilers have failed to appear.

In this paper, we propose programming signal processors using a technique based on large grain data flow (LGDF) languages [11], which should ease the programming task by enhancing the modularity of code and permitting algorithms to be described more naturally. In addition, concurrency is immediately evident in the program description, so parallel hardware resources can be used more effectively. We begin by reviewing the data flow paradigm and its relationship with previous methods applied to signal processing. Synchronous data flow (SDF) is then introduced, and its suitability for describing signal processing systems is explained. The advantage of SDF over conventional data flow is that more efficient runtime code can be generated, because the data flow nodes can be scheduled at compile time rather than at runtime. A class of algorithms for constructing sequential (single-processor) schedules is proven valid, and a simple heuristic for constructing parallel (multiprocessor) schedules is described. Finally, the limitations of the model are considered.

II. THE DATA FLOW PARADIGM

In data flow, a program is divided into pieces (nodes or blocks) which can execute (fire) whenever enough input data are available.
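The preview ends here, but the abstract's central claim, that fixing the number of tokens each node produces and consumes per firing allows the firing order to be computed at compile time, can be illustrated concretely. Below is a minimal sketch in Python; it is not the paper's own algorithm (the paper's procedures appear in later sections not shown in this preview). It solves the rate-matching equations implied by the fixed production and consumption amounts to obtain the smallest integer firing counts per schedule period, then builds one period of a single-processor schedule by simulating buffer counts. The Edge type, the function names repetitions and sdf_schedule, and the two-node example graph are illustrative assumptions, not taken from the paper.

```python
# Sketch: compile-time scheduling of a synchronous data flow (SDF) graph.
# Each edge carries a fixed number of tokens produced per firing of its
# source and consumed per firing of its sink, so relative firing rates can
# be solved statically and a periodic schedule built before execution.
# All names and the example graph are illustrative, not from the paper.

from fractions import Fraction
from math import lcm
from collections import namedtuple

# produce: tokens written to the edge per firing of `src`
# consume: tokens read from the edge per firing of `dst`
Edge = namedtuple("Edge", "src dst produce consume")

def repetitions(nodes, edges):
    """Solve rate[src]*produce == rate[dst]*consume for the smallest
    positive integer firing counts (assumes a connected, consistent graph)."""
    rate = {nodes[0]: Fraction(1)}
    changed = True
    while changed:                      # propagate rates over the graph
        changed = False
        for e in edges:
            if e.src in rate and e.dst not in rate:
                rate[e.dst] = rate[e.src] * e.produce / e.consume
                changed = True
            elif e.dst in rate and e.src not in rate:
                rate[e.src] = rate[e.dst] * e.consume / e.produce
                changed = True
    for e in edges:                     # inconsistent rates => no bounded schedule
        if rate[e.src] * e.produce != rate[e.dst] * e.consume:
            raise ValueError("sample-rate inconsistent SDF graph")
    scale = lcm(*(r.denominator for r in rate.values()))
    return {n: int(r * scale) for n, r in rate.items()}

def sdf_schedule(nodes, edges):
    """Build one period of a static schedule by repeatedly firing any node
    that has enough input tokens and has not yet used up its firing count."""
    remaining = repetitions(nodes, edges)
    tokens = {e: 0 for e in edges}      # initial buffers assumed empty
    schedule = []
    while any(remaining.values()):
        fired = False
        for n in nodes:
            runnable = remaining[n] > 0 and all(
                tokens[e] >= e.consume for e in edges if e.dst == n)
            if runnable:
                for e in edges:
                    if e.dst == n:
                        tokens[e] -= e.consume
                    if e.src == n:
                        tokens[e] += e.produce
                remaining[n] -= 1
                schedule.append(n)
                fired = True
        if not fired:
            raise ValueError("deadlock: no node can fire")
    return schedule

# Example: A produces 2 tokens per firing, B consumes 3, so A must fire
# 3 times for every 2 firings of B within one schedule period.
nodes = ["A", "B"]
edges = [Edge("A", "B", 2, 3)]
print(repetitions(nodes, edges))   # {'A': 3, 'B': 2}
print(sdf_schedule(nodes, edges))  # ['A', 'A', 'B', 'A', 'B']
```

Because the entire firing order is derived before execution, a runtime system only steps through a fixed list of node invocations; this is the sense in which, as the abstract puts it, the runtime overhead of dynamic scheduling evaporates.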

