Design Patterns for Real-Time Computer Music Systems

Roger B. Dannenberg and Ross Bencina
4 September 2005
ICMC 2005 Workshop on Real Time Systems Concepts for Computer Music
© 2005, Roger B. Dannenberg and Ross Bencina

This document contains a set of “design patterns” for real-time systems, particularly for computer music systems. We see these patterns often because the problems that they solve come up again and again. Hopefully, these patterns will serve as more than just a set of canned solutions. It is perhaps even more important to understand the underlying problems, which often have subtle aspects and ramifications. By describing these patterns, we have tried to capture the problems, solutions, and a way of thinking about real-time systems design. We welcome your comments and questions.

Static Priority Scheduling

Also known as Deadline Monotonic Scheduling.

Context

Real-time systems often have a mix of tasks. Tasks may have hard deadlines, as in sample buffer computation, or strongly desirable upper limits on latency, as in interactive MIDI processing. If the allowable time-to-completion for some task is less than the execution time of some other task, then the long-running task must be preempted so that the low-latency task can meet its deadline.

Problem

Organize the program so that deadlines are met and latencies are acceptable, using relatively few processes or threads.

Forces

When there is coordination and communication between tasks (e.g. shared variables, memory, and data structures), it is much easier for the programmer if these tasks do not preempt one another; they can then run in a single thread. Design is a tradeoff between letting tasks share threads, to simplify coordination, and allocating tasks to separate threads, to allow fine-grained scheduling.

Solution

Minimize the number of threads, typically using 2 or 3. Divide tasks according to their latency requirements, i.e. the time from when the task can run to its completion deadline. Low-latency tasks are assigned to the highest-priority thread, the highest-latency tasks are assigned to the lowest-priority thread, and so on. The threads are scheduled according to fixed priorities. (A minimal code sketch of this thread layout appears after the Further Reading below.)

Analysis can sometimes be used to evaluate solutions before they are implemented. Essentially, the worst-case latency of the highest-priority thread is just the sum of the run times of all tasks in that thread (assuming tasks will not run again until the other tasks have finished). The latency of the thread at the next lower priority level is also the sum of the run times for tasks running in that thread, but in addition, the higher-priority thread “steals” time, and this must be accounted for. Continuing until worst-case latencies are estimated for all threads, one can then decide whether the latencies are small enough to satisfy all tasks.

Further Reading

Ken Tindell, “Deadline Monotonic Analysis,” http://www.embedded.com/2000/0006/0006feat1.htm.

J. Y. T. Leung and J. Whitehead, “On the Complexity of Fixed Priority Scheduling of Periodic, Real-Time Tasks,” Performance Evaluation, 2(4), pages 237-250, 1982.
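
The Solution above assigns each group of tasks to a thread with a fixed priority. The following C sketch shows one way this might look using POSIX threads, with a high-priority thread for low-latency audio buffer computation and a lower-priority thread for MIDI processing. The task bodies, priority values, and function names are illustrative assumptions rather than part of the pattern, and using SCHED_FIFO typically requires real-time scheduling privileges from the operating system.

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stub task bodies so the sketch compiles on its own; a real system
       would compute sample buffers and process MIDI events here. */
    static void *audio_task(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++)
            usleep(1000);       /* stand-in for computing one sample buffer */
        return NULL;
    }

    static void *midi_task(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++)
            usleep(5000);       /* stand-in for processing pending MIDI events */
        return NULL;
    }

    /* Create a thread with a fixed (static) priority under SCHED_FIFO.
       Returns 0 on success. */
    static int create_fixed_priority_thread(pthread_t *t,
                                            void *(*fn)(void *),
                                            int priority)
    {
        pthread_attr_t attr;
        struct sched_param param;

        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        param.sched_priority = priority;
        pthread_attr_setschedparam(&attr, &param);
        return pthread_create(t, &attr, fn, NULL);
    }

    int main(void)
    {
        pthread_t audio, midi;

        /* The lowest-latency tasks get the highest priority; the numeric
           priorities are examples and are platform dependent. */
        if (create_fixed_priority_thread(&audio, audio_task, 80) != 0 ||
            create_fixed_priority_thread(&midi, midi_task, 60) != 0) {
            fprintf(stderr, "could not create fixed-priority threads "
                            "(real-time privileges may be required)\n");
            return 1;
        }
        pthread_join(audio, NULL);
        pthread_join(midi, NULL);
        return 0;
    }

Keeping the number of threads small, as the Solution suggests, also keeps coordination simple: tasks assigned to the same thread never preempt one another.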
Real Time Memory Management

Also known as non-blocking memory management or deterministic memory management.

Context

When programming “real-time” code to run on a general-purpose operating system, such as code that generates audio in a high-priority callback, it is not advisable (or sometimes not even possible) to call operating system functions, including those which allocate memory, because it is impossible to predict the time taken by such calls (which could acquire locks, cause priority inversion, or trigger page faults). Yet objects in real-time music software often need to be allocated dynamically.

Problem

Allocate memory for program objects using methods which avoid calling the operating system to allocate memory in real-time code.

Forces

How well is the allocation behavior of these objects understood? How many different types (or size classes) of objects are needed? Is a fully generalised allocation mechanism necessary, or does the allocation behavior allow a simpler solution? How are allocated objects shared between threads of execution?

Solutions

Depending on the type of allocation patterns which need to be supported, there are a number of practical options; many can be combined or used in different parts of the same application.

Fixed up-front allocation with a free list

If the maximum number of required objects is known, arrays of these objects can be preallocated, and a linked list of free objects can be maintained for fast allocation and deallocation. (A minimal sketch of this option follows the Further Reading below.)

Size-segregated free lists

An extension of the previous pattern is to keep free lists for separate size classes, instead of distinct object types.

Per-thread memory pools

One way to avoid locks is to use memory only within a specific thread context and write a custom allocator which allocates memory from a large preallocated block. This avoids calling the OS, but the memory allocation algorithm should be chosen carefully if deterministic timing is important.

Allocate in a non-real-time context

Don't allocate any memory in the real-time context; instead, allocate memory in a non-real-time thread and send it to the real-time context using an asynchronous message queue.

Real-time Garbage Collection

Write a garbage collector for the objects which are used in your real-time thread. An incremental collector can be coded to provide deterministic timing behavior by interrupting collection if its time allocation is exceeded.

Examples

SuperCollider Language and Serpent both use real-time garbage collectors to manage dynamic memory. Aura and AudioMulch use per-thread, size-segregated memory pools. AudioMulch allocates in a non-real-time context for many types of dynamic objects, especially large ones. Fixed up-front allocation is often used for things like synthesizer voices, where the maximum polyphony is known up front.

Further Reading

See the Memory Patterns chapter of “Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems” by Bruce Powel Douglass (Addison Wesley Professional, 2002), or the online extract at http://www.awprofessional.com/articles/article.asp?p=30309
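
The “Fixed up-front allocation with a free list” option above is simple enough to show in full. The following C sketch preallocates a fixed array of synthesizer voices and threads a free list through it, so allocation and deallocation are O(1) and never call the operating system. The Voice type, the pool size, and the function names are illustrative assumptions; a real system would size the pool to its maximum polyphony, and if voices are shared between threads, the coordination questions raised under Forces still apply.

    #include <stddef.h>
    #include <stdio.h>

    /* A preallocated pool of voices with an intrusive free list.
       No memory is requested from the OS after start-up. */
    typedef struct Voice {
        struct Voice *next;    /* free-list link, reused while the voice is idle */
        float freq;
        float amp;
        /* ... other per-voice state ... */
    } Voice;

    #define MAX_VOICES 64      /* maximum polyphony, known up front */

    static Voice voice_pool[MAX_VOICES];
    static Voice *free_list = NULL;

    /* Build the free list once, in a non-real-time context (e.g. at start-up). */
    static void voice_pool_init(void)
    {
        for (int i = 0; i < MAX_VOICES; i++) {
            voice_pool[i].next = free_list;
            free_list = &voice_pool[i];
        }
    }

    /* O(1) allocation: pop from the free list.  Returns NULL when the pool
       is exhausted.  Safe in the real-time thread as long as the pool is
       only touched from that thread. */
    static Voice *voice_alloc(void)
    {
        Voice *v = free_list;
        if (v != NULL)
            free_list = v->next;
        return v;
    }

    /* O(1) deallocation: push the voice back onto the free list. */
    static void voice_free(Voice *v)
    {
        v->next = free_list;
        free_list = v;
    }

    int main(void)
    {
        voice_pool_init();            /* before real-time processing starts */

        Voice *v = voice_alloc();     /* no OS call: usable in an audio callback */
        if (v != NULL) {
            v->freq = 440.0f;
            v->amp  = 0.5f;
            printf("allocated a voice (freq %.1f)\n", v->freq);
            voice_free(v);            /* returns the voice to the pool */
        }
        return 0;
    }

Size-segregated free lists generalize the same idea: instead of one pool per object type, keep one free list per size class and choose the list from the requested size.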

