UNO CSCI 8530 - Real-Time Scheduling

1. CSCI 8530 Advanced Operating Systems: Real-Time Scheduling

2. Generic System Types
As far as scheduling is concerned, there are basically three different kinds of systems:
- Multiprocessing with single-threading. Multiple concurrent processes are supported, but each has only a single thread. VAX/VMS, Windows 3.1, and UNIX before Pthreads are of this type.
- Single-processing with multithreading. This type of system has only a single process (conceptually), but it can have many threads. Tempo/C fits in this category (even though its threads are called processes).
- Multiprocessing with multithreading. Each of the concurrent processes can have multiple threads. Most modern systems fit in this category (e.g. Windows 2000/XP, QNX, UNIX with Pthreads).

3. Why Use Multithreading?
Support for multithreading is not free (surprise!). Since processes already provide for concurrent execution, why would you want the additional burden of multithreading support?
- Threads within the same process share the same global address space, eliminating some IPC requirements (e.g. shared memory).
- Context switches between threads are much faster than context switches between processes.
- System design is easier, especially when the system includes a wide variety of operations.
- Failure of one thread doesn't necessarily bring down the entire system. It might be possible to correct the problem and restart the faulting thread while the remainder of the system continues operating.

4. Quick Review
"Tasks" (which can be processes or threads) can be in one of three basic states (which some systems embellish with additional states):
- Running: currently using the CPU.
- Ready: just add CPU and run!
- Blocked: waiting on a resource (generic term), e.g. a semaphore, timer, or I/O completion.
Each task has its state and other information saved in a unique structure (e.g. a process control block) that includes register contents (when not running), resources allocated, privilege, etc.
Switching the CPU between tasks requires selecting the "new" task, saving the state of the "old" task, and restoring the state of the "new" task.

5. Scheduling: Selecting the Next Task
Many scheduling algorithms are available, and many of these have (or should have) been seen before:
- Shortest job first (batch)
- First-in, first-out
- Round robin
- Shortest remaining time to completion
For real-time systems, the algorithm must be deterministic. That is, it must always be possible to predict which task will run next. Many real-time operating systems use round robin, since it is simple and predictable.

6. Priorities
All tasks are not created equal; some are more important than others (for a variety of reasons).
Some non-real-time systems may "boost" the priority of low-priority tasks to ensure they get some CPU time. This is not appropriate in real-time systems. It is absolutely necessary that the system designer/programmer (who assigns task priorities) be assured that the tasks, and thus the system, meet their real-time deadlines.

7. The Priority Principle
When a system assigns priorities to tasks, the scheduling algorithm follows two rules (a minimal sketch of this selection rule appears below):
- The task selected to run is the highest-priority ready task in the system. If a blocked high-priority task becomes ready while a lower-priority task is running, the lower-priority task is preempted and the higher-priority task is executed.
- If multiple ready tasks have the same priority, the basic scheduling algorithm used in the system is applied to them. Thus if round robin is used and two tasks have the same priority, they are run in round-robin fashion.
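The selection rule above can be captured in a few lines of C. The following is only a minimal sketch, not something from the slides: the task table, the NTASKS constant, and the "larger number means higher priority" convention are assumptions made for illustration. The dispatcher picks the highest-priority ready task and rotates among ready tasks of equal priority.

/*
 * Sketch of the priority principle (hypothetical structures, not from the
 * slides): select the highest-priority ready task; break ties in
 * round-robin fashion by scanning from just past the most recently
 * dispatched task.
 */
enum state { RUNNING, READY, BLOCKED };

struct task {
    int        prio;     /* assumed convention: larger value = higher priority */
    enum state state;
};

#define NTASKS 8
static struct task tasks[NTASKS];   /* hypothetical task table, indexed by task id */
static int last = 0;                /* id of the task most recently dispatched */

/* Return the id of the next task to run, or -1 if no task is ready. */
int select_next(void)
{
    int best = -1;

    /* Start the scan just past 'last' so equal-priority tasks rotate. */
    for (int k = 1; k <= NTASKS; k++) {
        int i = (last + k) % NTASKS;
        if (tasks[i].state != READY)
            continue;
        if (best < 0 || tasks[i].prio > tasks[best].prio)
            best = i;
    }
    if (best >= 0)
        last = best;
    return best;
}

Because the scan begins just past the task that ran most recently, ties at the highest priority level are broken in round-robin order, matching the second rule above.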
8. Example
- Suppose a high-priority process H is blocked waiting on a mutex held by a low-priority process L, which is running.
- Suppose L releases (unlocks) the mutex.
- As a result of L releasing the mutex, H becomes ready.
- Since H is now the highest-priority ready process, it preempts the execution of L and begins running.

9. Blocked Processes
In a real-time system, the queues associated with limited resources are ordered by priority.
This means that if a high-priority process and a low-priority process are both blocked waiting for a resource (e.g. something controlled by a semaphore), the high-priority process will be allowed to use the resource before the low-priority process.
This is in distinct contrast to many simple systems (e.g. Tempo/C) that use strict FIFO task queuing for resources.

10. Priority Inversion
Priority inversion is a situation that can occur in systems that use priority scheduling.
In this situation, a low-priority task can act as though it has a higher priority than a high-priority task.
For priority inversion to occur, there must be at least three tasks involved.

11. Example
- Suppose the priorities of three tasks T1, T2, and T3 are such that T1 > T2 > T3.
- Suppose T1 and T2 are blocked, and T3 is running.
- T3 requests and is granted a lock on mutex M.
- T1 becomes unblocked and preempts T3.
- T1 requests a lock on mutex M, which can't be granted (since T3 holds it), so T1 becomes blocked and T3 runs (with M still locked).
- T2 becomes unblocked and preempts T3.
- The system is now exhibiting priority inversion.

12. Priority Inversion (Illustration)
[Figure: timeline of T1, T2, and T3 showing T3 locking M, T1 becoming unblocked and then blocking on its request for M, and T2 becoming unblocked.]

13. What Happened?
At the end of the last example, T1 was blocked waiting on mutex M, which it can acquire only when T3 releases it.
But T3 cannot run, since T2 is ready and has a higher priority.
T1 is thus prevented from running until T2 either blocks or terminates.

14. Priority Inheritance
To deal with the problem of priority inversion, some real-time systems support a technique called priority inheritance (a POSIX sketch appears at the end of this section).
In this scheme, a low-priority task that locks a resource temporarily inherits the priority of the highest-priority task blocked by that resource.
This allows the lower-priority task to complete its use of the locked resource and return to its lower priority, at which point the higher-priority task can preempt it.

15. Example
- Assume the same three tasks as in the last example.
- After the first few steps, T1 is running, T2 is blocked, and T3 is ready (with a lock on M).
- T1 tries to lock M, fails, and becomes blocked. At this point, T3 temporarily inherits the priority of T1 and begins running.
- T2, as before, becomes ready. But since T3 now has a priority higher than T2, T2 is not allowed to preempt T3 and cause priority inversion.
- Instead, T3 continues to run until it releases M, at which point its priority returns to its lower level and T1 (no longer blocked), not T2, is selected for execution.

16. A Real-World Example
Priority inversion is not theoretical. The Mars ...
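As a concrete way to request the behavior described on slides 14 and 15, POSIX threads allow a mutex to be created with the priority-inheritance protocol. The sketch below is illustrative only: it assumes a platform that implements the optional PTHREAD_PRIO_INHERIT protocol, and it omits the thread-creation code that would give T1, T2, and T3 distinct real-time (e.g. SCHED_FIFO) priorities.

/*
 * Minimal sketch: create a pthread mutex that uses the priority-inheritance
 * protocol, so a low-priority thread holding the lock temporarily inherits
 * the priority of the highest-priority thread blocked on it.
 * Assumes a POSIX system that supports PTHREAD_PRIO_INHERIT.
 */
#include <pthread.h>

static pthread_mutex_t M;            /* plays the role of mutex M in the example */

int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    int rc;

    if (pthread_mutexattr_init(&attr) != 0)
        return -1;

    /* Ask for the priority-inheritance protocol on this mutex. */
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
        pthread_mutexattr_destroy(&attr);
        return -1;
    }

    rc = pthread_mutex_init(&M, &attr);
    pthread_mutexattr_destroy(&attr);
    return (rc == 0) ? 0 : -1;
}

With such a mutex, the scenario on slide 15 follows directly: while T3 holds M and T1 is blocked on it, T3 runs at T1's priority, so T2 cannot preempt it and cause priority inversion.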

