Duke CPS 210 - Queues and Scheduling


Queues and Scheduling

Contents:
Queues and Scheduling
CPU Scheduling 101
Scheduler Goals
Outline
Priority
Internal Priority
Preemption
A Simple Policy: FCFS
Behavior of FCFS Queues
Little’s Law
Why Little’s Law Is Important
Evaluating FCFS
Preemptive FCFS: Round Robin
Evaluating Round Robin
Digression: RR and System Throughput II
Minimizing Response Time: SJF
Behavior of SJF Scheduling
SJF in Practice
Considering I/O
Digression: Bottlenecks
Two Schedules for CPU/Disk
Multilevel Feedback Queue
Rialto
A Rialto Schedule
Lottery Scheduling

CPU Scheduling 101
The CPU scheduler makes a sequence of “moves” that determines the interleaving of threads.
•Programs use synchronization to prevent “bad moves”.
•…but otherwise scheduling choices appear (to the program) to be nondeterministic.
The scheduler’s moves are dictated by a scheduling policy.
[Diagram: Wakeup or ReadyToRun events add threads to the scheduler’s ready pool; GetNextToRun() selects one and SWITCH() dispatches it.]

Scheduler Goals
•response time or latency: how long does it take to do what I asked? (R)
•throughput: how many operations complete per unit of time? (X)
Utilization: what percentage of time does the CPU (and each device) spend doing useful work? (U)
•fairness: what does this mean? Divide the pie evenly? Guarantee low variance in response times? Freedom from starvation?
•meet deadlines and guarantee jitter-free periodic tasks

Outline
1. The CPU scheduling problem, and goals of the scheduler.
2. Scheduler “add-ons” used in many CPU schedulers:
•priority (internal vs. external)
•preemption
3. Fundamental scheduling disciplines:
•FCFS: first-come-first-served
•SJF: shortest-job-first
4. Practical CPU scheduling: multilevel feedback queues, using internal priority to create a hybrid of FIFO and SJF.

Priority
Some goals can be met by incorporating a notion of priority into a “base” scheduling discipline.
Each job in the ready pool has an associated priority value; the scheduler favors jobs with higher priority values.
External priority values:
•imposed on the system from outside
•reflect external preferences for particular users or tasks
“All jobs are equal, but some jobs are more equal than others.”
•Example: the Unix nice system call to lower the priority of a task.
•Example: urgent tasks in a real-time process control system.

Internal Priority
Internal priority: the system adjusts priority values internally as an implementation technique within the scheduler, to improve fairness, resource utilization, and freedom from starvation:
•drop the priority of jobs consuming more than their share
•boost jobs that already hold resources that are in demand, e.g., the internal sleep primitive in Unix kernels
•boost jobs that have starved in the recent past
•typically a continuous, dynamic readjustment in response to observed conditions and events
Internal priority may be visible to and controllable by other parts of the system.

Preemption
Scheduling policies may be preemptive or non-preemptive.
Preemptive: the scheduler may unilaterally force a task to relinquish the processor before the task blocks, yields, or completes.
•Timeslicing prevents jobs from monopolizing the CPU: the scheduler chooses a job and runs it for a quantum of CPU time. A job executing longer than its quantum is forced to yield by scheduler code running from the clock interrupt handler.
•Use preemption to honor priorities: preempt a job if a higher-priority job enters the ready state.

A Simple Policy: FCFS
The most basic scheduling policy is first-come-first-served, also called first-in-first-out (FIFO).
•FCFS is just like the checkout line at the QuickiMart: maintain a queue ordered by time of arrival, and GetNextToRun selects from the front of the queue.
•FCFS with preemptive timeslicing is called round robin.
[Diagram: Wakeup or ReadyToRun appends to the tail of the ready list (List::Append); GetNextToRun() removes from the head (RemoveFromHead) and dispatches to the CPU.]

Behavior of FCFS Queues
Assume: a stream of normal task arrivals with mean arrival rate λ. Tasks have normally distributed service demands with mean D.
Then: utilization U = λD (note: 0 <= U <= 1).
The probability that the service center is idle is 1 - U.
“Intuitively”, R = D/(1 - U).
[Graph: R vs. U; response time grows without bound as U approaches 1 (100%).]
The service center saturates as 1/λ approaches D: small increases in λ cause large increases in the expected response time R.

Little’s Law
For an unsaturated queue in steady state, queue length N and response time R are governed by Little’s Law: N = λR.
While task T is in the system for R time units, λR new tasks arrive. During that time, N tasks depart (all tasks ahead of T). But in steady state, the flow in must balance the flow out. (Note: this means that throughput X = λ.)
Little’s Law gives response time R = D/(1 - U):
Intuitively, each task T’s response time is R = D + DN.
Substituting λR for N: R = D + DλR.
Substituting U for λD: R = D + UR.
R - UR = D --> R(1 - U) = D --> R = D/(1 - U).
(W/T = C/T * W/C)

Why Little’s Law Is Important
1. Intuitive understanding of FCFS queue behavior.
Compute response time from the demand parameters (λ, D).
Compute N: it tells you how much storage is needed for the queue.
2. The notion of a saturated service center. If D = 1: R = 1/(1 - λ).
Response times rise rapidly with load and are unbounded.
At 50% utilization, a 10% increase in load increases R by about 10%.
At 90% utilization, a 10% increase in load increases R by 10x.
3. Basis for predicting the performance of queuing networks.
Cheap and easy “back of napkin” estimates of system performance based on observed behavior and proposed changes, e.g., capacity planning and “what if” questions.

Evaluating FCFS
How well does FCFS achieve the goals of a scheduler?
•throughput: FCFS is as good as any non-preemptive policy…if the CPU is the only schedulable resource in the system.
•fairness: FCFS is intuitively fair…sort of. “The early bird gets the worm”…and everyone else is fed eventually.
•response time: long jobs keep everyone else waiting.
[Gantt chart: three jobs with demands D=3, D=2, D=1 arrive in that order and complete at times 3, 5, and 6.]
R = (3 + 5 + 6)/3 = 4.67

Preemptive FCFS: Round Robin
Preemptive timeslicing is one way to improve the fairness of FCFS.
If a job does not block or exit, force an involuntary context switch after each quantum Q of CPU time. The preempted job goes back to the tail of the ready list.
With infinitesimal Q, round robin is called processor sharing.
[Gantt chart: the same three jobs (D=3, D=2, D=1) under round robin complete at times approximately 3+ε, 5, and 6.]
R = (3 + 5 + 6 + ε)/3 = 4.67 + ε
In this case, R is unchanged by timeslicing. Is this always true?
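The FCFS and round-robin Gantt-chart arithmetic above can be checked with a small simulation. This is a sketch, not the course's code: all jobs arrive at time 0 (as in the charts) and context-switch overhead ε is ignored; the function names are mine.

```python
from collections import deque

def fcfs(demands):
    """Mean response time under FCFS: jobs run to completion in arrival order."""
    t, total = 0.0, 0.0
    for d in demands:
        t += d          # this job completes at the cumulative service time
        total += t      # response time = completion time, since all arrive at t=0
    return total / len(demands)

def round_robin(demands, quantum):
    """Mean response time under round robin with quantum Q (all arrive at t=0).
    A preempted job goes back to the tail of the ready list."""
    ready = deque(enumerate(demands))   # (job id, remaining demand)
    t, finish = 0.0, {}
    while ready:
        i, rem = ready.popleft()
        run = min(quantum, rem)
        t += run
        if rem - run > 1e-12:
            ready.append((i, rem - run))
        else:
            finish[i] = t
    return sum(finish.values()) / len(demands)

print(fcfs([3, 2, 1]))                 # ≈ 4.67, matching the FCFS chart
print(round_robin([3, 2, 1], 0.001))   # ≈ 4.67: the processor-sharing limit
```

Running it confirms the slide's point: for this workload, timeslicing leaves the mean response time unchanged at 4.67 (plus overhead).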
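The formulas R = D/(1 - U) and N = λR translate directly into a few lines of code (a sketch; the function names are illustrative):

```python
def response_time(arrival_rate, demand):
    """Expected FCFS response time: R = D / (1 - U), where U = lambda * D."""
    U = arrival_rate * demand
    if U >= 1.0:
        raise ValueError("service center saturated: U >= 1")
    return demand / (1.0 - U)

def queue_length(arrival_rate, demand):
    """Little's Law: N = lambda * R."""
    return arrival_rate * response_time(arrival_rate, demand)

# With D = 1, R = 1/(1 - lambda): response time explodes near saturation.
for lam in (0.5, 0.55, 0.9, 0.99):
    print(f"lambda={lam:.2f}  R={response_time(lam, 1.0):.2f}")
```

The loop reproduces the saturation claims: going from λ = 0.5 to 0.55 nudges R from 2 to about 2.2, while going from λ = 0.9 to 0.99 drives R from 10 to 100.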
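The priority mechanism described earlier (the scheduler favors ready jobs with higher priority values) can be sketched as a heap-ordered ready pool. The class and method names here are illustrative, not from any real kernel, and a real scheduler would hold thread control blocks rather than strings.

```python
import heapq
import itertools

class PriorityReadyPool:
    """Ready pool that always dispatches the highest-priority job,
    breaking ties in FIFO order of arrival."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # arrival counter for FIFO tie-breaking

    def wakeup(self, thread, priority):
        # heapq is a min-heap, so negate: larger priority values run first.
        heapq.heappush(self._heap, (-priority, next(self._seq), thread))

    def get_next_to_run(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

pool = PriorityReadyPool()
pool.wakeup("editor", 5)
pool.wakeup("batch", 1)
pool.wakeup("urgent", 10)
print(pool.get_next_to_run())   # prints "urgent"
```

Internal priority as the slides describe it would then be a matter of adjusting the priority value passed to wakeup over time, e.g., decaying it for jobs that overuse the CPU and boosting it for jobs that have starved.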

