CS162 Operating Systems and Systems Programming
Lecture 11: Thread Scheduling (con't); Protection: Address Spaces
February 23, 2010, Ion Stoica
http://inst.eecs.berkeley.edu/~cs162
CS162 ©UCB Spring 2010

Review: Last Time
• Scheduling: selecting a waiting process from the ready queue and allocating the CPU to it
• FCFS Scheduling:
  – Run threads to completion in order of submission
  – Pros: Simple (+)
  – Cons: Short jobs get stuck behind long ones (-)
• Round-Robin Scheduling:
  – Give each thread a small amount of CPU time when it executes; cycle between all ready threads
  – Pros: Better for short jobs (+)
  – Cons: Poor when jobs are the same length (-)

Goals for Today
• Finish discussion of Scheduling
• Kernel vs. User Mode
• What is an Address Space?
• How is it Implemented?
Note: Some slides and/or pictures in the following are adapted from slides ©2005 Silberschatz, Galvin, and Gagne.

Example to Illustrate Benefits of SRTF
• Three jobs:
  – A, B: both CPU bound, each runs for a week
  – C: I/O bound, loops 1 ms of CPU then 9 ms of disk I/O
  – If each ran alone, C would use 90% of the disk, and A or B could use 100% of the CPU
• With FIFO:
  – Once A or B gets in, it keeps the CPU for two weeks
• What about RR or SRTF?
  – Easier to see with a timeline

SRTF Example (continued)
[Timeline figure: C alternates 1 ms CPU bursts with 9 ms disk I/O while A and B compute]
• RR with a 100 ms time slice: C runs only once per ~201 ms cycle (A: 100 ms, B: 100 ms, C: 1 ms), so disk utilization is 9/201 ≈ 4.5%
• RR with a 1 ms time slice: disk utilization ~90%, but lots of wakeups!
• SRTF: C preempts A and B whenever its I/O completes, so disk utilization is 90%

Review: SRTF Further Discussion
• Starvation
  – SRTF can lead to starvation if there are many small jobs!
  – Large jobs may never get to run
• Somehow need to predict the future
  – How can we do this?
  – Some systems ask the user
    » When you submit a job, you have to say how long it will take
    » To stop cheating, the system kills the job if it takes too long
  – But even non-malicious users have trouble predicting the runtime of their jobs
• Bottom line: we can't really know how long a job will take
  – However, we can use SRTF as a yardstick for measuring other policies
  – It is optimal, so nothing can do better
• SRTF Pros & Cons
  – Optimal (average response time) (+)
  – Hard to predict the future (-)
  – Unfair (-)

Predicting the Length of the Next CPU Burst
• Adaptive: changing policy based on past behavior
  – Used in CPU scheduling, virtual memory, file systems, etc.
  – Works because programs have predictable behavior
    » If a program was I/O bound in the past, it is likely to be I/O bound in the future
    » If computer behavior were random, this wouldn't help
• Example: SRTF with estimated burst length
  – Use an estimator function on previous bursts: let t_{n-1}, t_{n-2}, t_{n-3}, … be the previous CPU burst lengths, and estimate the next burst as τ_n = f(t_{n-1}, t_{n-2}, t_{n-3}, …)
  – The function f could be one of many time-series estimation schemes (Kalman filters, etc.)
  – For instance, exponential averaging: τ_n = α·t_{n-1} + (1−α)·τ_{n-1}, with 0 < α ≤ 1

Multi-Level Feedback Scheduling
• Another method for exploiting past behavior
  – First used in CTSS
  – Multiple queues, each with a different priority
    » Higher-priority queues are often considered "foreground" tasks
  – Each queue has its own scheduling algorithm
    » e.g., foreground: RR; background: FCFS
    » Sometimes multiple RR priorities, with the quantum increasing exponentially (highest: 1 ms, next: 2 ms, next: 4 ms, etc.)
• Adjust each job's priority as follows (details vary):
  – A job starts in the highest-priority queue
  – If its timeout expires, drop it one level
  – If its timeout doesn't expire, push it up one level (or to the top)
• Net effect: long-running compute tasks are demoted to low priority

Scheduling Details
• The result approximates SRTF:
  – CPU-bound jobs drop like a rock
  – Short-running I/O-bound jobs stay near the top
• Scheduling must also be done between the queues
  – Fixed-priority scheduling:
    » serve everything from the highest-priority queue, then the next priority, etc.
  – Time slicing:
    » each queue gets a certain fraction of the CPU time
    » e.g., 70% to the highest, 20% to the next, 10% to the lowest
• Countermeasure: a user action that can foil the intent of the OS designer
  – For multi-level feedback, put in a bunch of meaningless I/O to keep the job's priority high
  – Of course, if everyone did this, it wouldn't work!
• Example: an Othello program
  – It played against a competitor, so the key was to compute at a higher priority than the competitor
    » Put in printf's; it ran much faster!

Administrivia
• Midterm I coming up in two weeks!
  – Tuesday 3/9, 3:30–6:30 (this room)
  – Should be a 2-hour exam with extra time
  – Closed book; one page of hand-written notes (both sides)
• No class on the day of the midterm
  – I will post extra office hours for people who have questions about the material (or life, whatever)
• Midterm topics
  – Everything up to (and including) Thursday (3/4)
  – History, Concurrency, Multithreading, Synchronization, Protection/Address Spaces/TLBs

Scheduling Fairness
• What about fairness?
  – Strict fixed-priority scheduling between queues is unfair (run the highest, then the next, etc.):
    » long-running jobs may never get the CPU
    » In Multics, they shut down the machine and found a 10-year-old job
  – Must give long-running jobs a fraction of the CPU even when there are shorter jobs to run
  – Tradeoff: fairness is gained by hurting average response time!
• How to implement fairness?
  – Could give each queue some fraction of the CPU
    » What if there is one long-running job and 100 short-running ones?
    » Like express lanes in a supermarket: sometimes the express lanes get so long that you get better service by going into one of the other lines
  – Could increase the priority of jobs that don't get service
    » This is what is done in UNIX
    » But it is ad hoc: at what rate should you increase priorities?
    » And as the system gets overloaded, no job gets CPU time, so everyone increases in priority
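The UNIX-style aging idea just described (boost the priority of jobs that are not getting service) can be sketched in a few lines. This is a minimal illustration, not the actual UNIX scheduler: the boost rate, tick model, and job names are all invented for the example.

```python
def age_priorities(jobs, ran_job, boost=1):
    """Each scheduling tick, raise the priority of every job that did
    not run; reset the job that did. Higher number = higher priority."""
    for name in jobs:
        if name == ran_job:
            jobs[name] = 0        # the job that ran loses its boost
        else:
            jobs[name] += boost   # starved jobs creep upward

jobs = {"long": 0, "short": 0}
# "short" wins three ticks in a row; "long" ages until it outranks it.
for _ in range(3):
    age_priorities(jobs, ran_job="short")
print(jobs["long"] > jobs["short"])  # True: aging prevents starvation
```

The open question from the slide remains visible here: the right value of `boost` per tick is an ad hoc tuning knob, and under overload every waiting job rises together.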
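The exponential-averaging burst estimator discussed earlier, τ_n = α·t_{n-1} + (1−α)·τ_{n-1}, can be sketched as follows. The initial estimate, the α value, and the burst samples are illustrative assumptions, not values from the lecture.

```python
def update_estimate(tau_prev, t_prev, alpha=0.5):
    """Exponential average: tau_n = alpha*t_{n-1} + (1-alpha)*tau_{n-1}."""
    return alpha * t_prev + (1 - alpha) * tau_prev

# Start with an initial guess of 10 ms, then observe bursts of 8, 8, 40 ms.
tau = 10.0
for burst in [8.0, 8.0, 40.0]:
    tau = update_estimate(tau, burst)
print(tau)  # 24.25: the estimate has shifted toward the recent long burst
```

With α = 0.5 each new burst carries as much weight as the entire prior history; larger α tracks recent behavior more aggressively, smaller α smooths over noise.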
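The multi-level feedback demotion/promotion rule described above (start at the top; drop a level when the quantum expires, rise when the job blocks early) can be sketched like this. The number of levels and the doubling quanta are the illustrative values from the slide, and the job model is invented for the example.

```python
class MLFQ:
    """Multi-level feedback queue sketch: level 0 is highest priority,
    and the quantum doubles at each lower level (1 ms, 2 ms, 4 ms, ...)."""
    def __init__(self, levels=3):
        self.levels = levels

    def quantum(self, level):
        return 1 << level  # 1 ms, 2 ms, 4 ms, ...

    def next_level(self, level, used_full_quantum):
        if used_full_quantum:                 # timeout expired: demote
            return min(level + 1, self.levels - 1)
        return 0                              # blocked early: back to top

mlfq = MLFQ()
lvl = 0
# Two full quanta in a row (CPU-bound phase), then an early block for I/O.
for used in [True, True, False]:
    lvl = mlfq.next_level(lvl, used)
print(lvl)  # 0: the I/O-bound behavior pushed the job back to the top queue
```

This shows the approximation to SRTF: sustained CPU-bound behavior sinks a job to the long-quantum queues, while a single short burst that ends in I/O restores its priority.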