UT CS 372 - Processes

Operating Systems, Lecture 3

Last Time
• Each process maintains its own state, which includes its text and data, procedure call stack, etc.
• The OS also stores state for each process. This state is called the Process Control Block (PCB), and it includes the PC, SP, register contents, execution state, etc.
• Process execution states: New, Ready, Running, Waiting, Terminated.
• For each of these execution states, the OS maintains a state queue. Every process that the OS is currently managing resides in one and only one of these state queues.
• Process scheduling: what to execute when.

Today
• Scheduling criteria
• Scheduling algorithms

Scheduling Processes
Multiprocessing (concurrency) - one process running on the CPU while one or more do I/O - enables the OS to increase system utilization and throughput by overlapping I/O and CPU activity.

Long-term scheduling: How does the OS determine the degree of multiprogramming, i.e., the number of jobs executing at once in primary memory?

Short-term scheduling: How does (or should) the OS select a process from the ready queue to execute?
• Policy goals
• Policy options
• Implementation considerations

Short-Term Scheduling
The kernel runs the scheduler at least when
• a process switches from running to waiting,
• an interrupt occurs, or
• a process is created or terminated.

In a non-preemptive system, the scheduler must wait for one of these events; in a preemptive system, the scheduler can interrupt a running process.

Criteria for comparing scheduling algorithms:
• CPU utilization - the percentage of time that the CPU is busy.
• Throughput - the number of processes completing per unit of time.
• Turnaround time - the length of time it takes to run a process from initialization to termination, including all waiting time.
• Waiting time - the total amount of time that a process spends in the ready queue.
• Response time - the time between when a process becomes ready to run and its next I/O request.

Car Making Mystery
People often say they want "faster" internet access. What is faster?
• My factory takes 1 day to make a Model-T Ford. That doesn't sound fast.
• But 10 minutes after I start making my first Ford of the day, I can start another Ford.
• If my factory runs 24 hrs/day, I can make 24 * 6 = 144 cars per day.
Is that faster? If you put in a special order for 1 green car, it still takes a day. Throughput is increased, but latency is not.

Throughput vs. Latency
People often say they want "faster" internet access. What is faster? If they transfer files, they want high bandwidth. If they play games, they probably want low latency. These two factors are separate. Think of the analogy to water pipes:
• Low latency - if I want a drink, I want water to come out of the spout as soon as I turn it on.
• High bandwidth - if I want to fill up a swimming pool, I want a lot of water coming out of that spout at once, and I don't care if it takes 5 minutes before I see the first drop.

Back to Scheduling
Scheduling for low latency maximizes interactive performance. This is good because if my mouse doesn't move, I might reboot the machine. But the OS needs to make sure that throughput does not suffer. I want my long-running program (e.g., an mp3 encoder) to finish, so the OS must schedule it occasionally, even if there are many interactive jobs.
• Throughput is computational bandwidth.
• Response time is computational latency.

Scheduling Policies
Ideally, we would like a CPU schedule that maximizes CPU utilization and throughput while minimizing turnaround time, waiting time, and response time, but this is not generally possible.
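The factory arithmetic above can be checked with a short sketch (not from the lecture; the numbers are the slide's). It makes the throughput/latency split concrete: pipelining raises cars-per-day without making any single car arrive sooner.

```python
# Car-factory analogy, using the slide's numbers: each car takes a full
# day end to end, but a new car can be started every 10 minutes.

LATENCY_MIN = 24 * 60      # one car takes a full day (in minutes)
START_INTERVAL_MIN = 10    # a new car can be started every 10 minutes

# Throughput: cars finished per 24-hour day once the pipeline is full.
cars_per_day = (24 * 60) // START_INTERVAL_MIN

# Latency for a special-order green car: unchanged by pipelining.
green_car_wait_min = LATENCY_MIN

print(cars_per_day)        # 144
print(green_car_wait_min)  # 1440
```

The same split applies to the network and scheduling examples: bandwidth/throughput and latency/response time can be improved independently.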
Instead, we choose a scheduling algorithm based on its ability to satisfy a policy goal:
• Minimize response time - provide output to the user as quickly as possible and process their input as soon as it is received.
• Minimize variance of response time - in an interactive system, predictability may be more important than a low average with a high variance.
• Maximize throughput - two components:
  1. minimize overhead (OS overhead, context switching)
  2. efficient use of system resources (CPU, I/O devices)
• Minimize waiting time - be fair by ensuring each process waits the same amount of time. This goal often increases average response time.

Simplifying Assumptions
• One process per user
• One thread per process (more on this topic next week)
• Processes are independent
Researchers developed these algorithms in the 70's, when these assumptions were more realistic; how to relax them is still an open problem.

Scheduling Algorithms
• FCFS: First Come, First Served.
• Round Robin: use a time slice and preemption to alternate jobs.
• SJF: Shortest Job First.
• Multilevel Feedback Queues: round robin on priority queues.
• Lottery Scheduling: jobs get tickets, and the scheduler randomly picks a winning ticket.

FCFS: First-Come, First-Served (or FIFO: First-In, First-Out)
• The scheduler executes jobs to completion in arrival order.
• In early FCFS schedulers, a job did not relinquish the CPU even while it was doing I/O.
• We will assume a FCFS scheduler that runs another process when a job blocks on I/O, but that is otherwise non-preemptive, i.e., a job keeps the CPU until it blocks (say, on an I/O device).

FCFS Scheduling Policy
Examples (the original slide's three timelines, reconstructed from its time marks; CPU bursts are A = 5, B = 2, C = 3):
• Arrival order A, B, C; A requests I/O after 2 units: A runs [0,2), B [2,4), C [4,7), then A resumes [7,10).
• Arrival order A, B, C; no I/O: A runs [0,5), B [5,7), C [7,10).
• Arrival order B, C, A; no I/O: B runs [0,2), C [2,5), A [5,10).
If the processes arrive 1 time unit apart, what is the average wait time in these three cases?

Advantages:
• Simple to implement.
• No starvation: every job eventually runs to completion.
Disadvantages:
• Short jobs can get stuck behind long ones (the convoy effect), so average waiting time can be poor.
• Unresponsive for interactive jobs.

Scheduling Policies
Round Robin: most time-sharing systems use some variation of this policy.
• Add a timer and use a preemptive policy.
• After each time slice, move the running thread to the back of the queue.
• Selecting a time slice:
  – Too large - waiting time suffers, and the policy degenerates to FCFS if processes are never preempted.
  – Too small - throughput suffers because too much time is spent context switching.
  ⇒ Balance the two by selecting a time slice where context switching is roughly 1% of the time slice.
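The round-robin mechanics above can be sketched in a few lines. This is a minimal simulation, not the lecture's exact model: it assumes all jobs arrive at time 0, are CPU-only, and context switches are free. Note how a quantum larger than every burst makes it behave exactly like FCFS, as the slide points out.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin on CPU-only jobs that all arrive at time 0.

    bursts: list of CPU burst lengths, indexed by job id.
    Returns per-job completion times. Context-switch cost is ignored.
    """
    ready = deque(range(len(bursts)))   # the ready queue, in arrival order
    remaining = list(bursts)
    completion = [0] * len(bursts)
    clock = 0
    while ready:
        job = ready.popleft()
        run = min(quantum, remaining[job])  # run one time slice (or less)
        clock += run
        remaining[job] -= run
        if remaining[job] > 0:
            ready.append(job)               # back of the queue after its slice
        else:
            completion[job] = clock
    return completion

# Three jobs with bursts 5, 2, 3 (hypothetical numbers for illustration):
print(round_robin([5, 2, 3], quantum=1))   # [10, 5, 8] - short jobs finish sooner
print(round_robin([5, 2, 3], quantum=10))  # [5, 7, 10] - degenerates to FCFS
```

With a small quantum, the short jobs (B and C) complete well before the long job; with a large quantum, the schedule is identical to FCFS and the short jobs wait behind the long one.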

