UMass Amherst CS 677 - Processes and Threads

CS677: Distributed OS (Computer Science), Lecture 6

Page 1: Processes and Threads
• Processes and their scheduling
• Multiprocessor scheduling
• Threads
• Distributed scheduling/migration

Page 2: Processes: Review
• Multiprogramming versus multiprocessing
• Kernel data structure: process control block (PCB)
• Each process has an address space
  – Contains code, global and local variables, ...
• Process state transitions
• Uniprocessor scheduling algorithms
  – Round-robin, shortest job first, FIFO, lottery scheduling, EDF
• Performance metrics: throughput, CPU utilization, turnaround time, response time, fairness

Page 3: Process Behavior
• Processes alternate between CPU and I/O
• CPU bursts
  – Most bursts are short, a few are very long (high variance)
  – Modeled using hyperexponential behavior
  – If X is an exponential r.v.:
    • Pr[X <= x] = 1 - e^(-µx)
    • E[X] = 1/µ
  – If X is a hyperexponential r.v.:
    • Pr[X <= x] = 1 - p*e^(-µ1*x) - (1-p)*e^(-µ2*x)
    • E[X] = p/µ1 + (1-p)/µ2

Page 4: Process Scheduling
• Priority queues: multiple queues, each with a different priority
  – Use strict priority scheduling
  – Example: page swapper, kernel tasks, real-time tasks, user tasks
• Multi-level feedback queue
  – Multiple queues with priorities
  – Processes dynamically move from one queue to another, depending on priority/CPU characteristics
  – Gives higher priority to I/O-bound or interactive tasks, lower priority to CPU-bound tasks
  – Round robin at each level

Page 5: Processes and Threads
• Traditional process
  – One thread of control through a large, potentially sparse address space
  – Address space may be shared with other processes (shared memory)
  – Collection of system resources (files, semaphores)
• Thread (lightweight process)
  – A flow of control through an address space
  – Each address space can have multiple concurrent control flows
  – Each thread has access to the entire address space
  – Potentially parallel execution, minimal state (low overheads)
  – May need synchronization to control access to shared variables

Page 6: Threads
• Each thread has its own stack, PC, and registers
  – Threads share the address space, open files, ...

Page 7: Why Use Threads?
• Large multiprocessors need many computing entities (one per CPU)
• Switching between processes incurs high overhead
• With threads, an application can avoid per-process overheads
  – Thread creation, deletion, and switching are cheaper than for processes
• Threads have full access to the address space (easy sharing)
• Threads can execute in parallel on multiprocessors

Page 8: Why Threads?
• Single-threaded process: blocking system calls, no parallelism
• Finite-state machine [event-based]: non-blocking calls with parallelism
• Multi-threaded process: blocking system calls with parallelism
• Threads retain the idea of sequential processes with blocking system calls, and yet achieve parallelism
• Software engineering perspective
  – Applications are easier to structure as a collection of threads
  – Each thread performs several [mostly independent] tasks

Page 9: Multi-threaded Clients Example: Web Browsers
• Browsers such as IE are multi-threaded
• Such browsers can display data before the entire document is downloaded: they perform multiple simultaneous tasks
  – Fetch the main HTML page, then activate separate threads for the other parts
  – Each thread sets up a separate connection with the server
    • Uses blocking calls
  – Each part (e.g., a GIF image) is fetched separately and in parallel
  – Advantage: connections can be set up to different sources
    • Ad server, image server, web server, ...

Page 10: Multi-threaded Server Example
• Apache web server: pool of pre-spawned worker threads
  – Dispatcher thread waits for requests
  – For each request, it chooses an idle worker thread
  – The worker thread uses blocking system calls to service the web request

Page 11: Thread Management
• Creation and deletion of threads
  – Static versus dynamic
• Critical sections
  – Synchronization primitives: blocking, spin-lock (busy-wait)
  – Condition variables
• Global thread variables
• Kernel versus user-level threads

Page 12: User-level versus Kernel Threads
• Key issues:
  – Cost of thread management: more efficient in user space
  – Ease of scheduling
  – Flexibility: many parallel programming models and schedulers
  – Process blocking: a potential problem

Page 13: User-level Threads
• Threads managed by a threads library
  – Kernel is unaware of the presence of threads
• Advantages:
  – No kernel modifications needed to support threads
  – Efficient: creation/deletion/switches don't need system calls
  – Flexibility in scheduling: the library can use different scheduling algorithms, which can be application dependent
• Disadvantages:
  – Need to avoid blocking system calls [otherwise all threads block]
  – Threads compete with one another for the process's CPU time
  – Does not take advantage of multiprocessors [no real parallelism]

Page 14: User-level Threads [figure]

Page 15: Kernel-level Threads
• Kernel is aware of the presence of threads
  – Better scheduling decisions, but more expensive
  – Better for multiprocessors; more overhead on uniprocessors

Page 16: Light-weight Processes
• Several LWPs per heavy-weight process
• User-level threads package
  – Create/destroy threads and synchronization primitives
• Multithreaded applications create multiple threads and assign threads to LWPs (one-one, many-one, many-many)
• Each LWP, when scheduled, searches for a runnable thread [two-level scheduling]
  – Shared thread table: no kernel support needed
• When an LWP's thread blocks on a system call, it switches to kernel mode and the OS context-switches to another LWP

Page 17: LWP Example [figure]

Page 18: Thread Packages
• POSIX threads (pthreads)
  – Widely used threads package
  – Conforms to the POSIX standard
  – Sample calls:

