UMass Amherst CS 677 - Multiprocessor Scheduling

Topics: Multiprocessor Scheduling, Scheduling, Parallel Applications on SMPs, Distributed Scheduling: Motivation, Implications, Design Issues, Components, Sender-initiated Policy, Receiver-initiated Policy, Symmetric Policies, Case Study: V-System (Stanford), Sprite (Berkeley), Sprite (contd), Code and Process Migration, Motivation, Migration models, Who executes migrated entity?, Models for Code Migration, Do Resources Migrate?, Resource Migration Actions, Migration in Heterogeneous Systems

Multiprocessor Scheduling
• Will consider only shared-memory multiprocessors
• Salient features:
  – One or more caches: cache affinity is important
  – Semaphores/locks are typically implemented as spin-locks: preemption during critical sections is a problem

Multiprocessor Scheduling (contd)
• Central queue – the queue can become a bottleneck
• Distributed queues – require load balancing between queues

Scheduling
• Common mechanisms combine a central queue with per-processor queues (SGI IRIX)
• Exploit cache affinity – try to schedule a process/thread on the processor where it last executed
• Context-switch overhead
  – Quantum sizes are larger on multiprocessors than on uniprocessors

Parallel Applications on SMPs
• Effect of spin-locks: what happens if preemption occurs in the middle of a critical section?
  – Preempt the entire application (co-scheduling)
  – Raise the priority so preemption does not occur (smart scheduling)
  – Both of the above
• Provide applications with more control over their scheduling
  – Users should not have to check whether it is safe to make certain system calls
  – If one thread blocks, the others must still be able to run

Distributed Scheduling: Motivation
• Distributed system with N workstations
  – Model each workstation as an identical, independent M/M/1 system
  – Utilization u, P(system idle) = 1 - u
• What is the probability that at least one system is idle while at least one job is waiting at another?
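The question above can be checked with a short simulation. The sketch below is not from the lecture: it samples each workstation's queue length from the M/M/1 steady-state distribution P(k) = (1 - u)·u^k and counts how often some node is idle while another node has a job waiting (queue length >= 2). The node count N = 20, the utilization values, and the trial count are illustrative choices; the output matches the point made under Implications below — the probability is high over a broad range of moderate utilizations and drops off at the extremes.

```python
import math
import random

def sample_queue_len(rho: float) -> int:
    """Sample the steady-state number of jobs at one M/M/1 node: P(k) = (1 - rho) * rho**k."""
    u = 1.0 - random.random()                      # uniform on (0, 1]
    return math.floor(math.log(u) / math.log(rho))

def prob_idle_and_waiting(n_nodes: int, rho: float, trials: int = 100_000) -> float:
    """Estimate P(at least one node is idle AND at least one node has a waiting job)."""
    hits = 0
    for _ in range(trials):
        lens = [sample_queue_len(rho) for _ in range(n_nodes)]
        some_idle = any(k == 0 for k in lens)      # an idle workstation
        some_waiting = any(k >= 2 for k in lens)   # a job queued behind the one in service
        if some_idle and some_waiting:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for rho in (0.2, 0.5, 0.8, 0.95):
        print(f"N=20, utilization {rho:.2f}: {prob_idle_and_waiting(20, rho):.3f}")
```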
Implications
• The probability is high for moderate system utilization
  – Potential for performance improvement via load distribution
• High utilization => little benefit
• Low utilization => rarely is a job waiting
• Distributed scheduling (aka load balancing) is therefore potentially useful
• What is the performance metric?
  – Mean response time
• What is the measure of load?
  – Must be easy to measure
  – Must reflect the performance improvement

Design Issues
• Measure of load
  – Queue lengths at the CPU, CPU utilization
• Types of policies
  – Static: decisions hardwired into the system
  – Dynamic: uses load information
  – Adaptive: the policy varies according to the load
• Preemptive versus non-preemptive transfers
• Centralized versus decentralized control
• Stability: a badly designed policy causes instability instead of load balance
  – A job floats from node to node and the load oscillates

Components
• Transfer policy: when to transfer a process?
  – Threshold-based policies are common and easy
• Selection policy: which process to transfer?
  – Prefer new processes
  – The transfer cost should be small compared to the execution cost
    • Select processes with long execution times
• Location policy: where to transfer the process?
  – Polling, random, nearest neighbor
• Information policy: when and from where to collect load information?
  – Demand-driven [only when a node becomes a sender/receiver], time-driven [periodic], state-change-driven [send an update if the load changes]

Sender-initiated Policy
• Transfer policy
• Selection policy: the newly arrived process
• Location policy: three variations (a code sketch of the threshold variant appears at the end of this preview)
  – Random: may generate lots of transfers => limit the maximum number of transfers
  – Threshold: probe n nodes sequentially
    • Transfer to the first node below the threshold; if none, keep the job
  – Shortest: poll Np nodes in parallel
    • Choose the least-loaded node below T

Receiver-initiated Policy
• Transfer policy: if a departing process causes the load to drop below T, find a process elsewhere
• Selection policy: a newly arrived or partially executed process
• Location policy:
  – Threshold: probe up to Np other nodes sequentially
    • Transfer from the first one above the threshold; if none, do nothing
  – Shortest: poll n nodes in parallel and choose the node with the heaviest load above T

Symmetric Policies
• Nodes act as both senders and receivers: combine the previous two policies without change
  – Use the average load as the threshold
• Improved symmetric policy: exploit polling information
  – Two thresholds LT, UT with LT <= UT
  – Maintain sender, receiver, and OK node lists using the polling info
  – Sender: poll the first node on the receiver list …
  – Receiver: poll the first node on the sender list …

Case Study: V-System (Stanford)
• State-change-driven information policy
  – A significant change in CPU/memory utilization is broadcast to all other nodes
• The M least-loaded nodes are receivers, the others are senders
• Sender-initiated, with new jobs as the selection policy
• Location policy: probe a random receiver; if it is still a receiver, transfer the job, else try another

Sprite (Berkeley)
• Workstation environment => the owner is king!
• Centralized information policy: a coordinator keeps the load information
  – State-change-driven information policy
  – Receiver: a workstation with no keyboard/mouse activity for 30 seconds and fewer active processes than processors
• Selection policy: done manually by the user => the workstation becomes a sender
• Location policy: the sender queries the coordinator
• A workstation hosting a foreign process becomes a sender when its user becomes active again; the selection policy sends the foreign process back to its home workstation

Sprite (contd)
• Sprite process migration
  – Facilitated by the Sprite file system
  – State transfer
    • Swap everything out
    • Send page tables and file descriptors to the receiver
    • Demand-page the process in
  – The only dependencies are communication-related
    • Redirect communication from the home workstation to the receiver

Code and Process Migration
• Motivation
• How does migration occur?
• Resource migration
• Agent-based systems
• Details of process migration

Motivation
• Key reasons: performance and flexibility
• Process migration
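As referenced from the Sender-initiated Policy section above, here is a minimal sketch of the threshold variant of the sender-initiated location policy. It is an illustration, not code from the lecture or any of the case studies: probe_load and transfer are hypothetical callbacks standing in for the RPCs a real system would provide, and the threshold and probe limit are arbitrary parameters.

```python
import random
from typing import Callable, Iterable, Optional

def sender_initiated_threshold(
    local_load: int,
    peers: Iterable[str],
    threshold: int,
    probe_limit: int,
    probe_load: Callable[[str], int],
    transfer: Callable[[str], None],
) -> Optional[str]:
    """Place a newly arrived job: return the node it was sent to, or None to keep it local."""
    # Transfer policy: consider migration only if the local load exceeds the threshold T.
    if local_load <= threshold:
        return None
    # Location policy (threshold variant): probe up to probe_limit randomly chosen
    # nodes one at a time; transfer to the first node whose load is below T.
    peer_list = list(peers)
    random.shuffle(peer_list)
    for node in peer_list[:probe_limit]:
        if probe_load(node) < threshold:
            transfer(node)            # selection policy: the newly arrived job itself
            return node
    return None                       # no under-loaded node found => keep the job

# Example with stub callbacks (the workstation names and loads are made up):
if __name__ == "__main__":
    loads = {"ws1": 4, "ws2": 1, "ws3": 3}
    dest = sender_initiated_threshold(
        local_load=5, peers=loads, threshold=3, probe_limit=2,
        probe_load=loads.get, transfer=lambda n: print("migrating new job to", n),
    )
    print("job kept locally" if dest is None else f"job sent to {dest}")
```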