UMass Amherst ECE 397A - Deadlocks


Previous Lecture: Deadlocks
■ A problem/situation with concurrency, arising in both SW (e.g., Java threads) and HW (e.g., the interconnection network in a multiprocessor-based system) systems.
■ Different ways to deal with it:
✦ Ignore (common in today's OSes)
✦ Prevent and avoid (avoidance is similar to prevention but needs a priori resource-usage information)
✦ Detect (periodically run a detection algorithm to check for deadlock, then interrupt/terminate some of the processes and/or preempt resources that are held)
■ Practical lesson: you need to make sure your application code is NOT DEADLOCKED! The OS will not help you out…
■ It is a good idea to use/construct the resource-allocation graph even at the application level.
■ In distributed systems and OSes, the deadlock problem is further complicated…

Chapter 9: Memory Management
■ One of the most critical OS components.
■ Outline:
✦ Background
✦ Swapping
✦ Contiguous Allocation
✦ Paging
✦ Segmentation
✦ Segmentation with Paging

Background
■ A program must be brought into memory and placed within a process for it to be run. Assume for now that the program must fit in memory, or that the program itself must deal with the case when it does not. This is different from virtual memory (covered later), where fitting in memory is managed by the OS!
■ User programs go through several steps before being run. Let us review these steps…

Binding of Instructions and Data to Memory
■ Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. Example: MS-DOS .COM-format executables.
■ Load time: relocatable code (e.g., addresses expressed relative to the beginning of the module rather than as absolute address 100) must be generated if the memory location is not known at compile time.
■ Execution time: address binding is delayed until run time, so the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers) for efficiency reasons.
■ Most general-purpose OSes use execution-time binding. In summary: address binding of instructions and data to memory addresses can happen at three different stages.

[Figure: Multi-step Processing of a User Program]

Logical vs. Physical Address Space
■ The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
✦ Logical address – generated by the CPU; also referred to as a virtual address.
✦ Physical address – the address seen by the memory unit.
■ Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Memory-Management Unit (MMU)
■ Hardware device that maps virtual addresses to physical addresses.
■ In a simple MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
■ There are many different methods for accomplishing such a mapping, as we will see later.
■ The user program deals with logical addresses; it never sees the real physical addresses.

[Figure: Dynamic relocation using a relocation register]

Dynamic Loading
■ In our discussion so far we required that the entire program and its data be in physical memory before execution starts.
■ Dynamic loading drops that requirement in order to obtain better memory-space utilization.
■ Idea: a routine is not loaded until it is called.
✦ Better memory-space utilization, since an unused routine is never loaded.
✦ Useful when large amounts of code are needed to handle infrequently occurring cases.
■ No special support from the operating system is required; it is implemented through program design. It is the responsibility of the user to take advantage of this method.
■ The OS may, however, provide library routines to implement dynamic loading.

Dynamic Linking
■ Some OSes support only static linking, in which libraries are treated like any other object module.
■ Dynamic linking: linking is postponed until execution time.
■ A small piece of code, the stub, is included in the image for each library reference; it is used to locate the appropriate memory-resident library routine.
✦ The stub replaces itself with the address of the routine and executes the routine.
✦ The operating system is needed to check whether the routine is in the process's memory address space; if it is not, the stub loads it into memory. That is, dynamic linking incurs the overhead only the first time.
■ Dynamic linking is particularly useful for libraries: it avoids building large pieces of static libraries into the executable image (i.e., as with static libraries). That (may) affect both memory allocation and (instruction-)caching performance.

Overlays
■ With overlays, a process can be larger than the available amount of memory.
■ Idea: keep in memory only those instructions and data that are needed at any given time; replace instructions that are no longer needed.
✦ Needed/useful when a process is larger than the amount of memory allocated to it.
✦ Implemented by the user; no special support is needed from the operating system. Programming the overlay structure is complex, and requires that the programmer have complete knowledge of the program, its code, and its data structures.
■ Automatic techniques to run large programs in limited physical memory are certainly preferred.
[Figure: Overlays for a Two-Pass Assembler]

Swapping
■ A process needs to be in memory to be executed.
■ Idea: a process can, however, be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
■ Backing store – a fast disk, large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
■ Roll out, roll in – a swapping variant used for priority-based scheduling algorithms: a lower-priority process is swapped out so that when a higher-priority process arrives and wants service, it can be loaded and executed.
■ The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. A process that is swapped back in will be loaded into the same memory, unless execution-time binding is used.
■ Modified versions of swapping are found on many systems, e.g., UNIX, Linux, and Windows.

[Figure: Schematic View of Swapping]

Contiguous Allocation
■ Main memory is usually divided into two partitions:
✦ The resident operating system, usually held in low memory together with the interrupt vector. Since the interrupt vector is typically in low memory, it is more common to map the resident OS into low memory as well.
✦ User processes are then held in high memory.