Stanford CS 140 - Lecture 14 - Dynamic Memory Allocation

CS 140, Summer 2008, Handout 14: Dynamic Memory Allocation

Today: dynamic memory allocation. What's the goal, and why is it hard?
- Almost every useful program uses dynamic allocation. The goal: satisfy an arbitrary stream of allocations and frees.
- Easy without free: set a pointer to the beginning of some big chunk of memory (the heap) and increment it on each allocation.
- Dynamic allocation gives wonderful functionality benefits:
  - You don't have to statically specify complex data structures.
  - Data can grow as a function of input size.
  - It allows recursive procedures (stack growth).
- But it can have a huge impact on performance. Today: how to implement it, and what's hard.
- Some interesting facts:
  - A two- or three-line code change can have a huge, non-obvious impact on how well an allocator works (examples to come).
  - It has been proven impossible to construct an always-good allocator.
  - A surprising result: after 50 years, memory management is still poorly understood.

More abstractly, the allocator manages a heap and a free list:
- Problem: free creates holes in the heap (fragmentation).
- Result: lots of free space, yet the allocator cannot satisfy a request.

What is fragmentation, really? The inability to use memory that is free. It has two causes:
- Different lifetimes: if adjacent objects die at different times, fragmentation results; if they die at the same time, there is no fragmentation.
- Different sizes: if all requests are the same size, there is no fragmentation (paging artificially creates this situation).

What an allocator must do:
- Track which parts of memory are in use and which parts are free.
- Ideal: no wasted space, no time overhead.

What the allocator cannot do:
- Control the order, number, or size of requested blocks.
- Change user pointers: bad placement decisions are permanent. For example, after a = malloc(20); b = malloc(20); free(a), the 20-byte hole before b is fixed in place and can only serve requests of 20 bytes or less.

The core fight: minimize fragmentation.
- The application frees blocks in any order, creating holes in the heap.
- Holes that are too small cannot satisfy future requests.

The first important decision for fragmentation is placement choice: where in free memory to put a requested block.
- Freedom: the allocator can select any free memory in the heap.
- Ideal: put the block where it won't cause fragmentation later (impossible in general, since that requires future knowledge).
Two further decisions also shape fragmentation:
- Splitting free blocks to satisfy smaller requests (fights internal fragmentation).
  - Freedom: the allocator can choose any larger block to split.
  - One way: choose the block with the smallest remainder (best fit).
- Coalescing adjacent free blocks to yield larger blocks (fights external fragmentation). For example, adjacent free blocks of 20 and 10 bytes coalesce into one 30-byte block.
  - Freedom: when coalescing is done (deferring it can be good).

Fragmentation is impossible to "solve":
- If you read allocation papers or books looking for the best allocator, it can be frustrating: all discussions revolve around tradeoffs.
- The reason: there cannot be a best allocator. Theoretical result: for any possible allocation algorithm, there exist streams of allocation and deallocation requests that defeat the allocator and force it into severe fragmentation.
- How bad? A good allocator needs about M log n bytes, where M = bytes of live data and n = ratio between the smallest and largest request sizes; a bad allocator needs about M * n.

Pathological examples:
- Best fit: given an allocation of seven 20-byte chunks, what's a bad stream of frees and then allocates?
- Given 100 bytes of free space, what's a really bad combination of placement decisions, mallocs, and frees? (Hint: alternate alloc(19) and alloc(21).)

Next: two allocators, best fit and first fit, that in practice work pretty well.

First fit:
- Strategy: pick the first block that fits.
- Data structure: the heap is a list of free blocks; each block has a header holding its size and a pointer to the next free block. The free list can be kept in LIFO order, FIFO order, or sorted by address.
  - LIFO: put a freed object on the front of the list. Simple and cheap, but causes higher fragmentation.
  - FIFO: put a freed object at the end of the list. Gives fragmentation as low as address sort, but it is unclear why.
  - Address sort: order free blocks by address. Makes coalescing easy (just check whether the next block is free) and preserves large empty regions (good).
- Code: scan the list and take the first block that is big enough.

First fit nuances:
- LIFO first fit seems good: putting a freed object on the front of the list is cheap, and if the same size is requested again soon it is found immediately (good locality).
- But it has big problems for simple allocation patterns. Repeatedly intermix short-lived large allocations with long-lived small allocations: each time a large object is freed, a small chunk of it is quickly taken, leaving "sawdust" at the beginning of the list.
- This sorting of the list forces large requests to skip over many small blocks, so a scalable heap organization is needed.
This pathological fragmentation doesn't seem to happen in practice, though the way real programs behave suggests it easily could.

First fit, address order, in practice:
- Blocks at the front of the list are preferentially split; blocks at the back are split only when no larger block is found before them.
- Result: this seems to roughly sort the free list by size.
- So it makes first fit operationally similar to best fit: a first-fit search of a sorted list is best fit.

Subtle pathology of LIFO first fit:
- Simple bad case: alternately allocate blocks of two sizes, free all the blocks of the smaller size, then try to allocate a block one byte larger than that smaller size.
- Example: start with 100 bytes of memory. alloc(19), alloc(21), alloc(19), alloc(21), alloc(19); free the three 19-byte blocks; then alloc(20) fails, even though 57 bytes are free (wasted space: 57 bytes).

Best fit:
- Strategy: minimize fragmentation by allocating space from the block that leaves the smallest leftover fragment.
- Code: search the free list for the block closest in size to the request; an exact match is ideal. During free, usually coalesce adjacent blocks.
- Problem: sawdust. Remainders are so small that over time the heap is left with sawdust everywhere. Fortunately this is not a problem in practice: best fit gives low fragmentation under many workloads.

Best fit gone wrong (when first fit does better):
- Suppose memory has two free blocks, of sizes 20 and 15.
- If the allocation sizes are 10 then 20: best fit works (it puts the 10 in the 15-byte block, leaving the 20-byte block free), while first fit puts the 10 in the 20-byte block and then cannot satisfy the 20.
- If the allocation sizes are 8, 12, then 12: first fit works (8 into the 20-byte block, 12 into its 12-byte remainder, 12 into the 15-byte block), while best fit puts the 8 in the 15-byte block and the first 12 in the 20-byte block, and then cannot satisfy the second 12.

The weird parallels of first fit and best fit:
- Both seem to perform roughly equivalently. In fact, the placement decisions of both are roughly identical under both randomized and real workloads.
- No one knows why; it is pretty strange, since the two strategies seem quite different.

Some worse ideas:
- Worst fit: fight sawdust by splitting blocks so as to maximize the leftover size. In real life this seems to ensure that no large blocks are ever around.
- Buddy systems: round allocations up to a power of 2 to make coalescing easier. Result: heavy internal fragmentation.
A possible explanation: first fit behaves like best fit because over time its free list becomes roughly sorted by size, as the beginning of the free list accumulates small objects.

Next fit:
- Strategy: use first fit, but remember where the last search found a block and start the next search from there.
- Seems like a good idea, but it tends to break down the entire list: small leftover fragments get spread everywhere instead of accumulating at the front.

Known patterns of real programs:
- So far we have treated programs as black boxes. Most real programs exhibit one, two, or all three of the following patterns of allocation and deallocation, viewed as bytes in use over time.
- Pattern 1: ramps. Bytes in use grow steadily over the run of the program.

