Program-level Adaptive Memory Management

Chengliang Zhang†, Kirk Kelsey†, Xipeng Shen†, Chen Ding†, Matthew Hertz∗, and Mitsunori Ogihara†

†Computer Science Department, University of Rochester
{zhangchl,kelsey,xshen,cding,ogihara}@cs.rochester.edu
∗Computer Science Department, Canisius College
[email protected]

Abstract

An application's performance is affected by the amount of available memory. In a traditional application, which has a fixed working set size, increasing memory has a beneficial effect up until the application's working set is accommodated. In the presence of garbage collection this relationship becomes more complex. While increasing the size of the program's heap reduces the frequency of collections, collecting a heap with memory paged out to the backing store is very expensive. We first demonstrate the presence of an optimal heap size for a number of applications running on a machine with a specific configuration. We then introduce a scheme that adaptively finds this good heap size. In this scheme, we track the memory usage and number of page faults at a program's phase boundaries. Using this information, the system selects the soft heap size. By adapting itself dynamically, our scheme is independent of the underlying main memory size, code optimizations, and garbage collection algorithm. We present several experiments on real applications to show the effectiveness of our approach. Our results show that program-level heap control provides up to a factor of 7.8 overall speedup versus using the best possible fixed heap size controlled by the virtual machine on identical garbage collectors.

Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors—Memory management (garbage collection), Optimization

General Terms: Algorithms, Languages, Performance

Keywords: garbage collection, paging, adaptive, program-level, heap sizing

1. Introduction

There is a strong correlation between memory allocation and program performance.
[Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ISMM'06, June 10–11, 2006, Ottawa, Ontario, Canada. Copyright © 2006 ACM 1-59593-221-6/06/0006...$5.00.]

Traditionally, this relationship has been defined using the concept of a working set. An application's working set is the set of objects with which it is currently operating. The amount of memory needed to store these objects is called the application's working set size. When the available memory is less than an application's working set size, throughput is limited by the time the application spends waiting for memory to be paged in or out. Until the working set size is reached, we can improve program performance by increasing the available memory and thereby reducing the number of page faults. Once an application's working set fits into main memory, the application stops paging, so increasing memory further may not improve performance.

Garbage collection automatically reclaims dynamically allocated memory and thus relieves the programmer of the need to explicitly free blocks of memory. Increasing the memory available to a garbage-collected application also tends to decrease its execution time, but for a different reason. By increasing the size of its heap, an application can perform fewer garbage collections and thereby improve its throughput. While we could size the heap to just fit the working set, this would require the collector to spend more time collecting the heap and increase execution times accordingly. Setting a larger heap size in the virtual machine reduces the need to collect the heap and, generally, reduces the total time spent in collection.
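As a rough illustration of why a larger heap means fewer collections, consider a toy cost model (the formula and all numbers here are illustrative assumptions, not the paper's cost function): if a program allocates A bytes in total and keeps L bytes live, each full collection of an H-byte heap reclaims about H − L bytes, so roughly A / (H − L) collections are needed.

```python
def gc_collections(total_alloc, live_size, heap_size):
    """Rough number of full-heap collections needed: each collection
    reclaims heap_size - live_size bytes (a toy model for
    illustration, not the paper's cost function)."""
    assert heap_size > live_size, "heap must exceed live data"
    return total_alloc / (heap_size - live_size)

# Doubling the heap more than halves the collection count here,
# because the reclaimable fraction of the heap grows:
print(gc_collections(8000, 100, 200))   # 80.0 collections
print(gc_collections(8000, 100, 400))   # ~26.7 collections
```

The same model also hints at the diminishing returns discussed next: each further increment of heap buys fewer avoided collections, while the risk of paging keeps growing.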
Selecting larger heap sizes, therefore, improves performance by reducing the overhead due to garbage collection.

Although they decrease the frequency of garbage collection, larger heaps may not improve performance. First, as the heap grows, individual collections may take longer as the collector examines more objects. Larger heaps also increase the chance that the heap will not fit into memory and must have pages evicted to virtual memory. Another downside is that the live data in the heap may be scattered over a larger area. This scattering reduces data locality and hurts the performance of caches and the TLB. Ultimately, choosing a larger heap size requires balancing the benefit of fewer collections against the costs of reduced locality and paging.

The exact trade-off between frequent GC and paging is hard to predict, because it depends on the application, the virtual machine, the operating system, other programs in the environment, and all of their interactions. We will show that the relationship follows a common pattern. This pattern includes an optimal heap size, which we can adaptively identify during program execution to minimize the execution time.

Scheme                        Scope of adaptation    Changes required
PAMM                          VM, OS, program        Program
Automatic heap sizing [22]    VM / OS                VM / OS
GC hints [5]                  Program                Program
BC [11]                       VM / OS                VM / OS
Preventive GC [9]             Program                Program

Table 1. Comparison of different adaptive memory management schemes

There are two levels at which the heap size can be controlled: at the program level and at the virtual machine level. Past work has examined this problem in a number of ways. Yang et al. estimated the amount of available memory using an approximate reuse distance histogram [22]. A linear model controlled adjustments to the heap size so that physical memory was fully utilized. Maintaining these histograms required modifying the operating system, so their evaluation was done in simulation. An alternative approach by Hertz et al.
modified the garbage collector and OS to avoid touching pages written to the backing store and thereby reduced the cost of collecting a large heap [11]. Andreasson et al. used reinforcement learning to find the best times to perform garbage collection. Because it must first be trained, this strategy is effective only after tens of thousands of time steps (decision points) have passed [2]. We note that each of these schemes made changes to the operating system and/or virtual machine.

Programs usually run in repetitive patterns. We call each instance of the pattern a phase.
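The phase-boundary control loop sketched above can be pictured as a function consulted once per phase: shrink the soft heap bound when the previous phase incurred page faults (a heap spilling to disk makes collection very expensive), and grow it otherwise to reduce collection frequency. This is only a minimal sketch; the function name, threshold, and growth factors are assumptions for illustration, not the paper's actual controller.

```python
def next_soft_heap_size(current_size, page_faults,
                        fault_threshold=10,
                        grow_factor=1.2, shrink_factor=0.8):
    """Pick the soft heap size for the next phase.

    Illustrative sketch only: the names and constants are assumed,
    not taken from the paper. Shrink when the last phase paged,
    otherwise grow to trade spare memory for fewer collections.
    """
    if page_faults > fault_threshold:
        return current_size * shrink_factor
    return current_size * grow_factor

# Example: a phase with heavy paging backs the heap off...
size = next_soft_heap_size(100_000_000, page_faults=500)   # ~80 MB
# ...while a quiet phase lets it grow back.
size = next_soft_heap_size(size, page_faults=0)            # ~96 MB
```

Because the decision is re-made at every phase boundary from observed page faults, a controller of this shape adapts to whatever main memory size, collector, and co-running workload it encounters, which is the property the paper's scheme aims for.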

