UW-Madison COMPSCI 701 - Meta Optimization - Improving Compiler Heuristics with Machine Learning


Meta Optimization: Improving Compiler Heuristics with Machine Learning

Mark Stephenson and Saman Amarasinghe
Massachusetts Institute of Technology
Laboratory for Computer Science
Cambridge, MA 02139
{mstephen, saman}@cag.lcs.mit.edu

Martin Martin and Una-May O'Reilly
Massachusetts Institute of Technology
Artificial Intelligence Laboratory
Cambridge, MA 02139
{mcm, unamay}@ai.mit.edu

ABSTRACT

Compiler writers have crafted many heuristics over the years to approximately solve NP-hard problems efficiently. Finding a heuristic that performs well on a broad range of applications is a tedious and difficult process. This paper introduces Meta Optimization, a methodology for automatically fine-tuning compiler heuristics. Meta Optimization uses machine-learning techniques to automatically search the space of compiler heuristics. Our techniques reduce compiler design complexity by relieving compiler writers of the tedium of heuristic tuning. Our machine-learning system uses an evolutionary algorithm to automatically find effective compiler heuristics. We present promising experimental results. In one mode of operation, Meta Optimization creates application-specific heuristics which often result in impressive speedups. For hyperblock formation, one optimization we present in this paper, we obtain an average speedup of 23% (up to 73%) for the applications in our suite. Furthermore, by evolving a compiler's heuristic over several benchmarks, we can create effective, general-purpose heuristics. The best general-purpose heuristic our system found for hyperblock formation improved performance by an average of 25% on our training set, and 9% on a completely unrelated test set.
We demonstrate the efficacy of our techniques on three different optimizations in this paper: hyperblock formation, register allocation, and data prefetching.

Categories and Subject Descriptors
D.1.2 [Programming Techniques]: Automatic Programming; D.2.2 [Software Engineering]: Design Tools and Techniques; I.2.6 [Artificial Intelligence]: Learning

General Terms
Design, Algorithms, Performance

Keywords
Machine Learning, Priority Functions, Genetic Programming, Compiler Heuristics

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
PLDI'03, June 9-11, 2003, San Diego, California, USA.
Copyright 2003 ACM 1-58113-662-5/03/0006 ...$5.00.

1. INTRODUCTION

Compiler writers have a difficult task. They are expected to create effective and inexpensive solutions to NP-hard problems such as register allocation and instruction scheduling. Their solutions are expected to interact well with other optimizations that the compiler performs. Because some optimizations have competing and conflicting goals, adverse interactions are inevitable. Getting all of the compiler passes to mesh nicely is a daunting task.

The advent of intractably complex computer architectures also complicates the compiler writer's task. Since it is impossible to create a simple model that captures the intricacies of modern architectures and compilers, compiler writers rely on inaccurate abstractions.
Such models are based upon many assumptions, and thus may not even properly simulate first-order effects.

Because compilers cannot afford to optimally solve NP-hard problems, compiler writers devise clever heuristics that quickly find good approximate solutions for a large class of applications. Unfortunately, heuristics rely on a fair amount of tweaking to achieve suitable performance. Trial-and-error experimentation can help an engineer optimize the heuristic for a given compiler and architecture. For instance, one might be able to use iterative experimentation to figure out how much to unroll loops for a given architecture (i.e., without thrashing the instruction cache or incurring too much register pressure).

After studying several compiler optimizations, we found that many heuristics have a focal point. A single priority or cost function often dictates the efficacy of a heuristic. A priority function (a function of the factors that affect a given problem) measures the relative importance of the different options available to a compiler algorithm.

Take register allocation, for example. When a graph coloring register allocator cannot successfully color an interference graph, it spills a variable to memory and removes it from the graph. The allocator then attempts to color the reduced graph. When a graph is not colorable, choosing an appropriate variable to spill is crucial. For many allocators, this decision is bestowed upon a single priority function. Based on relevant data (e.g., number of references, depth in loop nest, etc.), the function assigns weights to all uncolored variables and thereby determines which variable to spill.

Fine-tuning priority functions to achieve suitable performance is a tedious process. Currently, compiler writers manually experiment with different priority functions. For instance, Bernstein et al. manually identified three priority functions for choosing spill variables [3].
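To make the idea of a spill priority function concrete, here is a minimal sketch in Python. The `Var` record and its fields (`uses`, `defs`, `loop_depth`, `degree`) are hypothetical names introduced for illustration, and the weighting (references scaled exponentially by loop depth, divided by interference degree) follows the classic Chaitin-style cost estimate rather than any particular allocator discussed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Var:
    uses: int        # number of uses of this variable
    defs: int        # number of definitions
    loop_depth: int  # nesting depth of the deepest enclosing loop
    degree: int      # neighbors in the interference graph

def spill_priority(v: Var) -> float:
    # References inside deep loops are costly to reload, so weight
    # them exponentially by loop depth; dividing by degree favors
    # spilling high-degree nodes, which free up more colors.
    refs = (v.uses + v.defs) * (10 ** v.loop_depth)
    return refs / max(v.degree, 1)

def choose_spill(candidates):
    # Spill the variable with the lowest priority: cheap to spill
    # and likely to make the remaining graph colorable.
    return min(candidates, key=spill_priority)
```

A priority function like this is exactly the kind of small, self-contained expression that a search procedure can tune: changing the weights or the combining operators changes which variable gets spilled, without touching the rest of the allocator.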
By applying the three functions to a suite of benchmarks, they found that a register allocator's effectiveness is highly dependent on the priority function the compiler uses.

The importance of priority functions is a key insight that motivates Meta Optimization, a method by which a machine-learning algorithm automatically searches the priority function solution space. More specifically, we use a learning algorithm that iteratively searches for priority functions that improve the execution time of compiled applications.

Our system can be used to cater a priority function to a specific input program. This mode of operation is essentially an advanced form of feedback-directed optimization. More importantly, it can be used to find a general-purpose function that works well for a broad range of applications. In this mode of operation, Meta Optimization can perform the tedious work that is currently performed by engineers. For each of the three case studies we describe in this paper, we were able to at least match the performance of human-generated priority functions. In some cases we achieved considerable speedups.

While many researchers have used
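The iterative search described above can be sketched as a simple evolutionary loop. Note this is a deliberately simplified stand-in: the paper evolves priority-function expression trees with genetic programming, whereas this sketch evolves only a weight vector over features, and the `fitness` function is a placeholder for what would really be "compile the benchmark with the candidate priority function and measure its execution time." All names and constants are illustrative.

```python
import random

FEATURES = 3  # e.g., reference count, loop depth, interference degree

def fitness(weights):
    # Placeholder objective (lower is better). In a real system this
    # would be the measured run time of the compiled benchmark; here
    # we score distance to an arbitrary fixed target vector.
    target = [2.0, 5.0, -1.0]
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, rng, sigma=0.5):
    # Perturb each weight with Gaussian noise.
    return [w + rng.gauss(0, sigma) for w in weights]

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return min(pop, key=fitness)
```

Because the parents survive each generation unchanged, the best candidate's fitness never regresses; the expensive part in practice is the fitness evaluation itself, since each candidate requires a full compile-and-run cycle.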

