CSc 453 — Compilers and Systems Software
22: Optimization

Christian Collberg
Department of Computer Science
University of Arizona

© 2009 Christian Collberg
November 1, 2009

1 Introduction

2 Compiler Phases

[Figure: compiler pipeline. source → Lexer (tokens) → Parser (AST) → Semantic Analyser (AST) → Intermediate Code Gen (IR) → Optimize (IR) → Machine Code Gen (asm); the Parser and Semantic Analyser report errors. The Optimize box is marked "We are here!", with "Or maybe here!" and "Or how about here?" pointing at the other phases where optimization could also take place.]

[Figure: detailed structure. SOURCE → Lexical Analysis → Syntactic Analysis → Semantic Analysis (tree/AST transformations) → Interm. Code Gen. → Intermediate Code Optimization → Machine Code Gen. → Peephole Opt. The intermediate code optimizer builds on Control Flow Analysis, Data Flow Analysis (gen[S]={..}, kill[S1]={..}, ...) and Build Call Graph, and runs Local Opt, Global Opt and Inter-Proc. Opt, each a sequence of transformations (Transformation 1, Transformation 2, ...).]

4 What do we Optimize?

5 What do we Optimize?

1. Optimize everything, all the time. The problem is that optimization interferes with debugging. In fact, many (most) compilers don't let you generate an optimized program with debugging information. The problem of debugging optimized code is an important research field.

   Furthermore, optimization is probably the most time-consuming pass in the compiler. Always optimizing everything (even routines which will never be called!) wastes valuable time.

2. The programmer decides what to optimize. The problem is that the programmer has a local view of the code. When timing a program, programmers are often very surprised to see where most of the time is spent.

3. Turn optimization on when the program is complete. Unfortunately, optimizers aren't perfect, and a program that performed OK with debugging turned on often behaves differently when debugging is off and optimization is on.

4. Optimize inner loops only. Unfortunately, procedure calls can hide inner loops:

      PROCEDURE P(n);
      BEGIN
         FOR k := 1 TO n DO ... END;
      END P;

      FOR i := 1 TO 10000 DO P(i) END;

5. Use profiling information to guide what to optimize.

   [Figure: profile-guided optimization. "Carefully chosen input" data is run through the executable program; the resulting profile is fed back to the compiler (Front-End → Optimizer → Code Generator).]

6. Runtime code generation/optimization. We delay code generation and optimization until execution time. At that time we have more information to guide the optimizations:

   [Figure: runtime specialization. The front end stores intermediate code for P and Q in an intermediate-code table. At execution time, the call P(x,3) makes the compiler generate specialized code for P(_,3), optimize it, store it in the compiled-code table, and then call it.]

8 Local vs. Global vs. Inter-procedural Optimization

9 Local, Global, Inter-Proc.

• Some compilers optimize more aggressively than others. An aggressive compiler optimizes over a large piece of code, a simple one only considers a small chunk of code at a time.

Local Optimization:

• Consider each basic block by itself. (All compilers.)

Global Optimization:

• Consider each procedure by itself. (Most compilers.)

Inter-Procedural Optimization:

• Consider the control flow between procedures. (A few compilers do this.)

10 Local Optimization — Transformations

• Local common subexpression elimination.
• Local copy propagation.
• Local dead-code elimination.
• Algebraic optimization.
• Jump-to-jump removal.
• Reduction in strength.

11 Peephole Optimization

• Can be done at the machine code level or at the intermediate code level.

1. Examine a "window" of instructions.
2. Improve the code in the window.
3. Slide the window.
4. Repeat until "optimal".
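To make the sliding-window idea concrete, here is a minimal sketch in Python (not the lecture's code) of a peephole pass over a list of instruction tuples. The instruction encoding and the function name peephole are invented for this illustration; the pass removes one common redundancy, a load of a location that the immediately preceding instruction stored from the same register. A real pass would re-scan until nothing changes ("repeat until optimal") and match many more patterns, such as the jump-to-jump rewriting shown below.

    # Minimal peephole pass over (op, ...) instruction tuples, e.g.
    #   ("ld", "A", "%l1")   load A into register %l1
    #   ("st", "%l1", "A")   store register %l1 into A
    def peephole(instrs):
        out = []
        for instr in instrs:
            prev = out[-1] if out else None   # 2-instruction window
            if (prev is not None
                    and prev[0] == "st" and instr[0] == "ld"
                    and prev[2] == instr[1]    # same memory location
                    and prev[1] == instr[2]):  # same register
                continue   # value is already in the register: drop the load
            out.append(instr)
        return out

    code = [
        ("ld", "A", "%l1"),
        ("add", "%l1", "1", "%l1"),
        ("st", "%l1", "A"),
        ("ld", "A", "%l1"),   # redundant: A was just stored from %l1
        ("add", "%l1", "1", "%l1"),
        ("st", "%l1", "A"),
    ]
    print(peephole(code))    # the second load disappears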
12 Local Optimization

13 Peephole optimization — Redundant Loads

• A naive code generator will generate the same address or variable several times.

    A := A + 1;   ⇒    set A, %l0
                       set A, %l1
                       ld  [%l1], %l1
                       add %l1, 1, %l1
                       st  %l1, [%l0]
                  ⇓
                       set A, %l0
                       ld  [%l0], %l1
                       add %l1, 1, %l1
                       st  %l1, [%l0]

14 Peephole optimization — Jumps-to-jumps

• Complicated boolean expressions (with many ands, ors and nots) can easily produce lots of jumps to jumps.

        if a < b goto L1               if a < b goto L3
        ...                            ...
    L1: goto L2              ⇒     L1: goto L3
        ...                            ...
    L2: goto L3                    L2: goto L3

15 Algebraic Simplification

• Beware of numerical problems:

      (x ∗ 0.00000001) ∗ 10000000000.0

  may produce a different result than

      (x ∗ 1000.0)!

• FORTRAN requires that parentheses be honored: (5.0 ∗ x) ∗ (6.0 ∗ y) can't be evaluated as (30.0 ∗ x ∗ y).

• Note that multiplication is often faster than division.

16 Algebraic Simplification ...

An empty right-hand side means the statement can simply be deleted.

    x := x + 0;    ⇒
    x := x - 0;    ⇒
    x := x ∗ 1;    ⇒
    x := 1 ∗ 1;    ⇒   x := 1
    x := x / 1;    ⇒
    x := x ** 2;   ⇒   x := x * x;
    f := f / 2.0;  ⇒   f := f ∗ 0.5;

17 Reduction in Strength

• SHL(x, y) = shift x left y steps.

• Multiplications (and divisions) by constants can be replaced by cheaper sequences of shifts and adds.

    x := x ∗ 32    ⇒   x := SHL(x, 5);

    x := x ∗ 100
       ⇓
    x := x ∗ (64 + 32 + 4)
       ⇓
    x := x ∗ 64 + x ∗ 32 + x ∗ 4
       ⇓
    x := SHL(x, 6) + SHL(x, 5) + SHL(x, 2)

18 Local, Global, Inter-Procedural

Original Code:

    FUNCTION P (X, n): INT;
       IF n = 3 THEN RETURN X[1]
       ELSE RETURN X[n];

    CONST R = 1;
    BEGIN
       K := 3; ...
       IF P(X, K) = X[1] THEN
          X[1] := R * (X[1] ** 2)

After Local Optimization (the constant R = 1 is folded away and X[1] ** 2 is strength-reduced to X[1] * X[1]):

    FUNCTION P (X, n): INT;
       IF n = 3 THEN RETURN X[1] ELSE RETURN X[n]

    BEGIN
       K := 3; ...
       IF P(X, K) = X[1] THEN X[1] := X[1] * X[1]

After Global Optimization (K = 3 is propagated into the call and the dead assignment is removed):

    FUNCTION P (X, n): INT;
       IF n = 3 THEN RETURN X[1] ELSE RETURN X[n]

    BEGIN
       ...
       IF P(X, 3) = X[1] THEN X[1] := X[1] * X[1]

After Inter-Procedural Opt (with n = 3, P returns X[1], so the comparison is known to be TRUE):

    BEGIN
       IF TRUE THEN X[1] := X[1] * X[1]

After Another Local Opt:

    BEGIN
       X[1] := X[1] * X[1]

• Delete P if it isn't used elsewhere. This can maybe be deduced by an inter-procedural analysis.

22 Global Optimization

23 Global Optimization

• Makes use of control-flow and data-flow analysis.
• Dead code elimination.
• Common subexpression elimination (local and global).
• Loop unrolling.
• Code hoisting.
• Induction variables.
• Reduction in strength.
• Copy propagation.
• Live variable analysis.
• Uninitialized variable analysis.

24 Control Flow Graphs

Perform optimizations over the control flow graph of a procedure.

[Figure: a control flow graph over basic blocks B1–B5, with branches such as "if ... goto B2", "if ... goto B3", "if ... goto B6" and "goto B2".]

25 Common Sub-Expression Elimination

[Figure: a control flow graph in which B1 computes t3 := 4 * j and A[t3] := 20, while later blocks recompute the same expression (t5 := j * 4; A[t5] := 20, and in B5 t9 := 4 * j; x := A[t9]). Since j is not changed on the paths in between, the recomputations are removed and the uses become A[t3] := 20 and x := A[t3].]

26 Copy Propagation

[Figure: after the copy x := t3, the later uses A[t4] := x and A[x] := 20 (in blocks B4 and B5) are rewritten to A[t4] := t3 and A[t3] := 20, since x is not changed in between. Afterwards there are no more uses of x, so the copy itself becomes dead code.]

• Many optimizations produce copy statements of the form X := Y.
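To connect copy propagation to actual code, here is a minimal Python sketch (not from the lecture) of local copy propagation inside one basic block. The three-address representation and the name copy_propagate are invented for this example; it only tracks plain copies dest := source and forgets a copy as soon as either side is reassigned.

    # Local copy propagation inside one basic block (illustrative sketch).
    # A statement is (dest, operands): ("x", ["t3"]) models x := t3,
    # ("a", ["t4", "+", "x"]) models a := t4 + x.
    def copy_propagate(block):
        copies = {}   # dest -> source for currently valid copies
        out = []
        for dest, operands in block:
            # Rewrite operands through the known copies (x -> t3).
            new_ops = [copies.get(op, op) for op in operands]
            out.append((dest, new_ops))
            # Reassigning dest kills every copy that mentions it.
            copies = {d: s for d, s in copies.items() if dest not in (d, s)}
            # Record a new copy if the statement is a plain dest := variable.
            if len(new_ops) == 1 and new_ops[0].isidentifier():
                copies[dest] = new_ops[0]
        return out

    block = [
        ("x", ["t3"]),             # x := t3
        ("a", ["t4", "+", "x"]),   # becomes a := t4 + t3
        ("b", ["x", "*", "x"]),    # becomes b := t3 * t3
    ]
    print(copy_propagate(block))

After the rewrite, x has no remaining uses in the block, so a later dead-code elimination pass can delete the copy x := t3, which is the effect sketched in the figure above.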

