
# Berkeley COMPSCI 164 - IL for Arrays & Local Optimizations




4/23/09 — Prof. Hilfinger, CS 164, Lecture 26
(Adapted from notes by R. Bodik and G. Necula)

## Multi-dimensional Arrays

- A 2D array is a 1D array of 1D arrays.
- Java uses arrays of pointers to arrays for >1D arrays.
- But if the row size is constant, for faster access and compactness, we may prefer to represent an MxN array as a 1D array of 1D rows (not pointers to rows): **row-major order**.
- FORTRAN's layout is a 1D array of 1D columns: **column-major order**.

## IL for 2D Arrays (Row-Major Order)

Again, let S be the size of one element, so that a row of length N has size N × S.

```
igen(e1[e2,e3], t) =
  igen(e1, t1); igen(e2, t2); igen(e3, t3)
  igen(N, t4)        (N need not be constant)
  t5 := t4 * t2
  t6 := t5 + t3
  t7 := t6 * S
  t8 := t7 + t1
  t  := *t8
```

## Array Descriptors

- Calculation of the element address for `e1[e2,e3]` has the form VO + S1 × e2 + S2 × e3, where
  - VO (the address of `e1[0,0]`) is the **virtual origin**,
  - S1 and S2 are **strides**,
  - all three of these are constant throughout the lifetime of the array.
- It is common to package these up into an **array descriptor**, which can be passed in lieu of the array itself.

## Array Descriptors (II)

- By judicious choice of descriptor values, we can make the same formula work for different kinds of array.
- For example, if the lower bounds of the indices are 1 rather than 0, we must compute the address of `e[1,1]` + S1 × (e2 − 1) + S2 × (e3 − 1).
- But some algebra puts this into the form VO + S1 × e2 + S2 × e3, where VO = address of `e[1,1]` − S1 − S2.

## Observation

- These examples show profligate use of registers.
- That doesn't matter, because this is intermediate code. We rely on later optimization stages to do the right thing.

## Code Optimization: Basic Concepts
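The descriptor arithmetic above can be sketched concretely. This is a minimal illustration, not part of the lecture: the class name, field names, and the byte addresses are hypothetical, and it shows both the 0-based formula and the 1-based adjustment VO = address of `e[1,1]` − S1 − S2.

```python
class ArrayDescriptor:
    """Packages the virtual origin and strides, as described above."""
    def __init__(self, vo, s1, s2):
        self.vo = vo  # virtual origin: address that makes the formula work
        self.s1 = s1  # stride between consecutive rows, in bytes
        self.s2 = s2  # stride between consecutive elements within a row

    def address(self, i, j):
        # Same formula regardless of index base: VO + S1*i + S2*j
        return self.vo + self.s1 * i + self.s2 * j

# Row-major M x N array of 4-byte elements, first element at byte 1000.
# For row-major order: S2 = element size S, S1 = N * S.
N, S = 10, 4
d = ArrayDescriptor(vo=1000, s1=N * S, s2=S)
assert d.address(0, 0) == 1000
assert d.address(2, 3) == 1000 + (2 * N + 3) * S

# 1-based indexing via the algebra on the slide: VO = addr(e[1,1]) - S1 - S2,
# so e[1,1] still lands on the first element.
d1 = ArrayDescriptor(vo=1000 - N * S - S, s1=N * S, s2=S)
assert d1.address(1, 1) == 1000
```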
## Basic Blocks

Definition. A **basic block** is a maximal sequence of instructions with:

- no labels (except at the first instruction), and
- no jumps (except in the last instruction).

Idea:

- Cannot jump into a basic block (except at the beginning).
- Cannot jump out of a basic block (except at the end).
- Each instruction in a basic block is executed after all the preceding instructions have been executed.

## Basic Block Example

Consider the basic block

```
1. L:
2. t := 2 * x
3. w := t + x
4. if w > 0 goto L'
```

There is no way for (3) to be executed without (2) having been executed right before.

- We can therefore change (3) to `w := 3 * x`.
- Can we eliminate (2) as well?

## Control-Flow Graphs

Definition. A **control-flow graph** is a directed graph with:

- basic blocks as nodes;
- an edge from block A to block B if execution can flow from the last instruction in A to the first instruction in B:
  - e.g., the last instruction in A is `jump LB`;
  - e.g., execution can fall through from block A to block B.

Frequently abbreviated as CFG.

## Control-Flow Graphs: Example

The body of a method (or procedure) can be represented as a control-flow graph. There is one initial node, and all "return" nodes are terminal.

```
x := 1
i := 1
L: x := x * x
   i := i + 1
   if i < 10 goto L
```

## Optimization Overview

- Optimization seeks to improve a program's utilization of some resource:
  - execution time (most often),
  - code size,
  - network messages sent,
  - battery power used, etc.
- Optimization should not alter what the program computes: the answer must still be the same.

## A Classification of Optimizations

For languages like C and Cool there are three granularities of optimizations:

1. **Local** optimizations: apply to a basic block in isolation.
2. **Global** optimizations: apply to a control-flow graph (method body) in isolation.
3.
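The basic-block definition above translates directly into a partitioning algorithm: a new block starts at the first instruction, at every labeled instruction, and right after every jump. Below is a minimal sketch under the assumption that instructions are strings in the slide's syntax; `basic_blocks` is a hypothetical helper, not lecture code.

```python
def basic_blocks(instrs):
    """Split a straight-line IL listing into maximal basic blocks."""
    leaders = {0}  # first instruction always starts a block
    for k, ins in enumerate(instrs):
        if ins.endswith(':'):                 # labeled instruction starts a block
            leaders.add(k)
        if 'goto' in ins and k + 1 < len(instrs):
            leaders.add(k + 1)                # instruction after a jump starts a block
    cuts = sorted(leaders) + [len(instrs)]
    return [instrs[a:b] for a, b in zip(cuts, cuts[1:])]

# The CFG example above:
code = [
    "x := 1",
    "i := 1",
    "L:",
    "x := x * x",
    "i := i + 1",
    "if i < 10 goto L",
]
blocks = basic_blocks(code)
assert len(blocks) == 2          # entry block, then the loop body at L
assert blocks[1][0] == "L:"
```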
**Inter-procedural** optimizations: apply across method boundaries.

Most compilers do (1), many do (2), and very few do (3).

## Cost of Optimizations

- In practice, a conscious decision is made not to implement the fanciest optimization known.
- Why?
  - Some optimizations are hard to implement.
  - Some optimizations are costly in terms of compilation time.
  - The fancy optimizations are both hard and costly.
- The goal: maximum improvement with minimum cost.

## Local Optimizations

- The simplest form of optimization.
- No need to analyze the whole procedure body, just the basic block in question.
- Example: algebraic simplification.

## Algebraic Simplification

Some statements can be deleted:

```
x := x + 0
x := x * 1
```

Some statements can be simplified:

```
x := x * 0   ⇒  x := 0
y := y ** 2  ⇒  y := y * y
x := x * 8   ⇒  x := x << 3
x := x * 15  ⇒  t := x << 4; x := t - x
```

(On some machines `<<` is faster than `*`, but not on all!)

## Constant Folding

- Operations on constants can be computed at compile time.
- In general, if there is a statement `x := y op z`, and y and z are constants, then `y op z` can be computed at compile time.
- Example: `x := 2 + 2` ⇒ `x := 4`
- Example: `if 2 < 0 jump L` can be deleted.
- When might constant folding be dangerous?

## Flow of Control Optimizations

Eliminating unreachable code:

- code that is unreachable in the control-flow graph;
- basic blocks that are not the target of any jump or "fall through" from a conditional;
- such basic blocks can be eliminated.

Why would such basic blocks occur?

Removing unreachable code makes the program smaller, and sometimes also faster, due to memory cache effects (increased spatial locality).
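Algebraic simplification and constant folding are both local rewrites of a single statement. The sketch below is illustrative only: it assumes a hypothetical tuple encoding `(op, dest, arg1, arg2)` for IL statements, which is not the lecture's notation, and covers just a few of the rules above.

```python
def simplify(op, dest, a, b):
    """Rewrite one statement by algebraic simplification / constant folding."""
    if op == '+' and b == 0:
        return ('copy', dest, a, None)      # x := a + 0  =>  x := a
    if op == '*' and b == 1:
        return ('copy', dest, a, None)      # x := a * 1  =>  x := a
    if op == '*' and b == 0:
        return ('copy', dest, 0, None)      # x := a * 0  =>  x := 0
    if isinstance(a, int) and isinstance(b, int):
        if op == '+':                       # constant folding: compute at
            return ('copy', dest, a + b, None)  # compile time
        if op == '*':
            return ('copy', dest, a * b, None)
    return (op, dest, a, b)                 # no rule applies: keep as-is

assert simplify('+', 'x', 'x', 0) == ('copy', 'x', 'x', None)
assert simplify('+', 'x', 2, 2) == ('copy', 'x', 4, None)
assert simplify('*', 'x', 'x', 0) == ('copy', 'x', 0, None)
assert simplify('-', 'x', 'y', 'z') == ('-', 'x', 'y', 'z')
```

Note that a real compiler must be careful here, which is the point of the "dangerous" question above: folding must respect the target machine's arithmetic (overflow, division by zero, floating-point rounding), not the compiler host's.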
## Single Assignment Form

- Some optimizations are simplified if each assignment is to a temporary that has not appeared already in the basic block.
- Intermediate code can be rewritten to be in single assignment form:

```
x := a + y        x := a + y
a := x        ⇒   a1 := x
x := a * x        x1 := a1 * x
b := x + a        b := x1 + a1
```

(x1 and a1 are fresh temporaries.)
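The rewrite above can be sketched as a one-pass renaming: whenever an assignment targets a name that has already appeared in the block, the target gets a fresh subscripted temporary, and later uses refer to the current version. This is a minimal illustration with a hypothetical tuple encoding `(dest, arg1, op, arg2)`, not the lecture's notation.

```python
def to_single_assignment(block):
    """Rewrite a basic block so no name is assigned after it has appeared."""
    version = {}   # original name -> its current (possibly renamed) version
    counter = {}   # how many fresh versions each name has needed
    seen = set()   # names that have appeared so far in the block
    out = []
    for dest, a, op, b in block:
        a2 = version.get(a, a)           # uses refer to the current version
        b2 = version.get(b, b)
        for name in (a, b):
            if isinstance(name, str):
                seen.add(name)
        if dest in seen:                 # dest already appeared: rename it
            counter[dest] = counter.get(dest, 0) + 1
            version[dest] = f"{dest}{counter[dest]}"
        seen.add(dest)
        out.append((version.get(dest, dest), a2, op, b2))
    return out

# The example block above: x := a + y; a := x; x := a * x; b := x + a
block = [
    ('x', 'a', '+', 'y'),
    ('a', 'x', None, None),
    ('x', 'a', '*', 'x'),
    ('b', 'x', '+', 'a'),
]
assert to_single_assignment(block) == [
    ('x', 'a', '+', 'y'),
    ('a1', 'x', None, None),
    ('x1', 'a1', '*', 'x'),
    ('b', 'x1', '+', 'a1'),
]
```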
