Understanding the Differences Between Value Prediction and Instruction Reuse

Avinash Sodani and Gurindar S. Sohi
Computer Sciences Department
University of Wisconsin-Madison
1210 West Dayton Street
Madison, WI 53706 USA
{sodani, sohi}@cs.wisc.edu

Abstract

Recently, two hardware techniques, Value Prediction (VP) and Instruction Reuse (IR), have been proposed for exploiting the redundancy in programs to collapse data dependences. In this paper, we attempt to understand the different ways in which VP and IR interact with other microarchitectural features and the impact of such interactions on net performance. More specifically, we perform the following tasks: (i) we identify the various differences between the two techniques and qualitatively discuss their microarchitectural interactions, (ii) we evaluate the impact of these interactions on performance, and (iii) since IR is the more restrictive of the two techniques, we also estimate the amount of the total redundancy present in programs that can be captured by IR.

Our results show that the performance obtained by VP is sensitive to the way branches with value-speculative operands are handled. We also see that, although IR captures less of the redundancy, it may perform equally well because it validates results early, it is non-speculative, and it reduces the branch misprediction penalty. Finally, we show that 84-97% of the redundancy in programs can be reused, implying that the approach of detecting redundant instructions non-speculatively, based on their operands, does not significantly restrict IR's ability to capture the redundancy present in programs.

1. Introduction

Several recent studies [2, 5, 8, 10] have shown that there is significant result redundancy in programs, i.e., many instructions perform the same computation and, hence, produce the same result over and over again. These studies have found that, for several benchmarks, more than 75% of the dynamic instructions produce the same result as before. Recently, two hardware techniques have been proposed to exploit this redundancy: (i) Value Prediction (VP) [3, 4, 5], and (ii) Instruction Reuse (IR) [9]. Both techniques attempt to reduce the execution time of programs by alleviating the dataflow constraint. They use the redundancy in programs to determine, speculatively (Value Prediction) or non-speculatively (Instruction Reuse), the results of instructions without actually executing them. The advantage of doing so is that instructions do not have to wait for their source instructions to execute first; they can execute sooner using the results obtained by these techniques, thus relaxing the dataflow constraint.

Although both VP and IR attempt to shorten the critical path through a computation, they follow very different approaches. VP predicts the results of instructions (or, alternatively, the inputs of other instructions) based on previously seen results, performs the computation using the predicted values, and confirms the speculation at a later point. The critical path is shortened because instructions that would normally execute sequentially can execute (speculatively) in parallel. On the other hand, IR recognizes that a certain computation chain has been performed before and therefore need not be performed again, i.e., it "splices out" a chain of computation from the critical path.

The effectiveness of any microarchitectural technique in improving the net performance of a processor depends not only on how well it performs by itself, but also on how it interacts with other microarchitectural features (e.g., branch prediction, availability of resources) when it is integrated into a pipeline.
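As a concrete illustration of the contrast drawn above, the following toy calculation shows how each technique shortens a dependent chain of single-cycle instructions. It is only a sketch under optimistic assumptions (every prediction is correct, the entire chain is reusable, and table-lookup and verification costs are ignored); the mode names and latencies are hypothetical, not figures from the paper.

```python
# Toy critical-path calculation (illustrative assumptions, not data from the paper)
# for a dependent chain of n single-cycle instructions.
EXEC_LAT = 1   # assumed execution latency of each instruction, in cycles

def critical_path(n_instructions, mode):
    if mode == "baseline":
        # Each instruction waits for its producer: the chain executes sequentially.
        return n_instructions * EXEC_LAT
    if mode == "vp":
        # Every instruction receives a (here, assumed correct) predicted input, so
        # all of them can execute in parallel; verification confirms the speculation.
        return EXEC_LAT
    if mode == "ir":
        # The chain was performed before with the same inputs, so it is spliced out
        # of the critical path: its results are supplied without executing it again.
        return 0
    raise ValueError(mode)

for mode in ("baseline", "vp", "ir"):
    print(mode, critical_path(4, mode))   # 4, 1, 0 cycles for a 4-instruction chain
```

The gap between these idealized numbers and real behavior, caused by mispredictions, verification, and interactions with the rest of the pipeline, is what the paper sets out to measure.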
Since VP and IR are different techniques, they not only perform differently by themselves (i.e., capture different amounts of the redundancy present in programs) but also interact with other microarchitectural features in different ways, thereby impacting the net performance differently. The purpose of this work is to identify and evaluate the different microarchitectural interactions of these techniques. The intent is not to argue which technique is better, but to gain a better understanding of how each technique works. We feel that this will help in designing other techniques (possibly hybrids of VP and IR) that exploit the redundancy in programs more profitably. More specifically, in this paper we accomplish the following three tasks. (i) We identify the various differences between the two techniques and qualitatively discuss their microarchitectural interactions. (ii) We evaluate the impact of these interactions on performance. Finally, (iii) since IR is the more restrictive of the two techniques (we discuss this later), we also estimate how much of the total redundancy present in programs can be captured by IR.

The rest of the paper is organized as follows. In Section 2, we describe VP and IR in more detail. In Section 3, we identify the various differences between them and qualitatively discuss the interactions and their impact on performance. In Section 4, we evaluate these interactions quantitatively. Finally, in Section 5, we summarize and provide conclusions.

2. Value Prediction and Instruction Reuse

As mentioned earlier, VP is a speculative technique that exploits redundancy in programs to predict values that are either produced (results) or used (inputs) by instructions. Figure 1(a) shows a pipeline with VP. The predictions are obtained from a hardware table called the Value Prediction Table (VPT). These predicted values are used as inputs by instructions, which can then execute earlier than they could have if they had to wait for their inputs to become available in the traditional way. When the correct values become available (after an instruction executes), the speculated values are verified; if a speculation is found to be wrong, the instructions that executed with the wrong inputs are re-executed (Figure 1(a)). The execution of such instructions is delayed by the latency of verifying the prediction (the VP-verification latency). However, if the speculation is found to be correct, then nothing special needs to be done; the instructions simply executed earlier than they would have otherwise. VP collapses true dependences by allowing dependent instructions that would have executed sequentially to execute in parallel.

Unlike VP, IR is a non-speculative technique that exploits redundancy in programs by obtaining the results of instructions from their previous executions and thereby not executing them.
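To summarize the two mechanisms side by side, here is a minimal software sketch of the two lookup disciplines. The last-value prediction policy, the PC indexing, and the reuse-table organization (keyed by PC with an operand check) are simplifying assumptions made for illustration; the preview ends before the IR hardware is described, and the real structures are finite hardware tables rather than unbounded dictionaries.

```python
# Minimal sketch (illustrative assumptions, not the hardware proposed in the papers)
# of the speculative versus non-speculative lookup disciplines described above.

class ValuePredictionTable:
    """Speculative: supplies a predicted result that must be verified after the
    instruction executes; a wrong prediction forces dependents to re-execute."""
    def __init__(self):
        self.last_value = {}                 # PC -> last result seen (last-value policy)

    def predict(self, pc):
        return self.last_value.get(pc)       # may be wrong; dependents run speculatively

    def verify_and_update(self, pc, predicted, actual):
        self.last_value[pc] = actual         # record the outcome for future predictions
        return predicted == actual           # False => re-execute dependent instructions


class ReuseTable:
    """Non-speculative: supplies a stored result only when the current operand
    values match those of a previous execution, so no verification is needed."""
    def __init__(self):
        self.entries = {}                    # PC -> (operand values, result)

    def reuse(self, pc, operands):
        entry = self.entries.get(pc)
        if entry is not None and entry[0] == operands:
            return entry[1]                  # guaranteed correct; instruction is skipped
        return None                          # no match; the instruction executes normally

    def record(self, pc, operands, result):
        self.entries[pc] = (operands, result)
```

The lookup-time difference is the speculative versus non-speculative distinction drawn above: the VPT hands back a value whose correctness is unknown until the producer executes, so dependent instructions run speculatively and may pay the VP-verification latency, whereas the reuse table checks operands up front and therefore never triggers recovery.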

