Exploiting Criticality to Reduce Branch Misprediction Penalties
Tim Lee and Jim Chen
Fall CS252
John Kubiatowicz

Criticality Revisited
[Diagram: dispatch/execute/commit dependence chains, nodes D0-E0-C0 through D3-E3-C3]

Critical D Chains
[Bar chart: Average D Chain Length (0 to 80) for Anagram, Gcc, Perl, Eon, Ammp, Art, Crafty, Twolf, Bzip, and Mesa]

Effect of branch misprediction penalty in default configuration
[Line chart: Sensitivity of CPI to Miss Penalty; Normalized CPI (0.40 to 1.00) vs. Miss Penalty (0 to 50) for Anagram, Gcc, and Perl]

Correlation?
[Scatter plot: Branch Accuracy vs. Average Critical Chain Length; Average Length (0 to 200) vs. Branch Accuracy (0 to 1)]

How to best spend your Cache
● A common way of spending transistors is to make caches bigger
● Let's try a new cache, used on branch mispredicts
● Small, fully associative trace cache
● A simulated 5-entry buffer showed gains of 5-20%

How does the new cache fare?
[Bar chart: Comparison of the new cache vs. an increased L1 cache size; CPI (lower is better) for Anagram, Gcc, and Perl under Base, Double L1, and 5 Entry Cache configurations]

What if...
● SMT/multi-path execution
● Really aggressive ca$hing
● In-order <=> out-of-order
● Long pipeline <=> short pipeline
● Reconfigurable
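The small, fully associative buffer proposed in the "How to best spend your Cache" slide can be sketched as below. This is a minimal illustration only: the slides do not specify a replacement policy or an interface, so the LRU policy, the class name, and the opaque (tag, trace) entries are all assumptions for this sketch.

```python
from collections import OrderedDict

class SmallFullyAssocCache:
    """Tiny fully associative buffer, e.g. 5 entries.

    Sketch of the slide's small trace cache consulted on branch
    mispredicts. Cached traces are modeled as opaque values keyed
    by a tag; LRU replacement is an assumed policy.
    """

    def __init__(self, entries=5):
        self.entries = entries
        self.store = OrderedDict()  # tag -> cached trace, oldest first

    def lookup(self, tag):
        """Return the cached trace for `tag`, or None on a miss."""
        if tag in self.store:
            self.store.move_to_end(tag)  # mark most recently used
            return self.store[tag]
        return None

    def insert(self, tag, trace):
        """Install a trace, evicting the least recently used entry if full."""
        if tag in self.store:
            self.store.move_to_end(tag)
        elif len(self.store) >= self.entries:
            self.store.popitem(last=False)  # evict LRU entry
        self.store[tag] = trace
```

Because the buffer is fully associative, any trace can occupy any of the five entries, which is what lets such a small structure still capture the handful of mispredict targets that matter.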