Slide 1: Lecture 12: Large Cache Design (U of Utah CS 7810)
Papers (in addition to those from last class):
• Co-Operative Caching for Chip Multiprocessors, Chang and Sohi, ISCA'06
• Victim Replication, Zhang and Asanovic, ISCA'05
• Interconnect Design Considerations for Large NUCA Caches, Muralimanohar and Balasubramonian, ISCA'07
• Design and Management of 3D Chip Multiprocessors using Network-in-Memory, Li et al., ISCA'06
• A Domain-Specific On-Chip Network Design for Large Scale Cache Systems, Jin et al., HPCA'07
• Nahalal: Cache Organization for Chip Multiprocessors, Guz et al., Comp. Arch. Letters, 2007

Slide 2: Beckmann and Wood, MICRO'04
[Figure: NUCA bank latencies range from 13-17 cycles (near banks) to 65 cycles (far banks)]
• Data must be placed close to the center-of-gravity of requests

Slide 3: Examples: Frequency of Accesses
[Figure: access-frequency maps for OLTP (on-line transaction processing) and Ocean (a scientific code); dark → more accesses]

Slide 4: Block Migration Results
• While block migration reduces average distance, it complicates search

Slide 5: Alternative Layout
From Huh et al., ICS'05:
• The paper also introduces the notion of sharing degree
• A bank can be shared by any number of cores between N=1 and 16
• Will need support for L2 coherence as well

Slide 6: Cho and Jin, MICRO'06
• Page coloring to improve proximity of data and computation
• Flexible software policies
• Has the benefits of S-NUCA (each address has a unique location and no search is required)
• Has the benefits of D-NUCA (page re-mapping can help migrate data, although at a page granularity)
• Easily extends to multi-core and can easily mimic the behavior of private caches

Slide 7: Page Coloring Example
[Figure: eight processor (P) / cache (C) tiles]
• Recent work (Awasthi et al., HPCA'09) proposes a mechanism for hardware-based re-coloring of pages without requiring copies in DRAM memory
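To make the page-coloring idea on the two slides above concrete, here is a minimal sketch of how an OS could steer a page to a particular L2 slice by choosing the color bits of the physical frame it allocates. The page size, the slice count, and the helper names (page_color, pick_frame_near) are assumptions chosen for illustration; they are not taken from Cho and Jin's paper.

    PAGE_SHIFT = 12          # 4 KB pages (assumption)
    NUM_SLICES = 16          # one L2 slice per tile (assumption)

    def page_color(phys_addr):
        # Color = the low-order physical frame-number bits that pick the L2 slice.
        return (phys_addr >> PAGE_SHIFT) & (NUM_SLICES - 1)

    def pick_frame_near(free_frames, requesting_tile):
        # Illustrative OS placement policy: prefer a free frame whose color maps
        # the page to the requesting core's local slice; otherwise take any frame.
        for frame in free_frames:
            if page_color(frame << PAGE_SHIFT) == requesting_tile:
                return frame
        return free_frames[0]

    # Usage: place a page that tile 3 accesses heavily into tile 3's slice.
    free_frames = [0x1234, 0x1243, 0x1003]     # free physical frame numbers
    frame = pick_frame_near(free_frames, requesting_tile=3)
    print(hex(frame), page_color(frame << PAGE_SHIFT))   # -> 0x1243 3

Re-mapping a hot page then amounts to changing its color, which is why the hardware re-coloring support mentioned on the slide (Awasthi et al.) matters: it avoids physically copying the page in DRAM.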
Slide 8: Private L2s
Arguments for private L2s:
• Lower latency for L2 hits
• Fewer ways have to be looked up for L2 hits
• Performance isolation (little interference from other threads)
• Can be turned off easily (since the L2 does not hold directory info)
• Fewer requests on the on-chip network
Primary disadvantage:
• More off-chip accesses because of higher miss rates

Slide 9: Victim Replication
• Large shared L2 cache (each core has a local slice)
• On an L1 eviction, place the victim in the local L2 slice (if there are unused lines)
• The replication does not impact correctness, as this core is still in the sharer list and will receive invalidations
• On an L1 miss, the local L2 slice is checked before forwarding the request to the correct slice
[Figure: eight processor (P) / cache (C) tiles]

Slide 10: Coherence among L2s
• On an L2 miss, can broadcast the request to all L2s and the off-chip controller (snooping-based coherence for a few cores)
• On an L2 miss, contact a directory that replicates tags for all L2 caches and handles the request appropriately (directory-based coherence for many cores)
[Figure: eight processor (P) / cache (C) tiles plus a central directory (D)]

Slide 11: The Directory Structure
• For 64-byte blocks and 1 MB L2 caches, the overhead is ~432 KB
• Note the complexities in maintaining presence vectors and handling non-inclusion between L1 and L2
• Note that clean evictions must also inform the central directory
• Need not inform the directory about L1-L2 swaps (the directory is imprecise about whether the block will be found in L1 or L2)

Slide 12: Co-operation I
• Cache-to-cache sharing
• On an L2 miss, the directory is contacted and the request is forwarded to and serviced by another cache
• If silent evictions were allowed, some of these forwards would fail

Slide 13: Co-operation II
• Every block keeps track of whether it is a singlet or a replicate – this requires notifications from the central directory every time a block changes modes
• While replacing a block, replicates are preferred (with a given probability)
• When a singlet block is evicted, the directory is contacted and the directory then forwards this block to another randomly selected cache (weighted probabilities prefer nearby caches or no cache at all); hopefully, the forwarded block will replace another replicate

Slide 14: Co-operation III
• An evicted block is given a Recirculation Count of N and pushed to another cache – this block is placed as the LRU block in its new cache – every eviction decrements the RC before forwarding (this paper uses N=1); a short policy sketch appears at the end of these notes
• Essentially, a block has one more chance to linger in the cache – it will stick around if it is reused before the new cache experiences capacity pressure
• This is an attempt to approximate a global LRU policy among all 32 ways of the aggregate L2 cache
• Overheads per L2 cache block: one bit to indicate "once spilled and not reused" and one bit for "singlet" info

Slide 15: Results

Slide 16: Results

Slide 17: Traditional Networks
Example designs for contiguous L2 cache regions

Slide 18: NUCA Delays
[Figure: cache controller connected to an 8x4 grid of routers (R)]

Slide 19: Explorations for Optimality

Slide 20: Early and Aggressive Look-Up
[Figure: cache controller and router; the address is split into MSB and LSB]
• The address packet contains only the LSB (index) bits and can use latency-optimized wires (transmission lines / fat wires)
• The data packet also contains the tags and can use regular wires
• The on-chip network can now have different types of links for address and data

Slide 21: Hybrid Network
[Figure: cache controller connected to the full 8x4 router grid – the data network]

Slide 22: Hybrid Network
[Figure: cache controller connected to a reduced set of routers – the address network]

Slide 23: Results

Slide 24: 3D Designs, Li et al., ISCA'06
• D-NUCA: first search within the cylinder, then multicast the search everywhere
• Data is migrated close to the requester, but need not jump across layers

Slide 25: Halo Network, Jin et al., HPCA'07
• D-NUCA: sets are distributed across columns; ways are distributed across rows

Slide 26: Halo Network

Slide 27: Nahalal, Guz et al., CAL'07

Slide 28: Nahalal
• A block is initially placed in the core's private bank and then swapped into the shared bank if frequently accessed by other cores
• Parallel search across all banks
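As promised under the Co-operation III slide, here is a small sketch of the spill/recirculation (N-chance forwarding) policy described there. The data structures and the pick_host helper are assumptions made for illustration; this is a simplification, not the implementation from Chang and Sohi's paper.

    import random
    random.seed(0)   # deterministic for the usage example below

    N = 1  # recirculation count used by the paper

    class Block:
        def __init__(self, addr, singlet=True):
            self.addr = addr
            self.singlet = singlet    # True if no other on-chip copy exists
            self.rc = 0               # recirculation count
            self.spilled = False      # the "once spilled and not reused" bit
                                      # (a later hit would clear it; not modeled)

    def pick_host(num_caches, evicting_cache):
        # The paper weights probabilities to prefer nearby caches or no cache
        # at all; this sketch simply picks a random other cache most of the time.
        if random.random() < 0.25:
            return None
        return random.choice([c for c in range(num_caches) if c != evicting_cache])

    def evict(caches, cache_id, victim):
        # First eviction of a singlet: give it N chances and spill it.
        # Eviction of an already-spilled block: decrement RC before forwarding.
        if not victim.spilled:
            if not victim.singlet:
                return                 # replicates are not spilled
            victim.rc = N
        else:
            victim.rc -= 1
            if victim.rc <= 0:
                return                 # out of chances: the block leaves the chip
        host = pick_host(len(caches), cache_id)
        if host is None:
            return
        victim.spilled = True
        caches[host].insert(0, victim)  # index 0 = LRU position (assumed)

    # Usage: 4 private L2s, each modeled as a list ordered LRU -> MRU.
    caches = [[] for _ in range(4)]
    evict(caches, cache_id=2, victim=Block(addr=0x40))
    print([len(c) for c in caches])     # one other cache now holds the spilled block

With N=1 the spilled block gets exactly one extra chance: if it is evicted again without being reused, its recirculation count reaches zero and it is dropped, which is how the scheme approximates a global LRU over the aggregate L2.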
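The Nahalal slide above summarizes the placement policy in one bullet; the sketch below spells it out under assumed details (a per-line counter of accesses by non-owner cores and a fixed promotion threshold), which are not taken from Guz et al.

    PROMOTE_THRESHOLD = 2   # accesses by non-owner cores before promotion (assumed)

    class CacheLine:
        def __init__(self, addr, owner):
            self.addr = addr
            self.owner = owner           # core whose private (outer) bank holds the line
            self.bank = f"private{owner}"
            self.other_core_hits = 0     # accesses by cores other than the owner

    def on_hit(line, core):
        # All banks are searched in parallel (as on the slide); this models only
        # the placement decision after a hit in a private bank.
        if core != line.owner and line.bank.startswith("private"):
            line.other_core_hits += 1
            if line.other_core_hits >= PROMOTE_THRESHOLD:
                line.bank = "shared"     # swap the line into the shared middle bank

    # Usage: a line brought in by core 0 migrates to the shared bank after
    # cores 1 and 2 both touch it.
    line = CacheLine(addr=0x80, owner=0)
    for core in (0, 1, 2):
        on_hit(line, core)
    print(line.bank)    # -> shared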

