U of U CS 6810 - Lecture 12: ILP Innovations and SMT


Slide 1: Lecture 12: ILP Innovations and SMT
• Today: ILP innovations, SMT, cache basics (Sections 3.5 and supplementary notes)

Slide 2: Reducing Stalls in Fetch
• Better branch prediction: novel ways to index/update and avoid aliasing; cascading branch predictors
• Trace cache: stores instructions in the common order of execution, not in sequential order; in Intel processors, the trace cache stores pre-decoded instructions

Slide 3: Reducing Stalls in Rename/Regfile
• Larger ROB/register file/issue queue
• Virtual physical registers: assign virtual register names to instructions, but assign a physical register only when the value is made available
• Runahead: while a long-latency instruction waits, let a thread run ahead to prefetch (this thread can deallocate resources more aggressively than a processor supporting precise execution)
• Two-level register files: values being kept around in the register file for precise exceptions can be moved to the 2nd level

Slide 4: Stalls in Issue Queue
• Two-level issue queues: the 2nd level contains instructions that are less likely to be woken up in the near future
• Value prediction: tries to circumvent RAW hazards
• Memory dependence prediction: allows a load to execute even if there are prior stores with unresolved addresses
• Load hit prediction: instructions are scheduled early, assuming that the load will hit in the cache

Slide 5: Functional Units
• Clustering: allows quick bypass among a small group of functional units; FUs can also be associated with a subset of the register file and issue queue

Slide 6: Thread-Level Parallelism
• Motivation:
  - a single thread leaves a processor under-utilized for most of the time
  - by doubling processor area, single-thread performance barely improves
• Strategies for thread-level parallelism:
  - multiple threads share the same large processor → reduces under-utilization, efficient resource allocation: Simultaneous Multi-Threading (SMT)
  - each thread executes on its own mini processor → simple design, low interference between threads: Chip Multi-Processing (CMP)

Slide 7: How are Resources Shared?
[Figure: issue slots per cycle for Superscalar, Fine-Grained Multithreading, and Simultaneous Multithreading; each box represents an issue slot for a functional unit, colored by Thread 1-4 or Idle; peak throughput is 4 IPC]
• A superscalar processor has high under-utilization – not enough work every cycle, especially when there is a cache miss
• Fine-grained multithreading can only issue instructions from a single thread in a cycle – it cannot find max work every cycle, but cache misses can be tolerated
• Simultaneous multithreading can issue instructions from any thread every cycle – it has the highest probability of finding work for every issue slot

Slide 8: What Resources are Shared?
• Multiple threads are simultaneously active (in other words, a new thread can start without a context switch)
• For correctness, each thread needs its own PC, IFQ, logical regs (and its own mappings from logical to phys regs)
• For performance, each thread could have its own ROB (so that a stall in one thread does not stall commit in other threads), I-cache, branch predictor, D-cache, etc. (for low interference), although note that more sharing → better utilization of resources
• Each additional thread costs a PC, IFQ, rename tables, and ROB – cheap! (see the sketch right after this slide)
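
To make Slide 8's private-versus-shared split concrete, here is a minimal sketch, assuming a hypothetical two-thread core; the class names and field choices are illustrative, not from the slides:

```python
from dataclasses import dataclass, field

@dataclass
class ThreadContext:
    """Replicated per thread, for correctness (Slide 8)."""
    pc: int = 0                                      # private program counter
    ifq: list = field(default_factory=list)          # private instruction fetch queue
    rename_map: dict = field(default_factory=dict)   # private logical->physical map

@dataclass
class SmtCore:
    """One copy shared by all threads, for utilization."""
    phys_regs: list = field(default_factory=lambda: [0] * 128)  # shared physical regfile
    issue_queue: list = field(default_factory=list)             # shared issue queue
    threads: list = field(default_factory=list)                 # the cheap per-thread state

core = SmtCore(threads=[ThreadContext(), ThreadContext()])      # two active threads
```

The point of the sketch is the asymmetry: adding a thread only adds one small ThreadContext, while the expensive structures stay shared.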

Slide 9: Pipeline Structure
[Figure: four private front-ends (each with an I-cache, branch predictor, rename logic, and ROB) feeding one shared execution engine (registers, issue queue, FUs, D-cache)]
• Private front-end; shared execution engine
• What about the RAS, LSQ?

Slide 10: Resource Sharing (a rename-table sketch appears after the slides)

  Thread-1 code        Thread-2 code
  R1 ← R1 + R2         R2 ← R1 + R2
  R3 ← R1 + R4         R5 ← R1 + R2
  R5 ← R1 + R3         R3 ← R5 + R3

  After renaming (one shared physical register file):
  P65 ← P1 + P2        P76 ← P33 + P34
  P66 ← P65 + P4       P77 ← P33 + P76
  P67 ← P65 + P66      P78 ← P77 + P35

  All six renamed instructions then sit together in the shared issue queue, competing for the FUs.

Slide 11: Performance Implications of SMT
• Single-thread performance is likely to go down (caches, branch predictors, registers, etc. are shared) – this effect can be mitigated by trying to prioritize one thread
• While fetching instructions, thread priority can dramatically influence total throughput – a widely accepted heuristic (ICOUNT): fetch such that each thread has an equal share of processor resources (sketched after the slides)
• With eight threads in a processor with many resources, SMT yields throughput improvements of roughly 2-4
• The Alpha 21464 and Intel Pentium 4 are examples of SMT

Slide 12: Pentium4 Hyper-Threading
• Two threads – the Linux operating system operates as if it is executing on a two-processor system
• When there is only one available thread, it behaves like a regular single-threaded superscalar processor
• Statically divided resources: ROB, LSQ, issue queue – a slow thread will not cripple throughput (might not scale)
• Dynamically shared: trace cache and decode (fine-grained multi-threaded, round-robin), FUs, data cache, bpred

Slide 13: Multi-Programmed Speedup
• sixtrack and eon do not degrade their partners (small working sets?)
• swim and art degrade their partners (cache contention?)
• Best combination: swim & sixtrack; worst combination: swim & art
• Static partitioning ensures low interference – worst slowdown is 0.9

Slide 14: The Cache Hierarchy
[Figure: Core → L1 → L2 → L3 → off-chip memory]

Slide 15: Accessing the Cache
[Figure: direct-mapped cache of 8-byte words; the byte address 101000 splits into a 3-bit set index and a 3-bit offset that select a word in the data array]
• Direct-mapped cache: each address maps to a unique location in the cache
• 8 words: 3 index bits

Slide 16: The Tag Array
[Figure: the same direct-mapped cache with a tag array beside the data array; the tag portion of the byte address is compared against the stored tag to detect a hit; a lookup sketch appears after the slides]

Slide 17: Increasing Line Size
[Figure: 32-byte cache line (block) size; the byte address 10100000 now carries a 5-bit offset]
• A large cache line size → smaller tag array, fewer misses because of spatial locality

Slide 18: Associativity
[Figure: 2-way set-associative cache; the tags of Way-1 and Way-2 are read and compared in parallel]
• Set associativity → fewer conflicts; wasted power because multiple data and tags are read (see the 2-way sketch after the slides)

Slide 19: Example (worked after the slides)
• 32 KB 4-way set-associative data cache array with 32-byte line size
• How many sets?
• How many index bits, offset bits, tag bits?
• How large is the tag array?
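
Returning to Slide 10: a minimal sketch of how two private rename tables draw from one shared pool of physical registers. Free-list management is simplified, and the free list is primed so the output reproduces the slide's numbering:

```python
free_list = [65, 66, 67, 76, 77, 78]     # primed to match the slide's numbering

def rename(table, dst, src1, src2):
    p1, p2 = table[src1], table[src2]    # look up the current source mappings
    pd = free_list.pop(0)                # allocate a fresh physical register
    table[dst] = pd                      # remap the destination
    return f"P{pd} <- P{p1} + P{p2}"

t1 = {"R1": 1, "R2": 2, "R4": 4}         # Thread-1's initial logical->physical map
t2 = {"R1": 33, "R2": 34, "R3": 35}      # Thread-2's initial map

issue_queue = [                          # both threads feed one shared queue
    rename(t1, "R1", "R1", "R2"),        # P65 <- P1 + P2
    rename(t1, "R3", "R1", "R4"),        # P66 <- P65 + P4
    rename(t1, "R5", "R1", "R3"),        # P67 <- P65 + P66
    rename(t2, "R2", "R1", "R2"),        # P76 <- P33 + P34
    rename(t2, "R5", "R1", "R2"),        # P77 <- P33 + P76
    rename(t2, "R3", "R5", "R3"),        # P78 <- P77 + P35
]
print("\n".join(issue_queue))
```

Because the tables are per-thread, the two threads' identical logical names (R1, R2, ...) never collide once renamed.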


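The ICOUNT heuristic from Slide 11, under a simplifying assumption: the fetch stage just picks the thread holding the fewest in-flight instructions in the front end, which steers fetch bandwidth toward threads that drain quickly. A minimal sketch:

```python
def icount_pick(in_flight: dict) -> int:
    """Pick the thread to fetch from this cycle.

    in_flight maps thread id -> number of instructions currently in the
    decode/rename/issue-queue stages; fewer in flight -> higher priority.
    """
    return min(in_flight, key=in_flight.get)

# Thread 2 holds the fewest front-end slots, so it fetches next.
print(icount_pick({0: 12, 1: 9, 2: 4, 3: 11}))   # -> 2
```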

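A lookup sketch for the direct-mapped cache of Slides 15-16 (8 sets of 8-byte words, so 3 index bits and 3 offset bits); the fill-on-miss behavior is an illustrative simplification:

```python
OFFSET_BITS = 3                     # 8-byte words -> 3 offset bits (Slide 15)
INDEX_BITS  = 3                     # 8 sets       -> 3 index bits
NUM_SETS    = 1 << INDEX_BITS

tag_array = [None] * NUM_SETS       # one tag per set; None doubles as "invalid"

def lookup(addr: int) -> bool:
    """Split the byte address into tag/index/offset and probe the tag array."""
    index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag   = addr >> (OFFSET_BITS + INDEX_BITS)
    hit   = tag_array[index] == tag          # the compare from Slide 16
    if not hit:
        tag_array[index] = tag               # fill the set on a miss
    return hit

print(lookup(0b101000))   # False: cold miss, fills set 0b101 (set 5)
print(lookup(0b101000))   # True: the stored tag now matches
```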
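
And the 2-way version from Slide 18: the same address split, but the index now selects a set whose ways are all probed in parallel. The replacement policy here (fill an invalid way, else way 0) is purely illustrative:

```python
# 2-way set-associative lookup (Slide 18): each set holds two tags that
# hardware would read and compare in parallel (the "wasted power" on the slide).
OFFSET_BITS, INDEX_BITS = 3, 3
NUM_SETS = 1 << INDEX_BITS
sets = [[None, None] for _ in range(NUM_SETS)]   # two tag slots per set

def lookup_2way(addr: int) -> bool:
    index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag   = addr >> (OFFSET_BITS + INDEX_BITS)
    if tag in sets[index]:                       # probe both ways
        return True
    ways = sets[index]                           # miss: fill an invalid way, else way 0
    ways[ways.index(None) if None in ways else 0] = tag
    return False

# Two addresses with the same index no longer conflict:
a, b = 0b000101000, 0b111101000                  # both map to set 0b101
print(lookup_2way(a), lookup_2way(b))            # False False (cold misses)
print(lookup_2way(a), lookup_2way(b))            # True True (both now resident)
```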

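
Finally, the Slide 19 example worked out, assuming 32-bit addresses (the slide leaves the address width unstated):

```python
cache_bytes = 32 * 1024                        # 32 KB data array
line_bytes  = 32                               # 32-byte lines
ways        = 4                                # 4-way set associative
addr_bits   = 32                               # assumed address width

lines = cache_bytes // line_bytes              # 1024 cache lines in total
sets  = lines // ways                          # 1024 / 4 = 256 sets

offset_bits = line_bytes.bit_length() - 1      # log2(32)  = 5
index_bits  = sets.bit_length() - 1            # log2(256) = 8
tag_bits    = addr_bits - index_bits - offset_bits   # 32 - 8 - 5 = 19

tag_array_bits = lines * tag_bits              # one tag per line: 1024 * 19 = 19456 bits
print(sets, index_bits, offset_bits, tag_bits) # 256 8 5 19
print(tag_array_bits / 8 / 1024)               # ~2.4 KB of tags
```

So: 256 sets, 8 index bits, 5 offset bits, 19 tag bits, and roughly a 2.4 KB tag array (valid/dirty bits excluded).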