Berkeley COMPSCI 152 - Memory

CS 152 Computer Architecture and Engineering
Lecture 6 - Memory
Krste Asanovic
Electrical Engineering and Computer Sciences, University of California at Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs152
2/10/2009 CS152-Spring '09

Last Time in Lecture 5
• Control hazards (branches, interrupts) are the most difficult to handle, as they change which instruction should be executed next.
• Speculation is commonly used to reduce the effect of control hazards (predict sequential fetch, predict no exceptions).
• Branch delay slots make control hazards visible to software.
• Precise exceptions: stop cleanly on one instruction; all previous instructions have completed, and no following instructions have changed architectural state.
• To implement precise exceptions in a pipeline, shift faulting instructions down the pipeline to the "commit" point, where exceptions are handled in program order.

CPU-Memory Bottleneck
The performance of high-speed computers is usually limited by memory bandwidth and latency.
• Latency (time for a single access): memory access time >> processor cycle time.
• Bandwidth (number of accesses per unit time): if a fraction m of instructions access memory, there are 1+m memory references per instruction, so CPI = 1 requires 1+m memory references per cycle (assuming a MIPS RISC ISA).

Core Memory
• Core memory was the first large-scale reliable main memory
  – invented by Forrester in the late 40s/early 50s at MIT for the Whirlwind project.
• Bits stored as magnetization polarity on small ferrite cores threaded onto a 2-dimensional grid of wires.
• Coincident current pulses on the X and Y wires would write a cell and also sense its original state (destructive reads).
• Robust, non-volatile storage; used on Space Shuttle computers until recently.
• Cores threaded onto wires by hand (25 billion a year at peak production).
• Core access time ~1 µs.
[Photo: DEC PDP-8/E board, 4K words x 12 bits (1968)]

Semiconductor Memory, DRAM
• Semiconductor memory began to be competitive in the early 1970s
  – Intel formed to exploit the market for semiconductor memory.
• The first commercial DRAM was the Intel 1103
  – 1 Kbit of storage on a single chip
  – charge on a capacitor used to hold the value.
• Semiconductor memory quickly replaced core in the '70s.

One-Transistor Dynamic RAM
• The 1-T DRAM cell: an access transistor, gated by the (poly) word line, connects the bit line to a storage capacitor (FET gate, trench, or stacked) referenced to VREF.
• Example structure: TiN top electrode (VREF), Ta2O5 dielectric, W bottom electrode.

DRAM Architecture
• Bits are stored in 2-dimensional arrays on chip: a row address decoder drives the word lines, and a column decoder with sense amplifiers selects the data bits from the bit lines (each memory cell holds one bit).
• Modern chips have around 4 logical banks on each chip
  – each logical bank is physically implemented as many smaller arrays.

DRAM Packaging
• A DIMM (Dual Inline Memory Module) contains multiple chips with clock/control/address signals connected in parallel (sometimes buffers are needed to drive the signals to all chips).
• The data pins work together to return a wide word (e.g., a 64-bit data bus using 16x4-bit parts).
• Per chip: ~12 address lines, multiplexed between row and column address; ~7 clock and control signals; a 4b, 8b, 16b, or 32b data bus.

DRAM Operation
Three steps in a read/write access to a given bank:
• Row access (RAS)
  – decode the row address and enable the addressed row (often multiple Kb in a row)
  – bit lines share charge with the storage cells
  – the small change in voltage is detected by sense amplifiers, which latch the whole row of bits
  – the sense amplifiers then drive the bit lines full rail to recharge the storage cells.
• Column access (CAS)
  – decode the column address to select a small number of sense-amplifier latches (4, 8, 16, or 32 bits depending on the DRAM package)
  – on a read, send the latched bits out to the chip pins
  – on a write, change the sense-amplifier latches, which then charge the storage cells to the required value
  – multiple column accesses can be performed on the same row without another row access (burst mode).
• Precharge
  – charges the bit lines to a known value; required before the next row access.
Each step has a latency of around 15-20 ns in modern DRAMs. Various DRAM standards (DDR, RDRAM) have different ways of encoding the signals for transmission to the DRAM, but all share the same core architecture.

Double-Data-Rate (DDR2) DRAM
• Command sequence: Row, Column, Precharge, Row'; with a 200 MHz clock, transferring data on both clock edges gives a 400 Mb/s data rate.
[Micron, 256Mb DDR2 SDRAM datasheet]

Processor-DRAM Gap (Latency)
• From 1980 to 2000, processor performance ("Moore's Law") improved ~60%/year while DRAM latency improved only ~7%/year, so the processor-memory performance gap grows ~50%/year.
• A four-issue 2 GHz superscalar accessing 100 ns DRAM could execute 800 instructions during the time for one memory access!

Typical Memory Reference Patterns
• Plotting address against time reveals distinct streams: instruction fetches (sequential runs repeated over n loop iterations, broken by subroutine calls and returns), stack accesses (argument accesses), and data accesses (vector accesses and scalar accesses).

Common Predictable Patterns
Two predictable properties of memory references:
– Temporal locality: if a location is referenced, it is likely to be referenced again in the near future.
– Spatial locality: if a location is referenced, it is likely that locations near it will be referenced in the near future.

Memory Reference Patterns
• Plots of memory address over time (one dot per access) show regions exhibiting both spatial and temporal locality.
[Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971)]

Multilevel Memory
Strategy: reduce average latency using small, fast memories called caches.
Caches are a mechanism to reduce memory latency based on the empirical observation that the patterns of memory references made by a processor are often highly predictable, e.g. a simple loop:

   PC
   96    ...
  100    loop: ADD  r2, r1, r1
  104          SUBI r3, r3, #1
  108          BNEZ r3, loop
  112    ...

Memory Hierarchy
The CPU is backed by a small, fast memory (RF, SRAM) that holds frequently used data, in front of a big, slow memory (DRAM).
• capacity: register << SRAM << DRAM (why?)
• latency: register << SRAM << DRAM (why?)
• bandwidth: on-chip >> off-chip (why?)
On a data access:
• hit (data in the fast memory): low-latency access
• miss (data not in the fast memory): long-latency access (to DRAM)

Relative Memory Cell Sizes
[Foss, "Implementing Application-Specific Memory", ISSCC 1996]
DRAM on memory
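Two of the numbers quoted in the slides can be reproduced with a quick back-of-the-envelope calculation. Note the load/store fraction m = 0.3 below is an assumed workload mix for illustration; the lecture leaves m symbolic.

```python
# (1) CPU-Memory bottleneck: every instruction is fetched (1 reference), and
# a fraction m of instructions also access data memory, so sustaining CPI = 1
# requires 1 + m memory references per cycle.
m = 0.3                      # assumed load/store fraction (not from the slides)
refs_per_cycle = 1 + m
print(refs_per_cycle)        # 1.3 memory references needed every cycle

# (2) Processor-DRAM gap: a four-issue 2 GHz superscalar stalled for one
# 100 ns DRAM access forgoes all the instructions it could have issued.
clock_ghz = 2.0
dram_latency_ns = 100.0
issue_width = 4
cycles_per_access = dram_latency_ns * clock_ghz       # 200 cycles per access
lost_instructions = issue_width * cycles_per_access
print(lost_instructions)     # 800.0, matching the slide
```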
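The three-step access sequence (row access, column access, precharge) can be sketched as a toy open-row timing model. The flat 15 ns per step is an assumption taken from the "15-20 ns" range mentioned in the slides, not any real part's datasheet timing, and the open-row policy itself is one common controller choice, not something the lecture specifies.

```python
# Assumed per-step latencies (ns); real parts specify these as tRCD, tCL, tRP.
T_RAS = 15   # row access: decode row, sense and latch the whole row
T_CAS = 15   # column access: select sense-amp latches, move data to/from pins
T_PRE = 15   # precharge: restore bit lines before opening another row

def access_latency(row, open_row):
    """Return (latency_ns, new_open_row) for one access to a bank,
    assuming an open-row policy: hits to the currently open row need
    only a column access (burst mode); a different row must first be
    precharged away, then row-accessed, then column-accessed."""
    if row == open_row:
        return T_CAS, row                      # row hit: CAS only
    penalty = T_PRE if open_row is not None else 0
    return penalty + T_RAS + T_CAS, row        # row miss / conflict

lat1, opened = access_latency(7, None)     # cold bank: RAS + CAS = 30 ns
lat2, opened = access_latency(7, opened)   # same row:  CAS only  = 15 ns
lat3, opened = access_latency(9, opened)   # new row:   PRE + RAS + CAS = 45 ns
print(lat1, lat2, lat3)
```

This is why burst-mode column accesses to the same row are much cheaper than accesses that keep switching rows.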
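As a sketch of why temporal and spatial locality make a small fast memory effective, here is a minimal direct-mapped cache simulator. The 16-byte lines and 64-line capacity are illustrative assumptions, not parameters from the lecture.

```python
LINE_BYTES = 16   # spatial locality: one miss fetches 16 neighbouring bytes
NUM_LINES = 64    # temporal locality: recently used lines stay resident

def hit_rate(addresses):
    """Simulate a direct-mapped cache over a byte-address trace and
    return the fraction of accesses that hit."""
    tags = [None] * NUM_LINES     # one block tag per cache line
    hits = 0
    for addr in addresses:
        block = addr // LINE_BYTES        # which memory block
        index = block % NUM_LINES         # which cache line it maps to
        if tags[index] == block:
            hits += 1                     # hit: served by the fast memory
        else:
            tags[index] = block           # miss: fill line from slow memory
    return hits / len(addresses)

# A sequential scan (like straight-line instruction fetch) misses only on
# the first byte of each 16-byte line: 15 hits out of every 16 accesses.
print(hit_rate(range(4096)))  # 0.9375
```

The average access latency then follows directly: with hit rate h, fast-memory latency t_fast, and miss penalty t_slow, the average is h*t_fast + (1-h)*t_slow, which is why even a small cache with a high hit rate pulls the average close to t_fast.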

