ISU CPRE 381 - memory

The Main Memory Unit
• CPU and memory unit interface: Address, Data, and Control lines connect the CPU to memory
• CPU issues address (and data for write)
• Memory returns data (or acknowledgment for write)

Memories: Design Objectives
• Provide adequate storage capacity
• Four ways to approach this goal:
– Use of a number of different memory devices with different cost/performance ratios
– Automatic space-allocation methods in a hierarchy
– Development of the virtual-memory concept
– Design of communication links

Memories: Characteristics
• Location: Inside CPU, Outside CPU, External
• Performance: Access time, Cycle time, Transfer rate
• Capacity: Word size, Number of words
• Unit of Transfer: Word, Block
• Access: Sequential, Direct, Random, Associative
• Physical Type: Semiconductor, Magnetic, Optical

Memories: Basic Parameters
• Cost: c = C/S ($/bit)
• Performance:
– Read access time (Ta), access rate (1/Ta)
• Access Mode: random access, serial, semi-random
• Alterability:
– R/W, Read Only, PROM, EPROM, EEPROM
• Storage:
– Destructive read out, Dynamic, Volatility
• Hierarchy:
– Tape, Disk, Drum, CCD, Core, MOS, Bipolar
• Users want large and fast memories!
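The read/write handshake on the first slide (the CPU issues an address, plus data for a write; the memory returns data for a read, or an acknowledgment for a write) can be sketched in a few lines. The class and method names below are illustrative only, not from any real simulator:

```python
# Minimal sketch of the CPU <-> memory interface described above.
# A read sends only an address; a write also sends data and gets an "ack" back.

class MainMemory:
    def __init__(self, num_words):
        self.cells = [0] * num_words   # word-addressed storage

    def access(self, address, data=None):
        """One transaction on the address/data/control lines."""
        if data is None:                 # control line says "read"
            return self.cells[address]   # memory returns data
        self.cells[address] = data       # control line says "write"
        return "ack"                     # memory returns acknowledgment

mem = MainMemory(1024)
print(mem.access(5, 42))   # write -> "ack"
print(mem.access(5))       # read  -> 42
```

Real buses add timing, burst transfers, and error signaling; the point here is only the request/response split the slide draws between the two units.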
Exploiting Memory Hierarchy
• SRAM access times are 1-25 ns at a cost of $100 to $250 per Mbyte.
• DRAM access times are 60-120 ns at a cost of $5 to $10 per Mbyte.
• Disk access times are 10 to 20 million ns at a cost of $0.10 to $0.20 per Mbyte.
• Try and give it to them anyway:
– build a memory hierarchy
[Figure: levels in the memory hierarchy, Level 1 through Level n; both the distance from the CPU in access time and the size of the memory increase at each level]

Advantage of Memory Hierarchy
• Decrease cost/bit
• Increase capacity
• Improve average access time
• Decrease frequency of accesses to slow memory

Memories: Review
• SRAM:
– value is stored on a pair of inverting gates
– very fast but takes up more space than DRAM
• DRAM:
– value is stored as a charge on a capacitor
– very small but slower than SRAM (by a factor of 5 to 10)
[Figure: an SRAM cell (cross-coupled inverters between bit lines B and B̄) and a DRAM cell (word line, pass transistor, capacitor, bit line)]

Memories: Array Organization
• Storage cells are organized in a rectangular array
• The address is divided into row and column parts
• There are M (= 2^r) rows of N bits each
• The row address (r bits) selects a full row of N bits
• The column address (c bits) selects k bits out of N
• M and N are generally powers of 2
• Total size of a memory chip = M*N bits
– It is organized as A = 2^(r+c) addresses of k-bit words
• To design a memory with R addresses of W-bit words, we need ⌈R/A⌉ * ⌈W/k⌉ chips

4M x 64-bit Memory Using 1M x 4 Memory Chips
[Figure: four banks (B0-B3) of 1M x 4 chips; 20 address lines carry the row and column addresses to all chips; address bits 24-23 select the bank, bits 22-13 the row, bits 12-3 the column, and bits 2-0 select a byte within the 64-bit word]

Locality
• A principle that makes memory hierarchy a good idea
• If an item is referenced:
– temporal locality: it will tend to be referenced again soon
– spatial locality: nearby items will tend to be referenced soon
• Why does code have locality?
• Our initial focus: two levels (upper, lower)
– block: minimum unit of data
– hit: data requested is in the upper level
– miss: data requested is not in the upper level

Memory Hierarchy and Access Time
• ti is the time for an access at level i
– on-chip cache, off-chip cache, main memory, disk, tape
• N accesses
– ni satisfied at level i
– a higher level can always satisfy any access that is satisfied by a lower level
– N = n1 + n2 + n3 + n4 + n5
• Hit Ratio
– number of accesses satisfied / number of accesses made
– Could be confusing: for level 3, is it n3/N, or (n1+n2+n3)/N, or n3/(N-n1-n2)?
– We will take the second definition

Average Access Time
• ti is the time for an access at level i
• ni satisfied at level i
• hi is the hit ratio at level i
– hi = (n1 + n2 + ... + ni) / N
• We will also assume that data are transferred from level i+1 to level i before satisfying the request
• Total time = n1*t1 + n2*(t1+t2) + n3*(t1+t2+t3) + n4*(t1+t2+t3+t4) + n5*(t1+t2+t3+t4+t5)
• Average time = Total time / N
• t(avg) = t1 + t2*(1-h1) + t3*(1-h2) + t4*(1-h3) + t5*(1-h4)
• Total cost = C1*S1 + C2*S2 + C3*S3 + C4*S4 + C5*S5

Cache
• Two issues:
– How do we know if a data item is in the cache?
– If it is, how do we find it?
• Our first example:
– block size is one word of data
– "direct mapped"
• For each item of data at the lower level, there is exactly one location in the cache where it might be
– e.g., lots of items at the lower level share locations in the upper level

Direct Mapped Cache
• Mapping:
– address is modulo the number of blocks in the cache
[Figure: an 8-entry cache (indices 000-111); memory addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001, and 11101 all map to cache index 001]

Direct Mapped Cache (for MIPS)
[Figure: a 32-bit address (bit positions 31-0) split into a 20-bit tag (bits 31-12), a 10-bit index (bits 11-2), and a 2-bit byte offset; the index selects one of 1024 entries, each holding a valid bit, a tag, and a 32-bit data word; comparing the stored tag against the address tag produces Hit and the data output]
• What kind of locality are we taking advantage of?

Direct Mapped Cache (larger blocks)
• Taking advantage of spatial locality:
[Figure: a 32-bit address split into a 16-bit tag (bits 31-16), a 12-bit index (bits 15-4), a 2-bit block offset (bits 3-2), and a byte offset; 4K entries, each holding a valid bit, a 16-bit tag, and a 128-bit (four-word) block; a multiplexor uses the block offset to select one 32-bit word]

Hits vs. Misses
• Read hits:
– this is what we want!
• Read misses:
– stall the CPU, fetch the block from memory, deliver it to the cache, restart
• Write hits:
– can replace data in cache and memory (write-through)
– write the data only into the cache (write-back to memory later)
• Write misses:
– read the entire block into the cache, then write the word

Hardware Issues
• Make reading multiple words easier by using banks
[Figure: (a) one-word-wide memory organization; (b) wide memory organization with a multiplexor between the cache and memory; (c) interleaved memory organization with four memory banks on one bus]
• It can get a lot more complicated...

Performance
• Increasing the block size tends to decrease the miss rate:
[Figure: miss rate (0% to 40%) versus block size (4 to 256 bytes) for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB]
• Use split caches (there is more spatial locality in code):

Program | Block size in words | Instruction miss rate | Data miss rate | Effective combined miss rate
gcc     | 1                   | 6.1%                  | 2.1%           | 5.4%
gcc     | 4                   | 2.0%                  | 1.7%           | 1.9%
spice   | 1                   | 1.2%                  | 1.3%           | 1.2%
spice   | 4                   | 0.3%                  | 0.6%           | 0.4%

• Simplified model:
– execution time = (execution cycles + ...
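The two formulas on the Average Access Time slide (the per-level total-time sum, and the closed form using cumulative hit ratios) can be checked against each other numerically. A minimal sketch; the access counts and per-level times below are made-up illustrative values, not numbers from the slides:

```python
# Sketch of the average-access-time formulas above, using the slide's
# cumulative hit-ratio definition h_i = (n1 + ... + n_i) / N.

def average_access_time(n, t):
    """n[i] = accesses satisfied at level i, t[i] = access time of level i.
    An access satisfied at level i pays t1 + t2 + ... + ti (slide assumption:
    data move from level i+1 to level i before the request is satisfied)."""
    N = sum(n)
    total = sum(ni * sum(t[:i + 1]) for i, ni in enumerate(n))
    return total / N

def average_via_hit_ratios(n, t):
    """Closed form: t(avg) = t1 + t2*(1-h1) + t3*(1-h2) + ..."""
    N = sum(n)
    h = [sum(n[:i + 1]) / N for i in range(len(n))]  # cumulative hit ratios
    return t[0] + sum(t[i] * (1 - h[i - 1]) for i in range(1, len(t)))

n = [900, 80, 15, 5]          # cache, L2, main memory, disk (assumed counts)
t = [1, 10, 100, 10_000_000]  # access times in ns (assumed)
print(average_access_time(n, t))    # both forms agree (about 50004 ns here)
print(average_via_hit_ratios(n, t))
```

Note how the slow disk level dominates even at a 99.5% cumulative hit ratio, which is the argument for driving down accesses to the lowest levels.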
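The direct-mapped lookup described above (cache index = block address modulo the number of cache blocks, with a stored tag deciding hit or miss) can be sketched as follows. The block size and block count are assumptions for illustration, not the exact configurations in the figures:

```python
# Sketch of a direct-mapped cache with multi-word blocks.
# Sizes are assumed: 16-byte (4-word) blocks, 1024 blocks = 16 KB cache.

BLOCK_BYTES = 16
NUM_BLOCKS = 1024

valid = [False] * NUM_BLOCKS   # valid bit per cache block
tags = [0] * NUM_BLOCKS        # stored tag per cache block

def split_address(addr):
    """Drop the byte offset, then split block address into index and tag."""
    block_addr = addr // BLOCK_BYTES
    index = block_addr % NUM_BLOCKS   # "address modulo the number of blocks"
    tag = block_addr // NUM_BLOCKS    # remaining high bits identify the block
    return index, tag

def access(addr):
    """True on hit; on a miss, fill the block (modeling fetch-and-restart)."""
    index, tag = split_address(addr)
    if valid[index] and tags[index] == tag:
        return True
    valid[index], tags[index] = True, tag
    return False

print(access(0x1234))                             # False: cold miss
print(access(0x1238))                             # True: same block (spatial locality)
print(access(0x1234 + NUM_BLOCKS * BLOCK_BYTES))  # False: same index, different tag
```

The last access shows why "lots of items at the lower level share locations in the upper level": two addresses exactly one cache-size apart conflict on the same index and evict each other.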

