ISU CPRE 381 - Memory

This preview shows pages 1-6 of 19.


Slide 1: The Main Memory Unit
• CPU and memory unit interface
• CPU issues address (and data for a write)
• Memory returns data (or an acknowledgment for a write)
[Figure: CPU and memory connected by address, data, and control lines]

Slide 2: Memories: Design Objectives
• Provide adequate storage capacity
• Four ways to approach this goal:
– Use of a number of different memory devices with different cost/performance ratios
– Automatic space-allocation methods in a hierarchy
– Development of the virtual-memory concept
– Design of communication links

Slide 3: Memories: Characteristics
• Location: Inside CPU, Outside CPU, External
• Performance: Access time, Cycle time, Transfer rate
• Capacity: Word size, Number of words
• Unit of Transfer: Word, Block
• Access: Sequential, Direct, Random, Associative
• Physical Type: Semiconductor, Magnetic, Optical

Slide 4: Memories: Basic Parameters
• Cost: c = C/S ($/bit)
• Performance:
– Read access time (Ta), access rate (1/Ta)
• Access Mode: random access, serial, semi-random
• Alterability:
– R/W, Read Only, PROM, EPROM, EEPROM
• Storage:
– Destructive read-out, Dynamic, Volatility
• Hierarchy:
– Tape, Disk, Drum, CCD, Core, MOS, Bipolar

Slide 5: Exploiting Memory Hierarchy
• Users want large and fast memories!
– SRAM access times are 1-25 ns, at a cost of $100 to $250 per MByte
– DRAM access times are 60-120 ns, at a cost of $5 to $10 per MByte
– Disk access times are 10 to 20 million ns, at a cost of $0.10 to $0.20 per MByte
• Try and give it to them anyway
– build a memory hierarchy
[Figure: pyramid of hierarchy levels 1 through n; moving away from the CPU, both the access time and the size of the memory at each level increase]

Slide 6: Advantage of Memory Hierarchy
• Decrease cost/bit
• Increase capacity
• Improve average access time
• Decrease frequency of accesses to slow memory

Slide 7: Memories: Review
• SRAM:
– value is stored on a pair of inverting gates
– very fast, but takes up more space than DRAM
• DRAM:
– value is stored as a charge on a capacitor
– very small, but slower than SRAM (by a factor of 5 to 10)
[Figure: SRAM cell (cross-coupled inverters on bit lines B and B', gated by the word line) and DRAM cell (pass transistor and capacitor on a bit line)]

Slide 8: Memories: Array Organization
• Storage cells are organized in a rectangular array
• The address is divided into row and column parts
• There are M (= 2^r) rows of N bits each
• The row address (r bits) selects a full row of N bits
• The column address (c bits) selects k bits out of N
• M and N are generally powers of 2
• Total size of a memory chip = M*N bits
– It is organized as A = 2^(r+c) addresses of k-bit words
• To design a memory of R addresses of W-bit words, we need ceil(R/A) * ceil(W/k) chips

Slide 9: 4M x 64-bit Memory Using 1M x 4 Memory Chips
[Figure: 64 chips arranged as four banks (B0-B3) of 16 chips each; a decoder on the bank-address bits selects the bank; 20 address lines and the 64-bit data-in/data-out paths are shared]
• Address breakdown: bits 24-23 are the bank address, bits 22-13 the row address (to all chips), bits 12-3 the column address (to all chips), and bits 2-0 the byte address, to select a byte in the 64-bit word

Slide 10: Locality
• A principle that makes a memory hierarchy a good idea
• If an item is referenced:
– temporal locality: it will tend to be referenced again soon
– spatial locality: nearby items will tend to be referenced soon
• Why does code have locality?
• Our initial focus: two levels (upper, lower)
– block: minimum unit of data
– hit: data requested is in the upper level
– miss: data requested is not in the upper level

Slide 11: Memory Hierarchy and Access Time
• ti is the time for an access at level i
– on-chip cache, off-chip cache, main memory, disk, tape
• N accesses
– ni satisfied at level i
– a higher level can always satisfy any access that is satisfied by a lower level
– N = n1 + n2 + n3 + n4 + n5
• Hit ratio
– number of accesses satisfied / number of accesses made
– Could be confusing: for level 3, is it n3/N, (n1+n2+n3)/N, or n3/(N-n1-n2)?
– We will take the second definition

Slide 12: Average Access Time
• ti is the time for an access at level i
• ni accesses satisfied at level i
• hi is the hit ratio at level i:
– hi = (n1 + n2 + ... + ni)/N
• We also assume that data are transferred from level i+1 to level i before the request is satisfied
• Total time = n1*t1 + n2*(t1+t2) + n3*(t1+t2+t3) + n4*(t1+t2+t3+t4) + n5*(t1+t2+t3+t4+t5)
• Average time = Total time / N
• t(avg) = t1 + t2*(1-h1) + t3*(1-h2) + t4*(1-h3) + t5*(1-h4)
• Total cost = C1*S1 + C2*S2 + C3*S3 + C4*S4 + C5*S5

Slide 13: Cache
• Two issues:
– How do we know if a data item is in the cache?
– If it is, how do we find it?
• Our first example:
– block size is one word of data
– "direct mapped": for each item of data at the lower level, there is exactly one location in the cache where it might be
– i.e., lots of items at the lower level share locations in the upper level

Slide 14: Direct Mapped Cache
• Mapping:
– cache index = memory address modulo the number of blocks in the cache
[Figure: an eight-block cache (indices 000-111); memory addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001, 11101 each map to the cache block given by the address mod 8]

Slide 15: Direct Mapped Cache
• For MIPS:
[Figure: 32-bit address (showing bit positions) split into a 20-bit tag (bits 31-12), a 10-bit index (bits 11-2), and a 2-bit byte offset (bits 1-0); the index selects one of 1024 entries, each holding a valid bit, a 20-bit tag, and a 32-bit data word; a hit is signaled when the valid bit is set and the stored tag matches the address tag]
• What kind of locality are we taking advantage of?

Slide 16: Direct Mapped Cache
• Taking advantage of spatial locality:
[Figure: 32-bit address split into a 16-bit tag (bits 31-16), a 12-bit index (bits 15-4), a 2-bit block offset (bits 3-2), and a 2-bit byte offset (bits 1-0); 4K entries, each with a valid bit, a 16-bit tag, and a 128-bit (four-word) block; a multiplexor driven by the block offset selects one of the four 32-bit words]

Slide 17: Hits vs. Misses
• Read hits
– this is what we want!
• Read misses
– stall the CPU, fetch the block from memory, deliver it to the cache, restart
• Write hits:
– can replace data in cache and memory (write-through)
– write the data only into the cache (write the cache back to memory later)
• Write misses:
– read the entire block into the cache, then write the word

Slide 18: Hardware Issues
• Make reading multiple words easier by using banks
• It can get a lot more complicated...
[Figure: three memory organizations: (a) one-word-wide memory; (b) wide memory with a multiplexor between the cache and memory; (c) interleaved organization with four memory banks (0-3) on a shared bus]

Slide 19: Performance
• An increase in block size tends to decrease the miss rate
• Use split caches (there is more spatial locality in code)
[Figure: miss rate (0-40%) vs. block size (4-256 bytes) for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB]

Program | Block size in words | Instruction miss rate | Data miss rate | Effective combined miss rate
gcc     | 1                   | 6.1%                  | 2.1%           | 5.4%
gcc     | 4                   | 2.0%                  | 1.7%           | 1.9%
spice   | 1                   | 1.2%                  | 1.3%           | 1.2%
spice   | 4                   | 0.3%                  | 0.6%           | 0.4%

Slide 20: Performance
• Simplified model:
– execution time = (execution cycles + stall cycles) * cct (clock cycle time)
– stall cycles = # of [the preview cuts off here]
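The preview ends mid-formula on the last slide. In the standard textbook form of this simplified model (an assumption here, since the slide's own expression is cut off), stall cycles = memory accesses * miss rate * miss penalty. A minimal sketch under that assumption:

```python
def execution_time(exec_cycles, mem_accesses, miss_rate, miss_penalty, cct):
    """Simplified cache performance model from the last slide.

    execution time = (execution cycles + stall cycles) * cct,
    where the stall-cycle term below is the standard textbook form
    (memory accesses * miss rate * miss penalty), assumed because
    the preview cuts off before the slide states it.
    """
    stall_cycles = mem_accesses * miss_rate * miss_penalty
    return (exec_cycles + stall_cycles) * cct

# Hypothetical numbers for illustration: 10^6 execution cycles,
# 2*10^5 memory accesses, 5% miss rate, 40-cycle miss penalty,
# 2 ns clock cycle time.
t = execution_time(1_000_000, 200_000, 0.05, 40, 2e-9)
print(t)  # ~0.0028 s: 400,000 stall cycles roughly double the run time
```

The example makes the slide's point concrete: even a 5% miss rate with a 40-cycle penalty adds 400,000 stall cycles to 1,000,000 execution cycles, so reducing either the miss rate (bigger blocks, split caches, as on slide 19) or the miss penalty (banked or interleaved memory, as on slide 18) directly cuts execution time.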

