Berkeley COMPSCI 61C - Lecture Notes

inst.eecs.berkeley.edu/~cs61c
CS61C: Machine Structures, Lecture 39: I/O: Disks
2005-4-29, TA Casey Ho
(Slide deck: CS61C L40 I/O: Disks, Ho, Fall 2004 © UCB)

Slide outline: Protocol Family Concept; Protocol for Network of Networks; TCP/IP packet, Ethernet packet, protocols; Overhead vs. Bandwidth; Magnetic Disks; Disk Device Terminology; Disk Device Performance; Data Rate: Inner vs. Outer Tracks; Disk Performance Model/Trends; Historical Perspective; Use Arrays of Small Disks…; Replace Small Number of Large Disks with Large Number of Small Disks! (1988 Disks); Array Reliability; Redundant Arrays of (Inexpensive) Disks; Berkeley History, RAID-I; "RAID 0": No redundancy = "AID"; RAID 1: Mirror data; RAID 3: Parity; RAID 4: parity plus small sized accesses; Inspiration for RAID 5; RAID 5: Rotated Parity, faster small writes; "And In conclusion…"

News of the day: Microsoft rolled out a 64-bit version of its Windows operating system on Monday. Compared with the existing 32-bit versions, 64-bit Windows will handle 16 terabytes of virtual memory rather than 4 GB, system cache size jumps from 1 GB to 1 TB, and paging-file size increases from 16 TB to 512 TB.

Protocol Family Concept
[Diagram: at each protocol level, peers exchange a logical message; physically, each level wraps the message in its own header (TH) and hands the result to the level below.]
• Key to protocol families is that communication occurs logically at the same level of the protocol, called peer-to-peer...
• ...but is implemented via services at the next lower level.
• Encapsulation: carry higher-level information within a lower-level "envelope".
• Fragmentation: break a packet into multiple smaller packets and reassemble them.

Protocol for Network of Networks
• Transmission Control Protocol/Internet Protocol (TCP/IP).
• This protocol family is the basis of the Internet, a WAN protocol.
• IP makes a best effort to deliver; TCP guarantees delivery.
• TCP/IP is so popular it is used even when communicating locally, even across a homogeneous LAN.

TCP/IP packet, Ethernet packet, protocols
• Application sends a message.
• TCP breaks it into 64 KiB segments and adds a 20 B header.
• IP adds a 20 B header and sends the packet to the network.
• If Ethernet, the packet is broken into 1500 B packets with headers and trailers (24 B).
• All headers and trailers have a length field, destination, ...

Overhead vs. Bandwidth
• Networks are typically advertised using the peak bandwidth of the network link: e.g., 100 Mbit/s Ethernet ("100base-T").
• Software overhead to put a message into the network, or get one out, often limits the useful bandwidth.
• Assume overhead to send and receive = 320 microseconds (µs), and we want to send 1000 bytes over "100 Mbit/s" Ethernet.
• Network transmission time: 1000 B × 8 b/B / 100 Mb/s = 8000 b / (100 b/µs) = 80 µs.
• Effective bandwidth: 8000 b / (320 + 80) µs = 20 Mb/s.

Magnetic Disks
• Purpose: long-term, nonvolatile, inexpensive storage for files; a large, inexpensive, slow level in the memory hierarchy (discussed later).
[Diagram: Computer = Processor (active): Control ("brain") + Datapath ("brawn"); Memory (passive: where programs and data live when running); Devices: Input (keyboard, mouse), Output (display, printer, disk, network).]

Disk Device Terminology
• Several platters, with information recorded magnetically on both surfaces (usually).
• The actuator moves the head (at the end of the arm) over the desired track ("seek"), waits for the sector to rotate under the head, then reads or writes.
• Bits are recorded in tracks, which in turn are divided into sectors (e.g., 512 bytes).
[Diagram labels: platter, outer track, inner track, sector, actuator, head, arm.]

Disk Device Performance
• Disk Latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead.
• Seek time depends on the number of tracks the arm must move and the seek speed of the disk.
• Rotation time depends on how fast the disk rotates and how far the sector is from the head.
• Transfer time depends on the data rate (bandwidth) of the disk (bit density) and the size of the request.
[Diagram labels: platter, arm, actuator, head, sector, inner track, outer track, controller, spindle.]

Data Rate: Inner vs. Outer Tracks
• To keep things simple, disks originally had the same number of sectors per track; since the outer track is longer, that means fewer bits per inch there.
• Competition decided to keep bits/inch (BPI) high for all tracks ("constant bit density"): more capacity per disk, and more sectors per track toward the edge.
• Since the disk spins at constant speed, outer tracks have a faster data rate.
• Bandwidth of the outer track is 1.7X that of the inner track!

Disk Performance Model/Trends
• Capacity: +100%/year (2X / 1.0 yrs). It has grown so fast that the number of platters has shrunk (some disks now use only one!).
• Transfer rate (BW): +40%/yr (2X / 2 yrs).
• Rotation + seek time: -8%/yr (halves in 10 yrs).
• Areal density: bits recorded along a track (bits/inch, BPI) times tracks per surface (tracks/inch, TPI). We care about bit density per unit area (bits/inch²): Areal Density = BPI × TPI.
• MB/$: >100%/year (2X / 1.0 yrs), from fewer chips plus higher areal density.

Historical Perspective
• Form factor and capacity drive the market more than performance does.
• 1970s: mainframes → 14" diameter disks.
• 1980s: minicomputers and servers → 8" and 5.25" diameter disks.
• Late 1980s/early 1990s: pizza-box PCs → 3.5" diameter disks; laptops and notebooks → 2.5" disks. Palmtops didn't use disks, so 1.8" diameter disks didn't make it.

Use Arrays of Small Disks…
• Katz and Patterson asked in 1987: can smaller disks be used to close the gap in performance between disks and CPUs?
[Diagram: conventional designs use 4 disk sizes (14", 10", 5.25", 3.5") from low end to high end; a disk array uses a single 3.5" disk design.]

Replace Small Number of Large Disks with Large Number of Small Disks! (1988 Disks)

             IBM 3390K    IBM 3.5" 0061   x70 array
Capacity     20 GBytes    320 MBytes      23 GBytes
Volume       97 cu. ft.   0.1 cu. ft.     11 cu. ft.  (9X)
Power        3 KW         11 W            1 KW        (3X)
Data Rate    15 MB/s      1.5 MB/s        120 MB/s    (8X)
I/O Rate     600 I/Os/s   55 I/Os/s       3900 I/Os/s (6X)
MTTF         250 KHrs     50 KHrs         ??? Hrs
Cost         $250K        $2K             $150K

• The 9X/3X/8X/6X figures are the array's advantage over the IBM 3390K in volume, power, data rate, and I/O rate.
• Disk arrays are potentially high performance, with high MB per cu. ft. and high MB per KW, but what about reliability?

Array Reliability
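The header sizes on the TCP/IP packet slide imply a per-frame efficiency that is easy to check. A minimal sketch, not from the slides, assuming a full 1500 B Ethernet payload and ignoring the Ethernet preamble and inter-frame gap:

```python
# Header/trailer sizes from the slide: 20 B TCP header, 20 B IP header,
# and 24 B of Ethernet header/trailer around a 1500 B Ethernet payload.
TCP_HDR, IP_HDR, ETH_OVERHEAD, ETH_PAYLOAD = 20, 20, 24, 1500

app_bytes = ETH_PAYLOAD - TCP_HDR - IP_HDR   # application data per frame
wire_bytes = ETH_PAYLOAD + ETH_OVERHEAD      # bytes actually on the wire

print(app_bytes)                              # 1460
print(round(app_bytes / wire_bytes, 3))       # 0.958: ~4% goes to headers
```

So for full-size frames the fixed headers cost only a few percent; as the next slide shows, per-message software overhead is usually the bigger problem.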
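The arithmetic on the "Overhead vs. Bandwidth" slide can be reproduced directly; the 320 µs send-plus-receive overhead and 1000 B message are the slide's own assumptions:

```python
def effective_bandwidth_mbps(payload_bytes, link_mbps, overhead_us):
    """Effective Mbit/s once a fixed software overhead (in us) is included."""
    bits = payload_bytes * 8
    transmit_us = bits / link_mbps      # 1 Mbit/s is exactly 1 bit per us
    return transmit_us, bits / (overhead_us + transmit_us)

tx_us, eff = effective_bandwidth_mbps(1000, 100, 320)
print(tx_us)   # 80.0 us on the wire
print(eff)     # 20.0 Mbit/s effective, a fifth of the advertised peak
```

The transmission time itself is tiny; the fixed overhead dominates, which is why short messages see a fraction of the peak bandwidth.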
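The latency formula on the "Disk Device Performance" slide can likewise be sketched as a function. The drive parameters below (12 ms seek, 5400 RPM, 4 MB/s transfer, 1 ms controller overhead) are illustrative assumptions, not figures from the lecture:

```python
def disk_latency_ms(seek_ms, rpm, request_bytes, transfer_mb_s, controller_ms):
    """Disk Latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead."""
    rotation_ms = 0.5 * 60_000 / rpm     # average rotational delay: half a revolution
    transfer_ms = request_bytes / (transfer_mb_s * 1e6) * 1000
    return seek_ms + rotation_ms + transfer_ms + controller_ms

# Hypothetical drive: seek and rotation dwarf the 4 KiB transfer itself.
print(round(disk_latency_ms(12.0, 5400, 4096, 4.0, 1.0), 2))  # 19.58 ms
```

Note how little of the total is spent actually moving data, which is the motivation for caching, scheduling, and (later in the deck) arrays of disks.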
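The parity idea behind the RAID 3/4/5 slides listed in the outline (the preview cuts off before them) is a single XOR across the data disks, which lets any one lost disk be rebuilt from the survivors. A toy sketch, not the lecture's own example:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized blocks byte by byte (the RAID parity operation)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b'\x11\x22', b'\x33\x44', b'\x55\x66']   # one block per data disk
parity = xor_blocks(data)                        # stored on the parity disk

# Simulate losing disk 1: XOR the surviving data blocks with the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]   # the lost block is recovered exactly
```

Because XOR is its own inverse, one parity block protects any single-disk failure at the cost of one extra disk, rather than the 2X cost of mirroring (RAID 1).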