Berkeley COMPSCI 258 - Course Wrap-Up

CS258 S99, NOW Handout, Page 1

Course Wrap-Up
CS 258, Spring 99
David E. Culler
Computer Science Division, U.C. Berkeley
5/7/99

Today's Plan
• Whirlwind tour of where we've been
• Some thoughts on where things are headed
• HKN evaluation

CS 258: Parallel Computer Architecture

What will you get out of CS258?
• In-depth understanding of the design and engineering of modern parallel computers
  – technology forces
  – fundamental architectural issues
    » naming, replication, communication, synchronization
  – basic design techniques
    » cache coherence, protocols, networks, pipelining, …
  – methods of evaluation
  – underlying engineering trade-offs
• from moderate to very large scale
• across the hardware/software boundary

Will it be worthwhile?
• Absolutely!
  – even though few of you will become PP designers
• The fundamental issues and solutions translate across a wide spectrum of systems
  – crisp solutions in the context of parallel machines
• Pioneered at the thin end of the platform pyramid on the most demanding applications
  – migrate downward with time
• Understand implications for software
[Figure: platform pyramid, from SuperServers down through Departmental Servers and Workstations to Personal Computers]

What is Parallel Architecture?
• A parallel computer is a collection of processing elements that cooperate to solve large problems fast
• Some broad issues:
  – Resource allocation:
    » how large a collection?
    » how powerful are the elements?
    » how much memory?
  – Data access, communication, and synchronization:
    » how do the elements cooperate and communicate?
    » how are data transmitted between processors?
    » what are the abstractions and primitives for cooperation?
  – Performance and scalability:
    » how does it all translate into performance?
    » how does it scale?

Role of a computer architect: to design and engineer the various levels of a computer system to maximize performance and programmability
within the limits of technology and cost.

Why Study Parallel Architecture?
Parallelism:
• Provides an alternative to a faster clock for performance
• Applies at all levels of system design
• Is a fascinating perspective from which to view architecture
• Is increasingly central in information processing

Speedup
• Speedup(p processors) = Performance(p processors) / Performance(1 processor)
• For a fixed problem size (input data set), performance = 1/time
• Speedup_fixed-problem(p processors) = Time(1 processor) / Time(p processors)

Architectural Trends
• Architecture translates technology's gifts into performance and capability
• Resolves the trade-off between parallelism and locality
  – current microprocessor: 1/3 compute, 1/3 cache, 1/3 off-chip connect
  – trade-offs may change with scale and technology advances
• Understanding microprocessor architectural trends
  => helps build intuition about design issues of parallel machines
  => shows the fundamental role of parallelism even in "sequential" computers

Architectural Trends (continued)
• The greatest trend across VLSI generations is the increase in parallelism
  – Up to 1985: bit-level parallelism: 4-bit -> 8-bit -> 16-bit
    » slows after 32 bits
    » adoption of 64-bit now under way, 128-bit far off (not a performance issue)
    » great inflection point when a 32-bit micro and cache fit on a chip
  – Mid 80s to mid 90s: instruction-level parallelism
    » pipelining and simple instruction sets, plus compiler advances (RISC)
    » on-chip caches and functional units => superscalar execution
    » greater sophistication: out-of-order execution, speculation, prediction
      • to deal with control transfer and latency problems
  – Next step: thread-level parallelism

Summary: Why Parallel Architecture?
• Increasingly attractive
  – economics, technology, architecture, application demand
• Increasingly central and mainstream
• Parallelism exploited at many levels
  – instruction-level parallelism
  – multiprocessor servers
  – large-scale multiprocessors ("MPPs")
• Focus of this class:
  the multiprocessor level of parallelism
• Same story from the memory-system perspective
  – increase bandwidth, reduce average latency with many local memories
• A spectrum of parallel architectures makes sense
  – different cost, performance, and scalability

Programming Model
• Conceptualization of the machine that the programmer uses in coding applications
  – how parts cooperate and coordinate their activities
  – specifies communication and synchronization operations
• Multiprogramming
  – no communication or synchronization at the program level
• Shared address space
  – like a bulletin board
• Message passing
  – like letters or phone calls; explicit, point to point
• Data parallel
  – more regimented, global actions on data
  – implemented with shared address space or message passing

Toward Architectural Convergence
• Evolution and the role of software have blurred the boundary
  – send/recv supported on SAS machines via buffers
  – can construct a global address space on MP (GA -> P | LA)
  – page-based (or finer-grained) shared virtual memory
• Hardware organization converging too
  – tighter NI integration even for MP (low latency, high bandwidth)
  – hardware SAS passes messages
• Even clusters of workstations/SMPs are parallel systems
  – emergence of fast system area networks (SAN)
• Programming models distinct, but organizations converging
  – nodes connected by a general network and communication assists
  – implementations also converging, at least in high-end machines

Convergence: Generic Parallel Architecture
[Figure: nodes, each with processor P, cache $, and memory plus a communication assist (CA), connected by a scalable network]
• Node: processor(s), memory system, plus communication assist
  – network interface and communication controller
• Scalable network
• Convergence allows lots of innovation within the framework
  – integration of the assist with the node, what operations, how efficiently...

Architecture
• Two facets of computer architecture:
  – defines critical abstractions
    » especially at the HW/SW boundary
    » the set of operations and the data types these operate on
  – organizational structure that realizes these abstractions
• Parallel computer architecture = computer architecture + communication architecture
• Communication architecture has the same two facets:
  – the communication abstraction
  – primitives at the user/system and HW/SW boundaries

Communication Architecture
User/System
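The speedup and efficiency relations defined on the Speedup slide above can be sketched as a small Python helper. The timings used here are made-up illustration values, not measurements from the course:

```python
def speedup(time_1: float, time_p: float) -> float:
    """Speedup(p) = Time(1 processor) / Time(p processors).

    For a fixed problem size, performance = 1/time, so this also equals
    Performance(p processors) / Performance(1 processor).
    """
    return time_1 / time_p


# Hypothetical timings for a fixed-size problem (illustration only):
t1, t8 = 120.0, 20.0        # seconds on 1 processor and on 8 processors
s = speedup(t1, t8)         # 6.0: the 8-processor run is 6x faster
efficiency = s / 8          # 0.75: overheads keep speedup below the ideal 8x
print(s, efficiency)
```

Dividing speedup by the processor count gives parallel efficiency, which is one simple way to see the bandwidth, latency, and synchronization costs the slides discuss.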

