CMU CS 15740 - Multiprocessor Interconnection Networks

Todd C. Mowry
CS 740
November 19, 1998

• Topics
– Network design space
– Contention
– Active messages

Networks
• Design options:
– Topology
– Routing
– Direct vs. indirect
– Physical implementation
• Evaluation criteria:
– Latency
– Bisection bandwidth
– Contention and hot-spot behavior
– Partitionability
– Cost and scalability
– Fault tolerance

Buses
• Simple and cost-effective for small-scale multiprocessors.
• Not scalable (limited bandwidth; electrical complications).

Crossbars
• Each port has a link to every other port.
+ Low latency and high throughput.
– Cost grows as O(N^2), so not very scalable.
– Difficult to arbitrate and to get all data lines into and out of a centralized crossbar.
• Used in small-scale MPs (e.g., C.mmp) and as a building block for other networks (e.g., Omega).

Rings
• Cheap: cost is O(N).
• Point-to-point wires and pipelining can be used to make them very fast.
+ High overall bandwidth.
– High latency: O(N).
• Examples: KSR machine, Hector.

Trees
• Cheap: cost is O(N).
• Latency is O(log N).
• Easy to lay out as planar graphs (e.g., H-Trees).
• For random permutations, the root can become a bottleneck.
• Fat-Trees (used in the CM-5) avoid the root bottleneck: channels get wider as you move toward the root.

Hypercubes
• Also called binary n-cubes.
• # of nodes: N = 2^n.
• Latency is O(log N); out-degree of each PE is O(log N).
• Minimizes hops; good bisection bandwidth; but tough to lay out in 3-space.
• Popular in early message-passing computers (e.g., Intel iPSC, NCUBE).
• Used as a direct network ==> emphasizes locality.

Multistage Logarithmic Networks
• Cost is O(N log N); latency is O(log N); throughput is O(N).
• Generally indirect networks.
• Many variations exist (Omega, Butterfly, Benes, ...).
• Used in many machines: BBN Butterfly, IBM RP3, ...

Omega Network
• All stages are the same, so a recirculating network can be used.
• Single path from source to destination.
• Extra stages and pathways can be added to minimize collisions and increase fault tolerance.
• Can support combining. Used in the IBM RP3.

Butterfly Network
• Equivalent to the Omega network; stages split on the MSB down through the LSB, making the routing of messages easy to see.
• Also very similar to hypercubes (direct vs. indirect, though).
• The bisection of the network is clearly N/2 channels.
• Higher-degree switches can be used to reduce depth. Used in BBN machines.

k-ary n-cubes
• Generalization of hypercubes (k nodes in a string).
• Total # of nodes: N = k^n.
• k > 2 reduces the # of channels at the bisection, allowing wider channels but more hops.

Routing Strategies and Latency
• Store-and-forward routing:
– Tsf = Tc · (D · L / W)
– L = message length, D = # of hops, W = channel width, Tc = per-hop delay.
• Wormhole routing:
– Twh = Tc · (D + L / W)
– The # of hops is an additive rather than a multiplicative factor.
• Virtual cut-through routing:
– Older than and similar to wormhole.
– When blockage occurs, however, the message is removed from the network and buffered.
• Deadlocks are avoided through the use of virtual channels and a routing strategy that does not allow channel-dependency cycles.

Advantages of Low-Dimensional Nets
• What can be built in VLSI is often wire-limited.
• LDNs are easier to lay out:
– more uniform wiring density (easier to embed in 2-D or 3-D space)
– mostly local connections (e.g., grids)
• Compared with HDNs (e.g., hypercubes), LDNs have:
– shorter wires (reduces hop latency)
– fewer wires (increases bandwidth given constant bisection width)
» increased channel width is the major reason why LDNs win!
• Factors that limit end-to-end latency:
– LDNs: number of hops
– HDNs: length of message going across very narrow channels
• LDNs have better hot-spot throughput:
– more pins per node than HDNs

Performance Under Contention

Types of Hot Spots
• Module hot spots: lots of PEs accessing the same PE's memory at the same time.
– Possible solutions: suitable distribution or replication of data; high-bandwidth memory system design.
• Location hot spots: lots of PEs accessing the same memory location at the same time.
– Possible solutions: caches for read-only data and updates for read-write data; software or hardware combining.

NYU Ultracomputer / IBM RP3
• Focus on scalable bandwidth and synchronization in the presence of hot spots.
• Machine model: paracomputer (or the WRAM model of Borodin):
– autonomous PEs sharing a central memory
– simultaneous reads and writes to the same location can all be handled in a single cycle.
• Semantics are given by the serialization principle:
– ... as if all operations occurred in some (unspecified) serial order.
• The above is obviously a very desirable model; the question is how well it can be realized in practice.
• To achieve scalable synchronization, read and write operations were further extended with atomic read-modify-write (fetch-&-op) primitives.

The Fetch-&-Add Primitive
• F&A(V, e) returns the old value of V and atomically sets V = V + e.
• If V = k, and X = F&A(V, a) and Y = F&A(V, b) are done at the same time:
– one possible result: X = k, Y = k + a, and V = k + a + b
– another possible result: Y = k, X = k + b, and V = k + a + b
• Example use: implementation of task queues (an unbounded array Q with insert pointer qi, delete pointer qd, and per-slot full flags):
– Insert: myI = F&A(qi, 1); Q[myI] = data; full[myI] = 1;
– Delete: myI = F&A(qd, 1); while (!full[myI]) ; data = Q[myI]; full[myI] = 0;

The IBM RP3 (1985)
• Design plan:
– 512 RISC processors (IBM 801s)
– distributed main memory with software cache coherence
– two networks: a low-latency Banyan and a combining Omega
==> The goal was to build the NYU Ultracomputer model.
• Interesting aspects:
– data distribution scheme to address locality and module hot spots
– combining network design to address synchronization bottlenecks
– each node holds a processor, memory-map unit, cache, network interface, and a main-memory module split into local (L) and globally interleaved (G) portions, with a movable boundary between local and global storage.
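The hop-minimizing property of hypercubes above can be made concrete with dimension-order ("e-cube") routing, a standard scheme for binary n-cubes that the slides do not spell out. A minimal Python sketch (the function name and dimension ordering are illustrative choices):

```python
def hypercube_route(src, dst):
    # Dimension-order (e-cube) routing: correct the differing address
    # bits one dimension at a time, lowest dimension first.  The number
    # of hops equals the Hamming distance between src and dst.
    path = [src]
    cur = src
    bit = 0
    while cur != dst:
        if (cur ^ dst) & (1 << bit):
            cur ^= 1 << bit          # hop across this dimension
            path.append(cur)
        bit += 1
    return path

# In a 3-cube, routing from node 000 to node 101 takes
# Hamming-distance(000, 101) = 2 hops.
print(hypercube_route(0b000, 0b101))  # [0, 1, 5]
```

Fixing the dimension order also gives the acyclic channel dependencies that the deadlock-avoidance bullet above calls for.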
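The single source-to-destination path in the Omega network can likewise be simulated with destination-tag routing: each of the log2 N stages performs a perfect shuffle and then a 2x2 switch sets the low bit of the link address from the next destination bit, MSB first. A small Python sketch (modeling a stage as rotate-left-then-set-LSB is an assumption consistent with the standard Omega construction, not taken from the slides):

```python
def omega_route(src, dst, n):
    # Trace the link addresses a message visits in an N = 2**n Omega
    # network under destination-tag routing.
    n_mask = (1 << n) - 1
    addr = src
    links = [addr]
    for stage in range(n):
        # Perfect-shuffle wiring: rotate the n-bit link address left.
        addr = ((addr << 1) | (addr >> (n - 1))) & n_mask
        # 2x2 switch: the output port is the next destination bit.
        bit = (dst >> (n - 1 - stage)) & 1
        addr = (addr & ~1) | bit
        links.append(addr)
    return links

# 8-input Omega network: route from input 2 (010) to output 5 (101).
print(omega_route(0b010, 0b101, 3))  # [2, 5, 2, 5]
```

Every input reaches every output in exactly log2 N stages, but there is only this one path per pair, which is why the collision and hot-spot concerns above matter.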
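The store-and-forward and wormhole latency formulas above are easy to compare numerically; a minimal sketch (all parameter values are hypothetical, chosen only for illustration):

```python
def t_store_and_forward(tc, d, l, w):
    # Tsf = Tc * (D * L / W): every hop waits for the entire message.
    return tc * (d * l / w)

def t_wormhole(tc, d, l, w):
    # Twh = Tc * (D + L / W): the hop count adds to, rather than
    # multiplies, the message serialization time L / W.
    return tc * (d + l / w)

# Hypothetical example: 10 hops, 160-byte message, 16-byte-wide channels.
tc, d, l, w = 1, 10, 160, 16
print(t_store_and_forward(tc, d, l, w))  # 100.0 time units
print(t_wormhole(tc, d, l, w))           # 20.0 time units
```

With these (made-up) numbers wormhole routing is 5x faster, and the gap grows with D, which is exactly the additive-vs-multiplicative point the slide makes.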
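The task-queue use of F&A above can be sketched in Python, with a lock-protected counter standing in for the hardware fetch-&-add (the FetchAndAdd class and the fixed array size are illustrative stand-ins; the slide assumes an unbounded queue and a combining network doing the atomic update):

```python
import threading

class FetchAndAdd:
    # Software stand-in for hardware fetch-&-add: return the old value
    # of the counter and atomically add e to it.
    def __init__(self, v=0):
        self._v = v
        self._lock = threading.Lock()

    def faa(self, e):
        with self._lock:
            old = self._v
            self._v += e
            return old

# Task queue from the slide: array Q with insert pointer qi, delete
# pointer qd, and full[i] marking slot i as holding data.
SIZE = 1024                    # fixed size for the sketch only
Q = [None] * SIZE
full = [0] * SIZE
qi, qd = FetchAndAdd(), FetchAndAdd()

def insert(data):
    my_i = qi.faa(1)           # claim a unique slot, no global lock
    Q[my_i] = data
    full[my_i] = 1             # publish the slot to consumers

def delete():
    my_i = qd.faa(1)           # claim the next unconsumed slot
    while not full[my_i]:      # spin until a producer fills it
        pass
    data = Q[my_i]
    full[my_i] = 0
    return data

insert("a"); insert("b")
print(delete(), delete())  # a b
```

Because each caller gets a distinct index from F&A, producers and consumers never serialize on a shared queue lock; contention is pushed onto the two counters, which is what combining in the network is meant to absorb.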

