15-744: Computer Networking
L-24: Data Center Networking

Overview
• Data Center Overview
• Routing in the DC
• Transport in the DC

Datacenter Arms Race
• Amazon, Google, Microsoft, Yahoo!, … race to build next-gen mega-datacenters
• Industrial-scale Information Technology
• 100,000+ servers
• Located where land, water, fiber-optic connectivity, and cheap power are available
• E.g., Microsoft Quincy: 43,600 sq. ft. (10 football fields), sized for 48 MW; also Chicago, San Antonio, and Dublin, at ~$500M each
• E.g., Google: The Dalles OR, Pryor OK, Council Bluffs IA, Lenoir NC, Goose Creek SC

Google Oregon Datacenter
(photo)

Computers + Net + Storage + Power + Cooling
(figure)

Energy Proportional Computing
• Figure 1: Average CPU utilization of more than 5,000 servers during a six-month period. Servers are rarely completely idle and seldom operate near their maximum utilization, instead operating most of the time at between 10 and 50 percent of their maximum.
• It is surprisingly hard to achieve high levels of utilization on typical servers (and your home PC or laptop is even worse).
• “The Case for Energy-Proportional Computing,” Luiz André Barroso and Urs Hölzle, IEEE Computer, December 2007

Energy Proportional Computing
• Figure 2: Server power usage and energy efficiency at varying utilization levels, from idle to peak performance. Even an energy-efficient server still consumes about half its full power when doing virtually no work.
• Doing nothing well … NOT!

Energy Proportional Computing
• Figure 4: Power usage and energy efficiency in a more energy-proportional server. This server has a power efficiency of more than 80 percent of its peak value for utilizations of 30 percent and above, with efficiency remaining above 50 percent for utilization levels as low as 10 percent.
• Design for a wide dynamic power range and active low-power modes
• Doing nothing VERY well
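Taken together, these figures say that power draw does not fall in proportion to load. Below is a minimal sketch of that relationship, assuming an illustrative linear power model; the 250 W idle and 500 W peak numbers are made up for the example, not taken from the paper.

```python
# Toy model of why non-proportional servers waste energy. The 250 W idle /
# 500 W peak figures are illustrative assumptions, not measurements from
# the Barroso & Hoelzle paper.

def power_watts(util: float, p_idle: float = 250.0, p_peak: float = 500.0) -> float:
    """Linear power model: a fixed idle floor plus a load-dependent part."""
    return p_idle + (p_peak - p_idle) * util

def relative_efficiency(util: float) -> float:
    """Work done per watt at `util`, normalized to 1.0 at peak load."""
    if util == 0.0:
        return 0.0
    return (util / power_watts(util)) / (1.0 / power_watts(1.0))

for u in (0.0, 0.1, 0.3, 0.5, 1.0):
    print(f"util {u:4.0%}: {power_watts(u):5.0f} W, "
          f"{relative_efficiency(u):6.1%} of peak efficiency")
```

At 10 percent utilization, where Figure 1 says servers spend much of their time, this model burns 55 percent of peak power to do 10 percent of peak work, under a fifth of peak efficiency; shrinking the idle floor toward zero is exactly the "wide dynamic power range" the slide calls for.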
Thermal Image of Typical Cluster
(thermal image; labeled hot spot: Rack Switch)
• M. K. Patterson, A. Pratt, P. Kumar, “From UPS to Silicon: An End-to-End Evaluation of Datacenter Efficiency,” Intel Corporation

DC Networking and Power
• Within DC racks, network equipment is often the “hottest” component in the hot spot
• Network opportunities for power reduction:
  • Transition to higher-speed interconnects (10 Gb/s) at DC scales and densities
  • High-function/high-power assists embedded in network elements (e.g., TCAMs)

DC Networking and Power
• A 96-port 1 Gbit Cisco datacenter switch consumes around 15 kW, approximately 100x a typical dual-processor Google server at 145 W
• High port density drives network element design, but such high power density makes it difficult to pack switches tightly with servers
• Alternative distributed processing/communications topologies are under investigation by various research groups

Containerized Datacenters
• Sun Modular Data Center
• Power/cooling for 200 kW of racked hardware
• External taps for electricity, network, water
• 7.5 racks: ~250 servers, 7 TB DRAM, 1.5 PB disk

Containerized Datacenters
(photo)

Summary
• Energy consumption in IT equipment
  • Energy-proportional computing
  • Inherent inefficiencies in electrical energy distribution
• Energy consumption in Internet datacenters
  • Backend to billions of network-capable devices
  • Enormous processing, storage, and bandwidth supporting applications for huge user communities
• Resource management: processor, memory, I/O, and network, to maximize performance subject to power constraints: “Do Nothing Well”
• New packaging opportunities for better optimization of computing + communicating + power + mechanical

Overview
• Data Center Overview
• Routing in the DC
• Transport in the DC

Layer 2 vs. Layer 3 for Data Centers
(figure)

Flat vs. Location-Based Addresses
• Commodity switches today have ~640 KB of low-latency, power-hungry, expensive on-chip memory
  • Stores 32-64 K flow entries
• Assume 10 million virtual endpoints in 500,000 servers in a datacenter
• Flat addresses → 10 million address mappings → ~100 MB of on-chip memory → ~150 times the memory that can be put on chip today
• Location-based addresses → 100-1,000 address mappings → ~10 KB of memory → easily accommodated in switches today
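The slide's arithmetic can be checked back-of-envelope. A quick sketch, assuming roughly 10 bytes per address mapping (an assumed entry size; the slide does not state one):

```python
# Back-of-envelope check of the slide's numbers. ENTRY_BYTES is an assumed
# per-mapping size; the slide does not state one.
ENTRY_BYTES = 10
ON_CHIP_BYTES = 640 * 1024            # ~640 KB of on-chip memory today

flat_mappings = 10_000_000            # one mapping per virtual endpoint
flat_bytes = flat_mappings * ENTRY_BYTES
print(f"flat: {flat_bytes / 1e6:.0f} MB, "
      f"~{flat_bytes / ON_CHIP_BYTES:.0f}x available on-chip memory")

loc_mappings = 1_000                  # one mapping per switch/pod prefix
print(f"location-based: {loc_mappings * ENTRY_BYTES / 1e3:.0f} KB")
```

This reproduces the slide's ~100 MB, ~150x, and ~10 KB figures.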
PortLand: Main Assumption
• Hierarchical structure of data center networks: they are multi-level, multi-rooted trees

Data Center Network
(topology figure)

Hierarchical Addresses
(sequence of figure slides)

PortLand: Location Discovery Protocol
• Location Discovery Messages (LDMs) exchanged between neighboring switches
• Switches self-discover their location on boot-up (see the sketch below)

Location Discovery Protocol
(sequence of figure slides stepping through the protocol)
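The figure slides step through the exchange; here is a minimal sketch of the idea, with assumed message fields and simplified level-inference rules. The actual PortLand protocol (SIGCOMM 2009) also elects pod numbers and port positions and handles failure cases.

```python
# Sketch of PortLand-style location discovery. Message fields and the
# level-inference rules are simplified assumptions, not the full protocol.
from dataclasses import dataclass
from typing import Optional

EDGE, AGGREGATION, CORE = 0, 1, 2

@dataclass
class LDM:
    """Location Discovery Message, reduced to two fields."""
    switch_id: str
    level: Optional[int]              # None until the sender knows it

class Switch:
    def __init__(self, switch_id: str, num_ports: int):
        self.switch_id = switch_id
        self.num_ports = num_ports
        self.level: Optional[int] = None
        self.neighbor_levels: dict = {}   # port -> level from last LDM

    def make_ldm(self) -> LDM:
        # Broadcast periodically on every port; hosts simply ignore it.
        return LDM(self.switch_id, self.level)

    def on_ldm(self, port: int, msg: LDM) -> None:
        self.neighbor_levels[port] = msg.level
        self._infer_level()

    def on_silence_timeout(self) -> None:
        # Hosts never speak LDP, so ports still silent after a timeout
        # must face hosts; only edge switches have host-facing ports.
        if len(self.neighbor_levels) < self.num_ports:
            self.level = EDGE

    def _infer_level(self) -> None:
        if self.level == EDGE or len(self.neighbor_levels) < self.num_ports:
            return                    # silent ports may still be hosts
        levels = set(self.neighbor_levels.values())
        if EDGE in levels:
            self.level = AGGREGATION  # an edge switch sits below me
        elif levels == {AGGREGATION}:
            self.level = CORE         # every neighbor is aggregation
```

Inference proceeds bottom-up: edge switches identify themselves first via the host-port timeout, their LDMs then let aggregation switches classify themselves, and core switches follow once all their neighbors have resolved to aggregation.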


Name Resolution
(sequence of figure slides)

Fabric Manager
(figure)

Name Resolution, continued
(sequence of figure slides)

Other Schemes
• SEATTLE [SIGCOMM ’08]:
  • Layer 2 network fabric that works at enterprise scale
  • Eliminates ARP broadcast; proposes a one-hop DHT
  • Eliminates flooding; uses broadcast-based link-state routing
  • Scalability limited by:
    • Broadcast-based routing protocol
    • Large switch state
• VL2 [SIGCOMM ’09]:
  • Network architecture that scales to support huge data centers
  • Layer 3 routing fabric used to implement a virtual layer 2
  • Scales layer 2 via end-host modifications
  • Unmodified switch hardware and software
  • End hosts modified to perform enhanced resolution to assist routing and forwarding

VL2: Name-Location Separation
(figure)
• Servers use flat names
• Switches run link-state routing and maintain only switch-level topology
• Cope with host churn with …
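A minimal sketch of the name-location split this slide describes: a directory maps a server's flat name to the locator of the ToR switch it currently sits behind, so host churn updates the directory rather than switch state. The Directory class and its register/lookup calls are illustrative assumptions, not VL2's actual directory-service interface.

```python
# Sketch of VL2-style name-location separation. The directory API below is
# an illustrative assumption, not VL2's actual directory-service protocol.

class Directory:
    """Maps a server's flat application address (AA) to the locator
    address (LA) of the ToR switch it currently sits behind."""
    def __init__(self):
        self._aa_to_la: dict = {}

    def register(self, aa: str, tor_la: str) -> None:
        # Called when a host boots or a VM moves: switches see nothing.
        self._aa_to_la[aa] = tor_la

    def lookup(self, aa: str) -> str:
        return self._aa_to_la[aa]

def send(directory: Directory, dst_aa: str, payload: bytes) -> dict:
    """A (modified) end host encapsulates: the outer header targets the
    destination ToR's LA, which is all the link-state fabric routes on."""
    tor_la = directory.lookup(dst_aa)
    return {"outer_dst": tor_la, "inner_dst": dst_aa, "payload": payload}

d = Directory()
d.register("aa-web-17", "la-tor-3")
pkt = send(d, "aa-web-17", b"GET /")
d.register("aa-web-17", "la-tor-9")   # VM migrates: one directory update
```

Because the link-state fabric routes only on ToR-level locators, switch tables scale with the number of switches rather than the number of churning hosts.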
