CMU CS 15744 - Data Center Networking

15-744: Computer Networking
L-24: Data Center Networking

Overview
• Data Center Overview
• Networking in the DC

"The Big Switch," Redux
"A hundred years ago, companies stopped generating their own power with steam engines and dynamos and plugged into the newly built electric grid. The cheap power pumped out by electric utilities didn't just change how businesses operate. It set off a chain reaction of economic and social transformations that brought the modern world into existence. Today, a similar revolution is under way. Hooked up to the Internet's global computing grid, massive information-processing plants have begun pumping data and software code into our homes and businesses. This time, it's computing that's turning into a utility."

Growth of the Internet Continues …
• 1.32 billion Internet users in 4Q07
• 20% of world population
• 266% growth 2000-2007

Datacenter Arms Race
• Amazon, Google, Microsoft, Yahoo!, … race to build next-generation mega-datacenters
• Industrial-scale Information Technology: 100,000+ servers
• Located where land, water, fiber-optic connectivity, and cheap power are available
• E.g., Microsoft Quincy: 43,600 sq. ft. (10 football fields), sized for 48 MW; also Chicago, San Antonio, Dublin, at roughly $500M each
• E.g., Google: The Dalles OR, Pryor OK, Council Bluffs IA, Lenoir NC, Goose Creek SC

Google Oregon Datacenter

Computers + Net + Storage + Power + Cooling

Energy Proportional Computing
• Figure 3: CPU contribution to total server power for two generations of Google servers at peak performance (the first two bars) and for the later generation at idle (the rightmost bar). CPU energy efficiency improves, but what about the rest of the server architecture?
• Figure 1: Average CPU utilization of more than 5,000 servers during a six-month period. Servers are rarely completely idle and seldom operate near their maximum utilization; instead they run most of the time at between 10 and 50 percent of their maximum. It is surprisingly hard to achieve high utilization on typical servers (and your home PC or laptop is even worse).
• Figure 2: Server power usage and energy efficiency at varying utilization levels, from idle to peak performance. Even an energy-efficient server still consumes about half its full power when doing virtually no work. Doing nothing well … NOT!
• Figure 4: Power usage and energy efficiency in a more energy-proportional server. This server has a power efficiency of more than 80 percent of its peak value for utilizations of 30 percent and above, with efficiency remaining above 50 percent for utilization levels as low as 10 percent.
Source: "The Case for Energy-Proportional Computing," Luiz André Barroso and Urs Hölzle, IEEE Computer, December 2007.
Design for a wide dynamic power range and active low-power modes: doing nothing VERY well.
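A toy model makes Figures 2 and 4 concrete. The Python sketch below is not from the paper: it assumes a simple linear power model in which an idle server draws a fixed fraction of its peak power (50% for a conventional server, 10% for a hypothetical energy-proportional one) and compares relative efficiency, i.e., work delivered per watt normalized to the value at full load.

    # Toy server power model (illustrative only; the linear shape and the
    # 50% / 10% idle-power fractions are assumptions, not data from the paper).

    def power(util, p_peak=250.0, idle_frac=0.5):
        """Power draw in watts at utilization util in [0, 1]; rises linearly
        from idle_frac * p_peak at idle to p_peak at full load."""
        p_idle = idle_frac * p_peak
        return p_idle + (p_peak - p_idle) * util

    def efficiency(util, **kw):
        """Work per watt, normalized so efficiency at 100% utilization is 1.0."""
        return util * power(1.0, **kw) / power(util, **kw)

    for u in (0.1, 0.3, 0.5, 1.0):
        conventional = efficiency(u, idle_frac=0.5)   # idles at half of peak power
        proportional = efficiency(u, idle_frac=0.1)   # hypothetical proportional server
        print(f"util={u:4.0%}  conventional={conventional:.2f}  proportional={proportional:.2f}")

With these assumed numbers the proportional server lands near the figures quoted above (about 0.81 relative efficiency at 30% utilization and 0.53 at 10%), while the conventional server sits at roughly 0.46 and 0.18 -- the "doing nothing well" gap.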
Better to have one computer at 50% utilization than five computers at 10% utilization: save money via consolidation (and save power).

"Power" of Cloud Computing
• SPECpower: the two best-ranked systems
  • Two 3.0-GHz Xeons, 16 GB DRAM, 1 disk
  • One 2.4-GHz Xeon, 8 GB DRAM, 1 disk
• 50% utilization → 85% of peak power
• 10% utilization → 65% of peak power
• Save ~75% of the power by consolidating and turning the idle machines off:
  • 1 computer @ 50% utilization = 225 W vs. 5 computers @ 10% utilization = 870 W
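The "save 75% power" claim is plain arithmetic on the wattages quoted above; the short Python sketch below redoes it. The 225 W and 870 W totals come from the slide; nothing else is assumed.

    # Consolidation arithmetic from the SPECpower example above.
    servers_spread = 5            # five lightly loaded servers ...
    power_spread_total = 870.0    # ... drawing 870 W in total at ~10% utilization each
    power_consolidated = 225.0    # one server at ~50% utilization handling the same load

    saving = 1.0 - power_consolidated / power_spread_total
    print(f"Spread out  : {power_spread_total:.0f} W across {servers_spread} servers")
    print(f"Consolidated: {power_consolidated:.0f} W on one server")
    print(f"Power saved : {saving:.0%}")   # ~74%, i.e. the slide's "save 75% power"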
Bringing Resources On-/Off-line
• Save power by taking DC "slices" off-line
• Resource footprint of applications is hard to model
• Dynamic environment and complex cost functions require measurement-driven decisions -- an opportunity for statistical machine learning
• Must maintain Service Level Agreements, with no negative impact on hardware reliability
• Pervasive use of virtualization (VMs, VLANs, VStor) makes rapid shutdown/migration/restart feasible
• Recent results suggest that conserving energy may actually improve reliability
  • MTTF: stress of on/off cycles vs. benefits of off-hours

Typical Datacenter Power
• Power-aware allocation of resources can achieve higher levels of utilization -- it is harder to drive a cluster to high utilization than an individual rack
• X. Fan, W.-D. Weber, L. Barroso, "Power Provisioning for a Warehouse-Sized Computer," ISCA '07, San Diego, June 2007

Aside: Disk Power
• IBM Microdrive (1 inch)
  • writing: 300 mA @ 3.3 V ≈ 1 W
  • standby: 65 mA @ 3.3 V ≈ 0.2 W
• IBM TravelStar (2.5 inch)
  • read/write: 2 W
  • seek: 2.3 W
  • spinning (idle): 1.8 W
  • low-power idle: 0.65 W
  • standby: 0.25 W
  • sleep: 0.1 W
  • startup: 4.7 W

Spin-down Disk Model
• States and power draws: Not Spinning (0.2 W), Spinning & Ready (0.65-1.8 W), Spinning & Access (2 W), Spinning & Seek (2.3 W), Spinning Up (4.7 W), Spinning Down
• Spin-up is triggered by a request (or by prediction); spin-down is triggered by an inactivity timeout threshold (or by prediction)

Disk Spindown
• Disk power management -- oracle (off-line): spins the disk down whenever the coming idle period (IdleTime) will exceed the break-even time
• Disk power management -- practical scheme (on-line): spins down only after the disk has already sat idle for the break-even time, paying the extra wait
• (Timeline figure comparing the two schemes between access1 and access2; source: the authors' presentation slides)

Spin-Down Policies
• Fixed thresholds: set Tout from the spin-down cost, s.t. 2 * Etransition = Pspin * Tout
• Adaptive thresholds: Tout = f(recent accesses)
  • Exploit burstiness in Tidle
• Minimizing bumps (user annoyance/latency)
  • Predictive spin-ups
• Changing access patterns (making burstiness)
  • Caching
  • Prefetching
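As a rough illustration of the break-even reasoning behind these policies, the Python sketch below estimates the break-even idle time for the TravelStar using the power numbers from the "Aside: Disk Power" slide. The spin-up duration and spin-down energy are made-up placeholders (the slides give no timings), so the resulting threshold is only indicative.

    # Break-even idle time for spinning a laptop disk down.
    # Power numbers are the IBM TravelStar figures from the slide above;
    # the transition duration/energy values are assumptions for illustration only.

    P_SPINNING = 1.8    # W, platters spinning, disk idle (slide)
    P_STANDBY  = 0.25   # W, spun down (slide)
    P_SPINUP   = 4.7    # W, during startup (slide)
    T_SPINUP   = 2.0    # s, assumed spin-up duration (not on the slide)
    E_SPINDOWN = 1.0    # J, assumed energy to park heads and spin down (not on the slide)

    # Extra energy paid for one spin-down/spin-up cycle, relative to staying spun down.
    e_transition = E_SPINDOWN + (P_SPINUP - P_STANDBY) * T_SPINUP

    # Idle time t at which P_SPINNING * t == P_STANDBY * t + e_transition.
    t_breakeven = e_transition / (P_SPINNING - P_STANDBY)
    print(f"transition energy ~ {e_transition:.1f} J, break-even idle time ~ {t_breakeven:.1f} s")

    def should_spin_down(idle_so_far, threshold=t_breakeven):
        """Fixed-threshold policy: spin down once the disk has been idle for
        at least the threshold (the on-line 'practical scheme' above)."""
        return idle_so_far >= threshold

With these assumed timings the disk must sit idle for roughly six seconds before spinning down pays off, which is why the policies above try to predict or manufacture long idle bursts (caching, prefetching) rather than react to every short gap.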
Thermal Image of a Typical Cluster
• The rack switch shows up as the hot spot
• M. K. Patterson, A. Pratt, P. Kumar, "From UPS to Silicon: An End-to-End Evaluation of Datacenter Efficiency," Intel Corporation

DC Networking and Power
• Within DC racks, network equipment is often the "hottest" component in the hot spot
• Network opportunities for power reduction:
  • Transition to higher-speed interconnects (10 Gb/s) at DC scales and densities
  • High-function/high-power assists embedded in network elements (e.g., TCAMs)

DC Networking and Power (continued)
• A 96-port 1 Gbit Cisco datacenter switch consumes around 15 kW -- approximately 100x a typical dual …