Google Data Center Efficiency Best Practices -- Full Video

  • ERIK TEETZEL: Here at Google, data centers are very

  • important to us.

  • They are how we deliver all of our web services

  • to all of our users.

  • A data center can mean a variety of things.

  • It can mean a small closet filled with a couple of

  • machines all the way to very large warehouse scale

  • buildings that are optimized for power use and IT computing

  • and filled with thousands of servers.

  • At Google, we spend a lot of time innovating the way in

  • which we design and build these facilities to minimize

  • the amount of energy, water, and other resources

  • that these computing facilities use.

  • As a result of all of the work that we've been

  • doing over many, many years, we now use half of the energy

  • of the typical data center.

  • To put things into perspective, the entire ICT

  • sector, which includes mobile phones, computers, monitors,

  • and cell phone towers, represents roughly 2% of global greenhouse

  • gas emissions.

  • Of that 2%, the data center portion is

  • responsible for about 15%.

  • There are design choices that you can make for energy

  • efficiency that improve the performance

  • of your data center.

  • And these things are just best practices.

  • And adhering well to best practices, that's how you can

  • actually make the most improvement in

  • terms of energy use.

  • The results of these types of activities return

  • Google millions of dollars in energy savings.

  • So the results are significant.

  • We've invited several members of our data center team

  • to explain some of these best practices to all of you.

  • KEVIN DOLDER: The first step in managing the efficiency of

  • your data center is to make sure you have the

  • instrumentation in place to measure the PUE, or power

  • usage effectiveness.

  • PUE is the ratio of total facility energy to IT

  • equipment energy within your data center.

  • It's a measure of how effectively you deliver power

  • and cooling to the IT equipment.

  • In 2006, the typical PUE of an enterprise

  • data center was 2.0, which means that for every one watt of

  • IT energy consumed, one watt of

  • overhead was consumed by the facility to deliver the power

  • and cooling.
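To make the ratio concrete, here is a minimal sketch of the PUE calculation in Python; the meter readings are hypothetical values chosen to reproduce the 2006 figure quoted above, not actual facility data.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility that draws 2,000 kWh in total while its IT equipment consumes
# 1,000 kWh has a PUE of 2.0 -- one watt of overhead for every watt of IT load.
print(pue(2000.0, 1000.0))  # 2.0
```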

  • ERIK TEETZEL: Reducing the overhead is

  • really what you want.

  • You want PUE to get to as close to 1.0 as possible.

  • KEVIN DOLDER: Our trailing twelve-month (TTM) PUE was 1.16.

  • We've continuously measured that and it's gone down nearly

  • every quarter since we began reporting it back in 2008.

  • Last quarter, our lowest data center came in at a PUE of 1.09.

  • Ideally, you should measure PUE as fast as you can, as often

  • as you can, every second or so.

  • And the more often you can measure it, the more

  • meaningful the results will be.

  • It's important to measure PUE over the course of a year --

  • annually or quarterly -- to get a meaningful result.

  • If you just take snapshots in time, the information won't be

  • realistic and it won't really be an actual measure of how

  • well your data center is operating.
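As a sketch of why snapshots mislead: a reporting-period PUE should be computed from energy accumulated over the whole window, not by averaging instantaneous ratios. The interval readings below are made up for illustration.

```python
# Each tuple is (facility_kwh, it_kwh) accumulated over one measurement interval,
# e.g. one day or one week within the reporting period.
intervals = [(240.0, 200.0), (260.0, 210.0), (300.0, 230.0), (250.0, 205.0)]

# Energy-weighted PUE for the period: total facility energy over total IT energy.
# Note this is not the same as averaging the per-interval ratios.
period_pue = sum(f for f, _ in intervals) / sum(it for _, it in intervals)
print(round(period_pue, 2))  # ~1.24 for this made-up data
```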

  • One way to make it easier to manage is to incorporate the

  • PUE measurements into your building management system.

  • We do this at all of our sites at Google.

  • Without easy access to this data, we wouldn't be

  • able to operate our data centers as

  • efficiently as we do.

  • ERIK TEETZEL: Once you have the ability to measure and

  • manage your PUE, the first step in terms of reducing your

  • data center energy load is to focus on the

  • management of the air flow.

  • The most important thing here is to eliminate the mixing of

  • the hot and the cold air.

  • And there's no one right way to do this.

  • Containment can be achieved through many different

  • approaches.

  • One thing we found very useful at Google is CFD (computational

  • fluid dynamics) analysis to see where your hot spots are and

  • how your air flow is actually going to be directed in your data center.

  • By doing so, you can actually model the way in which air

  • flow will go and it helps you make very simple design

  • choices to improve the air flow in your data center.

  • For example, in one of our computing and networking

  • rooms, we call them CNRs, we actually did some thermal

  • modeling to see exactly what air flow was doing.

  • Through that modeling we realized that the intake to

  • our CRACs (computer room air conditioners) was too low.

  • And that by simply piecing together some sheet metal we

  • could create extensions that would dramatically increase

  • the air flow quality into the CRACs.

  • We also did a bunch of other retrofits.

  • KEN WONG: Here in this corporate data center at

  • Google, we've implemented meat locker curtains that are very

  • inexpensive and easy to install.

  • These are hung from the overhead structure and they

  • separate the cold aisle, which is actually hot, and the hot

  • aisle, which is actually hotter.

  • We are now set to enter the hot aisle containment door.

  • And we incorporated these simple, inexpensive sheet metal doors

  • to tightly separate the cold aisle from the hot aisle.

  • Now over here, we've got the hot air from the racks

  • coming up, going overhead, up through the return

  • air plenum

  • back to the CRAC units to give you a nice high temperature

  • differential across your CRAC units.
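That temperature differential matters because, for a given heat load, the airflow a CRAC must move scales inversely with the return-to-supply delta-T. The sketch below uses the standard sensible-heat rule of thumb for air at sea level; the heat load and delta-T values are illustrative.

```python
def required_airflow_cfm(heat_load_w: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to remove a sensible heat load at a
    given supply-to-return temperature difference, for standard air."""
    # Q[BTU/hr] ~= 1.08 * CFM * dT(F); 1 W = 3.412 BTU/hr, so CFM ~= 3.16 * W / dT.
    return 3.16 * heat_load_w / delta_t_f

# With good containment (higher dT), the same 200 kW load needs far less airflow.
print(round(required_airflow_cfm(200_000, 15)))  # ~42,000 CFM at a 15 F differential
print(round(required_airflow_cfm(200_000, 30)))  # ~21,000 CFM at a 30 F differential
```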

  • A very important step is to seal the rack space where you

  • don't quite have all of your equipment populated.

  • And it's very easy to do with these blanking panels.

  • It's almost like weatherizing your house to make sure that

  • you've got a nice, tight environment.

  • ERIK TEETZEL: All totaled, we spent about $25,000 in parts.

  • And those $25,000 saved us over $65,000 in

  • energy costs yearly.
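A quick back-of-the-envelope check on those numbers, using only the figures quoted above:

```python
parts_cost = 25_000       # one-time spend on curtains, doors, blanking panels, etc.
annual_savings = 65_000   # yearly energy cost savings quoted above

payback_months = parts_cost / annual_savings * 12
print(f"Payback in roughly {payback_months:.1f} months")  # about 4.6 months
```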

  • Once you manage your air flow properly, the next step in

  • data center efficiency is to increase the temperature of

  • your cold aisle.

  • It's long been believed by many data center operators

  • that the data center has to be cold to keep all the equipment

  • at a temperature where it will run safely.

  • And in fact, that's just false.

  • So if you look at the recommended guidelines from ASHRAE, they

  • recommend running cold aisle temperatures all the way up to 80 degrees

  • Fahrenheit.

  • And at Google, that's exactly what we do.

  • We've got a small corporate data center here.

  • It's about 200 kilowatts of load.

  • Simply raising the temperature from 72 degrees to 80 degrees

  • saves us thousands of dollars in energy costs

  • every single year.

  • What's nice about that is it also allows our employees to

  • come to work in shorts.

  • Whenever possible, we recommend that people free cool.

  • Free cooling means utilizing ambient temperatures outside

  • of your data center to be able to provide cooling without

  • operating very energy-intensive

  • equipment like chillers.

  • CHRIS MALONE: We use free cooling at

  • all of our data centers.

  • And you can see this in our publicly recorded PUE data

  • where the PUE values go up in the summertime and down in the

  • wintertime.

  • And this is just a reality of running our operations

  • with free cooling.

  • And it yields tremendous efficiency gains.

  • In Europe, we have two data centers that have no chillers

  • whatsoever.

  • We're able to take advantage of the local constraints and

  • conditions.

  • In Belgium, we use evaporative towers without any chillers

  • given the ambient conditions.

  • In Finland, we use sea water cooling.

  • Sea water from the Bay of Finland cools the servers.

  • And then we temper the water returning to the Bay of

  • Finland so there's no temperature gradient

  • returning to the bay.

  • Evaporative cooling uses water on site, but what we found

  • through our studies is that by using evaporative cooling

  • in a very efficient fashion, we save water on the whole.

  • So for every gallon of water that we use in the evaporative

  • cooling plants, we eliminate the use of two gallons of

  • water on the energy production side.

  • This translates into hundreds of millions of gallons per

  • year in water savings.
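The water math works out as follows; the on-site volume here is a hypothetical figure chosen only to illustrate the 2-to-1 ratio described above.

```python
gallons_used_on_site = 150_000_000  # hypothetical annual evaporative-cooling water use
gallons_avoided_upstream = 2 * gallons_used_on_site  # avoided at power plants, per the 2:1 ratio

net_savings = gallons_avoided_upstream - gallons_used_on_site
print(f"Net water saved: {net_savings:,} gallons per year")  # 150,000,000 in this example
```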

  • There's no one right way to deliver free cooling.

  • The important point is that you should examine these

  • opportunities and take advantage of them to eliminate

  • or reduce substantially the mechanical cooling.

  • TRACY VAN DYK: In the data center, you pull power in from

  • the electrical grid and you convert it down to the

  • voltages that are needed for all the

  • components in the data center.

  • And there are a lot of conversion stages in there.

  • By minimizing those conversion stages, you can save

  • money and save energy.

  • Also by making each conversion stage more efficient you can

  • save energy, as well.

  • Traditionally, one of the biggest losses is the UPS,

  • or uninterruptible power supply.

  • Typically, there's a giant room of batteries.

  • The batteries are DC voltage.

  • And the power coming in to charge those batteries is AC.

  • And so you need to convert from AC down to DC with a

  • rectifier in order to charge the batteries.

  • And then when the batteries are needed in a power event,

  • you need to convert that back to AC with an inverter.

  • And then the AC needs to be converted back down to DC for

  • all the components in the data center.

  • So you've got three conversion stages in there

  • that are not necessary.

  • What Google has done is put a battery on board the server tray.

  • So you're eliminating those three conversion steps.

  • You just have DC right into the server components.
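To see why dropping conversion stages helps, note that end-to-end efficiency is the product of the per-stage efficiencies. The stage efficiencies below are illustrative assumptions, not measured values for any particular equipment.

```python
from math import prod

# Conventional double-conversion UPS path: AC -> DC (rectifier), DC -> AC (inverter),
# then AC -> DC again at the server power supply.
conventional_stages = [0.94, 0.94, 0.90]

# With the battery on the server tray, the rectifier/inverter pair in the UPS path
# goes away and only a single conversion at the server remains.
on_board_battery_stages = [0.90]

print(f"Conventional path: {prod(conventional_stages):.1%} end-to-end")      # ~79.5%
print(f"On-board battery:  {prod(on_board_battery_stages):.1%} end-to-end")  # 90.0%
```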

  • In a typical server configuration, you have a

  • server with an AC/DC power supply attached to it.

  • By making sure that AC/DC power supply is efficient, you

  • can save a lot of energy.

  • Things like Energy Star labels will point you to power

  • supplies that are 90% plus efficient.

  • Google is able to save over $30 per year per

  • server by implementing all of these features.
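As a rough illustration of how power supply efficiency turns into dollars per server, here is a hypothetical calculation; the server load, electricity price, and baseline efficiency are assumptions, not Google's figures.

```python
DC_LOAD_W = 200        # hypothetical DC power drawn by one server's components
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10   # hypothetical electricity price, in dollars

def annual_energy_cost(psu_efficiency: float) -> float:
    """Yearly electricity cost for one server, given its power supply efficiency."""
    wall_power_w = DC_LOAD_W / psu_efficiency  # power actually drawn from the wall
    return wall_power_w * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH

# Comparing a 75%-efficient supply with a 90%+ Energy Star class supply.
savings = annual_energy_cost(0.75) - annual_energy_cost(0.90)
print(f"Roughly ${savings:.0f} saved per server per year")  # in the tens of dollars
```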

  • ERIK TEETZEL: There really are very simple, effective

  • approaches that all of us can implement to reduce the data

  • center energy use.

  • And most of them are cost-effective within

  • 12 months of operation.

  • So a lot of efficiency best practices should be adopted by

  • just about everyone.

  • They're applicable to small data centers

  • or large data centers.

  • It's simply following the five steps that we go through here

  • to make sure that you're able to reduce your energy use.

  • 1. Measure PUE
  • 2. Manage Airflow
  • 3. Adjust Thermostat
  • 4. Utilize Free Cooling
  • 5. Optimize Power Distribution
