If you run a data center, you live and die by three letters: PUE.
Power Usage Effectiveness measures how efficiently a data center uses its power: the ratio of total facility power to the power consumed by the computing equipment alone.
A consortium known as The Green Grid introduced PUE in 2006; beginning in 2007, the Uptime Institute began surveying members to determine the average data center PUE. In 2007, the average was 2.5, meaning that for every watt of power that went into the computers, another 1.5 watts went to cooling and other overhead.
As of last year, the Uptime Institute's survey put the average at 1.8, a significant improvement. But that pales beside the numbers put up by the top 10, with some data centers reporting PUEs as low as 1.07. Earlier this year it published a report on some of the most power-efficient data centers.
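To make the metric concrete, here is a minimal sketch of the PUE arithmetic, using the survey figures quoted above (the function name is illustrative, not from any standard library):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# 2007 survey average: 2.5 kW drawn for every 1.0 kW of IT load,
# i.e. 1.5 kW of cooling and other overhead per IT kilowatt
print(pue(2.5, 1.0))   # 2.5

# Recent survey average and the best of the top 10
print(pue(1.8, 1.0))   # 1.8
print(pue(1.07, 1.0))  # 1.07 -> only 0.07 kW of overhead per IT kilowatt
```

A PUE of 1.0 would mean every watt entering the building reaches the computing equipment.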
So, how did they do it? There was no magic bullet, according to Rhonda Ascierto, author of the 451 Group’s paper on the data centers and Senior Analyst in 451 Research’s Datacenter Technologies and Eco-Efficient IT practices. Here’s what works.
The Holistic Approach
The holistic approach is something the 451 Group and Uptime have both preached, even before they merged in 2011. One of the biggest problems driving runaway costs was that IT and facilities did not communicate or share expenses; Uptime estimates only 20 percent of IT departments pay the power bill.
So CIOs would go on spending binges for new equipment, while facilities, run by the CFO, had to cool all that gear. The result was that CIOs never knew the operational cost of their systems.
“We’re finally seeing an emphasis on collaboration between data center facilities and the IT organization,” said Ascierto. “There’s more sharing of details for capacity planning for optimizing resources and shared understanding of the true cost of resources.”
eBay, one of the companies cited in the 451 Group report, follows that model. “We have all production operations under one group and a VP. We’re all one big team, there is no gap at eBay, no separation between the teams,” said Jeremy Rodriguez, distinguished engineer in eBay’s Global Foundation Services, in an interview.
Cool It on the Cooling
According to conventional wisdom, data centers should be kept cold enough to store meat. Slowly, that notion is changing. “A lot of companies overcool their IT equipment because of concerns over uptime,” said Ascierto. “They run the cooling on warranty temperature. Now the recommended temperature is going up on newer models. Raising the temperature of your data center is one area where we are seeing efficiencies.”
Facebook wasn’t afraid to let things warm up in its data centers. “I think people are afraid of pushing the boundaries,” said Jay Park, an engineering director in Facebook’s data center design team. “They think the servers must operate at 70 degrees plus or minus 2 degrees and humidity between 40 and 65 percent. We have done a lot of testing, and do not trust what vendors tell you. They can operate at a much higher temperature and wider humidity range. Our servers have run in 100 degrees and 95 percent humidity and survived just fine.”
Sealing Off the Outside
People seem to have dueling views on ambient air. On the one hand, some data center operators would prefer to use just outside air. Many data centers located in the Pacific Northwest, for example, can claim to be eco-friendly not only because they use hydroelectric power, but because it’s so cool all they need to do is open the window.
But what if your data center isn't on the Oregon coast? Antonio Mancini, founder and president of Fifth Essential, which does data center design, audits, and consulting, doesn't like using outside air. Instead, he uses the KyotoCooling air conditioning system.
“The advantage of not using outside air is you can put all the filters you want on it, but you can’t filter out gases in the atmosphere, especially in urban areas,” said Mancini. “With KyotoCooling, I get free cooling without the introduction of large volumes of outside air into the data center.”
Open Up the Windows
On the flip side, Facebook practically threw open the windows. It eliminated the centralized chiller plant and air ducts and simply brings in outside air. A spray system controls the temperature and humidity, and the entire data center is pressurized.
Facebook, like the majority of companies surveyed, uses a hot aisle/cold aisle layout. Air conditioning systems push cold air down into the data center; hot air is vented outside during the summer months, and in the winter it's used to warm the too-cold outside air.
In eBay's case, it put a modular data center in Phoenix, Arizona, which is not known for its cold temperatures. The facility uses water to humidify and cool the incoming air, but eBay, too, does not subscribe to the notion that a data center needs to be colder than a meat locker.
“From our experience and the expected lifetime of our IT solutions, we do not see a risk in outside air to cool the solution. Most solutions can be easily maintained with proper air filtration to remove particulates, and that decreases the risk to IT systems,” said Rodriguez.
The goal is to be more strategic with cooling, and not just blast the data center with freezing air, said Ascierto: “I think dynamic cooling is a little more accessible and accepted and there’s more than one company out there offering dynamic, intelligent cooling. It uses sensors around the data center to match cooling with IT loads and moving around the cooling as needed.”
A Yahoo data center in upstate New York was cited in 451 Research's report for superior building design. The building was shaped to let heat rise via natural convection, while its length and width provide easier access to outside air. The design resulted in a PUE rating of 1.15 or better; a comparable facility would probably average a PUE of 1.92, the 451 found. That translates into a savings of 60,700 MWh per year.
Changing the Power Conversion Plan
One of the tricks for power management involves something computing experts might know peripherally but have not fully studied: alternating-current-to-direct-current conversion.
For a long time, data centers simply let each server use its own individual power supply, or, in the case of blade servers, the power supply in the chassis. Now data centers are instead doing power conversions at strategic locations around the facility.
In the case of Facebook, instead of using power distribution units (PDUs) to step the voltage down from 480 volts to 208, as many data centers do to distribute power, Facebook takes advantage of the fact that 480-volt AC power is produced as a three-phase source, with each phase delivered at 277 volts. That voltage can be fed directly to the server racks, skipping the step-down to the 208 volts racks typically run on.
“Whenever you do that conversion, you have a loss. So a typical centralized UPS system, depending on the load, will experience 8 to 12 percent power loss just doing the conversions,” said Park.
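Each conversion stage compounds the loss, which is why dropping a stage matters. The sketch below applies a chain of loss fractions to an incoming feed; the individual loss figures are illustrative assumptions chosen to fall in the 8 to 12 percent range Park cites, not numbers from the article:

```python
def power_after_conversions(input_kw: float, losses: list[float]) -> float:
    """Apply a chain of conversion losses, each given as a fraction lost."""
    for loss in losses:
        input_kw *= (1 - loss)
    return input_kw

# Conventional chain: double-conversion UPS plus PDU step-down
# (6% and 4% are an assumed split of the 8-12% range quoted above)
conventional = power_after_conversions(100.0, [0.06, 0.04])  # ~90.2 kW delivered

# Facebook-style: 277 V fed straight to the rack, one less conversion
# (the 2% residual loss is likewise an assumption)
direct = power_after_conversions(100.0, [0.02])  # 98.0 kW delivered
```

On a 100 kW feed, the difference between the two chains is several kilowatts of heat that never has to be cooled.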
As for Mancini and Fifth Essential, he prefers a central UPS strategy: a central UPS receives 25kV from the local utility. Like Facebook, Fifth Essential eliminated the PDUs; instead, power leaves the UPS at 400 volts and is stepped down to 230 volts at the racks.
“What this means for efficiency is the copper used for carrying power,” said Mancini. “The higher the voltage, the lower the amperage, and the lower the amperage, the less copper is required. So you have a lot of efficiency in terms of the resource used.”
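Mancini's point follows directly from the relationship between power, voltage, and current: for a fixed load, I = P / V, so raising the distribution voltage cuts the current and with it the conductor size. A minimal single-phase sketch (real three-phase feeds divide the current further, and the 10 kW rack is a hypothetical load):

```python
def current_amps(power_watts: float, volts: float) -> float:
    """For a fixed load, I = P / V: higher voltage means lower current."""
    return power_watts / volts

# A hypothetical 10 kW rack fed at different distribution voltages
for v in (208, 230, 277, 400):
    print(f"{v} V -> {current_amps(10_000, v):.1f} A")
```

At 400 volts the same rack draws roughly half the current it would at 208 volts, which is the copper savings Mancini describes.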
The bottom line is that there are many ways to achieve data center efficiency. Picking a plan of attack is a good place to start.