Despite the growing list of innovative (and sometimes expensive) adaptations designed to transform datacenters into slightly-less-active power gluttons, the most effective way to make datacenters more efficient is also the most obvious, according to researchers from Stanford, Berkeley and Northwestern.
Using power-efficient hardware, turning power down (or off) when the systems aren’t running at high loads, and making sure air-cooling systems are pointed at hot IT equipment—rather than in a random direction—can all do far more than fancier methods for cutting datacenter power, according to Jonathan Koomey, the Stanford researcher who has been instrumental in making power use a hot topic in IT.
Many of the most-publicized advances in building “green” datacenters during the past five years have focused on efforts to buy datacenter power from sources that also have very low carbon footprints. That includes Google’s recent decision to power its Finnish datacenter with all the power a Swedish wind farm can produce during the next decade—which can’t hurt, but can also skew perceptions by focusing on only the third most-important factor that determines how effectively a datacenter uses energy, Koomey wrote.
In a paper published in the online edition of the journal Nature Climate Change, Koomey and two co-authors mapped the sources of both energy use and carbon-dioxide production resulting from specific characteristics of modern datacenters. They found that advanced or automated management functions can have a positive impact on both costs and energy efficiency—functions including software with the ability to automate routine datacenter-management tasks, distribute workloads across a datacenter to optimize both performance and power use, and analyze performance and power use on the fly.
Sourcing all a datacenter’s power from sources cleaner or more sustainable than the traditional coal-fired generating plants that supply most datacenter power in the U.S. can also have a solid impact on the datacenter’s overall energy-efficiency rating and shrink its carbon footprint.
All those factors together didn’t add up to the impact of two very basic things: the overall energy efficiency of the individual pieces of hardware installed in a datacenter, and the level of efficiency with which those systems were configured and managed, Koomey explained in a blog published in conjunction with the paper’s publication. (The full paper is behind a paywall but Koomey offered to distribute copies free to those contacting him via his personal blog.)
The single biggest factor is the efficiency of the servers, storage, networking and other IT equipment within the datacenter. The second-most-important factor is the efficiency of the cooling, water pumps, power distribution and other bits of physical infrastructure.
Least of the three major factors was the carbon footprint of the company providing the electricity.
Of the relevant metrics, the most important are Power Usage Effectiveness (PUE), the ratio of the total power drawn by the facility to the power actually delivered to IT equipment, and Data Center Infrastructure Efficiency (DCiE), its reciprocal: the fraction of total facility power that reaches the IT equipment rather than being consumed by cooling, lights and other systems required to keep the facility operating.
Most datacenters use as much power for infrastructure systems as they do to run the IT systems themselves, according to studies from The Green Grid that Google cited in explaining its own approach to datacenter efficiency. Because it is a ratio, the perfect PUE score—for a facility in which all power goes to IT equipment rather than supporting systems—would be 1. The average datacenter falls well short of that: worldwide PUE ranges between 1.8 and 1.9, meaning facilities draw nearly twice the power their IT equipment actually uses, according to Uptime Institute’s 2012 datacenter survey.
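The arithmetic behind the two ratios is simple enough to sketch. The figures below are illustrative, not taken from the paper or the survey:

```python
# Worked example of the two datacenter-efficiency ratios discussed above.
# Input figures are illustrative only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: the fraction of total facility
    power that reaches IT equipment (ideal = 1.0, often quoted as a percent)."""
    return it_equipment_kw / total_facility_kw

# A facility drawing 1,900 kW overall to deliver 1,000 kW to IT equipment:
print(pue(1900, 1000))   # 1.9 -- roughly the worldwide average cited above
print(dcie(1900, 1000))  # ~0.526 -- only about 53% of power reaches IT gear
```

A facility at Google’s reported 1.12 would, by the same arithmetic, lose only about 11 percent of its power to overhead.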
Google’s worldwide average is 1.12, according to its own calculations. Its best practices include managing airflow and temperature, using outside air to help with cooling, and minimizing the number of power conversions—high-voltage to low-voltage, AC to DC, and so on—between the grid and the servers.
However, the biggest impact comes from how well the IT systems themselves use power, according to Koomey. Keeping that in mind can help change the criteria datacenter managers use to decide when to upgrade IT hardware to get more efficient systems, where to locate datacenters to get the biggest benefit from environmental cooling, and the policies under which the IT systems themselves are run.
Keeping all the servers in a datacenter running at full power 24/7, even though they’re only under heavy processing load less than half that time, wastes a huge amount of power. Changing that policy and using software that would automatically put those systems on standby, or just run them at a lower power level during periods of low demand, seems too obvious to have required investigation. Indeed, the paper by Koomey et al. just reinforces an obvious point: energy conservation isn’t exactly quantum physics.
Image: Eric Masanet, Arman Shehabi, and Jonathan Koomey