Energy Dept. Datacenter Investigates Fuel Cells, Low-PUE Design

The U.S. Department of Energy (DOE) has combined two items high on this year's enterprise IT priority list: datacenters and energy-saving alternative power sources.

The DOE has built a datacenter at its National Renewable Energy Laboratory (NREL) in Golden, Colorado, designed specifically to investigate the use and development of hydrogen fuel cells. The combination is not intended to accelerate development of fuel cells for use in datacenters; instead, the datacenter will serve as data collector and cruncher for the DOE's National Fuel Cell Technology Evaluation Center, which collects data from lab research and early-stage pilot projects.

That body of data, once collected, scrubbed and cataloged, will be available through the new datacenter at the National Fuel Cell Technology Evaluation Center, to be used by commercial, academic and governmental organizations.

Since 2004, the National Renewable Energy Laboratory has collected more than 2 million hours of operational data on fuel-cell development and performance. The center’s Renewable Resource Data Center (RReDC) – the informational repository, not the computing facility – provides data, maps, and previous research results on power generation using biomass, geothermal, solar and wind power as well as fuel cells.

It also provides analytics tools such as PVWatts, which estimates the energy production and cost impact of grid-connected solar-power systems anywhere in the world. In addition, it offers the Simple Model of Atmospheric Radiative Transfer of Sunshine (SMARTS), a tool that predicts how much sunlight will pass through the Earth's atmosphere under given weather conditions, helping researchers estimate solar generating capacity at a particular time and place.

NREL's facility also includes a high-performance computing (HPC) datacenter that cools its high-end hardware with warm water rather than cold, reducing both the impact of heat and the cost of counteracting it. “We wanted an energy-efficient HPC system appropriate for our workload. This is being supplied by HP and Intel,” said Steve Hammond, director of NREL's Computational Science Center, in a March preview of the HPC center, then under construction. “A new component-level liquid cooling system, developed by HP, will be used to keep computer components within safe operating range, reducing the number of fans in the backs of the racks.”

The HPC facility was designed by datacenter-design firms SmithGroupJJR and the Integral Group, which run higher-voltage electricity directly to the server racks rather than stepping it down to 208-volt power. That reduces the hardware required to manage and step down power, as well as the power wasted as overhead in each of the multiple power-management stages.
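The benefit of a shorter conversion chain can be sketched with a few lines of arithmetic: each stage's efficiency multiplies into the total, so every stage removed leaves more power at the rack. The per-stage efficiencies below are illustrative assumptions, not NREL figures.

```python
# Illustrative sketch: losses compound across power-conversion stages.
# The per-stage efficiencies below are assumed values, not NREL figures.
from math import prod

def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives every conversion stage."""
    return prod(stage_efficiencies)

# Conventional chain: UPS, then step-down transformer/PDU, then server PSU.
conventional = delivered_fraction([0.94, 0.97, 0.90])

# Shorter chain when higher-voltage power runs straight to the racks.
direct = delivered_fraction([0.97, 0.94])

print(f"conventional: {conventional:.3f}, direct-to-rack: {direct:.3f}")
```

Even with generous assumptions for each stage, dropping a single conversion step raises the fraction of power that actually reaches the servers.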

Using water rather than air means moving the cooling medium with energy-efficient (and quiet) pumps rather than noisy fans. The HPC servers rely on Intel Xeon Phi coprocessors, which increase efficiency by offloading graphical or complex-data calculations from the CPUs. Xeon Phi is a favorite coprocessor among supercomputer manufacturers; most modern supercomputers pair a primary CPU with coprocessors so that calculations can be routed to the most efficient number-crunching platform.

The second phase of construction, which took place during the summer of 2013, expanded the facility's power capacity to 1 megawatt, a level at which it is more efficient to offload heat directly to liquid than to try to dissipate it through air first, Hammond said. Water in the cooling system circulates through the servers at about 75 degrees Fahrenheit, far higher than the roughly 50 degrees typical of air-cooling systems; by the time the coolant leaves the datacenter, the water is around 100 degrees.
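Those temperatures also determine how much heat the loop carries away, via the standard sensible-heat relation Q = m·c·ΔT. The sketch below uses the article's inlet and outlet temperatures, but the flow rate is a purely hypothetical example value, not an NREL figure.

```python
# Sketch of sensible heat carried by a water loop: Q = m_dot * c_p * dT.
# Inlet/outlet temperatures are from the article; the flow rate is a
# hypothetical example value, not an NREL figure.

def waste_heat_kw(flow_lpm: float, t_in_f: float, t_out_f: float) -> float:
    """Heat picked up by water between inlet and outlet, in kilowatts."""
    c_p = 4.186                          # kJ/(kg*K), specific heat of water
    kg_per_s = flow_lpm / 60.0           # ~1 kg per litre of water
    dt_k = (t_out_f - t_in_f) * 5.0 / 9.0  # Fahrenheit delta -> kelvin
    return kg_per_s * c_p * dt_k

# 75 F in, 100 F out, at an assumed 500 litres per minute:
print(round(waste_heat_kw(500.0, 75.0, 100.0), 1))
```

The point of the exercise is that a warm outlet makes the recovered heat useful: the same energy spread over a small temperature rise would be much harder to reuse for building heating.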

Heated water also makes it easier to capture waste heat that would ordinarily be lost, which could save the datacenter $200,000 per year in heating costs for the building, on top of the $800,000 it is already saving through super-efficient power systems. “Compared to a typical data center, we may save $800,000 of operating expenses per year,” Hammond added. Including the savings from recovering waste heat, “we are looking at saving almost $1 million per year in operation costs for a data center that cost less to build than a typical data center.”

Driving down the facility's Power Usage Effectiveness (PUE) rating, the ratio of total facility power to the power consumed by the IT equipment itself, is part of the lab's mission, but the systems NREL is building should be easily applicable to other datacenters. A PUE rating of 1 is perfect; NREL scores 1.06 on average.
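The PUE calculation itself is a simple ratio, which a minimal sketch makes concrete; the load figures below are illustrative, not NREL measurements.

```python
# Minimal sketch of the PUE calculation; load figures are illustrative.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the ideal score: every watt entering the building
    reaches the IT equipment, with nothing lost to cooling,
    lighting or power conversion.
    """
    return total_facility_kw / it_equipment_kw

# An IT load of 1,000 kW in a facility drawing 1,060 kW overall:
print(round(pue(1060.0, 1000.0), 2))  # 1.06
```

At a 1.06 average, only six percent of the facility's power goes to anything other than the computing hardware; conventional datacenters of this era commonly scored well above that.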

“One of the things we are mindful of is that while our systems are becoming denser in terms of footprint, they are becoming more power efficient,” said Stephen Wheat, general manager of HPC at Intel, who helped coordinate construction and systems integration. “NREL is the premier place to demonstrate a means to continue the growth of HPC capability in an environmentally friendly way.”

The result should be a datacenter that goes easy on the power, but is able to run models and simulations of a variety of advanced power systems, including those that depend on new or advanced specialized materials. It will also allow researchers to examine the demand-response dynamic for specific locations and conditions to predict accurately how high the power demand will go when a summer day tops 95 degrees, for example.

NREL researchers are also investigating the potential of integrating datacenter environmental systems with building automation systems to maximize power savings by allowing building managers to access and adjust datacenter power use the same way they do HVAC, lighting and other systems in the rest of the building.

“There is a lot of interest in looking at how to integrate the HPC system in the building automation system as part of the energy systems integration work that we’re doing,” Hammond said.

Image: Dept. of Energy