Despite a growing emphasis on the cloud as a way to keep companies from being slaves to their own datacenters—or perhaps because of it—companies that sell services available only through the cloud are turning datacenter construction and management into an art.
Google receives much attention for the giant scale of its operation; Microsoft gets a lot of press for its rapidly growing datacenter network. And Facebook boasts efficient, sophisticated datacenters: its just-opened facility in Luleå, Sweden runs entirely on renewable hydroelectric power, uses Arctic outside air to cool the servers, and uses waste heat from the servers to warm the offices.
That’s not Facebook’s only hat-tip to energy efficiency: the social network posts near-real-time estimates of the energy use and efficiency of its datacenters in North Carolina and Oregon (albeit via dashboards accessible through each datacenter’s individual Facebook page).
Facebook is also building a 62,000-square-foot facility near its Oregon datacenter for a special low-power, large-scale data-archive system, the better to house some of the 180 petabytes of data it generates each year. The archive uses hard drives rather than tape, which is too slow.
It is one thing to build ginormous, energy-efficient datacenters. It’s quite another to design, build and operate all your own servers to fill those datacenters, and to do it well enough to make giant systems vendors such as Hewlett-Packard and Cisco worry about how their business might change now that Facebook has contributed designs for its “vanity free” rack-mounted datacenter servers to the Open Compute Project (OCP). Facebook engineers founded OCP in 2011 to reduce the wasted effort and resources of do-it-yourself efforts by big IT vendors and cloud players.
OCP grew out of Facebook’s effort to reinvent its own IT infrastructure by designing every aspect of its systems and datacenters to save money, improve performance and keep the network running. Along the way, Facebook has released detailed information on the design of its servers and architectural guides to its evolving datacenter plan. It has also released detailed descriptions of both the database structure and the storage-hardware configuration it uses to keep from drowning in its own data.
Aside from bringing the Luleå facility online, Facebook’s latest advance is a set of homegrown links that integrate the monitoring data and control functions of commercial building-management software with its homegrown server-management tools, according to a GigaOm interview with Tom Furlong, Facebook’s vice president of site operations.
The combined systems- and datacenter-infrastructure-management (DCIM) system can monitor outside temperature and humidity, power consumption, and data volumes both in storage and flowing through networks, and can track utilization rates for server CPUs and memory.
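Facebook has not published the code behind these links, but the basic idea of joining facility-side readings from building-management software with server-side metrics can be sketched as follows. The record types, field names and rack IDs here are illustrative assumptions, not Facebook’s actual schema; real DCIM integrations would pull these feeds from vendor APIs.

```python
from dataclasses import dataclass

# Hypothetical records standing in for the two data feeds:
# facility metrics from building-management software, and
# utilization metrics from server-management tools.

@dataclass
class FacilityReading:
    rack_id: str
    outside_temp_c: float
    humidity_pct: float
    power_kw: float

@dataclass
class ServerReading:
    rack_id: str
    cpu_util_pct: float
    mem_util_pct: float

def combined_view(facility, servers):
    """Join the two feeds on rack ID into one unified picture,
    the way a combined DCIM view correlates power draw with load."""
    by_rack = {f.rack_id: f for f in facility}
    view = []
    for s in servers:
        f = by_rack.get(s.rack_id)
        if f is None:
            continue  # server feed has no matching facility record
        view.append({
            "rack": s.rack_id,
            "power_kw": f.power_kw,
            "cpu_util_pct": s.cpu_util_pct,
            "mem_util_pct": s.mem_util_pct,
            # Derived signal: compute delivered per kilowatt drawn.
            "cpu_per_kw": s.cpu_util_pct / f.power_kw if f.power_kw else 0.0,
        })
    return view

facility = [FacilityReading("r1", 4.5, 38.0, 7.2)]
servers = [ServerReading("r1", 61.0, 72.5)]
print(combined_view(facility, servers))
```

The join on a shared rack identifier is the key design point: once facility and server feeds share a key, operators can ask questions neither system answers alone, such as which racks burn the most power per unit of useful work.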
Besides giving network-operations staffers a better view of what’s going on in Facebook’s infrastructure, the combined management suite gives them a clearer picture of the physical layout of the datacenter and its hardware, Furlong said. The layout plan is complete enough that, when Facebook engineers need to reposition hardware racks to improve cooling or make space, finding a workable solution takes only half an hour or so, rather than 12 hours of messing around with drawings and other, older planning tools.
The company will roll out the DCIM software throughout the year and add a cluster-planning system that will allow better visualization of data: where it lives, where it’s going and how to get it there.
Furlong told GigaOm that Facebook may not release the new multifunctional management systems in the same way it did the designs for systems, storage and databases, because the company’s engineers rely on those tools for sensitive internal work.
Like other custom DCIM systems built by Microsoft and other hyperscale datacenter owners, Facebook’s system is neither a threat to commercial DCIM vendors nor a promising source of DCIM standards, according to DataCenter Dynamics.