The open-source hardware movement behind the year-old Open Compute Project continues to gain steam, complete with new specifications and servers about to go into production.
Facebook launched the Open Compute Project in April 2011 with the intention of sharing the designs of the social networking giant’s data center in Prineville, Oregon, as well as custom designs for servers, power supplies and UPS units. Since then, the project has been growing, adding new partners and introducing new technologies designed specifically for use in webscale data centers.
New members include companies such as Hewlett-Packard, AMD, VMware, Supermicro, and Cloudscaling, among others. Customers interested in OCP aren’t limited to a handful of Web companies; they span industries including telecommunications, financial services, universities, manufacturing, and pharmaceuticals. While it’s still early, the project has gained too much traction over the past year to be called nascent, said Joseph George, Dell’s director of product strategy and marketing for cloud and big data solutions.
“We are off the ground. We have real customers, and people are interested” in the designs that have been submitted to the project, he said.
Specifications for several new technologies, such as racks, battery enclosures, motherboards, storage, and servers, are already available through Open Compute’s repository. The first version of OCP’s servers, dubbed the Freedom servers, is already in production, while the second version is expected in August, said Mark Roenigk, COO of Rackspace Hosting. OCP v3.0 servers will enter production sometime in the first quarter of 2013.
The Freedom servers, used by Facebook in its Oregon data center as well as by several other large organizations, deliver energy savings of 40 percent, Roenigk added. While OCP v2 features incremental improvements over OCP v1, it has more or less the same functionality. OCP v3.0 will differ from its predecessors, with processor and networking improvements designed specifically for Internet service provider environments.
The Open Compute v2 machines were unveiled at the third Open Compute Summit in May. The new OCP v2 servers are double-stuffed machines that fit two two-socket x86 servers, their power supplies, and fans into a 1.5U Open Compute chassis.
AMD and Intel have contributed motherboard designs used in OCP v1 and v2. The new motherboards strip out many features found in traditional motherboards to optimize power consumption and reduce costs.
Intel’s v1.0 motherboard is a dual-socket board for Intel Xeon 5500 or 5600 series processors with 18 DIMM slots. The v2.0 motherboard, dubbed “Windmill,” measures 6.5 by 20 inches and features two Intel Xeon E5-2600 processors and 16 DIMM slots. The processor family supports up to 8 cores per CPU, for up to 16 threads with Hyper-Threading technology, and up to 20MB of last-level cache.
The board also uses Intel’s next-generation platform controller hub, the C602 chipset, which supports 14 USB 2.0 ports, two individual SATA 6Gbps ports, and three mini-SAS ports. The E5-2600 processor can provide up to 40 PCIe 3.0 lanes, and the C602 PCH provides up to eight PCIe 2.0 lanes. The Windmill board will have four PCIe 3.0 slots and eight PCIe 2.0 slots.
AMD’s v1.0 motherboard had a heavier memory footprint than Intel’s v1 board: a dual-socket board for AMD Opteron 6100 Series processors with 24 DIMM slots. The v2.0 motherboard, dubbed Watermark, measures 6.6 by 20 inches and is a dual-socket AMD G34 board with 16 DIMM slots. The board can support up to 384GB per socket, but OCP v2 will top out at a maximum of 512GB of memory, using 32GB RDIMMs across the 16 slots.
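The Watermark memory ceiling follows directly from the slot count and DIMM size. A quick back-of-the-envelope check (an illustration using the article’s figures; the variable names are mine, not anything from the OCP spec):

```python
# Sanity-check the Watermark board's memory figures from the article.
DIMM_SLOTS = 16          # DIMM slots on the OCP v2 AMD board
RDIMM_SIZE_GB = 32       # largest RDIMM the OCP v2 spec uses
PER_SOCKET_MAX_GB = 384  # what each G34 socket could theoretically address
SOCKETS = 2

populated_max = DIMM_SLOTS * RDIMM_SIZE_GB     # 16 slots x 32GB = 512GB
theoretical_max = SOCKETS * PER_SOCKET_MAX_GB  # 2 sockets x 384GB = 768GB

# OCP v2 tops out at the populated maximum, below the sockets' ceiling.
print(populated_max, theoretical_max)  # 512 768
```

So the 512GB cap comes from the slot population, not from the processors, which could address 768GB between them.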
The number of PCI-Express 2.0 slots can also vary depending on the chipset used on the board: SR5650, SR5670, or SR5690. A fully loaded board with the SR5690 chipset would support one x16 slot, two x4 slots, and a dual-port Gigabit Ethernet networking card. The AMD SP5100 Southbridge chipset on the board supports two USB 2.0 ports and six SATA II ports.
OCP servers differ from “old-style” servers primarily in how they control power consumption and manage data transfers, according to Roenigk. Traditional servers tend to consume a lot of power even when idle, whereas OCP servers can power down components at rest so they draw little to no power. The hard drive spins only when it’s actually in use, and the firmware and software have several built-in controls to track when it needs to work.
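The spin-only-when-in-use behavior described above amounts to an idle-timeout policy. A minimal sketch of what such a control might look like in software (entirely illustrative; the class, method names, and 30-second timeout are my assumptions, not OCP firmware code):

```python
class SpinDownDisk:
    """Toy model of a drive that spins only when actually in use.
    Hypothetical illustration of an idle-timeout policy; not OCP code."""

    def __init__(self, idle_timeout_s=30.0):
        self.idle_timeout_s = idle_timeout_s  # assumed quiet-period threshold
        self.spinning = False
        self.last_access = 0.0

    def access(self, now):
        """An I/O request arrives: spin up if needed, record the time."""
        self.spinning = True  # real firmware would spin the platters up here
        self.last_access = now

    def tick(self, now):
        """Periodic check: spin down after a long enough quiet period."""
        if self.spinning and now - self.last_access >= self.idle_timeout_s:
            self.spinning = False  # drive stops drawing motor power

disk = SpinDownDisk(idle_timeout_s=30.0)
disk.access(now=0.0)   # request arrives: drive spins up
disk.tick(now=10.0)    # still within the idle window: keeps spinning
disk.tick(now=45.0)    # quiet for 45 seconds: drive spins down
print(disk.spinning)   # False
```

The same pattern generalizes: any component with a measurable idle state can be gated behind a timeout so it consumes power only while doing work.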
The servers have a “traffic policeman in each box driving efficiency” in how data is transferred, Roenigk said.
The Open Compute Project is already changing how companies running next-generation data centers buy hardware, Dell’s George said. Instead of the traditional scenario in which buying decisions are dictated by what original equipment manufacturers (OEMs) such as Dell, HP, and IBM are offering, open-sourcing hardware gives companies the ability to buy the exact hardware they want.
Businesses are increasingly curious about open source, and many of them are already deploying open source tools and the cloud, George said. They are increasingly looking at open source software as a viable alternative to commercial options, and that level of exploration is now moving to the infrastructure layer.
“Driving standards is what open source is about,” George added. With specifications at hand, it is possible to manufacture server and storage components that deliver consistent results regardless of who’s in charge of production. Companies that normally could not afford their own design team, or to work with original design manufacturer (ODM) providers, can take advantage of the latest in data center technology using the specifications submitted to the project.
Roenigk said there will be five to ten board designs covering all OCP servers. If necessary, a customer can also request customizations to a specification to meet specific needs. While there is no requirement to do so, the expectation is that customers will submit their modifications back to the community so that others can take advantage of them.
Roenigk said he had never seen this kind of “transformation and energy” behind open initiatives before, noting Open Compute is “moving much faster than expected.”