Data centers have a tendency to be monolithic, with most featuring racks of x86 servers, a smattering of RISC-based servers running databases and possibly a mainframe or two. But that’s changing thanks to new classes of server architectures with the potential to run enterprise workloads (including analytics) more efficiently than x86 servers while consuming less space and power.
Often referred to as microservers, these new systems, based on low-energy processors from ARM and Intel, are ideally suited to memory-intensive workloads ranging from Big Data applications to the presentation layer of large-scale Web 2.0 applications. While x86 processors will generally continue to be favored for CPU-bound transaction applications, it turns out that one size of processor need not fit all when it comes to running different types of application workloads. In fact, running certain classes of memory-bound workloads on x86 servers contributes heavily to the server-density issues that make data-center operations such an expensive proposition.
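The distinction between compute-bound and memory-bound work can be sketched with a toy example (illustrative only; the function names and workloads here are hypothetical and not drawn from any of the products discussed):

```python
def compute_bound(n):
    # Tight arithmetic loop: performance tracks core clock speed and
    # per-core throughput, the regime where conventional x86 server
    # processors have traditionally excelled.
    total = 0
    for i in range(n):
        total += i * i
    return total

def memory_bound(data):
    # Strided traversal of a large buffer: performance tracks memory
    # bandwidth and latency rather than raw clock speed, the regime
    # where many small, low-energy cores can deliver better work per
    # watt than a few large ones.
    return sum(data[i] for i in range(0, len(data), 64))
```

A workload dominated by the first pattern gains little from adding low-energy cores; one dominated by the second may serve more requests per watt on a dense microserver.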
According to Gerald Kleyn, director of hyperscale server hardware research and development for HP, Project Gemini is part of a larger effort to help IT organizations reduce costs by optimizing what workloads run where in the data center: “It’s about rightsizing the IT architecture to fit the workloads.”
Project Gemini itself is derived from HP’s Project Moonshot initiative, which aims to create an ecosystem around next-generation servers in which processors can more easily share power and cooling resources. While HP will apply many of these concepts to multiple classes of processors, the first commercial instance of that research is Project Gemini.
Dell, meanwhile, has decided to embrace ARM in its drive to develop its first low-energy servers. According to Drew Schulke, director of data center solutions at Dell, servers based on low-energy processors have a sweet spot of opportunity in the data center, especially with organizations running massive Web applications. Those applications are already segmented across multiple layers of computing, making it easier for them to identify and take advantage of different compute engines.
Low-energy processors will end up competing for their share of data-center workloads—putting pressure on IT organizations to take advantage of those capabilities. Most IT organizations have a fairly monolithic approach to managing applications. Breaking them up to optimize the performance of the different workloads that make up those applications, said Tirias Research founder Jim McGregor, will require a lot of skill that many of them don’t have today: “You’re really talking about introducing more parallelism into the environment… that provides a lot of flexibility but it comes with more complexity.”
McGregor notes that complexity is hardly limited to low-energy servers: “There will be multiple systems in the data center because there really is no such thing as one size processor fitting all.”
Despite all the recent hype over the prospect of a showdown between Intel and ARM in the data center, ARM’s market-share goals are relatively modest. Ian Ferguson, director of server systems and ecosystem for ARM, suggests that it will take a number of years yet for low-energy servers to reach even double-digit market share in the data center. “It’s not really like we’re going to be going head-to-head against Intel x86.”
That said, ARM continues to make progress in the enterprise. This week ARM announced the ARM CoreLink CCN-504 cache coherent network, capable of delivering up to one terabit per second of usable system bandwidth. ARM says this new offering will enable system-on-a-chip (SoC) designers to take advantage of a high-performance, cache coherent interconnect for ‘many-core’ systems designed using the ARM Cortex-A15 MPCore processor and next-generation 64-bit processors.
Nevertheless, Intel is already preparing its response. According to Naveen Bohra, an Intel product manager, what many people don’t appreciate about Atom is that it supports the same x86 instruction set as Intel Xeon processors, making it easier for developers to incorporate both Xeon and Atom servers into their application development strategies. Bohra says that’s a critical distinction, as few applications are entirely compute-bound or I/O-bound; most are made up of elements with different requirements: “Low-energy processors may be the new shiny thing in the data center that everyone wants to talk about, but it comes down to the data and type of workload involved.”
At the recent Intel Developer Forum (IDF), Intel said it would deliver two Atom platforms over the next year and slightly beyond: Medfield, built around the Intel Atom Z2460 processor running at up to 2.0 GHz, and Clover Trail+, built around the dual-core Intel Atom Z2580 processor, which promises twice the CPU and graphics performance. Those will be followed by a next-generation version of the processor based on a 22nm microarchitecture.
At this juncture, it’s pretty clear that the server environment is about to become a lot more diverse in terms of the number and classes of processors running it. That will present some significant challenges for IT operations and developers alike. But it also means we can look forward to a much richer application environment in which processors in the data center are finely tuned to run the class of application workload actually deployed on them.
Image: Robert Lucian Crusitu/Shutterstock.com