When is a custom, purpose-built product better than a general-purpose part within a datacenter? Naturally, this isn’t just a question for the enterprise. But in the datacenter, the question can put millions of dollars on the line. An industry-standard server might just do the job, but will it be more efficient than a purpose-built product? Lower power? Higher performance? Let’s look at some of the thinking.

First, let’s take a moment to define the problem. The dictionary defines “purpose-built” as “designed and constructed to serve a particular purpose.” Or try “bespoke”: “built to a customer’s specifications.” “General purpose,” by contrast, usually means “designed for a range of uses.”

Let’s look at this from an engineering perspective. According to the ever-popular Wikipedia, interchangeable parts are components (presumably of a larger system) that are, for practical purposes, identical: “They are made to specifications that ensure that they are so nearly identical that they will fit into any assembly of the same type. One such part can freely replace another, without any custom fitting (such as filing). This interchangeability allows easy assembly of new devices, and easier repair of existing devices, while minimizing both the time and skill required of the person doing the assembly or repair.”

Also according to Wikipedia, the custom-fit concept “can be understood as the offering of one-of-a-kind products that, due to their intrinsic characteristics and use, can be totally adapted to geometric characteristics in order to meet the user requirements.” In this concept, the “industry moves from a resource-based manufacturing system to a knowledge-based manufacturing system and from mass production to individual production.”

The weird thing about the custom-fit definition is that, before the Industrial Revolution, everything was individually produced. If we are headed back into a custom-fit world, we are returning to that world, not creating a new one. In that old world, a crafts- or trades-person built up a knowledge base over an apprenticeship and a lifetime of experience. After the Industrial Revolution, a designer or design team had to build that knowledge base. But what happens in a post-industrial world? Do we really go back to individually produced, one-of-a-kind products?

Mass customization is the “use of flexible computer-aided manufacturing systems to produce custom output. Those systems combine the low unit costs of mass production processes with the flexibility of individual customization.” (Wikipedia, again.)

Based on the concept of mass customization, here are my definitions for the datacenter market:

General purpose: A product built to a broad set of shared specifications, capable of being interchanged with any other product built to the same, shared set of specifications. These products typically differentiate from each other on cost and performance, since their functionality is defined by the shared specification. As a shared specification matures, vendors find it increasingly difficult to differentiate on performance, and cost becomes the primary differentiator: the product category becomes a commodity, and customer switching costs decrease over time.

Purpose-built: An anachronism that sounds great in marketing literature.
In today’s product design and manufacturing context, the reuse of components reigns; simply rebalancing a system’s attributes using the same components as a general-purpose system, but in different proportions, does not count as “purpose-built.” HP is the current market leader in co-opting the term for the hyperscale datacenter infrastructure market; see HP’s ProLiant SL4500 press release. TI is not far off the mark with its “Keystone” chips, but there is a better term to describe them:

Workload-customized: A product built mostly or entirely from pre-existing components, where the performance of each component is optimized and the overall mix of components is tuned to meet the requirements of a single workload or a narrow range of related workloads. (A toy sketch below makes the idea concrete.)

Why would we go through the expense of workload-customizing datacenter infrastructure? It depends on what kind of datacenter you own. Take a look at these categories:

Enterprise General Purpose: A typical enterprise IT shop. In the Bronze Age of computing, IT departments customized servers for specific workloads, but a modern consolidated and virtualized IT datacenter is a homogeneous infrastructure of interchangeable x86-based servers. Any workload may be scheduled on any server node at any time. Infrastructure evolution is glacial because new infrastructure purchases must be assured of complete compatibility with the installed infrastructure.

Cloud General Purpose: An outsourced version of a homogeneous enterprise datacenter. The design goals are scale and flexibility, with the ability to migrate runtime images between any server nodes as needed. These datacenters tend to be bigger than enterprise datacenters and so operate at more efficient economies of scale. Infrastructure evolution is slow because new infrastructure purchases must retain some level of compatibility with installed infrastructure, and those requirements are exposed to external customers (who are often moving out of the enterprise general purpose category).

Hyperscale services: Operators run a specific set of workloads at scale. They know what the workloads are, they can plan for those workloads in advance, and because each workload operates at scale they can afford to optimize their infrastructure purchases for each workload. Customers in this category build globe-spanning datacenter infrastructure to serve search, mapping, advertising, social media, machine-to-machine (M2M), and other services that are coded and controlled by the service provider.

Now I can address my question: where does purpose-built infrastructure play a role in datacenters? Workload-customized infrastructure is a shift in how hyperscale services datacenters are planned and deployed.

Dean Nelson, formerly of Sun Microsystems, offered this interesting analogy a few years ago: if you lift the roof off a modern hyperscale services datacenter, its layout looks like a microprocessor design. Different sections of the hyperscale datacenter perform different functions, and each section translates to racks of servers and rows of racks dedicated to a single function, much as there are different functional blocks on a microprocessor chip.

The tension in a hyperscale datacenter is between flexibility and efficiency. Flexibility is the ability to share infrastructure as much as possible across workloads to maximize efficiencies of scale: buying power, equipment experience, and so on.
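Before weighing that tension, it helps to make “workload-customized” concrete. The sketch below is a toy: the parts catalog, capacities, unit costs, and the two workload profiles are all invented for illustration. The point is only the shape of the idea: the parts stay standard and interchangeable, and only the per-node mix is tuned to a single workload.

```python
# Toy illustration only: the parts catalog, capacities, unit costs, and
# workload profiles are hypothetical. The components are unchanged,
# interchangeable parts; only the per-node mix is tuned per workload.

from math import ceil

PARTS = {
    # capacity = abstract units of work per second one part can sustain
    "cpu_core": {"capacity": 1_000, "unit_cost": 40},
    "dram_gb":  {"capacity": 2_000, "unit_cost": 8},
    "disk":     {"capacity": 150,   "unit_cost": 60},
}

def node_mix(workload, target_rps):
    """Smallest count of each component that meets the workload's demand
    (units of work per request) at the target per-node request rate."""
    mix = {part: ceil(per_request * target_rps / PARTS[part]["capacity"])
           for part, per_request in workload.items()}
    cost = sum(PARTS[p]["unit_cost"] * n for p, n in mix.items())
    return mix, cost

# Two hypothetical workloads with very different resource profiles.
cold_storage = {"cpu_core": 0.05, "dram_gb": 0.10, "disk": 2.00}  # I/O-bound
ad_serving   = {"cpu_core": 0.80, "dram_gb": 1.50, "disk": 0.05}  # compute-bound

for name, workload in (("cold storage", cold_storage), ("ad serving", ad_serving)):
    mix, cost = node_mix(workload, target_rps=5_000)
    print(f"{name:12s} -> {mix}  (~${cost}/node)")
```

Run it and the two workloads land on very different nodes built from the same catalog: one storage-dense with a token amount of compute, the other heavy on cores and memory with almost no spindles. That difference in mix, multiplied across rows of racks, is what the rest of the argument turns on.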
In many respects, general-purpose infrastructure still plays a role in hyperscale datacenters. Efficiency, however, depends on optimizing infrastructure performance characteristics for specific workloads to reduce operational costs, and that leans on workload-customized infrastructure. If it seems like a step backwards to customize servers for a specific application, it is. But now an application is a workload deployed at massive scale, where small increments in efficiency can outweigh the potential cost savings of deploying a more general-purpose infrastructure.

We’ve seen vendors “cloud-wash” the datacenter market in recent years, and they are starting down the same path with Big Data. Remember that, like virtualization, Hadoop and MapReduce frameworks and NoSQL databases are not workloads; they are horizontal software infrastructure layers. Performance on those same layers will vary dramatically among different workloads.

There’s a need for a new set of benchmarks focused on big data infrastructure, and for hyperscale datacenter infrastructure more broadly. Without those benchmarks, any claims of “purpose-built” or even “workload-customized” infrastructure are difficult to justify.
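Difficult to justify, because the justification is ultimately arithmetic. Here is a deliberately oversimplified back-of-envelope sketch; every number in it is hypothetical, chosen only to show how a modest per-node efficiency edge at fleet scale can beat a lower unit price on general-purpose gear:

```python
# Back-of-envelope only: every figure below is hypothetical, chosen to show
# the shape of the trade-off, not to describe any real product or datacenter.

FLEET_RPS = 50_000_000   # total request rate the service must sustain
YEARS = 3                # depreciation window
KWH_PRICE = 0.07         # electricity price in $/kWh (hypothetical)
HOURS = 24 * 365 * YEARS

def fleet_cost(name, capex_per_node, rps_per_node, watts_per_node):
    """Capex plus power cost for a fleet sized to the target request rate."""
    nodes = -(-FLEET_RPS // rps_per_node)   # ceiling division
    capex = nodes * capex_per_node
    power = nodes * watts_per_node / 1000 * HOURS * KWH_PRICE
    print(f"{name:20s} {nodes:6d} nodes  capex ${capex/1e6:5.1f}M  "
          f"power ${power/1e6:5.1f}M  total ${(capex + power)/1e6:5.1f}M")
    return capex + power

# General purpose: cheaper per node, but less efficient on this one workload.
fleet_cost("general purpose", capex_per_node=4_000, rps_per_node=2_000,
           watts_per_node=350)

# Workload-customized: ~10% more per node, ~15% more requests per node,
# lower power draw for the same work (all hypothetical).
fleet_cost("workload-customized", capex_per_node=4_400, rps_per_node=2_300,
           watts_per_node=300)
```

With these made-up inputs, the workload-customized fleet comes out roughly $8 million ahead over three years despite its higher sticker price; flip the inputs and the conclusion flips with them. That is exactly why workload-level benchmarks matter: the arithmetic is only as good as the numbers that feed it.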