Why Would Anyone Use an ARM Processor in a Datacenter?

At the heart of the Innovator’s Dilemma is the concept that smart people can be blind to new opportunity because they have invested themselves in a current business model. New innovation infiltrates an existing business model from the bottom up, effectively addressing an emerging market need with a different set of economics than existing solutions are capable of providing.

The data center processor market has been growing exponentially since the Bronze Age of computing, before solid-state transistors. With the exception of that first wave of “low cost” minicomputers expanding the market hand-in-hand with mainframes (and eventually capping the market for those mainframes), each successive wave has displaced the wave before it with exponentially higher volume.

This will happen again. Why? Big data, and its respective workloads. And it’s a wave that the incumbent vendors are ignoring, to the benefit of smaller, emerging vendors like Calxeda.

[Figure: big-picture view of data center processor shipments over time]

Let’s look at how the transitions between processor generations played out.

The mainframe folks scoffed at minicomputers: As it turned out, that worked out pretty well for a while. Minicomputers did not displace mainframes as much as they expanded the initial market for room-sized enterprise computing by reducing the size of the room needed (and the budget needed to acquire and operate). However, as the market expanded many smaller potential customers for mainframes opted to buy minicomputers instead, which capped the mainframe market and forced fierce competition and consolidation. In North America the victor was IBM… what happened to Burroughs, Sperry, Amdahl, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA? Some of them survived, but those that did do not build mainframes any more.

The minicomputer folks scoffed at RISC towers running an upstart OS named “Unix.” This did not work out very well, as the minicomputer folks became caught in a vise: the mainframe folks maintained their position at the top of the market while the upstart Unix+RISC folks eviscerated the minicomputer vendors from below. IBM vacuumed up the pieces of this market as it consolidated. The victims included DEC, Control Data Corp., Data General, Honeywell-Bull (as they merged and down-shifted to escape IBM’s consolidation of the mainframe market), Prime, and Wang.

The Unix+RISC folks scoffed at x86+Windows: That didn’t work out for the same reason that the minicomputer folks were overwhelmed: the high end was occupied by consolidated mainframe and minicomputer vendors, and so there was nowhere left to retreat.

What was different at the beginning of this shift was how x86 servers started out by servicing a new market for low cost network routers, print servers and file servers that was created by an exploding PC market that used the same processors. RISC towers were simply too expensive for increasingly smaller companies and workgroups to deploy in support of those new PCs.

I’ll call HP the winner here, as it successfully made the transition into the volume x86 market by virtue of being the only one of the RISC generation vendors that simultaneously entered the PC client market, and so had hybrid DNA. x86 processors introduced a new variable – commodity manufacturing – into the datacenter infrastructure equation.

And now the Wintel server folks are scoffing at the potential for ARM processors combined with open source software stacks. How is this going to turn out any differently for the x86 server vendors? Whether it’s ARM or some other emergent processor architecture is irrelevant; it will eventually happen. However, ARM looks like it’s in the right place at the right time. There are strong, lingering echoes from the late 1980s and early 1990s of servers following commodity client processor investment and economics.

History has not been kind to non-IBM incumbent server vendors; Dell and HP are the current non-IBM incumbents. Although HP dodged a bullet once, it will be difficult to repeat that performance, as HP had to buy Compaq (which had previously bought DEC) to survive the first transition.

Let’s look at the generational shifts a little more closely…

Here I will use the phrase “generational shift” in a business context for IT. My generational transitions are not between adjacent processor waves; they skip a wave and describe the crossover between a fading expensive legacy technology and an ascending inexpensive disruptive new technology: minicomputers to x86, and RISC to small-core fabrics.

Using these second-order generational shifts, I see three distinct phases for IT as a business tool:

Internal: Reduce the friction of doing business by making repeatable processes more…repeatable. Implement IT to reduce error rates, increase speed and operational efficiency, and be as productive as possible as a stand-alone business. This is similar to the way PCs initially addressed personal productivity.

External: Generate revenue by using IT as a marketing and sales tool. External is additive to Internal; it does not replace it, but Internal is no longer a high-growth sector. Collaborate more effectively across a product or service value chain. IT becomes a tool that spans ecosystems. This is similar to how the World Wide Web and consumer services such as email and online retail enabled people to start collaborating and integrating into larger affinity groups.

Big Data: Context becomes a fungible commodity; real-time metadata describing the behavior of people, goods, services, and their environments becomes a service in its own right. Big Data is additive to Internal and External, but neither of those earlier phases remains a high-growth sector. We are seeing the beginnings of this in our personal lives with social media and location-based services.

The next wave of processor and datacenter innovation will follow the last wave.

We are in the middle of the second shift. Quarter-to-quarter, Unix+RISC is losing significant ground to grown-up x86 “enterprise-class” processors. I’d argue that x86 processors are being used for initial Big Data build-out simply because, for the first time in the history of datacenter processors, there are no viable architectural alternatives. And, to add insult to injury, the server processor market has collapsed onto one vendor: Intel. The x86 wave had two competitors, but the other is rapidly becoming irrelevant.

The ante for any emerging competitor is a 64-bit processor roadmap with acceptable hardware thread performance for specific Big Data workloads. The fact that these products don’t yet exist is creating a condition marketers call “pent-up demand.” Around the edges of the server market, very large customers building Big Data-focused mega-datacenters are investing in experiments with alternate instruction sets and fabric-based processor architectures. Calxeda’s recent $55 million funding round caps a significant amount of recent visible investment, which is only the tip of a huge investment iceberg.

Gartner says that “low-power server” sales (their code for small-core servers) will be minuscule. Smart people can plan around what they can see… it’s what they can’t see that will hurt them. And if you’re not looking for a transition, you won’t see it until it’s too late.
