Despite its late start in the enterprise, Linux has made tremendous headway in the data center over the last decade. It took some initial share in high performance computing (HPC), driven by low acquisition costs and cheap labor; the software was effectively free, and most of the initial HPC customers had plenty of “free” graduate students on hand to port the OS while they built custom applications.
So what does the future hold for enterprise data center Linux? Simply this: Linux has no chance of gaining significantly more share within enterprise data centers. Not now, not ever.
A Look Back
To explain why, it will first help to look back at how Linux rose to prominence. After the PC, Windows rode the first wave of cheap print, file and network servers. Windows NT 4.0 started Microsoft’s march in 1996 by eating into Novell’s enterprise share, which had peaked at about 90 percent in the early 1990s, and Microsoft moved on to solidify its enterprise-quality OS releases and applications such as Exchange and SQL Server.
The real turning point for x86 servers was AMD’s 2003 introduction of 64-bit x86 processors, which forced Intel to quickly follow suit. After that, x86 processors were enterprise-class in every respect, and enterprise data centers converted en masse. Unix application vendors saw Linux as a least-effort port from proprietary Unix flavors running on proprietary RISC processors, such as Sun’s SPARC, to a much larger and homogeneous base of inexpensive, standardized x86 hardware from Intel and AMD.
Since their new Linux ports had to go through quality assurance (QA) testing anyway, vendors targeted the x86 architecture at the same time, leveraging a single QA effort to address the new volume hardware base.
The x86-based server rollout of the last decade has had quite an impact on server operating systems:
- Windows is now deployed on 75 percent of all new servers shipped, according to IDC’s May 2012 forecast.
- Linux is now deployed on 25 percent of all new servers shipped, and is also stuck there. Some distributions, in particular Red Hat’s Enterprise Linux (RHEL), are certified as enterprise-class, run the same kinds of workloads as Windows, and are valued identically. Red Hat has done well, but every additional percentage point of market share must now be taken from Microsoft.
- UNIX has an insignificant share of x86 server shipments today: under 1 percent.
- There are no other practical OS choices in the x86 server market, and non-x86 servers now account for just a scant few percent of server boxes shipped. Systems from IBM and Unisys are low volume but vertically integrated, and their revenues include hardware, operating systems, development tools, applications, and consulting services.
The Linux portion of public cloud installed infrastructure is today mostly a collection of proprietary and non-public modifications to public distributions, which means that current Linux enterprise distributions (like RHEL) are not selling much into this market.
At this point, it appears the industry is at a stalemate, with Microsoft holding the majority of the market share. Or is it? Does the cloud make a difference?
First, let’s talk about public clouds: those vast, hugely, mind-bogglingly big clouds. They use Linux, right? In reality, really large-scale public cloud leaders like Amazon, Rackspace and new entrant Hewlett-Packard don’t much care which OS they use, as long as they have access to the source code. Operating football-field-sized warehouses full of cloud infrastructure in a competitive market requires source-code access to incrementally squeeze out more OS efficiency over time. In the public cloud, scale matters because economies of scale lower the incremental cost of transactions for customers. Efficiency is therefore a competitive advantage that can be used to lower prices or increase profits.
There are two approaches to supplying a competitively efficient OS for a public cloud:
- Grab a Linux distribution and make it yours over time. No data center operator can know whether they’re more efficient than the next guy unless both publish common efficiency metrics, like those defined by The Green Grid. You own the illusion of control, and therefore the illusion of competitiveness.
- Operate a Microsoft Windows Azure datacenter and leverage efficiencies that Microsoft gleans from, and supplies back to, all of its Azure datacenters. Azure is a level playing field: you know you’re getting what everyone else is getting, but you will (presumably) see improvement over time as Microsoft incorporates lessons from a wide variety of workloads. It’s not a differentiating OS, but it’s hopefully not a debilitating one either.
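For concreteness, the best-known Green Grid efficiency metric is Power Usage Effectiveness (PUE): total facility power divided by the power delivered to IT equipment, where 1.0 is the theoretical ideal. The sketch below uses illustrative numbers, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power.

    1.0 is the theoretical ideal; hyperscale cloud operators report
    figures not far above it, while typical enterprise data centers
    run considerably higher once cooling, lighting, and power
    distribution losses are counted.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,500 kW drawn overall, of which 1,000 kW
# reaches servers, storage, and networking gear; the remaining
# 500 kW is overhead (cooling, lighting, distribution losses).
print(pue(1500.0, 1000.0))  # 1.5
```

Publishing a number like this is what would let two operators actually compare efficiency; absent that, each only has the internal illusion of being ahead.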
If not public clouds, how about private ones?
I do not understand why many enterprises would ever want to host a private cloud over the long term. Workload consolidation is pretty much over. The ability to run a workload on any available server is nice, but why not outsource that capability to someone who’s really good at it?
The primary objection to a public cloud implementation is security, prompting many analysts to recommend private clouds instead. But most companies can’t afford the expertise or systems needed to reliably implement a competitive level of modern security in-house. The Stratfor hack was a clear demonstration that companies that have taken “reasonable” precautions can nonetheless be penetrated at will.
CIOs are learning to serve their business by outsourcing functions that do not provide a clear, compelling business advantage to their company. Eventually, the common enterprise applications every organization needs in order to be in business will be web-based, standardized and lightly customizable. Likewise, end-to-end security will be good enough and will be backed up by sound disaster recovery planning. I believe that private clouds will be short-term experiments by CIOs who are mired in the last 60 years of datacenter ownership dogma.
I can’t imagine a start-up today buying their own servers (other than hosting companies and server infrastructure designers) except on the bad advice of local VARs and system integrators (SI). There are better technology solutions and business models for VARs and SIs to offer small businesses than a tower or half rack on-premise; those that continue to offer decades-old vertical solutions will quickly find themselves displaced by more nimble competitors. I also see many established companies moving their non-differentiated IT systems to web-based services. The only credible reason to keep an in-house datacenter is to work on problems so large and competitively sensitive that you must make a huge investment even if you outsource.
The best route for private cloud is to just say “no.” An in-house datacenter sounds like a good idea: your IT department gets to stay in control. But operating your own datacenter as an on-demand service at a competitive cost structure is a business model unto itself. If it is not your business model (if you are a dentist, a semiconductor manufacturer, a consumer goods retail chain, and so on), why would you not outsource most or all of your IT? I believe that private cloud is mostly hype and will not have much enterprise traction in the second half of this decade. Linux vs. Microsoft is an afterthought.
Private cloud also has a measurement challenge, in that many enterprises want their in-house IT to look competitive, so what they self-report as a private cloud may simply be a rack of virtualized servers. It is very difficult to sort out which levels of private cloud services have been deployed with good reliability.
In short, I don’t believe Linux will ever gain significantly more share of enterprise datacenters; it’s at a market standoff with Windows, at least until advances in cloud services make owning your own enterprise datacenter obsolete.
Image: R. Grotheus/Shutterstock.com