Owning or leasing onsite compute infrastructure is a given for most businesses, or at least for those that can afford it. Most of us assume that will be true forever, but it won’t: we are passing through a phase transition in the industry, and what emerges on the other side could be quite different. Why? Because we’ve already learned this lesson with a utility every data center operator knows intimately: electricity.

The Lesson the Electrical Utility Taught Us

In the 1880s, factories in industrialized economies began installing steam-driven electrical generators to eliminate the mechanical pulleys and linkages driven by steam-powered central power shafts. In the process they improved reliability and worker safety, all while reclaiming a lot of physical space. By the beginning of the 20th century, locally generated electricity was commonplace in factories. But an opposing trend soon developed: continuing innovation and invention enabled the creation of centralized electrical power generation stations. The 1900 US census recorded 50,000 private electrical plants versus 3,600 central generation stations. Yet by 1907, centralized utilities accounted for 40 percent of total U.S. electricity production. By 1920 the number was 70 percent, and by 1930, during the Great Depression no less, it reached 80 percent.

Excluding the long tail of lagging-edge adopters (mostly rural factories), the conversion from local generation to buying power as a utility took roughly 50 years from the first practical implementation. Why did private electrical plants catch on so quickly, while centrally generated electricity took half a century to convert the market? The answer is switching costs. Local generation had low switching costs because it was so similar to steam-generated mechanical power. The barrier to utility power was typically breached only when the cost of maintaining aging private infrastructure reached the point where manufacturers had to weigh upgrading their private plant against signing up for utility service.

Making the switch to power as a utility eliminated decades-old local jobs and shifted them to fewer workers at a central facility. It also transferred a huge share of responsibility for production uptime to an external entity. To compensate, new insurance policies were written, and smaller backup generators were invented to bridge outages so that some subset of critical production could continue until external power was restored.

The Utility Model and the Public Cloud

I often hear people complain that the cloud is “not ready for prime time,” “not optimal for my application,” and so on. But remember that we are only about 10 years into the cloud transition, counting from the launch of Amazon Web Services in July 2002. In-house IT faces the same decision the factory manager did a century ago, with a twist: when a CEO asks the CIO whether or not to move to the cloud, the CIO is already invested in running in-house IT. It’s more than a small conflict of interest. Despite that, here’s how the switching barriers are being lowered:
  • CEOs are converting their CIOs from technology suppliers into business partners. CIOs are now full members of the C-suite and are increasingly compensated on overall company performance.
  • The Innovator’s Dilemma. New, disruptive products are providing lower-cost alternatives, and as volume shifts toward them, existing providers are forced to raise the cost of upgrading older infrastructure. So as the cost of replacing legacy equipment rises and the cost of cloud services declines, while cloud reliability and security improve, the decision to switch becomes easier.
What do I mean by “disruptive”?
  • Chip-level compute capability is growing at an absurd rate. It used to be that an 8P server (8 sockets) had only 8 cores running only 8 threads. In the seven years since dual-core x86 server processors hit the market, core counts have mushroomed to the point where a 2P server (2 sockets) can host 24 or more cores running perhaps 48 threads. Chip vendors are already considering physical core counts in excess of 32 per chip…and the ride won’t stop there (see the back-of-envelope sketch after this list).
  • In-rack fabric topologies will change the economics of deploying server processor sockets. Follow the money: SeaMicro, Calxeda, NextIO, and a host of smaller companies are burning through investment dollars to create new network architectures for at-scale computing. However, if you don’t need at least a few racks of these new, very densely packed architectures, you can’t benefit from them.
  • It’s cheaper to ship bits than power. Transmitting power over long distances incurs line losses: the longer the distance, the greater the loss, and as electricity costs go up, the dollars spent on lost power add up significantly. Shipping bits over fiber optics or wireless networks is much less expensive, and eco-friendly to boot.
  • The future is co-locating cloud services with power generation facilities, where power is cheapest. It’s also more than a little ironic: the really big data centers now depend on a captive power supply, much as the factories of the 1880s did.
  • Standards matter. As cloud services mature, they will, for the most part, become interchangeable. In-house datacenters have already evolved from entirely proprietary solutions (hardware and software) to standard hardware (x86 pizza boxes) running a couple of flavors of OS and infrastructure (Microsoft and Linux, trapped in a stalemate). It used to take a huge effort to port an application between proprietary systems; managed runtime environments now hide the hardware, including the underlying processor instruction set. Linux and open source runtime environments and applications are standardizing cloud software infrastructure, along with the interchangeable skill sets needed to support these new cloud services.
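To put the core-count bullet in perspective, here is a rough back-of-envelope sketch in Python. The server configurations are assumptions drawn loosely from the figures above, not measurements:

```python
# Back-of-envelope thread counts for the (assumed) server generations
# mentioned above: sockets x cores per socket x threads per core.
servers = {
    "circa-2005 8P box": (8, 1, 1),            # one single-thread core per socket
    "circa-2012 2P box": (2, 12, 2),           # 24 cores total, SMT enabled
    "hypothetical 32-core 2P box": (2, 32, 2),
}

for name, (sockets, cores, threads) in servers.items():
    total = sockets * cores * threads
    print(f"{name}: {total} hardware threads ({total // sockets} per socket)")
```

The exact numbers matter less than the ratio: well over an order of magnitude more threads per socket in under a decade, which is precisely the kind of density that only pays off when you can fill racks with it.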

A Progression of Service Offerings

Moving to the cloud is all about switching costs. Newly formed start-ups mostly start off in the cloud these days, because their trade-off is simple: buy equipment, license software, and hire IT staff or consultants to configure it all, or rent exactly the services they need at very tempting prices.

For existing companies, a hosted web presence has the lowest switching cost of any cloud service, followed by hosted email. Both are easily partitioned services with clearly defined interfaces; they don’t typically depend on other core services and can be operated remotely quite conveniently. While some companies consider these services mission critical, most have figured out that specialized providers can run them better and more economically than in-house staff. Importantly, smaller hosters can provide some economies of scale for small and medium businesses.

Business apps are the next good candidates for Software-as-a-Service (SaaS). Common examples include payroll and HR, sales and CRM, file sharing, and online backup and storage. These apps are no longer considered mission critical: if service is interrupted for brief periods, there is little economic impact. Often they are written for specific vertical markets, such as doctors, dentists, lawyers, and auto repair shops, and they scale very well for small business customers at hosting facilities. Many of these services have local caching components to bridge short service interruptions or to enable use while traveling away from the network.

Companies with existing in-house servers and legacy applications face a higher switching cost to a hosted solution. The incremental cost of adding more in-house servers while continuing to maintain in-house infrastructure must be compared with the cost of lighting up a hosted solution and migrating your data, while also paying to maintain in-house infrastructure during the transition. The apps that will stay in-house the longest are those that provide core, differentiated, distinctive value. These apps were (or will be) written in-house or contracted specifically for the company to provide that distinctive value; they can’t be easily replaced, unlike the common business apps mentioned above.
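That comparison lends itself to a simple break-even model. The sketch below is a rough illustration only, with made-up placeholder figures rather than anything from this article; the point is the structure of the comparison, not the numbers:

```python
# Rough sketch of the switching-cost comparison described above.
# Every dollar figure is a made-up placeholder; substitute your own estimates.

def stay_in_house(years, new_server_capex, annual_ops):
    """Add more in-house servers and keep maintaining in-house infrastructure."""
    return new_server_capex + annual_ops * years

def move_to_hosted(years, migration_cost, annual_hosting,
                   annual_ops, transition_years):
    """Light up a hosted solution, migrate the data, and keep paying for
    in-house infrastructure during the transition period."""
    return (migration_cost
            + annual_hosting * years
            + annual_ops * transition_years)

horizon = 5  # planning horizon in years
in_house = stay_in_house(horizon, new_server_capex=200_000, annual_ops=150_000)
hosted = move_to_hosted(horizon, migration_cost=100_000, annual_hosting=120_000,
                        annual_ops=150_000, transition_years=1)

print(f"{horizon}-year in-house estimate: ${in_house:,}")
print(f"{horizon}-year hosted estimate:   ${hosted:,}")
```

With these placeholder numbers the hosted option comes out slightly ahead, but small changes to the migration cost or the length of the transition can flip the result, which is exactly why the switching decision is so sensitive to where a company already sits.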

The Value of a Private Cloud

Linux and open source apps are already firmly entrenched in the cloud. If you work for an in-house IT department and want to stay there, and you don’t want to touch .NET, what are your options? Let’s go back to distinctive differentiation. I’ll boil it down to two closely related areas: Big Data and High Performance Computing.

Big Data was born in the cloud to analyze what are essentially infinitely long serial streams of data. Analyzing everything that happens in your company’s value chain, at increasing levels of resolution, will throw off enough data that the network pipe between your organization and the cloud becomes an issue; moving even a single terabyte a day over a 100 Mbps link, for example, ties that link up for most of the day. But the insight this analysis promises to generate legitimately qualifies as a competitive advantage, and it should be securely protected. To create value, that information also needs to be visualized, which is a good case for staying local for a while, given the constraints involved.

Basically, companies face a difficult decision: build or maintain their own data center, or migrate to a hosted solution. For an IT worker, though, the bottom line is this: if you don’t cultivate the right skills, you won’t be working for an in-house IT department… you’ll be working for a cloud provider.

Paul Teich is a contributing analyst at Moor Insights & Strategy.