How Big Companies Can Ship Code Like Startups

In the past few years, every industry from banking to healthcare has seen a crop of scrappy startups take on the Microsofts and Salesforces of the world. Why would small firms believe they could take on the giants? Because they can move much faster than their older, more bureaucratic counterparts. And at the root of that is how quickly—or not—they tend to deploy the right code.

It’s a simple rule of business: companies that ship to customers more frequently learn faster, respond quicker, and iterate to improve the customer experience more successfully. A pattern we see again and again: small companies can turn code around quickly and deploy to production in minutes or hours, while big companies often take weeks, months, or even years for the same cycle.

Helping large companies ship code like startups begins with identifying the barriers. The “why” differs from one environment to the next, but in my experience working alongside engineering teams and executive leadership at both large businesses and startups, three factors contribute the most to a slow pace:

  • Policy and processes.
  • How employee labor is allocated.
  • Overall mindset around precautions and consequences.

Let’s start with prohibitive levels of policy and process. Larger corporations tend to have a deeply ingrained policy and process infrastructure around code delivery that has, over the years, grown complex. Common culprits include a laundry list of people who need to sign off on everything and an overly manual testing setup that takes weeks to complete.

Second, in the ongoing push and pull between automated and manual tasks, caution takes priority over speed, and teams don’t strike the most efficient balance. Highly manual processes go unautomated, or teams consider “automation” to be three engineers in a room for five days.

Finally, stakeholders with an apprehensive mindset (the C-suite, engineers, and product marketers, to name a few) don’t readily see the value of shipping code frequently in the first place. If speed and frequency aren’t priorities, inertia naturally leads to slow, infrequent deployment.

Where to Start with Code

First, stop chasing the illusion of perfect code. It doesn’t exist. There will always be a reason not to ship, and regardless of how comprehensive testing requirements are, it’s impossible to avoid every potential issue. So, a shift in philosophy is where change starts.

In June, Google Calendar had a failure that took the service down for hours during the workday. Everyone took to Twitter to complain, making it a trending hashtag. For many, Google Calendar is a mission-critical system. Surely Google was careful. But they made a mistake anyway. That’s just going to happen.

Start by assessing whether the level of apprehension around errors is appropriate for the situation. Sure, nobody wants to be responsible for the bug that has every customer wishing your company the worst on Twitter and Facebook. No one wants to be the one who shipped bad code. But it’s more efficient to put effort toward a different type of analysis:

  • What would it take to recover?
  • How can we better catch any truly critical failures before a rollout—instead of having twenty people go through every bit of code with a fine-tooth comb?

We would typically address that second question with tooling: automated tests that catch critical failures before any code reaches production.
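To make that concrete, here is a minimal sketch of what such tooling can look like: a couple of automated smoke tests for the most critical user flow, run against a staging environment before every rollout. The service URL, endpoints, and payloads below are hypothetical placeholders, not any particular company’s system.

```python
# Hypothetical smoke tests for the most critical user flows.
# Run automatically against every deploy candidate, these replace a
# manual, many-person review of each change with a fast, repeatable check.
import requests

BASE_URL = "https://staging.example.com"  # assumed staging environment


def test_health_endpoint_is_up():
    # If the service cannot answer a basic health check, block the rollout.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_checkout_flow_succeeds():
    # Exercise the one flow whose failure would be truly critical.
    response = requests.post(
        f"{BASE_URL}/api/checkout",
        json={"cart_id": "smoke-test-cart", "payment_token": "test-token"},
        timeout=10,
    )
    assert response.status_code == 200
    assert response.json().get("status") == "confirmed"
```

Run with pytest as a gate in the deploy pipeline, a red result stops the rollout automatically, with no committee required.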

From an organizational perspective, stakeholders need to have an informed hypothesis about the impact of code failure on their customers and users. At that point, teams can make judgment calls about how much time and money is warranted to prevent the worst errors. (At Truss, we use a variety of tools and practices early in the planning phase, from user research to "pre-mortems" to elicit potential risks and assess impact.)

A shifting mindset will guide how much testing is done, what kinds of systems require which kinds of testing, how much effort and cost the business will spend to validate that something is correct, and so on. If an upcoming set of product updates has the potential to take down the whole system, the software development team needs to recognize which changes carry that level of risk and be purposeful about them.

Netflix has won customers over because they ship updates quickly and frequently, but those updates are small. This approach lets engineers make more rapid progress because it tolerates some imperfection, but within the confines of small changes. Deploying code in smaller batches reduces the degree of error and the negative impact of any issue. Problems are found and resolved faster when a team is optimized to ship quickly. In my experience, errors are also less frequent this way.
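One common mechanism for keeping changes small and reversible is a feature flag that exposes new code to only a fraction of users at first. The sketch below is illustrative only; the flag names and rollout percentages are assumptions, not a description of Netflix’s implementation.

```python
# Minimal feature-flag gate: roll a change out to a small percentage of
# users, so a bad change affects few people and is trivial to dial back.
import hashlib

ROLLOUT_PERCENTAGES = {
    "new_recommendations": 5,  # expose the new code path to 5% of users
    "redesigned_player": 0,    # shipped dark: deployed but not yet enabled
}


def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into or out of a rollout."""
    percentage = ROLLOUT_PERCENTAGES.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percentage


def recommendations_for(user_id: str) -> str:
    # Application code keeps the new path behind the flag, so rolling back
    # is a configuration change, not another deployment.
    if is_enabled("new_recommendations", user_id):
        return "new-algorithm"
    return "existing-algorithm"
```

Because the bucketing is deterministic, a given user gets a consistent experience, and turning the percentage up or down changes the blast radius without another deploy.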

Getting Everyone On Board

When the Los Angeles School District spent $1 billion in 2013 to give every teacher and student an iPad, an attempt to bring the city’s schools into the digital age, the effort failed. Why? Everyone from journalists to education experts cited a lack of training, a dearth of staff buy-in, and resentment from the taxpayers funding it. In short, getting stakeholders on board is crucial to successfully implementing a new way of doing things.

The key is to identify each stakeholder, research what they want to accomplish (but have been unable to), and show that the product isn’t making that progress today. Different teams can lead this research, whether it’s engineering, design, marketing, or product, as long as the focus is on learning. Research shows that middle management is particularly resistant to change, while senior leaders and new employees are most willing to take risks. Some businesses bring in a third party to present this analysis, which helps give a side-by-side picture of how the new way of working would improve revenue and customer loyalty.

If these stakeholders see direct product value (the ability to ship more features, achieve annual strategic goals, and evolve the code and product for the market), the sell becomes an easy one.

At that point, engineering teams start building and shipping components of prototypes, and demoing those components to stakeholders on a regular cadence. This weekly or bi-weekly demoing is critical for creating stakeholder commitment, and it is an overlooked part of iterative or “Agile” development. Delivering regularly builds trust, allows for course correction in the code, and reinforces alignment.

An immediately evident value of this code philosophy is reduced cost. Shipping smaller batches more frequently and quickly results in significantly lower costs over the long term. Slow, manual processes are expensive, and more efficient automation drives those costs down. In one case, we worked with a company that was spending $250,000 per deployment because of the manual labor involved. After an automation overhaul, that process was reduced to a single person spending ten minutes on it.
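In practice, an “automation makeover” usually means replacing hand-offs and manual checklists with a single scripted pipeline that anyone on the team can run. The sketch below is hypothetical; the commands, image names, and service names are placeholders, not that client’s actual pipeline.

```python
# Hypothetical one-command deploy: what once required many people and
# many hand-offs becomes a single scripted, repeatable sequence.
import subprocess
import sys

RELEASE_TAG = "registry.example.com/app:release-candidate"  # placeholder image tag

STEPS = [
    ["pytest", "tests/smoke"],                            # critical-path checks
    ["docker", "build", "-t", RELEASE_TAG, "."],          # build the release image
    ["docker", "push", RELEASE_TAG],                      # publish it
    ["kubectl", "set", "image", "deployment/app", f"app={RELEASE_TAG}"],
    ["kubectl", "rollout", "status", "deployment/app"],   # wait until it is healthy
]


def deploy() -> None:
    for step in STEPS:
        print("Running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            # A failed step halts the deploy; nothing partial ships.
            sys.exit(f"Deploy stopped at: {' '.join(step)}")
    print("Deploy complete.")


if __name__ == "__main__":
    deploy()
```

One person running one command, with the pipeline itself doing the checking, is the kind of change that turns a $250,000 manual exercise into a ten-minute task.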

Few companies realize how improvements to this backend part of a business can have further-reaching consequences, such as improved employee engagement. The main reason I see companies resist going this route is that they feel trapped in their existing systems and think the transition would be more painful than upholding the status quo. But for companies thinking longer term, shipping code quickly and often always leads to better products and improved customer loyalty. If people’s needs are changing—and they always do—the businesses that will survive will move to meet them where they are.

Jen Leech is COO and co-founder at Truss, which works side-by-side with in-house teams to design, build, and scale modern software that exceeds standards for speed and security.