There is no doubt that IT automation has “crossed the chasm,” moving from early adoption to much broader use in many organizations. Few would argue with the goal of lifting employees out of basic, low-level processes and replacing those processes with consistent, system-driven executions. Similarly, it’s widely recognized that, when automation is executed properly, it brings highly desirable results such as cost and efficiency improvements. But the journey to automation nirvana is a long one, and it would be a mistake to assume that a foray into scripted processes or automated runbook execution is doing anything more than scratching the surface of what is possible. In looking at the potential of automation in a given scenario, I’ve identified six steps that organizations should take to derive the greatest value from their automation initiatives.

1. Evaluation of Requirements and Potential Benefits

The purpose of incorporating automation into an environment is to deliver efficiencies—whether related to people, workload, time, consistency or output. Before commencing with a new implementation, organizations must understand the benefits that they can expect, not only to accurately assess potential ROI, but also to set an expectation against which they can later measure the benefits realized. For simpler automations, benefits typically come in the form of time and cost savings as well as more consistent outputs. But more complex automations that completely replace human intervention across an entire business process are likely to yield much broader benefits.

While it might be tempting to pick off the simple use cases first, it is also important to evaluate processes that consume a great deal of time, even if they are more complex, multi-step activities. The most effective approach is to evaluate, as far as possible, the time taken by manual activities and the frequency with which those activities are undertaken. This will allow companies to estimate the potential benefit of automation and measure its realization post-implementation. In addition to the effort reduction, it’s also vital to measure and evaluate a number of other factors that can be improved with automation such as mean-time-to-resolve (MTTR), error rate (estimates put human error at an average of around 10 percent), process compliance and system or service uptime.
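The time-and-frequency estimate described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed method; all figures (task duration, run frequency, hourly rate) are hypothetical, and the 10 percent error rate is the rough average cited above, modeled here as one extra run's worth of rework per failed run.

```python
def annual_savings(minutes_per_run: float, runs_per_week: float,
                   hourly_rate: float, error_rate: float = 0.10) -> dict:
    """Rough annual benefit estimate for automating a manual task.

    error_rate reflects the ~10 percent average human error rate;
    rework from errors is (simplistically) costed as one extra run.
    """
    runs_per_year = runs_per_week * 52
    base_hours = runs_per_year * minutes_per_run / 60
    rework_hours = base_hours * error_rate  # hypothetical rework cost
    total_hours = base_hours + rework_hours
    return {
        "hours_saved": round(total_hours, 1),
        "cost_saved": round(total_hours * hourly_rate, 2),
    }

# A hypothetical 15-minute task run 40 times a week at $60/hour:
estimate = annual_savings(minutes_per_run=15, runs_per_week=40,
                          hourly_rate=60.0)
print(estimate)
```

An estimate like this, recorded before implementation, becomes the baseline against which realized benefits can later be measured.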

2. Process Integration

Remember that the aim of automation is to replicate and replace manual human tasks. While the execution of a command or series of commands is a fundamental deliverable, this may limit the potential benefits of an automation implementation. Rather, to get the best from an automation, it’s important to understand where it fits in the larger business process. What are the triggers on which it might execute? What needs to happen before and after the automation? What input variables will need to be supplied for a successful outcome? Understanding the entirety of the process by asking these questions will help apply automation more broadly rather than as singular, isolated tasks. Ultimately, any task is simply a step in a process, but by evaluating an entire process, a series of automations can be defined and chained together into a unified process that passes parameters and variables from one sequence to the next.

Let’s look at an example: an automation that executes a simple change, such as allocating more resources when bandwidth utilization reaches a set percentage. By itself, this is a simple rule set and fairly straightforward to execute. But consider the process more broadly. A human might address this problem by measuring the performance of an application, including investigating the underlying causes, evaluating potential remedies, seeking approval to make disruptive changes (if required) and carrying out the required resolution—which may indeed be to increase bandwidth, but may also be any number of other actions. While the tasks in this process could be automated individually, automating the entire process can derive significantly greater value for an organization.
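One way to picture chaining automations into a unified process is a pipeline of steps that each receive and return a shared context, so the outputs of one automation become the inputs of the next. This is only an illustrative sketch of the bandwidth example; the step names, threshold and actions are all hypothetical stand-ins for real monitoring and remediation tooling.

```python
from typing import Callable

def measure(ctx: dict) -> dict:
    ctx["utilization"] = 0.92  # stand-in for real metric collection
    return ctx

def diagnose(ctx: dict) -> dict:
    # Hypothetical threshold for attributing the issue to bandwidth.
    ctx["cause"] = "bandwidth" if ctx["utilization"] > 0.85 else "other"
    return ctx

def remediate(ctx: dict) -> dict:
    if ctx["cause"] == "bandwidth":
        ctx["action"] = "allocate_more_bandwidth"
    else:
        ctx["action"] = "escalate_to_engineer"
    return ctx

def run_chain(steps: list[Callable[[dict], dict]], ctx: dict) -> dict:
    """Pass the context through each automation step in turn."""
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_chain([measure, diagnose, remediate], {})
print(result["action"])
```

The value of the chain over three isolated scripts is that each step's decision (the measured utilization, the diagnosed cause) flows forward as a parameter rather than being re-entered by a human.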

3. Contextual Customizations

Beyond assessing processes from end to end, it’s also important for organizations to take into account the context in which an automation will operate. That might mean different rules for similar systems running different applications, or it might mean different considerations based on a calendar schedule. It might even mean checking for circumstances at the time of execution and following different rules based on those circumstances. The process for resolution of an incident detected on a mission-critical application during business hours, for instance, may be markedly different from a similar incident with a development system in the middle of the night, and will require a different degree of approval. By building contextual considerations into automation, the automation can assess cues like how critical an app is or the time of day, as well as more involved factors like which other programs are running and their potential impact on system performance. Based on these various contextual cues, the automation can execute a different set of actions that match the scenario.
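The incident-resolution example above can be sketched as a policy function that maps contextual cues to handling rules. This is a toy illustration under assumed conventions: the criticality labels, business-hours window and approval levels are all invented for the example.

```python
from datetime import time

def resolution_policy(criticality: str, incident_time: time) -> dict:
    """Pick handling rules from context; thresholds are illustrative."""
    business_hours = time(9) <= incident_time <= time(17)
    if criticality == "mission-critical" and business_hours:
        # Disruptive changes to a live critical app need sign-off.
        return {"approval": "change-manager", "notify": True}
    if criticality == "development":
        # A dev system at any hour can be remediated autonomously.
        return {"approval": "none", "notify": False}
    return {"approval": "on-call-engineer", "notify": True}

print(resolution_policy("mission-critical", time(14, 30)))
print(resolution_policy("development", time(3, 0)))
```

The same incident thus triggers different execution paths depending on context, which is exactly the behavior the paragraph describes.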

4. Testing and Release

Once an automation has been fully defined and integrated into existing processes, organizations must test its performance in various circumstances, evaluate the resulting outputs and confirm its success. Many companies will choose to do this in a phased approach, releasing automation into a development system and then into production. This can be time-consuming but is an important step to ensure the automation’s success and efficiency. Another approach is to release automations in a “manual mode” so that an engineer can supervise execution steps and ensure they are working correctly, with the ability to step in and take control if required. Once the automation has been run successfully in this mode a sufficient number of times, the automation can be released and allowed to run on its own. Even post-release, it’s important to ensure that all automations create a comprehensive audit trail of actions, decisions, inputs and outputs. This is valuable not only for reviewing anomalies, but also for driving improvements in-life. Fortunately, it is a good deal easier for an automation to deliver a comprehensive audit log than it is for humans.
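The “manual mode” and audit-trail ideas can be combined in one small runner: every step is recorded, and in manual mode each step waits for an operator's go-ahead before executing. This is a hedged sketch, not a real orchestration framework; the `approve` callback stands in for an actual human prompt, and the step names are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("automation")

def run(steps, manual_mode=True, approve=lambda name: True):
    """Execute (name, action) steps, recording an audit trail.

    In manual mode, each step is gated on the approve callback,
    which models an engineer supervising the run.
    """
    trail = []
    for name, action in steps:
        if manual_mode and not approve(name):
            trail.append((name, "skipped-by-operator"))
            log.info("step=%s skipped by operator", name)
            continue
        result = action()
        trail.append((name, result))
        log.info("step=%s result=%s", name, result)
    return trail

trail = run([("check-service", lambda: "ok"),
             ("restart-service", lambda: "done")])
```

Once enough supervised runs complete cleanly, flipping `manual_mode` to `False` corresponds to releasing the automation to run on its own, with the same trail still captured.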

5. Benefits Realization

Once automation has been implemented, organizations should evaluate real-world performance and validate the delivery of previously defined efficiencies. Beyond that, it’s also important to continually measure and evaluate the automation to ensure it remains effective and optimized, through additional process tweaks, for the given environment and process flows. Reviewing the automation execution log will drive a process of continual improvement and refinement. Benefit analysis is often overlooked but incredibly important; organizations should routinely analyze and measure reductions in time, improvements in issue resolution and other benefits from enhanced consistency.
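Validating previously defined efficiencies reduces to comparing the baseline metrics captured in step 1 with post-implementation measurements. A minimal sketch, with hypothetical before-and-after figures (MTTR in minutes, error rate as a fraction), where lower is better for every metric:

```python
def improvement(before: dict, after: dict) -> dict:
    """Percentage improvement per metric (lower values are better)."""
    return {k: round(100 * (before[k] - after[k]) / before[k], 1)
            for k in before}

# Hypothetical baseline recorded before automation vs. post-release:
baseline = {"mttr_min": 45.0, "error_rate": 0.10}
post = {"mttr_min": 12.0, "error_rate": 0.02}
summary = improvement(baseline, post)
print(summary)
```

Running this comparison routinely, rather than once at go-live, is what turns benefit analysis into the continual-improvement loop the section describes.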

6. Continuous Improvement

As well as reviewing existing automations, it’s also important to take a continual service improvement view. The most successful organizations will regularly revisit and revise existing automations and continually look at the boundaries of an automated process, seeking extensions that would derive additional benefits. For instance, are there system-to-system inputs that interface with current automations? Are there consistent decision points made by humans that could be automated? Is it possible to add further contextual understanding to automate operations more broadly? Taking a structured approach to an automation program will help ensure the delivery of significant benefits. When combined, these six steps will help to deliver not just short-term tactical gains, but also long-term realization of the rationale that drives all automation programs: the reduction in human effort, latency and potential error associated with manual activities, and the benefits in consistency and compliance that automated systems can bring.

Terry Walby joined IPsoft in 2010 after spending eight years as a director at Computacenter, where he built the consulting practice, created the Datacentre Services business, had responsibility for the Technology Solutions portfolio and was Country Manager for Luxembourg. Prior to Computacenter, Terry was Enterprise Solutions Director at GE Capital IT Solutions and also worked for IBM and Granada Computer Services. He earned a degree in electronics and electrical engineering from University College in Canterbury.