Solving the Inefficient Application Lifecycle
Many CIOs face continuous budget constraints. Unfortunately, if there is no money in the budget for innovation, the potential of mobile computing, social media and cloud platforms may have to wait. As a result, IT executives are rethinking their strategies and future application architectures to help propel the business forward without compromising security. Studies show that more than 70 percent of IT budgets are consumed by managing and maintaining production and legacy systems. CIOs have to be asking themselves: “How can I make room in my budget for innovation?” And perhaps the next question: “How can I make room in my data center when it is already bursting at the seams with the applications and data it holds?”

Wasteland of Irrelevant Applications

IT executives need to take a hard look at where inefficiencies exist in their own operations to find the answers. Many administrators are slowed by the ever-increasing volume of data they manage across storage systems, servers, databases and applications. Yet most of that data, and most of those applications, are rarely accessed or needed to support the business. For every new application that gets deployed, an average of eight copies of the production environment are made to support test and development activities. Each environment is backed up several times and most likely replicated to a remote data center for disaster recovery. In one enterprise, a single terabyte of data can end up consuming more than a petabyte of storage, yet when the application initiative is launched, these ongoing costs are often overlooked.

Then a new executive team arrives, a new business initiative launches, or a merger closes, and the result is a new application that makes the existing application landscape redundant. Now take this scenario and multiply it by the number of applications running in the data center or in the cloud. As an application ages, its owners change. Processes that were once streamlined become inefficient. Skills that were once abundant become scarce, and expensive when found. Consider all the infrastructure wasted on data that is no longer needed to conduct business, no longer needed for testing and development, and no longer required to be retained for legal or regulatory purposes. So how do you take this wasteland of irrelevant data and inefficient IT resource utilization and turn it into a funding source for innovation?

How to Prevent Waste

Throughout the application lifecycle, from development through production to retirement, there are many opportunities to implement proactive measures and processes that prevent data waste.

Development: All too often, testers make full copies of production data to serve test and development needs. This causes two problems. First, if sensitive data exists in production, it will exist in each and every copy. Second, full copies require excess storage and server infrastructure. Ensure developers and testers document their test data requirements and create copies containing only the data necessary to conduct the tests. When contractors or consultants are brought in for testing cycles, their systems are often forgotten once the testers are no longer assigned to the application project; millions of dollars of infrastructure can be released to other projects if tight controls are placed on test and development copies. Where possible, leverage cloud-based testing platforms that can be destroyed once the tests are complete.

Production: During the application development phase, classify data and document any retention requirements from the moment of data creation. This makes it possible to implement an automated archive-and-purge workflow that removes excess data from production as soon as it is no longer needed. When data is removed from production, it also disappears from test and development copies, backups and replicated environments, shrinking the overall application data footprint. For production data that is not eligible for archive or purge, pair data classification with data partitioning techniques. This lets storage and server administrators match infrastructure resources to classes of data: newer data that is more important to the business can be allocated more compute and faster storage.
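The test-copy discipline described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any specific product's subsetting feature: the field names, the SHA-256-based masking, and the every-tenth-row sampling rule are all illustrative assumptions.

```python
import hashlib

def mask_email(email: str) -> str:
    """Replace a real address with a deterministic, non-reversible token."""
    digest = hashlib.sha256(email.encode("utf-8")).hexdigest()[:12]
    return f"user_{digest}@example.test"

def make_test_copy(rows, keep_every=10):
    """Keep only every Nth row and mask sensitive fields in what remains,
    so the test copy is both smaller and free of production identities."""
    subset = rows[::keep_every]
    return [{**row, "email": mask_email(row["email"])} for row in subset]

# Hypothetical production table: 100 rows with a sensitive email column.
production = [{"id": i, "email": f"person{i}@corp.com", "region": "EMEA"}
              for i in range(100)]

test_copy = make_test_copy(production)
print(len(test_copy))          # 10 rows instead of 100
print(test_copy[0]["email"])   # masked token, not the real address
```

Because the masking is deterministic, the same production address always maps to the same token, so joins across masked test tables still line up, while the subsetting keeps the test footprint at a fraction of production size.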
Once data ages, it can be relocated to more cost-effective infrastructure.

Retirement: Finally, any time a new application is being funded, fully understand the nature of the systems it will replace. Retiring legacy applications is often a key component of a new application's return on investment (ROI). Yet when projects run long and budgets run over, the retirement phase gets postponed. The Enterprise Strategy Group found that more than half of the enterprises it polled spend $500,000 or more annually supporting legacy applications. Think how much innovation that could fund.

Data is accumulating in the data center, making these inefficiencies more pronounced, and adding mobile, social and cloud data to the mix in big volumes will only compound the existing challenges. CIOs need to reclaim IT budget by eliminating data waste to fuel innovation and realize its potential.

Adam Wilson is the General Manager of Informatica’s Information Lifecycle Management Business Unit. Prior to assuming this role, Wilson was in charge of product definition and go-to-market strategy for Informatica’s enterprise data integration platform. A longtime employee of Informatica, Wilson has held numerous development, product management and marketing leadership roles within the company.