Real-Time Analytics on the Mainframe Could Save Billions of Dollars

Mainframes could help IT departments actually streamline their infrastructure.

Historically, business intelligence has been gathered after the fact. Transaction processing applications would generate massive amounts of data, which would then be aggregated in a data warehouse. That data warehouse, whether hosted on a mainframe or a distributed computing system, created a costly BI application environment that usually required IT staff to manage it on separate, dedicated infrastructure.

But that situation may become a thing of the past.

With the introduction of increasingly faster mainframes, it’s becoming feasible to use the same core database to run transaction processing and analytics applications side by side in real time. The business implications of that capability are nothing short of profound. Instead of identifying fraudulent transactions after they have already taken place, for example, a new generation of real-time analytics workloads running in parallel with a transaction processing application can identify fraudulent transactions before they are ever processed.

According to Doug Balog, general manager of the System z platform at IBM, the potential savings from this approach to analytics could easily reach billions of dollars: “What we’re really talking about is bringing analytics into the transaction.”

Citibank is using IBM’s zEnterprise EC12 mainframe, announced just last week, to help identify fraudulent credit card transactions. That capability could also be deployed in any number of transaction environments subject to fraud, not the least of which is a Medicare system that routinely loses billions of dollars to fraudulent claims.

That’s becoming possible now because, in addition to supporting 3TB of addressable memory, the EC12 comes loaded with additional Flash memory that extends the capabilities of its 101 configurable cores running at 5.5 GHz, all developed using 22-nanometer manufacturing technology. That level of horsepower makes it feasible to run transaction processing and analytics workloads concurrently.

In addition, IBM has developed the IBM DB2 Analytics Accelerator, an appliance based on custom ASIC processors that tightly integrates the Netezza data warehouse with DB2 databases running on an IBM mainframe. As far as the analytics application is concerned, the appliance looks like a natural extension of a DB2 database running on z/OS.

That common database architecture, suggests Clabby Analytics president Joe Clabby, carries significant implications for businesses: instead of having multiple databases deployed on any number of distributed systems, IT organizations can more easily rely on a single database. “There won’t be multiple versions of the truth stored in different databases anymore,” he said, “and business people won’t be forced to look at copies of stale data from a data warehouse anymore either.”

The potential savings also extend to IT infrastructure. According to Clabby, a centralized approach to data management reduces the need for duplicate IT infrastructure to run BI applications, not to mention the need to back up all that data and maintain separate BI application licenses.

Clabby also notes that, while distributed systems are typically replaced outright every three to five years, mainframes are usually paid for via a leasing model that allows customers to leverage previous investments in older mainframe technologies to help fund ongoing upgrades. The end result is that, while mainframes are more expensive to acquire initially, the cost of ownership over multiple years winds up being substantially less.

Centralization of data management, adds Mike Kahn, managing director for the market research firm The Clipper Group, also contributes to lower IT labor costs: “The business wants one common consistent view of data, preferably using only one staff.”

Of course, any effort to centralize workloads on the mainframe is only going to add more fuel to an already bitter internecine war between IT staff who champion centralized computing on the mainframe and those who advocate distributed computing.

B.I. applications have historically acted as a major source of application workloads for Linux, Windows and Unix systems. Many organizations have undertaken efforts over the past few years to centralize the management of all types of application workloads on the mainframe. The zEnterprise platform, unveiled in 2010, represented an effort to bring the management of application workloads running on mainframes and distributed computing systems together under a mandate for private cloud computing.

The next logical step, says Kahn, is the concurrent processing of analytics and transaction processing workloads using the same database: “It’s really about reducing the complexity of managing the data; organizations should then be able to more easily discover insights that previously were not getting out to the business.”

None of that means that distributed computing systems running BI applications are going to disappear tomorrow, or that either platform camp inside the data center is likely to call a truce anytime soon. “We’re talking about the Hatfields versus the McCoys in the data center,” Clabby said.

But it does mean that within enterprises with mainframes at their disposal, the dynamics between mainframe and distributed computing platforms for running analytics and BI applications are shifting; not because of any particular technology bias, but because the economics of delivering analytics alongside transaction processing in real time on the mainframe have fundamentally changed.


Image: Arjuna Kodisinghe/