For more years than anyone can remember, the relationship between databases and disk drives has been one of mutual dependency. Without access to robust I/O systems, database systems simply can’t keep pace with application performance requirements.
But that’s all changing thanks to the rise of multiple types of more affordable memory technologies, which now make it possible to run entire databases in memory. In turn, that’s spurring interest in a new generation of in-memory databases.
In-memory databases have been around for a while, but the availability of comparatively inexpensive memory has sparked development of a variety of new in-memory database platforms, including the High-Performance Analytic Appliance (HANA) database platform from SAP and EXASolution from Exasol AG. These compete with more traditional database offerings, such as the TimesTen in-memory database from Oracle and an open-source Postgres-compatible in-memory database from VMware called vFabric Gembase.
Starcounter AB has launched a namesake in-memory database that the company claims is the world’s fastest. Starcounter CEO Asa Holmstrom says the database can process millions of transactions across a terabyte of data on a single server faster than any other commercial offering: “The benefit of that is that the applications and the database are sharing RAM, so there are a lot fewer steps involved in terms of managing the data.”
Interest in in-memory databases, suggests Robin Bloor, principal of the IT market research firm Bloor Group, is being driven by the need to provide faster responses to ad hoc queries that today are bound by the limitations of disk I/O systems. “If you can make the database independent of disk, we’re talking about making applications a thousand times faster than they are today,” he said. “Just about every IT organization is going to take notice of that.”
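The gap Bloor describes is easy to see in a rough sketch. The following illustrative comparison (hypothetical, not from the article) contrasts looking up rows in an on-disk SQLite database with the same rows held in an ordinary Python dict, which stands in for an in-memory database. Both paths return identical answers; only the access cost differs.

```python
import sqlite3

# Build a small on-disk SQLite table (a stand-in for a disk-bound database).
conn = sqlite3.connect("demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (k INTEGER PRIMARY KEY, v TEXT)")
conn.execute("DELETE FROM kv")
conn.executemany("INSERT INTO kv VALUES (?, ?)",
                 [(i, f"value-{i}") for i in range(10_000)])
conn.commit()

# Copy the same data into plain process memory
# (a stand-in for an in-memory database).
in_memory = {k: v for k, v in conn.execute("SELECT k, v FROM kv")}

def disk_lookup(k):
    # Every lookup goes through the SQL engine and the file on disk.
    return conn.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()[0]

def memory_lookup(k):
    # A lookup here is a single hash-table access in RAM.
    return in_memory[k]

# Same answer from both paths; the difference is purely access cost.
assert disk_lookup(42) == memory_lookup(42) == "value-42"
```

On a small data set like this, the operating system's page cache blurs the difference; the kind of thousand-fold gap Bloor cites applies when the working set no longer fits in cache and ad hoc queries must actually wait on disk.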
That’s important, Bloor added, because the next query is usually based on the answer to the previous one. That makes in-memory databases critical to driving the emergence of next-generation advanced analytics applications.
Of course, IT organizations don’t necessarily have to adopt new database architectures to take advantage of more affordable memory. Organizations running traditional SQL databases are also taking advantage of any combination of processor memory, Flash memory and solid-state drives to boost performance of existing analytics applications.
A case in point is Vail Systems, a developer of customer care applications for service providers, which replaced its traditional hard disk drives with Flash-based solid-state drives (SSDs) from Virident Systems to speed performance of an analytics application based on Microsoft SQL Server. According to David Fruin, vice president of engineering at Vail Systems, the company is still weighing whether to move that analytics application to an in-memory database or the open-source Hadoop data management framework. In the meantime, though, SSDs provide an inexpensive way to boost performance of an ad hoc query application while Vail Systems figures out what its next option might be. “Virident gave us a quick fix to the problem,” Fruin said.
Other vendors, meanwhile, are moving to make processor memory even more accessible to databases. Atlantis Systems just announced beta testing for Atlantis ILIO FlexCloud 1.0, a layer of software that makes processor memory appear to an application as a layer of tiered storage.
Whether an IT organization leverages memory simply to solve performance issues for an existing SQL database, moves to an in-memory database as part of deploying an advanced analytics application, or opts for Hadoop to more efficiently drive analytics involving massive amounts of data, in all probability it will be relying on a combination of database platforms for many years to come.
Longer term, however, it’s also apparent that, with the rise of in-memory technologies, the line between applications and databases is becoming a whole lot less clear, if for no other reason than the sharply reduced need to make external calls to disk drives to access data.