Blue Gene/L at Lawrence Livermore National Laboratory in California.

As corporations move further up the High-Performance Computing (HPC) spectrum by adding machines powerful enough to handle modern datacenter workloads, raw computing power has become more important than ever. Even so, the ability to simulate a nuclear explosion or a custom-designed molecule is not the kind of problem most corporate datacenters will ever need to handle. As a result, few corporate-computing specialists would ever consider jumping on the latest advances in supercomputing architecture from Cray or IBM. It would be like accompanying Superman on his daily weight-lifting routine: trying to imitate his moves won't get you any closer to lifting a locomotive.

Sometimes, however, advances in supercomputing can help solve pretty much anybody's datacenter problems. That could be the case with the design changes IBM has made to its BlueGene supercomputer.

According to CNET, in order to squeeze more speed out of a supercomputer mapping the brains of mice, IBM did away with some of the hard drives storing the petabytes of data the machine might need to crunch, and added 128 terabytes' worth of solid-state drives (SSDs). Those SSDs allowed it to take advantage of flash memory's greater speed compared to hard disks.

Flash delivers much higher capacity in a smaller footprint, along with lower power requirements and better performance, but that's not the only role it can play in high-performance computing, according to Ambuj Goyal, general manager of IBM's system storage and networking group.

The access speed of NAND-based SSDs falls between that of expensive DRAM and that of comparatively slow disk drives. By adding a big chunk of flash memory, BlueGene's designers created a third tier of memory for optimizing the flow of data and problems, Goyal said.

Applications designed for parallel-processing systems break large problems down into multiple threads that can run simultaneously across multiprocessing servers, but the data feeding those threads does not move at a uniform speed: it drags coming off the hard disk, races across the system bus, and then queues up to be processed because the CPU has long since moved on rather than wait for it to arrive.

"Traditional disk technology has not kept pace with CPU technology, resulting in a significant performance gap between storage and computing," suggested Dan Iacono, an analyst in IDC's storage systems practice.

Having a layer of flash memory in the form of SSDs can create a staging area from which data can be pushed much more quickly toward the CPU, or simply act as an additional tier of storage for high-priority data, matching the speed of the storage medium much more closely to the speed of the machine than is possible using only hard disks for storage and DRAM for processing. (A simplified sketch of such a read path appears below.)

By giving BlueGene more capacity than most datacenters will ever have, even in long-term storage or archives, its designers also took a lot of pressure off the rest of the system. By taking on workloads that require very low latency and high processing speed, SSDs can reduce the demand on spinning disks and improve their performance simply because they're not overtaxed; the layer of SSDs can balance and distribute workloads more efficiently, Goyal said, according to ITWorld Canada.

IBM announced the BlueGene update at its IBM Edge 2013 conference earlier this week.
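Goyal's staging-area idea maps onto a familiar pattern: serve data from DRAM when it is already there, fall back to a flash tier next, and only go to spinning disk as a last resort, promoting whatever was fetched so the next access is faster. The Python sketch below is purely illustrative; the class, its method names, and the latency figures are invented assumptions for the example, not a description of IBM's storage software.

```python
# Toy illustration of a three-tier read path: DRAM cache -> flash staging
# tier -> spinning disk. All names and numbers are invented for illustration.

import time

# Simulated access latencies in seconds -- rough orders of magnitude only.
LATENCY = {"dram": 0.0000001, "flash": 0.0001, "disk": 0.01}


class TieredStore:
    """Serve block reads from the fastest tier holding the block,
    promoting disk blocks into the flash tier on first access."""

    def __init__(self, flash_capacity_blocks):
        self.dram = {}                     # tiny, fastest tier
        self.flash = {}                    # staging tier (SSDs)
        self.flash_capacity = flash_capacity_blocks
        self.disk = {i: f"block-{i}" for i in range(1000)}  # slow bulk tier

    def read(self, block_id):
        if block_id in self.dram:
            time.sleep(LATENCY["dram"])
            return self.dram[block_id]
        if block_id in self.flash:
            time.sleep(LATENCY["flash"])
            data = self.flash[block_id]
        else:
            time.sleep(LATENCY["disk"])
            data = self.disk[block_id]
            self._stage_to_flash(block_id, data)   # promote for next time
        self.dram[block_id] = data                 # keep the hot block in DRAM
        return data

    def _stage_to_flash(self, block_id, data):
        # Evict an arbitrary block when the flash tier is full; a real
        # system would use an LRU or heat-based policy instead.
        if len(self.flash) >= self.flash_capacity:
            self.flash.pop(next(iter(self.flash)))
        self.flash[block_id] = data


store = TieredStore(flash_capacity_blocks=128)
t0 = time.time()
store.read(42)                                     # cold read: hits disk
cold = time.time() - t0
t0 = time.time()
store.read(42)                                     # warm read: served from DRAM
warm = time.time() - t0
print(f"cold read {cold * 1000:.2f} ms, warm read {warm * 1000:.3f} ms")
```

Run as written, the second read of the same block returns orders of magnitude faster than the first, which is the whole point of adding the extra tier.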
At the same conference, IBM also talked about the new FlashSystem line of SSD-enabled storage products it released last month, and announced that it would spend a total of $1 billion in R&D to develop flash and push it into a wider range of high-end computing products, not just laptops. The results may not come all at once: many datacenters have been slow to adopt even hybrid SSD/HDD drives, let alone far more expensive all-SSD storage.

Image: IBM