[caption id="attachment_9567" align="aligncenter" width="618"] The "Sequoia" Blue Gene/Q supercomputer at the Lawrence Livermore National Laboratory (LLNL).[/caption]

The "Sequoia" Blue Gene/Q supercomputer at the Lawrence Livermore National Laboratory (LLNL) has set a new HPC record, helped along by the "Time Warp" simulation protocol and a new benchmark. Time Warp detects the parallelism available in a model and automatically improves performance as the system scales out to more cores.

Scientists at Rensselaer Polytechnic Institute (RPI) and LLNL said Sequoia topped 504 billion events per second, shattering the previous record of 12.2 billion events per second set in 2009. The scientists believe such performance puts so-called "planetary"-scale calculations within reach: simulations large enough to factor in all 7 billion people in the world, or the billions of hosts on the Internet.

Part of the key to Sequoia's result is raw computational horsepower: the supercomputer comprises more than 1.57 million POWER cores and is ranked second on the TOP500 list, behind "Titan." But the team attributes much of the gain to "Time Warp," a parallel discrete-event simulation synchronization protocol that uncovers the available parallelism in a model through an error detection and rollback recovery mechanism, and that provided significant cache performance improvements when running at peak scale. In other words, as the system scales, the effective parallelism of the simulation improves.

The RPI/LLNL team will present a paper, "Warp Speed: Executing Time Warp on 1,966,080 Cores," at the upcoming ACM SIGSIM PADS 2013 conference. According to a summary of the paper, the team reports a 97x performance improvement when scaling from 32,768 to 1,966,080 cores.

The team also suggested that a new, long-term benchmark be developed to track Time Warp's performance over time: "Warp Speed," which grows linearly with an exponential increase in the event rate. "At present, we are now at Warp Speed 2.7.
It will be nearly 150 years before we expect to reach Warp Speed 10.0," the summary suggests.

"We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain, and validate models of complex systems than by our ability to execute them in a timely manner," Chris Carothers, director of the Computational Center for Nanotechnology Innovations at RPI and a co-author of the paper, wrote in a statement.

Carothers and his colleagues first ran the scaling experiments on a two-rack Blue Gene/Q system before porting the code to Sequoia. Sequoia has since entered classified service, performing calculations on the U.S. nuclear weapons stockpile.

Image: LLNL
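The "error detection and rollback recovery" mechanism behind Time Warp can be illustrated with a toy sketch. This is not the code the team ran on Sequoia, and the names (`LogicalProcess`, `schedule`, `run`) are invented for illustration; it is a minimal, single-process Python example of the optimistic idea: execute events as soon as they are available, and when a "straggler" event arrives with a timestamp earlier than the local virtual clock, restore an earlier state snapshot and re-execute the affected events.

```python
import heapq

class LogicalProcess:
    """Toy Time Warp logical process (illustrative only): executes events
    optimistically and rolls back when a straggler arrives in its past."""

    def __init__(self):
        self.clock = 0             # local virtual time
        self.state = 0             # toy state: running sum of event values
        self.pending = []          # min-heap of (timestamp, value)
        self.processed = []        # log of processed events, kept for replay
        self.snapshots = [(0, 0)]  # (timestamp, state) checkpoints
        self.rollbacks = 0

    def schedule(self, ts, value):
        if ts < self.clock:
            # Straggler detected: roll back to a snapshot taken before ts.
            self.rollbacks += 1
            while len(self.snapshots) > 1 and self.snapshots[-1][0] >= ts:
                self.snapshots.pop()
            self.clock, self.state = self.snapshots[-1]
            # Re-enqueue already-processed events now back "in the future".
            redo = [e for e in self.processed if e[0] > self.clock]
            self.processed = [e for e in self.processed if e[0] <= self.clock]
            for e in redo:
                heapq.heappush(self.pending, e)
        heapq.heappush(self.pending, (ts, value))

    def run(self):
        # Optimistically drain whatever is currently pending, in time order.
        while self.pending:
            ts, value = heapq.heappop(self.pending)
            self.state += value
            self.clock = ts
            self.processed.append((ts, value))
            self.snapshots.append((ts, self.state))

lp = LogicalProcess()
for ts, value in [(10, 1), (20, 2), (30, 3)]:
    lp.schedule(ts, value)
lp.run()              # optimistic execution: state == 6, clock == 30
lp.schedule(15, 100)  # straggler: timestamp 15 is in the past (15 < 30)
lp.run()              # rollback to t=10, then re-execute: state == 106
```

A real Time Warp system distributes many such logical processes across cores and also cancels messages sent from rolled-back computation (via "anti-messages"), which this single-process sketch omits.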
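The "Warp Speed" figures quoted above are consistent with a simple logarithmic metric: one formula that reproduces them is the base-10 logarithm of the event rate minus 9, so that a tenfold increase in events per second raises Warp Speed by exactly 1. This reconstruction is an assumption inferred from the article's numbers, not a definition taken from the paper itself.

```python
import math

def warp_speed(events_per_second):
    # Hypothetical reconstruction of the metric: Warp Speed rises by 1
    # for every 10x increase in event rate, anchored so that the
    # 504-billion-events/s record corresponds to roughly 2.7.
    return math.log10(events_per_second) - 9.0

print(round(warp_speed(504e9), 1))   # record run on Sequoia -> 2.7
print(round(warp_speed(12.2e9), 1))  # 2009 record -> about 1.1
```

Under this reading, Warp Speed 10.0 would require 10^19 events per second, which makes clear why the authors expect it to take many decades to reach.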