The U.S. Department of Energy’s Office of Science has presented vendors with a difficult challenge: deliver a supercomputer with roughly 10 to 30 petaflops of performance, built on an energy-efficient multi-core architecture.
The draft of the DOE’s requirements provides for two systems: “Trinity,” which will offer computing resources to Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Berkeley National Laboratory (LBNL) during the 2016–2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 “Hopper” supercomputer, first deployed for DOE facilities in 2010.
Hopper debuted at number five on the Top500 list of supercomputers and can crunch numbers at the petaflop level.
The DOE wants a machine with performance between 10 and 30 times Hopper’s, capable of supporting a single compute job that takes up over half of the available compute resources at any one time.
NERSC is the principal provider of high-performance computing services to Office of Science programs, which explore magnetic fusion energy, high-energy physics, nuclear physics, basic energy sciences, and other topics. The key to the program’s longevity, according to NERSC, is its scale and its ability to attack computationally difficult problems such as photosynthesis modeling, global climate modeling, and combustion modeling. The supercomputing infrastructure solicited by the DOE will provide the hardware to solve those problems.
The draft requirements specify elements such as a UNIX-like OS with support for C, C++, Fortran 77, Fortran 2008, and Python; a low-jitter environment; and consistent, predictable runtimes. The DOE also said that it would not tolerate any data corruption or data loss over the life of the platform. To measure performance, the DOE supplied a list of mini-application benchmarks provided by NERSC; for Trinity, an ASC (Advanced Simulation and Computing) code suite will be used to evaluate performance.
The memory targets are 4 petabytes for Trinity and 2 petabytes for the NERSC-8 system; disk storage capacities are specced at 30 and 20 times those amounts, respectively.
The NERSC-8 system will be housed in the Computational Research and Theory building under construction at Lawrence Berkeley National Laboratory and is expected to run for 4-6 years, according to the DOE, which did not offer a similar timetable for Trinity.
Technically, the DOE and the Alliance for Computing at Extreme Scale (ACES), a collaboration between Los Alamos National Laboratory and Sandia National Laboratories, are jointly issuing the requirements. The DOE is seeking a single vendor to supply both systems; HPCwire estimates that the system cost will fall in the range of $50 to $100 million. Cray manufactured the existing Hopper system, so the company likely has the inside track.
Editor’s Note: We’ve corrected the name of the U.S. Department of Energy’s Office of Science.