The man who designed the world’s most energy-efficient supercomputer in 2011 has taken on a new task: improving how robo-bugs fly.
Wu-chun Feng, an associate professor of computer science in the College of Engineering at Virginia Tech, previously built Green Destiny, a 240-node supercomputer that consumed 3.2 kilowatts of power—the equivalent of a couple of hair dryers. Green Destiny predated the Green500, a list Feng and his team began compiling in 2005 that ranks the world’s fastest supercomputers by performance per watt.
(Feng also designed HokieSpeed, which topped the Green500 list in 2011. It cost a mere tenth of one percent of Japan’s K Computer, which topped the TOP500 list of the fastest supercomputers at the time. HokieSpeed relied on 8,320 cores worth of Intel Xeon 6-core, 2.4-GHz E5645 chips, and the Nvidia Tesla C2050 accelerator.)
On Feb. 5, the Air Force’s Office of Scientific Research announced it had awarded Feng $3.5 million over three years, plus an option to add $2.5 million in funding over an additional two years. The contract’s goal: speed up how quickly a supercomputer can simulate the computational fluid dynamics of micro-air vehicles (MAVs), a class of small unmanned aerial vehicles. MAVs can be as small as about five inches, with insect-sized aircraft expected in the near future. While the robo-bugs can obviously be used for military purposes, they could also serve as scouts in rescue operations.
Feng and his team, made up of representatives from both Virginia Tech and North Carolina State University, will build computational fluid dynamics (CFD) codes and a supporting hardware-software ecosystem for modeling the airflow over MAV wings and rotors. If everything goes well, the end result will be stable, manageable flight.
It’s still unclear what hardware Feng will use for the project, although two things are known: first, he plans to “couple innovative advances in algorithms and mathematics with engineering progress in the co-design of hardware and software in supercomputing.” Second, he could pursue a “many-core” approach, possibly including accelerators such as the Intel Xeon Phi or the Nvidia Tesla; he said that many domain scientists and engineers are still learning how to exploit parallelism in both hardware and software.
“Coupling hardware-software co-design with advances in algorithmic innovation offers the promise of multiplicative speed-ups,” he added.
Based on his earlier work, Feng said the Virginia Tech research team should be able to “achieve substantial speed-up over current simulations and provide significantly better utilization of the underlying and co-designed hardware-software of a supercomputer.” In a separate email, he also implied that his team would be going the accelerator route: “We have experience with many accelerator-based platforms, ranging from low-power mobile GPUs to commodity GPUs to high-performance GPUs as well as integrated CPU+GPU cores on a die and many others.” While additional platforms “will be brought in and evaluated,” Feng and his team will rely on HokieSpeed, at least at first.
As the project moves forward, Feng wrote in his email, his team intends to “incorporate the ecosystems of our other in-house accelerated systems, i.e., others from NVIDIA (e.g., K20), AMD (e.g., Trinity, Tahiti, etc.), Intel (e.g., MIC), and others.” They’ve found that the co-design process “can reap significant dividends, particularly if programmability, portability, and productivity can be addressed.”