Building better benchmarks
June 08, 2020
UD computer scientists on team measuring the true performance of supercomputers
Imagine that you’re shopping for a computer — not a laptop or a desktop for your home office, but a supercomputer that can perform as many as a quintillion calculations per second. How would you even begin to assess the capabilities of a machine that powerful?
Today, computing experts measure supercomputer performance using benchmarks that exercise only a tiny kernel of the machine's computational power. For leaders at organizations that invest in supercomputers, scientists who use supercomputers, and experts who build new computers, a more comprehensive suite of benchmarks could be a useful tool when making complex, expensive decisions.
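To illustrate what a "kernel" benchmark looks like, the sketch below times a single dense matrix multiplication and reports a floating-point rate. This is a hypothetical, minimal example for illustration only, not part of the team's benchmark suite; real kernel benchmarks such as LINPACK use highly tuned implementations, while a full-application benchmark would also capture I/O, communication, and varied compute phases.

```python
import random
import time

def matmul(a, b, n):
    # Naive dense matrix multiply: roughly 2 * n**3 floating-point operations.
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

def kernel_benchmark(n=128, seed=42):
    # A kernel benchmark stresses one tight compute loop in isolation,
    # which is why it can overstate how fast a real application will run.
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(n)] for _ in range(n)]
    b = [[rng.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    matmul(a, b, n)
    elapsed = time.perf_counter() - start
    flops_per_sec = 2.0 * n**3 / elapsed
    return elapsed, flops_per_sec

elapsed, rate = kernel_benchmark()
print(f"kernel time: {elapsed:.3f} s, rate: {rate / 1e6:.1f} MFLOP/s")
```

The point of a comprehensive suite is that numbers like this, taken from one isolated loop, say little about how a machine handles the mixed workloads of full scientific applications.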
That’s why a multi-institutional team that includes two University of Delaware professors is creating a new, more comprehensive application benchmark for next-generation high-performance computing systems. By stress testing both the hardware and software of supercomputers, they hope to develop truer measures of their speed.
The team is led by principal investigator Robert Henschel, Director of Research Software and Solutions in Indiana University’s Research Technologies Division, co-principal investigator Rudolf Eigenmann, professor of electrical and computer engineering at UD, and co-principal investigator Sunita Chandrasekaran, assistant professor of computer and information sciences at UD. The group received a grant from the National Science Foundation for this project in 2018.
In 2019, the team held a workshop that included a veritable who’s who of benchmarking experts from universities, national labs and funding agencies. Together, they shared ideas about how to develop a better suite of benchmarks. The team is also working with industrial partners to refine their recommendations.
“We the community have been talking for years about creating better benchmarks that use full computer applications to measure the speed of these supercomputers, and our effort and this workshop help exactly in that direction,” said Eigenmann.
A report from the workshop is now available for download.
The suite of benchmarks, which is slated for release in November 2020, will target a variety of applications from different domains of science. “Different applications bring different computational challenges to the table and what we have right now is minimalistic,” said Chandrasekaran. “That is not good enough. We need a comprehensive suite.”
Eigenmann and Chandrasekaran bring together complementary expertise in the supercomputing space.
Eigenmann is an expert in optimizing compilers, programming methodologies, tools and performance evaluation for high-performance computing, and the design of cyberinfrastructure.
UD is in the midst of implementing the Delaware Advanced Research Workforce and Innovation Network (DARWIN), a major computational and data resource under Eigenmann’s leadership.
Chandrasekaran is an expert in programming accelerators, exploring the suitability of high-level programming models such as OpenMP and OpenACC for current and future platforms, and validating and verifying emerging directive-based parallel programming models.
She leads one of just eight Center for Accelerated Application Readiness (CAAR) projects selected to develop applications for Frontier, the newest supercomputer slated to debut at the U.S. Department of Energy's Oak Ridge National Lab in Tennessee in 2021. Frontier will be one of the first exascale systems deployed.
Doctoral student Mayara Gimenes is working with Chandrasekaran on the project. Undergraduate students are also helping out through UD’s Vertically Integrated Projects (VIP) Program.
While experts from Indiana University and UD are leading the NSF project, academic leaders from Stony Brook University, the University of Basel in Switzerland, TU Dresden and RWTH Aachen in Germany, and the University of Tsukuba in Japan; corporations such as AMD, HPE, Intel, NVIDIA, Lenovo and IBM; and national laboratories including ORNL and LBNL in the U.S. and HZDR in Germany are also pitching in.
The benchmark suite is being developed jointly with the High-Performance Group (HPG) of the Standard Performance Evaluation Corporation (SPEC), a non-profit organization with the goal of creating and maintaining standardized benchmarks to evaluate performance and energy efficiency for the newest generation of computing systems.
“I am really excited about the collaboration of IU and UD on this NSF project, and the much larger community of academic and industry partners that is forming in the SPEC High Performance Group. I strongly believe that this benchmark will have great impact on how supercomputers are evaluated in the future,” said Henschel.