Exploring the exotic world of quarks and gluons at the dawn of the exascale
As nuclear physicists delve ever deeper into the heart of matter, they require the tools to reveal the next layer of nature's secrets. Nowhere is that more true than in computational nuclear physics. A new research effort led by theorists at DOE's Thomas Jefferson National Accelerator Facility (Jefferson Lab) is now preparing for the next big leap forward in their studies thanks to funding under the 2017 SciDAC Awards for Computational Nuclear Physics.
The award was recently announced by DOE's Office of Nuclear Physics and the Office of Advanced Scientific Computing Research in the Office of Science. It will provide $8.25 million for the "Computing the Properties of Matter with Leadership Computing Resources" research project.
The effort will benefit nuclear physics research now taking place at world-leading institutions in the U.S. and abroad by improving future calculations of the theory of Quantum Chromodynamics (QCD). QCD describes the particles and forces that give rise to our visible universe, including the quarks and glue that build the protons and neutrons in the hearts of atoms.
In particular, the calculations that will be dramatically improved by this research pertain to determining the properties of matter at extreme temperatures and densities; explorations of the properties of the glue that binds quarks into exotic configurations; the mapping of the fine structure of protons and neutrons; and research into the complex interactions inside nuclear matter seeking evidence of new physics (beyond the Standard Model).
These calculations will allow scientists to better predict and better understand results that will emerge from the Continuous Electron Beam Accelerator Facility at Jefferson Lab, the Relativistic Heavy Ion Collider at DOE's Brookhaven National Lab, the future Facility for Rare Isotope Beams at Michigan State University, the Large Hadron Collider at CERN, and a proposed Electron-Ion Collider.
The research is a five-year computational effort involving scientists at a dozen institutions to update the underlying software that enables nuclear physicists to carry out complex calculations of QCD on supercomputers, a method called lattice QCD. It's being led by Robert Edwards, a theorist based at Jefferson Lab.
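In lattice QCD, continuous spacetime is replaced by a discrete four-dimensional grid, and the fields of the theory live on the grid's sites and links. The real theory involves SU(3) matrices on the links between sites; purely as an illustration of the discretization idea, here is a toy sketch using a simple scalar field instead, computing a nearest-neighbor action on a small periodic lattice (the lattice size and field values are made up for the example):

```python
import numpy as np

def scalar_lattice_action(phi, mass=1.0):
    """Toy nearest-neighbor action for a scalar field on a periodic lattice.

    This illustrates only the discretization idea behind lattice field
    theory; real lattice QCD uses SU(3) matrices on the links between sites.
    """
    action = 0.5 * mass**2 * np.sum(phi**2)       # mass term at every site
    for axis in range(phi.ndim):                  # kinetic term couples nearest neighbors
        diff = np.roll(phi, -1, axis=axis) - phi  # periodic boundary conditions
        action += 0.5 * np.sum(diff**2)
    return action

# A tiny 4-dimensional lattice (4 sites per direction) with random field values
rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 4, 4, 4))
print(scalar_lattice_action(phi))
```

Production lattice QCD codes evaluate terms like this over billions of lattice sites, which is why they need leadership-class supercomputers.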
"To tackle this project, we're bringing together a strong team of physicists, computer scientists and mathematicians to optimize our software for the new architectures that will come online with the next-generation supercomputers," he said.
The research effort will enable the theorists to prepare for the big design changes arriving in the next generation of supercomputers, which will be very different from current machines. For the last 15 years, supercomputers have followed a steady trajectory of progress, with each new generation featuring increasingly complex processors linked by progressively faster connections. More recently, these systems have also added graphics processing units as accelerators to provide even more computational power. On such machines, scientists effectively lost computing time if they didn't adequately balance their calculations across the different processors, or if the connections between processors proved too slow.
But the next-generation supercomputers – planned and already being assembled – represent a seismic shift in the underlying architecture. The new machines will feature even more complex processors and accelerators, each with their own memory systems composed of many layers. Running the current software on these new machines could cost the researchers time as computation is stalled while the processors delve into the layers of memory for the right piece of data.
"These deep memory hierarchies are forcing us to rethink our algorithms," said Edwards. "Soon, our algorithms will need to be more aware of the time it takes to retrieve the data needed to continue a calculation from memory and to store data generated by a calculation. The software that we need to run our projects must take into account this shift in architecture."
The researchers aim to develop new software that can fully exploit the capabilities of future systems: optimizing the calculations performed by each processor, balancing the computational load across multiple processors, and accounting for the time each processor takes to store and retrieve data.
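The data-locality concern Edwards describes can be illustrated with a classic technique, cache blocking: restructuring a computation into tiles so each chunk of data is reused while it still sits in fast memory, rather than being re-fetched from slower layers of the hierarchy. The following is a minimal sketch only (real lattice QCD codes apply such ideas in C/C++ and GPU kernels, and the block size here is an arbitrary example), showing a blocked matrix multiply that gives the same answer as the naive product while touching memory tile by tile:

```python
import numpy as np

def blocked_matmul(A, B, block=32):
    """Multiply A @ B in block-sized tiles.

    Each tile of A and B contributes to a whole sub-block of the result,
    so it can stay in fast (cache-like) memory instead of being re-fetched
    from slower memory layers on every access.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for l0 in range(0, k, block):
                # accumulate one tile's contribution to the result block
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, l0:l0 + block]
                    @ B[l0:l0 + block, j0:j0 + block]
                )
    return C

rng = np.random.default_rng(1)
A, B = rng.normal(size=(100, 80)), rng.normal(size=(80, 60))
print(np.allclose(blocked_matmul(A, B), A @ B))  # same result, tiled data access
```

The choice of block size is exactly the kind of architecture-dependent tuning the project's software must handle automatically as memory hierarchies deepen.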
The award is one of three provided by the Scientific Discovery through Advanced Computing program, also known as SciDAC-4, and it represents a $2.4 million award to Jefferson Lab over the next five years.
According to the announcement, "Each of these SciDAC projects is a collaboration between scientists and computational experts at multiple national laboratories and universities, who combine their talents in science and computing to address a selected set of high-priority problems at the leading edge of research in nuclear physics, using the very powerful Leadership Class High Performance Computing (HPC) facilities available now and anticipated in the near future."
The project involves researchers at a dozen institutions, including DOE's Brookhaven National Lab, George Washington University, DOE's Jefferson Lab, DOE's Los Alamos National Lab, Intel, MIT, Michigan State University, NVIDIA, DOE's Pacific Northwest National Lab, University of North Carolina, University of Washington, and William & Mary.
Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy's Office of Science.
Jefferson Lab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.