At the fourth annual GPU Hackathon at Princeton held in June, experts from NVIDIA, Princeton, Oak Ridge National Laboratory, Boston College and the Institute of Electrical and Electronics Engineers worked with scientists to help them speed up their code.
One of the most powerful tools in scientific computing comes from one of the least expected places. Originally built to render graphics for video games, the graphics processing unit, or GPU, has emerged in recent years as a key hardware accelerator for a wide range of scientific research projects, from modeling supernovae, to finding patterns in texts studied by social scientists, to filtering the vast amounts of data collected by the Large Hadron Collider.
Since 2019, Princeton has helped promote the use of GPUs to accelerate academic research by hosting GPU Hackathons. Run in collaboration with the OpenACC organization, these multi-day events are open to all researchers and developers and help teams of scientists optimize their research code for GPUs by coupling them with experienced programming experts. At Princeton's fourth annual GPU Hackathon held in June, eight teams of researchers, from institutions including Arizona State University, Prairie View A&M University, the University of Cincinnati, and Johns Hopkins University, spent four days working with pairs of mentors learning how to make their software run much more quickly.
"They all saw performance gains and increased their knowledge of GPU computing," says Jonathan Halverson, the Research Software and Computing Training Lead with the Princeton Institute for Computational Science & Engineering (PICSciE) who helped organize this year's event.
"We'd like to see more applicants," he says. "It's an incredibly valuable resource. It's free, and you basically get two experts who are willing to devote themselves to improving your code."
Synergizing to see what's possible
Since the 1970s, video arcade games and home consoles have relied on specialized hardware to handle the computationally expensive task of rendering graphics in real time. This hardware often existed as a separate component from the machine's central processing unit, or CPU. As 3D games became popular in the 1990s, demand rose for more realistic lighting, shadows, waves, explosions, and other effects, and the technology began to develop rapidly. NVIDIA first popularized the term "graphics processing unit" in 1999, with the launch of its GeForce 256, which marked a major leap in graphics performance for PCs.
The big advantage that GPUs have over CPUs is their ability to perform parallel processing. Most modern home computers have CPUs with at least four processor cores, and higher-end desktops can have 20 or more. Under normal circumstances, each core can perform one task at a time.
Modern GPUs, by comparison, have thousands of cores. These processors are far less complex than those found in CPUs, but their sheer number allows GPUs to swiftly render graphics by running thousands of calculations at once.
Michael Sokoloff, an experimental particle physicist at the University of Cincinnati, was one of the five members of Team GooFit, which sought to optimize a tool for measuring the scattering of subatomic particles. With mentoring from an NVIDIA engineer with an extensive background in quantum field theory, the team overcame a long-standing problem with memory allocation and got their code to run twice as fast.
"The GPU hackathon was extraordinarily valuable for all of us on Team GooFit," says Sokloff. "We’ve already already recommended the hackathon for another GPU project we are working on."
In 2007, amid growing recognition of the usefulness of GPUs for a broad variety of applications, NVIDIA released its Compute Unified Device Architecture, or CUDA, development environment, which enabled developers to use GPUs to run tasks that were traditionally assigned to CPUs. The era of the general-purpose GPU was off and running.
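To give a flavor of the programming model CUDA introduced, here is a minimal, illustrative sketch of a GPU kernel that adds two arrays. The names and sizes are hypothetical examples, not drawn from any project mentioned in this article; each GPU thread handles a single element, which is how the thousands of cores described above get put to work at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of the result.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements (illustrative size)
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int blocks = (n + 255) / 256;
    add<<<blocks, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a CPU, the same work would be a sequential loop over a million elements; here the loop body becomes a kernel launched across many thousands of lightweight threads, which is the shift in thinking that hackathon mentors help teams make.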
Since then, GPUs have enabled scientists to perform tasks that would have previously required applying for an allocation on a supercomputer.
"It is mind-blowing how much can be done with GPUs," says Julia Levites, the strategic lead of the Open Hackathons team. “It's really inspiring to see science happen that was not possible before and it is very satisfying that we help scientists do it so much faster."
Izumi Barker, a program manager for Open Hackathons, notes that the history of science is replete with examples of technologies being repurposed for scientific discovery.
"That's part of the genius," Barker says. "While it may be surprising that GPUs can be applied to so many scientific use cases, it’s not surprising that scientists and developers have figured out how to leverage it. It's the perfect example of science, research, and technology synergizing to see what's possible."
Speaking with the voice of the community
Held remotely via Zoom, Slack, and email, Princeton's most recent GPU Hackathon was sponsored by OpenACC, NVIDIA, the Department of Energy's Oak Ridge Leadership Computing Facility, and PICSciE.
The participants hailed from a variety of disciplines, and their projects modeled phenomena ranging from the massive, like supernovae and black holes, to the minuscule, like subatomic particles interacting in a supercollider. One project developed a combustion simulation; another, a face-recognition system; a third, an artificial neural network for reading enormous amounts of texts related to the social sciences.
Halverson notes that GPUs are becoming more accessible to beginners thanks to contributions from a growing community of developers. But he adds that coding for GPUs soon gets tricky for those who dig deeper.
"It's easy to begin using GPUs because a lot of the code is already written," he says. "However, when you're writing lower-level code, it requires a lot of expertise."
These barriers to entry are steadily dropping, however, as the GPU software ecosystem continues to expand. The OpenACC organization, which emphasizes that its hackathons are free of charge and open to all, hopes to see this trend continue, and it actively partners with leading academic institutions, supercomputing centers, and technology leaders like NVIDIA to continue providing value to the research community.
Michael E. Mueller, Professor of Mechanical and Aerospace Engineering and Director of PICSciE’s Graduate Certificate in Computational Science and Engineering, led a team of researchers to substantially speed up a key kernel in their combustion codes that computes the chemical-reaction rates.
Mueller emphasized the Hackathon's ongoing impacts. "While we focused on one complicated and compute-intensive kernel during the Hackathon, the learning that my research team members gained will allow us to replicate this success in other parts of our codes," he says.
"In addition, the reduced barriers to GPU programming allowed participation across the spectrum of my research team, with the Hackathon group including a beginning graduate student, a senior graduate student, and postdocs. Each member was able to make meaningful contributions and learn something new."
"We are part of the community," says Levites. "We are enabling the community and want to continue to be able to speak with the voice of the community that we serve."
Princeton will hold its fifth annual GPU hackathon in June of 2023. For more information on Open Hackathons, visit www.openhackathons.org.