Overlap in Computer Modeling Holds Key to Next-Generation Processing
June 16, 2017 | Virginia Tech | Estimated reading time: 3 minutes

Exascale computing, the ability to perform a billion billion calculations per second, is what researchers are striving to push processors to achieve in the next decade. That’s 1,000 times faster than the first petascale computer, which came online in 2008.
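For scale, exascale corresponds to roughly $10^{18}$ operations per second versus roughly $10^{15}$ for petascale, which is where the factor of 1,000 comes from:
$$10^{18} \div 10^{15} = 10^{3} = 1{,}000$$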
Efficiency will be paramount in building high-performance parallel computing systems if applications are to run at enormous scale and under limited power.
A team of researchers in the Department of Computer Science in Virginia Tech’s College of Engineering discovered a key to keeping supercomputing on the road to the ever-faster processing times needed to achieve exascale computing, which policymakers say is necessary to keep the United States competitive in everything from cybersecurity to e-commerce.
“Parallel computing is everywhere when you think about it,” said Bo Li, a computer science Ph.D. candidate and first author of the paper on the team’s research being presented this month. “From making Hollywood movies to managing cybersecurity threats to contributing to milestones in life science research, making strides in processing times is a priority to get to the next generation of supercomputing.”
Li will present the team’s research on June 29 at the Association for Computing Machinery’s 26th International Symposium on High Performance Parallel and Distributed Computing in Washington, D.C. The research was funded by the National Science Foundation.
The team used a model called Compute-Overlap-Stall (COS) to better isolate the contributions to total time-to-completion for important parallel applications. Using the COS model, they found that a nebulous quantity called overlap played a key role in the performance of parallel systems. Previous models lumped overlap time into either compute time or memory stall time, but the Virginia Tech team found that when system and application variables changed, the effects of overlap time were distinct and could dominate performance. This led to the realization that, given its dominance and complexity, overlap must be modeled independently on current and future systems, or efficiency will remain elusive.
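As a rough sketch (the notation is ours, not necessarily the paper’s), a conventional model splits total runtime into compute time and memory stall time, while the COS model pulls the overlapped portion out as its own term, with the compute and stall terms then covering only their non-overlapped parts:
$$T_{\text{total}} \approx T_{\text{compute}} + T_{\text{stall}} \quad \text{(conventional)}$$
$$T_{\text{total}} \approx T_{\text{compute}} + T_{\text{overlap}} + T_{\text{stall}} \quad \text{(COS)}$$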
“What we learned is that overlap time is not an insignificant player in computer run times and the time it takes to perform tasks,” said Kirk Cameron, professor of computer science and lead on the project. “Researchers have spent three decades increasing overlap and we have shown that in order to improve the efficiency of future designs, we must consider their precise impact on overlap in isolation.”
The Virginia Tech researchers applied COS modeling to both Intel and IBM architectures and found error rates as low as 7 percent on Intel systems and as high as 17 percent on IBM architectures. The team validated the model against 19 different application benchmarks, drawn from codes including LULESH, AMGmk, Rodinia, and pF3D.
“This study is important to all kinds of industries that care about efficiency,” said Li. “Any entity that relies on supercomputing, including cybersecurity organizations, large online retailers such as Amazon, and video distribution services like Netflix, would be affected by the changes in processing time we found in measuring overlap.”
One of the challenges in the study was “throttling” three elements: central processing unit speed, memory speed, and concurrency, or the number of threads running at once. Throttling refers to deliberately slowing a resource, for example by forcing the processor to sit idle for several cycles. This is the first paper to evaluate the simultaneous, combined effects of throttling all three.
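As an illustration only, the snippet below sketches the concurrency-throttling dimension of such an experiment: it sweeps the thread count of an OpenMP benchmark (the binary path is hypothetical) and records wall-clock time. CPU-speed and memory-speed throttling, the other two knobs in the study, are platform-specific and are only noted in comments.

import os
import subprocess
import time

# Hypothetical benchmark binary; in a real experiment this could be a build
# of one of the codes named above, such as LULESH.
BENCHMARK = "./lulesh2.0"

def run_once(threads):
    """Run the benchmark once with the given OpenMP thread count; return seconds."""
    env = dict(os.environ, OMP_NUM_THREADS=str(threads))
    start = time.perf_counter()
    subprocess.run([BENCHMARK], env=env, check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

# Concurrency throttling: sweep the number of threads.
# CPU-speed and memory-speed throttling would be applied here as well,
# e.g., through the platform's DVFS interface, before each run.
for threads in (1, 2, 4, 8, 16):
    print(f"{threads:2d} threads: {run_once(threads):.2f} s")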
Parallel computing in the exascale realm has the potential to open new frontiers across a nearly boundless range of scientific research. Understanding overlap, and how to make computers run as efficiently as possible, will be key to achieving the computing power required to run massive numbers of calculations in the not-too-distant future.
Original by: Amy Loeffler