Intel, Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster
February 20, 2024 | Intel Corporation

A collaboration among Intel, Dell Technologies, Nvidia and the Ohio Supercomputer Center (OSC) has introduced Cardinal, a cutting-edge high-performance computing (HPC) cluster. It is purpose-built to meet the increasing demand for HPC resources in Ohio across research, education and industry innovation, particularly in artificial intelligence (AI).
AI and machine learning are integral tools in scientific, engineering and biomedical fields for solving complex research inquiries. As these technologies continue to demonstrate efficacy, academic domains such as agricultural sciences, architecture and social studies are embracing their potential.
Cardinal is equipped with hardware capable of meeting the demands of expanding AI workloads. In both capability and capacity, the new cluster will be a substantial upgrade from the system it replaces, the Owens Cluster, launched in 2016.
The Cardinal Cluster is a heterogeneous system featuring Dell PowerEdge servers and the Intel® Xeon® CPU Max Series with high bandwidth memory (HBM) as the foundation to efficiently manage memory-bound HPC and AI workloads while fostering programmability, portability and ecosystem adoption. The system will have:
- 756 Xeon CPU Max 9470 processors, providing 39,312 CPU cores in total.
- 128 gigabytes (GB) of HBM2e and 512 GB of DDR5 memory per node.
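The headline core count is easy to sanity-check. A minimal sketch, assuming the 52 cores per socket that Intel's public specifications list for the Xeon CPU Max 9470 (a figure not stated in this announcement):

```python
# Sanity check on Cardinal's total CPU core count.
# Assumption: 52 cores per Xeon CPU Max 9470 socket (Intel's public spec).
sockets = 756
cores_per_socket = 52

total_cores = sockets * cores_per_socket
print(total_cores)  # 39312, matching the announced 39,312 total CPU cores
```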
With a single software stack and traditional programming models on an x86 foundation, the cluster will more than double OSC's capabilities while addressing broadening use cases and allowing for easy adoption and deployment.
The system is also equipped with:
- Thirty-two nodes, each with 104 cores, 1 terabyte (TB) of memory and four Nvidia Hopper architecture-based H100 Tensor Core GPUs with 94 GB of HBM2e memory, interconnected by four NVLink connections.
- Nvidia Quantum-2 InfiniBand, providing 400 gigabits per second (Gbps) of low-latency networking, enabling the system to deliver 500 petaflops of peak AI performance (FP8 Tensor Core, with sparsity) for large AI-driven scientific applications.
- Sixteen nodes, each with 104 cores, 128 GB of HBM2e and 2 TB of DDR5 memory, for large symmetric multiprocessing (SMP)-style jobs.
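The 500-petaflop figure is consistent with the GPU count above. A back-of-the-envelope sketch, assuming Nvidia's published peak of roughly 3,958 teraflops FP8 (with sparsity) per H100 SXM GPU (a figure taken from Nvidia's datasheet, not from this announcement):

```python
# Rough peak AI throughput of Cardinal's H100 partition.
# Assumption: ~3958 TFLOPS FP8 with sparsity per H100 SXM (Nvidia datasheet).
gpu_nodes = 32
gpus_per_node = 4
tflops_per_gpu = 3958  # assumed per-GPU FP8 sparse peak

total_gpus = gpu_nodes * gpus_per_node                 # 128 GPUs
peak_petaflops = total_gpus * tflops_per_gpu / 1000    # TFLOPS -> PFLOPS
print(round(peak_petaflops))  # 507, in line with the "500 petaflops" claim
```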
“The Intel Xeon CPU Max Series is an optimal choice for developing and implementing HPC and AI workloads, leveraging the most widely adopted AI frameworks and libraries,” said Ogi Brkic, vice president and general manager of Data Center AI Solutions product line at Intel. “The inherent heterogeneity of this system will empower OSC’s engineers, researchers and scientists, enabling them to fully exploit the doubled memory bandwidth performance it offers. We take pride in supporting OSC and our ecosystem with solutions that significantly expedite the analysis of existing and future data for their targeted focus areas.”