Intel, Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster
February 20, 2024 | Intel Corporation

A collaboration between Intel, Dell Technologies, Nvidia and the Ohio Supercomputer Center (OSC) has introduced Cardinal, a cutting-edge high-performance computing (HPC) cluster. Cardinal is purpose-built to meet Ohio's growing demand for HPC resources across research, education and industry innovation, particularly in artificial intelligence (AI).
AI and machine learning are integral tools in scientific, engineering and biomedical fields for answering complex research questions. As these technologies continue to demonstrate their efficacy, academic domains such as agricultural sciences, architecture and social studies are embracing their potential.
Cardinal is equipped with hardware capable of meeting the demands of expanding AI workloads. In both capability and capacity, the new cluster will be a substantial upgrade over the system it replaces, the Owens Cluster, launched in 2016.
The Cardinal Cluster is a heterogeneous system featuring Dell PowerEdge servers and the Intel® Xeon® CPU Max Series with high bandwidth memory (HBM) as the foundation to efficiently manage memory-bound HPC and AI workloads while fostering programmability, portability and ecosystem adoption. The system will have:
- 756 Max Series CPU 9470 processors, which will provide 39,312 total CPU cores.
- 128 gigabytes (GB) HBM2e and 512 GB of DDR5 memory per node.
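The per-socket and cluster-wide figures above are mutually consistent. A minimal sanity-check sketch, assuming the Xeon CPU Max 9470 is a 52-core part (Intel's published specification; the per-socket core count is not stated in this release):

```python
# Sanity check on the Cardinal CPU partition figures quoted above.
# Assumption: 52 cores per Xeon CPU Max 9470 socket (Intel's published
# spec, not stated in this release).
CORES_PER_9470 = 52
NUM_PROCESSORS = 756

total_cores = NUM_PROCESSORS * CORES_PER_9470
print(total_cores)  # 39312, matching the 39,312 total cores quoted
```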
With a single software stack and traditional programming models on the x86 base, the cluster will more than double OSC’s capabilities while addressing broadening use cases and allowing for easy adoption and deployment.
The system is also equipped with:
- Thirty-two nodes that will have 104 cores, 1 terabyte (TB) of memory and four Nvidia Hopper architecture-based H100 Tensor Core GPUs with 94 GB HBM2e memory interconnected by four NVLink connections.
- Nvidia Quantum-2 InfiniBand, which provides 400 gigabits per second (Gbps) of networking performance with low latency to deliver 500 petaflops of peak AI performance (FP8 Tensor Core, with sparsity) for large AI-driven scientific applications.
- Sixteen nodes that will have 104 cores, 128 GB HBM2e and 2 TB DDR5 memory for large symmetric multiprocessing (SMP) style jobs.
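The 500-petaflop peak AI figure is consistent with the GPU count: 32 nodes with four H100s each gives 128 GPUs. A back-of-envelope check, assuming roughly 3.96 petaflops of FP8 Tensor Core peak (with sparsity) per H100, a value taken from Nvidia's public H100 datasheet rather than from this release:

```python
# Back-of-envelope check on the ~500 PFLOPS peak AI figure quoted above.
# Assumption: ~3.96 PFLOPS FP8 Tensor Core peak (with sparsity) per H100,
# an approximate datasheet value not stated in this release.
NODES = 32
GPUS_PER_NODE = 4
FP8_SPARSE_PFLOPS_PER_GPU = 3.96

total_gpus = NODES * GPUS_PER_NODE                      # 128 GPUs
peak_pflops = total_gpus * FP8_SPARSE_PFLOPS_PER_GPU    # ~507 PFLOPS
print(total_gpus, round(peak_pflops))
```

The rounded result (~507 petaflops) lines up with the "500 petaflops of peak AI performance" the release quotes for the GPU partition.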
“The Intel Xeon CPU Max Series is an optimal choice for developing and implementing HPC and AI workloads, leveraging the most widely adopted AI frameworks and libraries,” said Ogi Brkic, vice president and general manager of Data Center AI Solutions product line at Intel. “The inherent heterogeneity of this system will empower OSC’s engineers, researchers and scientists, enabling them to fully exploit the doubled memory bandwidth performance it offers. We take pride in supporting OSC and our ecosystem with solutions that significantly expedite the analysis of existing and future data for their targeted focus areas.”