Compal Showcases Comprehensive Data Center Solutions at 2025 OCP Global Summit
October 16, 2025 | Compal Electronics Inc.
Estimated reading time: 2 minutes
Global data centers are facing new challenges driven by AI – greater compute demand, larger working sets, and more stringent energy efficiency requirements. At this year’s OCP Global Summit, Compal Electronics presented a comprehensive vision for the data center of the future, delivering end-to-end solutions that cover compute, memory, and cooling.
On the compute side, Compal showcased its latest AI server, the SGX30-2 / 10U, based on the NVIDIA HGX B300 platform. Built on the NVIDIA Blackwell architecture, the system supports eight NVIDIA Blackwell Ultra GPUs connected with fifth-generation NVIDIA NVLink, features dual Intel® Xeon® 6 processors, and is purpose-built for large-scale AI model training, inference, and HPC (high-performance computing) workloads.
NVIDIA Blackwell Ultra GPUs, built on the NVIDIA Blackwell architecture, provide up to 2.1 TB of HBM3e memory and 1.8 TB/s of GPU-to-GPU NVLink bandwidth, for a total interconnect bandwidth of 14.4 TB/s (eight GPUs × 1.8 TB/s), ensuring low-latency, high-speed access to massive working sets. The platform delivers 144 PFLOPS of FP4 inference performance and approximately 72 PFLOPS of FP8 performance, a 7x increase in compute performance over the previous NVIDIA Hopper generation. This design enables enterprises to perform high-throughput training and efficient inference on a single platform, significantly shortening AI model development and deployment cycles.
The showcase also featured CXL (Compute Express Link) and RDMA (Remote Direct Memory Access) technologies for AI memory expansion, addressing the growing memory bottlenecks in workloads such as large language model training and HPC. Through the CXL.mem protocol, servers can transparently access pooled, SCM (Storage-Class Memory)-based memory expanders within a rack, enabling CPUs and GPUs to process datasets larger than HBM (High Bandwidth Memory) capacity with cache-coherent efficiency.
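On Linux systems, a CXL.mem expander typically appears as a CPU-less NUMA node, so pooled memory can be reached with ordinary NUMA placement calls. The following is a minimal sketch of that placement step using libnuma; the node number is an assumption for illustration (check numactl --hardware on the target system), and this is a generic illustration of CXL memory tiering, not Compal's implementation.

```c
/* Sketch: placing a working buffer on a CXL.mem expander that Linux
 * exposes as a CPU-less NUMA node. Build with: gcc cxl_alloc.c -lnuma
 * The node number (1) is a hypothetical choice for illustration. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 1                     /* hypothetical CXL memory node */

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t len = 1UL << 30;            /* 1 GiB working set */
    /* Bind the allocation to the CXL-backed node. */
    void *buf = numa_alloc_onnode(len, CXL_NODE);
    if (!buf) {
        fprintf(stderr, "allocation on node %d failed\n", CXL_NODE);
        return 1;
    }

    memset(buf, 0, len);               /* touch pages to fault them in */
    printf("1 GiB placed on NUMA node %d (CXL tier)\n", CXL_NODE);

    numa_free(buf, len);
    return 0;
}
```

Because CXL.mem is cache-coherent, the buffer then behaves like ordinary (if slower) DRAM; no special load/store path is needed.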
Within this evolving memory architecture, bridging GPU memory and storage provides a direct path over the PCIe interface. Intelligent DMA (Direct Memory Access) offload and a low-latency data-path design transform conventional NVMe (Non-Volatile Memory Express) storage into a memory-like extension, realizing a Storage-as-Memory concept that enables high-speed data access without increasing CPU overhead.
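The flavor of that data path can be approximated today with a plain direct I/O read, where the NVMe controller DMAs a block straight into an aligned user buffer and the page cache, along with its CPU copy, is bypassed. A hedged sketch under that assumption, with a hypothetical device path:

```c
/* Sketch: treating NVMe storage as a memory-like extension by letting the
 * drive DMA directly into an aligned user buffer. O_DIRECT bypasses the
 * page cache, so the CPU does not copy the data. Device path is hypothetical. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const size_t block = 4096;         /* must match device block alignment */
    void *buf;
    if (posix_memalign(&buf, block, block) != 0) {
        perror("posix_memalign");
        return 1;
    }

    /* O_DIRECT requires aligned buffer, offset, and length. */
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* The NVMe controller DMAs the block into buf; the CPU only waits. */
    ssize_t n = pread(fd, buf, block, 0);
    if (n < 0)
        perror("pread");
    else
        printf("DMA'd %zd bytes from NVMe into user memory\n", n);

    close(fd);
    free(buf);
    return 0;
}
```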
Extending beyond the rack, RDMA allows direct data movement across servers and data centers via InfiniBand or RoCE (RDMA over Converged Ethernet) networks, supporting large-scale memory disaggregation and resource pooling. Together, these technologies redefine how data flows across AI clusters, creating a unified, reconfigurable, and energy-efficient infrastructure that evolves from today’s PCIe-based systems toward next-generation GPU direct storage and CXL-enabled architectures.
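The building block beneath both InfiniBand and RoCE is memory registration: a buffer is pinned and handed to the NIC, and the resulting rkey is what a remote peer presents in an RDMA read or write. A minimal libibverbs sketch of that step (queue-pair setup and the out-of-band rkey exchange, which any real deployment needs, are omitted, and error handling is trimmed):

```c
/* Sketch: registering a buffer for remote access with libibverbs, the
 * user-space API beneath InfiniBand and RoCE.
 * Build with: gcc rdma_reg.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    struct ibv_device **list = ibv_get_device_list(NULL);
    if (!list || !list[0]) {
        fprintf(stderr, "no RDMA-capable device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    size_t len = 1UL << 20;            /* 1 MiB region to expose */
    void *buf = malloc(len);

    /* Pin the buffer and grant the NIC permission to read/write it
     * directly, with no CPU involvement on this host. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "ibv_reg_mr failed\n");
        return 1;
    }

    /* A peer uses this rkey (exchanged out of band) to target the region. */
    printf("region registered: addr=%p rkey=0x%x\n", buf, (unsigned)mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    free(buf);
    return 0;
}
```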
“With this comprehensive showcase, Compal demonstrates a clear, practical path from today’s infrastructure to the data center of the future, and underscores its role as a true system integrator — not just a technology provider, but a long-term strategic partner for enterprises in their digital transformation journey,” said Alan Chang, Vice President of the Infrastructure Solutions Business Group at Compal.
Suggested Items
Unique Active Memory Computer Purpose-built for AI Science Applications
08/26/2025 | PNNL
With the particular needs of scientists and engineers in mind, researchers at the Department of Energy’s Pacific Northwest National Laboratory have co-designed with Micron a new hardware-software architecture purpose-built for science.
Cadence Introduces Industry-First LPDDR6/5X 14.4Gbps Memory IP to Power Next-Generation AI Infrastructure
07/10/2025 | Cadence Design Systems
Cadence announced the tapeout of the industry’s first LPDDR6/5X memory IP system solution optimized to operate at 14.4Gbps, up to 50% faster than the previous generation of LPDDR DRAM.
NVIDIA RTX PRO 6000 Shipments Expected to Rise Amid Market Uncertainties
06/24/2025 | TrendForce
The NVIDIA RTX PRO 6000 has recently generated significant buzz in the market, with expectations running high for strong shipment performance driven by solid demand.
President Trump Secures $200B Investment from Micron Technology for Memory Chip Manufacturing in the United States
06/16/2025 | U.S. Department of Commerce
The Department of Commerce announced that Micron Technology, Inc., the leading American semiconductor memory company, plans to invest $200 billion in semiconductor manufacturing and R&D to dramatically expand American memory chip production.
Micron Ships HBM4 to Key Customers to Power Next-Gen AI Platforms
06/11/2025 | Micron
The importance of high-performance memory has never been greater, fueled by its crucial role in supporting the growing demands of AI training and inference workloads in data centers. Micron Technology, Inc., announced the shipment of HBM4 36GB 12-high samples to multiple key customers.