
Suggested Items

HBM4 Validation Expected in 2Q26; Three Major Suppliers Poised to Shape NVIDIA Supply Landscape

02/13/2026 | TrendForce
TrendForce’s latest analysis of the HBM industry reveals that as the expansion of AI infrastructure continues to fuel GPU demand, NVIDIA’s upcoming Rubin platform is expected to become a major catalyst for HBM4 adoption once mass production begins.

HBM4 Mass Production Delayed to End of 1Q26 by Spec Upgrades and NVIDIA Strategy Adjustments

01/08/2026 | TrendForce
TrendForce’s recent investigations indicate that NVIDIA revised the HBM4 specifications for its Rubin platform in 3Q25, raising the required per-pin speed to above 11 Gbps.

NVIDIA Seeks to Raise HBM4 Specs in Response to AMD Competition; SK hynix Expected to Remain Largest Supplier in 2026

09/18/2025 | TrendForce
TrendForce reports that NVIDIA has recently pressed key component suppliers of its Vera Rubin server racks to upgrade product specifications, specifically requesting that HBM4 speed per pin be raised to 10 Gbps, as AMD prepares to launch its MI450 Helios platform in 2026.

Micron Ships HBM4 to Key Customers to Power Next-Gen AI Platforms

06/11/2025 | Micron
The importance of high-performance memory has never been greater, fueled by its crucial role in supporting the growing demands of AI training and inference workloads in data centers. Micron Technology, Inc., announced the shipment of HBM4 36GB 12-high samples to multiple key customers.

Cadence Enables Next-Gen AI and HPC Systems with Industry’s Fastest HBM4 12.8Gbps IP Memory System Solution

04/21/2025 | Cadence Design Systems
Cadence announced the industry’s fastest HBM4 12.8Gbps memory IP solution, which meets the increasingly higher memory bandwidth needs of SoCs targeted for the next generation of AI training and HPC hardware systems.
Copyright © I-Connect007 | IPC Publishing Group Inc. All rights reserved.