NVIDIA’s Data Center Business Fuels Explosive Growth in FY2Q25 Revenue; H200 Set to Dominate AI Server Market from 2H24
September 4, 2024 | TrendForce | Estimated reading time: 2 minutes
TrendForce’s latest findings show that NVIDIA’s data center business is driving a remarkable surge in the company’s revenue, which more than doubled in the second quarter of its 2025 fiscal year to reach an impressive US$30 billion. This growth is largely fueled by skyrocketing demand for NVIDIA’s core Hopper GPU products. Supply chain surveys indicate that recent demand from CSP and OEM customers for the H200 GPU is on the rise, and this GPU is expected to become NVIDIA’s primary shipment driver from the third quarter of 2024 onward.
NVIDIA’s financial report reveals that its data center business revenue in FY2Q25 grew by 154% YoY, outpacing other segments and raising its contribution to total revenue to nearly 88%. TrendForce estimates that nearly 90% of NVIDIA’s GPU product line in 2024 will be based on the Hopper platform. This includes the H100, H200, a customized H20 version for Chinese customers, and the GH200 solution integrated with NVIDIA’s Grace CPU, which targets specific AI applications in the HPC market. Starting from Q3 this year, NVIDIA is expected to maintain a no-price-cut strategy for the H100. Once customers complete their existing orders, the H100 will naturally phase out, and the H200 will take over as the main product supplied to the market.
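The figures above imply the scale of the data center segment itself. As a back-of-envelope check (using the article's rounded figures, not NVIDIA's exact reported numbers), a short calculation recovers the implied segment revenue for FY2Q25 and the prior-year quarter:

```python
# Rough sanity check of the article's figures.
# All inputs are approximations quoted in the text, not exact reported numbers.

total_revenue_fy2q25 = 30.0   # US$ billions, "an impressive US$30 billion"
dc_share = 0.88               # data center's "nearly 88%" of total revenue
dc_yoy_growth = 1.54          # data center revenue "grew by 154% YoY"

dc_revenue_fy2q25 = total_revenue_fy2q25 * dc_share
dc_revenue_fy2q24 = dc_revenue_fy2q25 / (1 + dc_yoy_growth)

print(f"Implied FY2Q25 data center revenue: ~${dc_revenue_fy2q25:.1f}B")
print(f"Implied FY2Q24 data center revenue: ~${dc_revenue_fy2q24:.1f}B")
```

On these rounded inputs, the implied data center revenue comes out to roughly US$26.4 billion for FY2Q25 and about US$10.4 billion a year earlier, consistent with the growth rate the article cites.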
In China, demand for AI servers equipped with the H20 GPU has significantly increased since 2Q24, driven by cloud customers building LLMs, search engines, or chatbots locally.
Looking ahead, TrendForce predicts that the rising demand for H200-powered AI servers will help NVIDIA navigate potential supply challenges if the rollout of its new Blackwell platform experiences delays due to supply chain adjustments. This steady demand is expected to keep NVIDIA’s data center revenue strong in the latter half of 2024.
Additionally, the Blackwell platform is expected to ramp up production in 2025. With a die size twice that of the Hopper platform, Blackwell is set to boost demand for CoWoS packaging solutions once it becomes mainstream next year. TrendForce reports that CoWoS's main supplier, TSMC, has recently raised its monthly capacity planning to 70–80K units, nearly double the 2024 level, with NVIDIA poised to take up more than half of this capacity.
HBM3e suppliers seize opportunity with NVIDIA’s product rollout
NVIDIA’s product lineup this year is set to make waves, with the H200 being the first GPU to feature HBM3e 8-Hi memory stacks. The upcoming Blackwell chips are also slated to fully adopt HBM3e. Meanwhile, Micron and SK hynix completed their HBM3e qualification in 1Q24 and began mass shipments in Q2. Micron is primarily supplying HBM3e for the H200, while SK hynix is catering to both the H200 and B100 series. Although Samsung was a bit late to the game, it has recently completed its HBM3e qualification and begun shipping HBM3e 8-Hi for the H200, with qualification for the Blackwell series well underway.