I-Connect007 Magazine
Top Eight CSPs’ CapEx to Surpass $710B in 2026; Google Leads TPU Deployment
February 25, 2026 | TrendForce | Estimated reading time: 2 minutes
Global cloud service providers (CSPs) are accelerating investment in AI servers and infrastructure to support expanding AI deployment and upgrades, according to TrendForce’s latest findings on the AI server market. Combined capital expenditures by the world’s eight leading CSPs—Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu—are projected to exceed $710 billion in 2026, representing approximately 61% YoY growth.
In addition to continued procurement of NVIDIA and AMD GPU platforms, CSPs are increasingly investing in ASICs to better match hardware to their AI workloads and improve the cost efficiency of their data centers.
TrendForce estimates that Alphabet, Google’s parent company, will see its 2026 capital expenditure surpass $178.3 billion, up 95% YoY. Google began developing in-house ASICs earlier than its peers and has accumulated substantial advantages in research and development. Its TPU roadmap is expected to transition to the next-generation v8 platform this year.
Driven by demand from the Google Cloud Platform and Gemini AI applications, TPUs are projected to account for nearly 78% of AI servers shipped to Google in 2026. This is expected to further widen the gap with GPU-based systems. Google remains the only CSP whose AI server build-out features more ASIC-based servers than GPU-based ones.
Amazon has recently increased procurement of NVIDIA GB300 and V200 rack-scale systems, reflecting accelerated deployment of higher-power, higher-density GPU platforms to support expanding AI training and inference services. GPUs are expected to represent nearly 60% of AWS’s AI server build-out in 2026.
On the ASIC front, Amazon’s next-generation Trainium 3 is expected to ramp starting 2Q26, following the rollout of Trainium 2/2.5. However, shipment momentum may become more pronounced in the second half of the year as software maturity and system validation progress.
TrendForce projects Meta’s 2026 CapEx to exceed $124.5 billion, up 77% YoY. Meta’s AI servers will continue to rely primarily on NVIDIA and AMD GPUs, with GPU-based systems accounting for over 80% of its build-out.
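As a quick sanity check on the growth rates quoted above, the implied 2025 spending bases can be backed out from each 2026 figure and its YoY growth rate. A minimal sketch (the function name and figures dictionary are illustrative; the dollar amounts and percentages are those stated in this article):

```python
def implied_prior_year(current_billion: float, yoy_growth_pct: float) -> float:
    """Back out the prior-year figure from a current value and its YoY growth rate."""
    return current_billion / (1 + yoy_growth_pct / 100)

# Figures quoted in the article: (2026 CapEx in $B, YoY growth in %)
figures = {
    "Top 8 CSPs combined": (710.0, 61),
    "Alphabet": (178.3, 95),
    "Meta": (124.5, 77),
}

for name, (capex_2026, growth_pct) in figures.items():
    base_2025 = implied_prior_year(capex_2026, growth_pct)
    print(f"{name}: implied 2025 CapEx ~ ${base_2025:.1f}B")
```

Run together, the stated figures imply a combined 2025 base of roughly $441 billion for the top eight CSPs, with Alphabet and Meta contributing implied 2025 bases of about $91 billion and $70 billion, respectively.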
Although Meta aims to advance its in-house MTIA ASIC platform to lower unit compute costs and reduce supplier dependence, supply chain sources indicate that software-hardware tuning challenges may constrain actual shipment volumes relative to initial expectations.
Microsoft remains optimistic about long-term demand for large-scale model training and inference and continues to procure NVIDIA rack-scale systems to support AI server deployments. It has also introduced its in-house Maia 200 chip, targeting high-efficiency AI inference applications.
Meanwhile, Oracle is expanding GPU rack-scale deployments to support AI data center projects tied to initiatives such as Stargate and OpenAI.
On the Chinese side, while ByteDance has not publicly disclosed detailed 2026 CapEx plans, TrendForce estimates that over half of its investment will be allocated to procuring AI chips. NVIDIA H200 is expected to serve as a key solution for ByteDance’s AI servers, subject to U.S.-China regulatory developments. The company is also expanding the adoption of domestic AI chips, including solutions from Cambricon.
Tencent continues to procure NVIDIA GPUs to support cloud and generative AI services. At the same time, it is collaborating with local partners to develop in-house ASIC solutions spanning networking, data center infrastructure, and online AI applications, diversifying its compute sources and enhancing integration flexibility.
Alibaba and Baidu are both actively advancing proprietary ASIC development. Alibaba, through T-head and Alibaba Cloud, supports public cloud and AI infrastructure services while developing Qwen LLMs and application software for enterprise and consumer markets.
Baidu plans to roll out its next-generation Kunlun chips after 2026, targeting large-scale AI training and inference workloads. In parallel, it is advancing its Tianchi AI server cluster platform, capable of linking hundreds of AI chips to boost system-level computing power.