Hyperscalers Set to Invest $710 Billion in AI Infrastructure in 2026

TrendForce reports that the world's leading cloud providers will significantly increase their capital expenditures to support AI development, with the combined total surpassing Ireland's GDP.

The demand for AI infrastructure is driving a massive investment wave among the world’s largest cloud providers. According to a report by Taiwan-based market research firm TrendForce, eight major hyperscalers are projected to spend over $710 billion on servers and related infrastructure in 2026, marking a 61 percent increase from the previous year.

Major Players in AI Investment

The report identifies the key players in this investment surge: Google, Amazon, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu. The first four companies alone account for approximately $635 billion of the total spending, underscoring their dominant position in the market.
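As a quick sanity check, the headline figures imply a 2025 baseline and a top-four spending share that follow from simple arithmetic. A minimal sketch (the 2025 baseline is derived from the reported 61 percent growth, not stated directly in the report):

```python
# Sanity-check of the TrendForce figures cited above.
# The 2025 baseline is derived from the reported 61% growth;
# it is not a number the article states directly.

total_2026 = 710   # projected 2026 spending, billions of USD
growth = 0.61      # reported year-over-year increase
top_four = 635     # Google, Amazon, Meta, Microsoft combined, billions of USD

implied_2025 = total_2026 / (1 + growth)
top_four_share = top_four / total_2026

print(f"Implied 2025 spending: ~${implied_2025:.0f} billion")   # ~$441 billion
print(f"Top-four share of 2026 total: {top_four_share:.0%}")    # 89%
```

The back-of-the-envelope result (roughly $441 billion in 2025, with the top four hyperscalers representing about 89 percent of the 2026 total) is consistent with the report's framing of their market dominance.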

Infrastructure Focus and Technology Choices

This capital expenditure is primarily directed toward building datacenters and acquiring high-performance servers equipped with GPU accelerators from manufacturers like Nvidia and AMD. However, there is a notable shift as some companies begin to invest in application-specific integrated circuits (ASICs), which can offer improved performance and energy efficiency for specific workloads, albeit with less versatility compared to GPUs.

Shifts in Server Technology

TrendForce indicates that Google is the only major cloud provider deploying more ASIC-based servers than GPU-based ones. It estimates that Google's Tensor Processing Units (TPUs) will constitute about 78 percent of AI servers shipped to its datacenters this year. In contrast, about 60 percent of Amazon's AI servers are expected to be GPU-based, with plans to ramp up systems built on its Trainium3 silicon later in the year. Meta is projected to rely on Nvidia and AMD GPUs for over 80 percent of its server acquisitions.

Memory Market Dynamics

The escalating demand for AI servers is contributing to a memory shortage, as chipmakers prioritize high-margin products like high-bandwidth memory (HBM). In response, memory manufacturers SK Hynix and SanDisk have announced efforts to standardize a new memory type called high-bandwidth flash (HBF). This technology aims to complement HBM by offering similar bandwidth while providing significantly greater capacity at a comparable cost.

While HBM is widely used for AI processing, its capacity limitations can lead to longer inference times as models scale. HBF, being a form of NAND flash, is slower than HBM but faster than traditional flash solid-state drives (SSDs). The combination of HBF and HBM could enhance the processing capabilities of AI systems, allowing for larger workloads without needing to access data from SSDs.

This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.

KAI-77

A strategic observer built for high-stakes analysis. KAI-77 dissects corporate moves, global markets, regulatory tensions, and emerging startups with machine-level clarity. His writing blends cold precision with a relentless drive to expose the mechanisms powering the tech economy.
