Broadcom’s $10 Billion Mic-Drop: A Turning Point in AI Investing

Broadcom (AVGO) just gave Wall Street a glimpse of the future of AI – and it doesn’t belong to Nvidia (NVDA) alone.

In its latest earnings report, the company stunned investors with a $10-billion bombshell: a secret hyperscale customer is ditching off-the-shelf GPUs and ordering custom-built AI chips (XPUs) instead. 

That single disclosure marks the start of a tectonic shift in AI computing – away from Nvidia’s GPUs and toward a new class of purpose-built accelerators.

We think this is the moment the AI boom enters its next act.

Here’s why…

From GPUs to XPUs: Broadcom Signals a New AI Era

For the last two years, Nvidia has dominated headlines (and stock charts) with its GPUs – the workhorse chips that train and run large AI models. 

Thus far, they have been the fuel for the AI fire, accounting for 90% to 95% of the accelerator market by revenue. And Nvidia, leading the charge, has pulled in staggering amounts of revenue and profit as a result.

Since the AI Boom began in late 2022, the company’s full-year revenue has increased dramatically – from $26.97 billion in fiscal year (FY) 2023 to $130.5 billion in FY2025 (+384%), with its net income rising from $4.37 billion to $72.88 billion in that time (+1,568%)…
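For readers who want to check the math, here is a quick sanity check of those growth figures – a minimal sketch using only the reported dollar amounts quoted above:

```python
# Sanity check of the growth percentages cited above
# (figures in billions of USD, as quoted in the text).

def pct_growth(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100

revenue_fy23, revenue_fy25 = 26.97, 130.5        # Nvidia full-year revenue
net_income_fy23, net_income_fy25 = 4.37, 72.88   # Nvidia net income

print(f"Revenue growth:    +{pct_growth(revenue_fy23, revenue_fy25):.0f}%")        # ~+384%
print(f"Net income growth: +{pct_growth(net_income_fy23, net_income_fy25):.0f}%")  # ~+1,568%
```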

But make no mistake. That astronomical growth doesn’t mean that the future of AI compute rests solely on Nvidia’s shoulders. As AI models swell to trillions of parameters and tackle ever more specialized tasks, the blunt force of a general-purpose GPU won’t cut it. The demand now is for chips as unique as the workloads themselves – and that’s why XPUs are set to take center stage.

Unlike general-purpose GPUs, XPUs are custom chips tailored to the unique data and workload of an AI model. In this sense, the ‘X’ is a variable that represents the type of architecture best suited for any given application.

For example, a model designed to generate high-quality video, like Google’s Veo 3, would require a state-of-the-art Graphics Processing Unit (GPU). Devices like Apple’s (AAPL) iPhone – which handle Siri voice commands, facial recognition, and predictive text on-device – rely on Neural Processing Units (NPUs) to run those algorithms quickly and efficiently.

These custom-designed accelerators are built from the ground up to execute specific workloads – whether training, inference, or recommendation – as efficiently as possible. As such, they offer better performance per watt, lower cost per unit of compute, and tighter ecosystem lock-in.
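To make the “performance per watt” and “cost per compute” framing concrete, here is a toy comparison – the chip names and every number below are hypothetical placeholders, chosen only to show how the two metrics are calculated, not to describe any real product:

```python
# Toy illustration of the two efficiency metrics mentioned above.
# All names and figures are hypothetical placeholders, NOT real chip specs.

from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    tflops: float         # sustained throughput on the target workload
    watts: float          # power draw under that workload
    unit_cost_usd: float  # purchase price per chip

    @property
    def perf_per_watt(self) -> float:
        return self.tflops / self.watts

    @property
    def cost_per_tflop(self) -> float:
        return self.unit_cost_usd / self.tflops

general_gpu = Accelerator("general-purpose GPU (hypothetical)", tflops=1000, watts=700, unit_cost_usd=30000)
custom_xpu  = Accelerator("custom XPU (hypothetical)", tflops=800, watts=350, unit_cost_usd=12000)

for chip in (general_gpu, custom_xpu):
    print(f"{chip.name}: {chip.perf_per_watt:.2f} TFLOPS/W, ${chip.cost_per_tflop:.2f} per TFLOPS")
```

The point of the sketch is simply that a chip tuned to one workload can win on both ratios even if its raw throughput is lower than a general-purpose part’s.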

And what Broadcom revealed alongside its latest earnings is proof that the biggest players in tech are no longer dabbling in the shift toward XPUs. They’re now betting the farm on it.

From Alphabet’s (GOOGL) TPUs to Amazon’s (AMZN) Trainium and Microsoft’s (MSFT) Maia, the world’s largest platforms are betting billions that XPUs will define the next decade of computing. 

What was once an experiment is becoming a full-scale arms race.
