Over the past decade, a clear division of labor ruled the semiconductor industry. Britain’s Arm produced the basic designs. American companies, led by Apple and NVIDIA, designed the chips. Taiwan’s TSMC built the foundries that manufactured them.

AI is upending this division of labor, opening the door to new entrants and increased specialization. 

Arm is the poster child for this transformation. It plans to build and sell its own chips for AI data centers. This represents a historic shift away from the British company’s traditional, neutral model of licensing instruction sets, collecting upfront fees and then a small royalty per chip. Although AI players still license Arm’s intellectual property, the fastest growth now lies in tightly integrated, custom chips co‑designed with foundries.

Arm’s licensing-centric business emerged to power smartphones. Dozens of chipmakers — Texas Instruments, Infineon, Analog Devices, Samsung, Qualcomm, and others — competed fiercely on price and integration, so it made no sense for Arm to build its own chips and undercut a broad, growing customer base. It became the “Switzerland” of mobile phone chips.

AI chips are different. Chip demand has exploded, and system architectures for large‑scale AI workloads are still being worked out. A handful of players design their own CPUs for AI data centers. NVIDIA dominates, followed by AMD and Intel.

The three giant cloud computing providers are also moving to design their own chips: Amazon Web Services develops Graviton CPUs and Trainium/Inferentia accelerators, Google produces Axion CPUs and TPUs, and Microsoft makes its Cobalt CPU and Maia accelerators. Compared with buying off-the-shelf semiconductors, these in-house products optimize performance‑per‑watt and cost for each company’s own workloads.

Few see Arm’s own chip plans as a direct betrayal. Arm estimates that its own chips could eventually generate about $15 billion a year, roughly the size of its entire current business. No modest increase in license fees or royalties could deliver a boost of that scale. Arm remains the foundation, not the rival: AWS’s Graviton and Google’s Axion are based on Arm’s intellectual property.

An industry survey suggests that AWS, Google, and Microsoft chose Arm’s proprietary architecture over newer, open-source options such as RISC‑V because its ecosystem, tools, and software support are more mature and lower-risk. They reuse Arm’s core designs and toolchains, freeing engineering resources for complex tasks such as System-on-Chip (SoC) integration — putting all essential components on a single chip — as well as optimizing memory architectures and the connections to AI accelerators and the outside world. Developing their own chip cores remains too slow and too expensive.

Arm’s own chipmaking plans follow a similar route. Its AGI chip is co-developed with Meta, which will be the lead customer and plans to use the chips in its own AI data centers rather than sell them. This lets Arm avoid creating a new competitor. The partnership gives Arm direct visibility into hyperscale AI workloads and agentic orchestration layers, instead of seeing those requirements only through second‑hand feedback from licensees.


Nor does Arm plan to take on market leader NVIDIA. Its AVI chips will complement NVIDIA’s. They are designed for AI agent work — planning, routing requests, and coordinating different models — while NVIDIA GPUs remain dominant for raw matrix multiplication and training. Arm chips will orchestrate; NVIDIA’s will crunch the data.

By using TSMC to make the chips, Arm avoids becoming a vertically integrated manufacturer. By focusing on orchestration chips, Arm avoids competing against its own licensees and instead targets rivals built on Intel-designed technology. Arm’s smartphone licensing business should emerge unscathed.

Even so, the strategy confronts real risks. The biggest danger is execution: if Arm’s chips under‑deliver on performance, energy efficiency, or software support, customers may treat them as a side experiment rather than a core platform. And if AI spending slows, Arm could miss its optimistic sales forecasts.

But the AI revolution is upending the semiconductor industry. Classical IP licensing remains effective in smartphones and much of the server market. The AI frontier requires more than licensing, pushing major players toward co‑designed, custom silicon.

As Google, Amazon, and Microsoft commission Arm‑based chips from TSMC or Intel for their own clouds and Arm moves forward with its own chips, democratic allies gain a new, non‑Chinese option for AI data centers that complements NVIDIA. The “Free World” semiconductor ecosystem wins.

Christopher Cytera CEng MIET is a senior fellow with the Tech Policy Program at the Center for European Policy Analysis and a technology business executive with over 30 years of experience in semiconductors, electronics, communications, video, and imaging.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
