OpenAI and Broadcom Partner on 10 GW of Custom AI Chips
OpenAI has signed a multi-billion-dollar agreement with Broadcom to design and deploy custom “AI accelerator” chips and rack systems. Broadcom will manufacture the hardware and install it in OpenAI’s facilities and partner data centers, with deployments starting in the second half of 2026 and the rollout expected to finish by the end of 2029.
The deal covers roughly 10 gigawatts (GW) of compute capacity, extending a broader procurement push by OpenAI that already includes NVIDIA committing 10 GW alongside a reported $100 billion investment, AMD supplying about 6 GW, and Oracle providing 4.5 GW of data center capacity for OpenAI’s Stargate Project.
- Agreement size: ~10 GW of chips with Broadcom, described as worth “multiple billions”.
- Timeline: Deployments begin H2 2026, aim to complete by end of 2029.
- Context: OpenAI CEO Sam Altman has set an internal goal of building ~250 GW of compute over the next eight years, far beyond the ~2 GW OpenAI expects to have online by the end of this year.
OpenAI is designing the accelerators while Broadcom handles manufacturing and rack deployment; the two companies reportedly began working together around 18 months ago. These deals collectively illustrate the scale of infrastructure OpenAI is lining up to train and run next-generation models—and the enormous capital required to reach Altman’s 250 GW target (industry estimates place the acquisition cost in the trillions).
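To put the trillion-dollar estimates in context, here is a rough back-of-envelope sketch in Python using only the figures reported above. It assumes, purely for illustration, that the NVIDIA deal’s reported $100 billion for 10 GW (~$10 billion per GW) is representative of acquisition costs across vendors; actual per-GW costs are not public and will vary.

```python
# Back-of-envelope estimate of OpenAI's compute buildout, using only
# figures reported in the coverage above. The ~$10B/GW rate is an
# assumption extrapolated from the NVIDIA deal, not a disclosed price.

announced_gw = {
    "Broadcom": 10.0,   # this agreement
    "NVIDIA": 10.0,     # paired with a reported $100B investment
    "AMD": 6.0,
    "Oracle": 4.5,      # Stargate data center capacity
}

target_gw = 250.0               # Altman's stated goal over ~8 years
cost_per_gw = 100e9 / 10.0      # ~$10B/GW, implied by the NVIDIA deal

total_announced = sum(announced_gw.values())
print(f"Announced so far: {total_announced:.1f} GW")           # 30.5 GW
print(f"Gap to target: {target_gw - total_announced:.1f} GW")  # 219.5 GW
print(f"Implied cost of 250 GW: ${target_gw * cost_per_gw / 1e12:.1f}T")  # $2.5T
```

At that assumed rate, the announced deals total about 30.5 GW, and reaching 250 GW would imply roughly $2.5 trillion in hardware acquisition alone, consistent with the trillion-scale industry estimates mentioned above.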
For more details, see the original coverage on Engadget and reporting from Reuters.
Why it matters: securing tens of gigawatts of specialized AI hardware is a strategic move; companies that control large pools of optimized compute can train bigger models faster and at a lower cost per unit of compute. But scaling to the hundreds of gigawatts Altman envisions would require unprecedented financing and energy resources, raising questions about sustainability, market concentration, and who ultimately benefits from these capabilities.
Key questions:
- Can OpenAI and its partners finance and deploy hundreds of gigawatts without consolidating too much market power?
- What are the environmental and grid impacts of scaling AI compute to the levels proposed?
- How will competition between chipmakers (NVIDIA, AMD, Broadcom, and others) shape future AI hardware design?
Discussion: Do you think this rapid, large-scale compute buildout will accelerate breakthroughs in AI, or create risks around concentration and sustainability? Share your thoughts below.
