
Anthropic, Google, Broadcom Partner for Gigawatts of AI Compute


Anthropic Secures Gigawatts of Next-Gen AI Compute with Google and Broadcom

San Francisco, CA – April 7, 2026 – Anthropic, a leader in AI safety and research, today announced a landmark expansion of its partnership with tech giants Google and Broadcom. This new agreement commits Anthropic to multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity, slated to come online starting in 2027. This monumental investment underscores Anthropic’s aggressive strategy to scale its computational infrastructure, powering its frontier Claude models and addressing the surging global demand from its enterprise customer base.

The collaboration represents a pivotal moment for Anthropic, ensuring the necessary resources to maintain its rapid pace of innovation. "This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure," stated Krishna Rao, CFO of Anthropic. "We are making our most significant compute commitment to date to keep pace with our unprecedented growth, enabling Claude to define the frontier of AI development." This strategic move solidifies Anthropic’s position at the forefront of AI, providing the backbone for increasingly complex and capable models.

Unprecedented Growth Fuels Massive Infrastructure Investment

The decision to dramatically expand compute capacity directly reflects Anthropic's explosive commercial success and adoption. The demand for Claude models has accelerated at an astonishing rate throughout 2026. Anthropic’s run-rate revenue has now surpassed an impressive $30 billion, a more than threefold increase from approximately $9 billion at the end of 2025.

Adding to this remarkable growth, the number of business customers spending over $1 million on an annualized basis has doubled in less than two months, now exceeding 1,000. This rapid expansion in enterprise engagement highlights the critical role Claude plays in driving business value and digital transformation across various industries. To sustain this trajectory and continue serving its growing clientele effectively, such substantial investments in core infrastructure are not merely advantageous but absolutely essential.

Anthropic's Growth Milestones

| Metric | 2025 Year-End Estimate | Early 2026 (Current) | Growth Factor (Approx.) |
|---|---|---|---|
| Run-Rate Revenue | ~$9 Billion | >$30 Billion | ~3.3x |
| Customers Spending >$1M Annually | 500+ | >1,000 | >2x |

This table illustrates the aggressive growth metrics that necessitate Anthropic's significant investment in AI compute, positioning it for continued market leadership.

Strengthening American AI Infrastructure and Global Reach

A significant portion of this newly acquired compute power will be strategically located within the United States. This aligns with Anthropic’s broader commitment to strengthening American AI infrastructure, building upon its November 2025 pledge to invest $50 billion in domestic computing capabilities. By establishing a robust, U.S.-centric AI compute footprint, Anthropic not only enhances its operational security and efficiency but also contributes to national technological sovereignty and economic growth in the advanced computing sector.

The partnership also deepens Anthropic’s existing collaboration with Google Cloud, an evolution from the increased TPU capacity announced last October. This long-standing relationship underscores a shared vision for advancing AI development and deployment. While a significant portion of this compute will be in the U.S., Anthropic's commitment to a global presence remains unwavering, ensuring its models are accessible to customers worldwide.

Strategic Cloud and Hardware Diversification for Resilience

Anthropic’s approach to AI hardware and cloud deployment is notably diverse, a strategy designed to maximize performance, efficiency, and resilience. The company trains and runs its Claude models across a spectrum of cutting-edge AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs. This multi-vendor, multi-architecture strategy enables Anthropic to meticulously match specific AI workloads to the chips best suited for them, optimizing for both speed and cost-effectiveness.

"This diversity of platforms translates to better performance and greater resilience for customers who depend on Claude for critical work," the company stated. This distributed, adaptable infrastructure is vital for scaling AI, particularly for large enterprises that require consistent reliability and throughput. Amazon remains Anthropic's primary cloud provider and training partner, and the two companies continue their close collaboration on Project Rainier. At the same time, Claude stands out as the only frontier AI model available to customers across all three of the world's largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry). This broad cloud availability, alongside the Amazon partnership, offers unparalleled flexibility and choice for enterprises adopting advanced AI.

The Future of Claude: Enterprise Scale and Beyond

The infusion of multiple gigawatts of next-generation TPU capacity from Google and Broadcom marks a new era for Anthropic and its Claude models. This massive scaling of resources is not just about meeting current demand; it's about enabling the next generation of AI capabilities, pushing the boundaries of what frontier models can achieve. For enterprises, this means access to even more powerful, reliable, and sophisticated AI assistants capable of handling complex tasks with unprecedented accuracy and safety.

As Anthropic continues its rapid growth, this strategic compute commitment ensures that its research and development teams have the horsepower required to explore new frontiers in AI, from more advanced reasoning abilities to enhanced multimodal understanding. The collaboration with industry giants like Google and Broadcom positions Anthropic to solidify its leadership, driving innovation that benefits its vast customer base and the broader AI ecosystem for years to come.

Frequently Asked Questions

What is the primary objective of Anthropic's expanded partnership with Google and Broadcom regarding AI compute?
The core objective of Anthropic's newly announced partnership with Google and Broadcom is to secure a massive expansion of its AI compute infrastructure, specifically multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity. This significant commitment, set to come online starting in 2027, is designed to power the continued development and deployment of Anthropic's frontier Claude models. It directly addresses the exponential growth in demand from its global customer base, ensuring that Anthropic has the necessary computational resources to maintain its leadership in AI innovation and deliver high-performance, reliable services to enterprises adopting its advanced AI solutions.
How has Anthropic's customer base and revenue grown, influencing this infrastructure investment?
Anthropic has experienced an unprecedented surge in customer demand and financial growth, which directly necessitates this substantial infrastructure investment. As of early 2026, the company's run-rate revenue has soared past $30 billion, a significant leap from approximately $9 billion at the close of 2025. Furthermore, the number of business customers each spending over $1 million annually with Anthropic has doubled in less than two months, now exceeding 1,000. This rapid, sustained growth across its enterprise client base underscores the critical need for a massive expansion in compute power to both support existing users and enable future advancements of its Claude models.
What role does this partnership play in Anthropic's commitment to strengthening American AI infrastructure?
This partnership represents a major pillar in Anthropic's broader strategic commitment to strengthening American AI infrastructure. A significant portion of the newly secured compute capacity from Google and Broadcom will be sited within the United States. This move builds upon Anthropic's previously announced $50 billion investment initiative from November 2025, which aims to bolster domestic computing capabilities. By housing critical AI infrastructure on U.S. soil, Anthropic not only addresses its own operational needs but also contributes to national technological resilience and job creation within the rapidly evolving AI sector, fostering a robust domestic AI ecosystem.
How does Anthropic's multi-platform strategy enhance the performance and resilience of Claude models?
Anthropic employs a deliberate multi-platform strategy to enhance the performance and resilience of its Claude models. This involves training and running Claude across a diverse range of AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs. This approach allows Anthropic to strategically match specific workloads to the most suitable chips, optimizing for efficiency and specialized processing. By not relying on a single hardware vendor or cloud provider, Anthropic gains significant flexibility, reduces dependency risks, and ensures that customers receive optimal performance and increased operational resilience for their critical AI-driven tasks, a key factor for enterprise adoption and trust in frontier AI.
Which major cloud platforms offer Anthropic's Claude models to their customers?
Anthropic's Claude models are uniquely available across all three of the world's largest cloud platforms, providing unparalleled accessibility and choice for enterprise customers. Specifically, Claude is offered through Amazon Web Services (AWS) via its Bedrock service, on Google Cloud through Vertex AI, and within Microsoft Azure via its Foundry program. This broad availability ensures that organizations, regardless of their existing cloud infrastructure or strategic partnerships, can seamlessly integrate and leverage Anthropic's frontier AI capabilities, making Claude a highly versatile and accessible option in the competitive AI market.
What is the expected timeline for the new compute capacity to become operational?
The new compute capacity, secured through the expanded partnership with Google and Broadcom, is expected to come online starting in 2027. This phased rollout provides a steady increase in computational power, aligned with the projected growth in demand for Anthropic's Claude models and the continued advancement of its AI research and development efforts, giving the company a clear roadmap for scaling its infrastructure to meet future challenges and opportunities in the AI landscape.
