
Anthropic, Amazon Expand Compute Partnership for Claude

8 min read · Anthropic
[Image: Anthropic and Amazon logos symbolizing their expanded AI compute partnership for Claude on AWS Trainium.]

Anthropic and Amazon Forge Mega-Partnership for 5 Gigawatts of AI Compute Power

San Francisco, CA – April 21, 2026 – In a landmark announcement set to redefine the future of large language model development, Anthropic and Amazon have unveiled a massive expansion of their strategic collaboration. This deepened partnership secures up to 5 gigawatts (GW) of compute capacity for Anthropic, ensuring the infrastructure necessary to train, deploy, and scale its cutting-edge AI models, including the highly popular Claude series. This unprecedented commitment underscores the escalating demand for advanced AI and the critical role of robust, scalable compute in the generative AI arms race.

The agreement builds on an already strong foundation, initiated in 2023, which saw the launch of Project Rainier – one of the world's largest compute clusters – and Anthropic's current utilization of over one million Trainium2 chips. This latest deal propels the partnership into a new era, committing vast resources to meet the explosive growth of AI applications.

Massive Compute Power: Fueling Claude's Expansion

At the heart of this expanded collaboration is Anthropic's commitment to secure up to 5 GW of new compute capacity over the next decade, primarily leveraging AWS technologies. This colossal infrastructure will be instrumental in the continued development and deployment of Claude, Anthropic's flagship AI model. The commitment encompasses AWS's custom-designed AI silicon, specifically the powerful Trainium2 and forthcoming Trainium3 chips, with provisions for future generations of Amazon's advanced processors.

Significant Trainium2 capacity is slated to come online in the second quarter of this year, followed by scaled Trainium3 capacity later in 2026. This rapid rollout ensures that Anthropic can immediately address its surging compute needs. The agreement also includes the expansion of inference capabilities in key international markets, such as Asia and Europe, aiming to better serve Claude's rapidly growing global customer base and reduce latency for users worldwide.

Strategic Infrastructure Investment: AWS as Primary Cloud Provider

Anthropic is solidifying its long-term alliance with AWS by committing more than $100 billion over the next ten years to AWS technologies. This makes AWS the primary training and cloud provider for Anthropic’s mission-critical workloads, a testament to the performance and cost-efficiency of Amazon's custom AI silicon.

Andy Jassy, CEO of Amazon, emphasized the value proposition of their proprietary chips, stating, "Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand. Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI." This commitment highlights the strategic importance of specialized hardware in driving down the operational costs of large-scale AI.

Seamless Integration: Claude Platform on AWS

A significant development for enterprise users is the impending availability of the full Claude Platform directly within AWS. This integration will allow organizations to access Claude's advanced capabilities through their existing AWS accounts, maintaining their current controls, billing structures, and security protocols. This means no additional credentials or contracts will be necessary, significantly streamlining the adoption process for businesses.

The "Claude Platform on AWS" is currently in private beta, offering a controlled environment for early adopters to leverage this seamless integration. Anthropic remains unique in offering its frontier AI models across all three major cloud platforms—AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry)—providing unparalleled flexibility for enterprise clients. This direct access on AWS will empower more businesses to build sophisticated AI applications, meeting stringent governance and compliance requirements.
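For developers, access through an existing AWS account today typically goes through Amazon Bedrock's runtime API. The sketch below illustrates what such a call looks like; the model ID, region, and inference parameters are illustrative assumptions, not details from the announcement.

```python
def build_converse_request(prompt: str,
                           model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0"):
    """Build a request body for Bedrock's Converse API.

    The model ID above is an assumed example; available IDs depend on
    the models enabled in your AWS account and region.
    """
    return {
        "modelId": model_id,
        "messages": [
            # Converse API messages carry a list of content blocks.
            {"role": "user", "content": [{"text": prompt}]}
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.7},
    }


# With AWS credentials configured, the actual call would look like:
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**build_converse_request("Summarize this deal."))
#   print(response["output"]["message"]["content"][0]["text"])
```

Because the request rides on standard AWS credentials and IAM policies, no separate API key or contract is involved, which is the integration point the Claude Platform on AWS extends.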

Amazon's Financial Backing and Future Vision

Beyond compute and infrastructure, Amazon is bolstering its financial commitment to Anthropic, investing an additional $5 billion today, with a potential for up to $20 billion more in the future. This builds upon Amazon's previous $8 billion investment, showcasing deep confidence in Anthropic's long-term vision and leadership in the AI space.

Dario Amodei, CEO and co-founder of Anthropic, articulated the urgency behind this expansion: "Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand. Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS." This strategic financial and technological alliance ensures Anthropic has the runway to innovate and scale.

Here's a summary of the key facets of the expanded partnership:

Compute Capacity: Up to 5 gigawatts (GW) over 10 years for training and deployment
Primary Chips: AWS Trainium2, Trainium3, and future generations of custom silicon, plus Graviton
Infrastructure Investment: More than $100 billion committed to AWS technologies over 10 years
Amazon Investment: $5 billion initially, up to $20 billion more (on top of previous $8 billion)
Claude Availability: Full Claude Platform directly on AWS (private beta) with unified billing and controls
Global Expansion: Expanded inference capacity for Claude in Asia and Europe to better serve international customers
Customer Impact: Improved reliability, performance, and scalability for growing enterprise and consumer demand

Meeting Unprecedented Demand: Scaling for Growth

The demand for Claude has seen an unprecedented acceleration in 2026, encompassing enterprise, developer, and consumer segments. Anthropic reports a significant surge in consumer usage across its free, Pro, and Max tiers. This rapid growth is reflected in their financial performance, with run-rate revenue now surpassing $30 billion, a substantial leap from approximately $9 billion at the end of 2025.

However, this explosive growth has placed inevitable strain on Anthropic's infrastructure, degrading reliability and performance, especially during peak hours, across all user tiers. This new agreement is a direct response to those challenges, designed to quickly expand available compute capacity. The goal is to deliver meaningful compute within the next three months and nearly 1 GW in total before the end of the year, mitigating performance issues and ensuring a robust user experience.

Diversified Hardware Strategy and Global Reach

Anthropic's strategy involves not only securing massive compute from AWS but also maintaining a diversified hardware approach. This includes partnerships with other providers, such as the announced Google and Broadcom collaboration for compute, ensuring that workloads are spread across a range of chip architectures. This multi-vendor strategy provides resilience, optimizes costs, and allows Anthropic to harness the best available technology for various aspects of its AI development and deployment.

The expansion of Claude's inference capabilities into Asia and Europe is a strategic move to cater to its burgeoning international customer base. By bringing compute resources closer to users in these regions, Anthropic aims to reduce latency, improve response times, and offer a more seamless experience for global enterprises and developers building with Claude. This global reach is crucial for scaling AI for everyone and cementing Claude's position as a truly global frontier AI model.

The partnership with Amazon is a critical component of Anthropic's plan to meet the escalating global demand for advanced AI, ensuring that Claude remains at the forefront of generative AI innovation with unmatched reliability and performance.

Frequently Asked Questions

What is the main announcement regarding Anthropic and Amazon's expanded collaboration?
The core announcement is a significant expansion of their partnership, securing up to 5 gigawatts (GW) of compute capacity for Anthropic over the next ten years. This infrastructure will primarily be powered by Amazon's custom AI silicon, including Trainium2 and Trainium3 chips, enabling Anthropic to train and deploy its frontier AI models like Claude at an unprecedented scale, addressing the rapidly increasing global demand for their services and ensuring robust performance. This strategic move aims to solidify Claude's position as a leading generative AI model.
How much financial investment is Amazon making in Anthropic through this agreement?
Amazon is making a substantial financial commitment, investing $5 billion in Anthropic initially with the potential for an additional $20 billion in the future. This builds upon the $8 billion Amazon had previously invested, bringing the total potential investment to $33 billion. This capital infusion demonstrates Amazon's deep confidence in Anthropic's AI research and development, providing crucial resources for Anthropic to continue its rapid growth and innovation in the competitive generative AI landscape and maintain its technological edge.
What specific AWS technologies will Anthropic leverage for its compute needs?
Anthropic will primarily leverage Amazon's custom AI silicon, specifically Trainium2 and Trainium3 chips, with options to utilize future generations as they become available. Additionally, the commitment spans Graviton processors. This diversified approach ensures high performance at lower costs for training and deploying large language models. The agreement also includes significant Trainium2 capacity coming online soon and scaled Trainium3 capacity later in the year, with Anthropic continuing to use AWS as its primary cloud provider for mission-critical workloads, ensuring reliability and scalability.
What does the 'Claude Platform on AWS' offering entail for enterprise customers?
The 'Claude Platform on AWS' will make the full suite of Claude's capabilities directly available within the AWS environment. This means enterprise customers can access Claude via their existing AWS accounts, benefiting from familiar controls, unified billing, and without needing additional credentials or separate contracts. This integration is designed to meet existing governance and compliance requirements, streamlining the process for organizations to build with Claude. It is currently in private beta, indicating a focused rollout to key clients who can request access through their account teams.
How will this expanded partnership address the rapidly growing demand for Claude?
The agreement directly addresses the surge in demand for Claude, which has led to significant revenue growth and occasional infrastructure strain, particularly during peak hours. By securing up to 5GW of new capacity, Anthropic aims to rapidly expand its available compute resources, delivering meaningful capacity within three months and nearly 1GW total by year-end. This, combined with a diversified hardware strategy, is crucial for maintaining Claude's reliability, performance, and ability to remain at the frontier of AI innovation for its expanding user base across free, Pro, and Max tiers.
Where will Anthropic expand Claude's inference capabilities globally as part of this deal?
As part of the expanded collaboration, Anthropic will extend Claude's inference capabilities to customers in Asia and Europe. This strategic geographical expansion is designed to better serve Claude's growing international customer base, ensuring lower latency and improved performance for users in these regions. It signifies Anthropic's commitment to global accessibility and its strategy to solidify Claude's presence across key markets worldwide, making its advanced AI models more readily available and efficient for a broader audience building applications on AWS Bedrock.
