Anthropic said Monday that it signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, with the first systems expected to come online in 2027. The company said the new infrastructure will be used to train and serve future Claude models as demand continues to rise.

The announcement is notable because it shows how far in advance major model labs are now planning for compute. Rather than adding capacity quarter by quarter, Anthropic is locking in a large supply of custom accelerator hardware years before full deployment. The company did not disclose financial terms or a specific chip count.

Anthropic also used the post to disclose updated business metrics. It said annualized revenue has now passed $30 billion, up from about $9 billion at the end of 2025. The Verge separately cited the same figure in its coverage of the deal, though the number remains a company-reported metric rather than an independently verified one.

Most of the new TPU capacity is expected to be located in the United States, according to Anthropic. That extends the company’s earlier push to build out domestic AI infrastructure while keeping multiple hardware options in play. Even without detailed rollout figures, the agreement underscores how competition in frontier AI is increasingly shaped by long-term access to power, chips, and cloud capacity, not just model releases.