Meta Expands AWS Deal to Run Agentic AI Workloads on Graviton
Meta says it is expanding its AWS relationship to run more of its agentic AI infrastructure on Graviton processors, a notable signal that CPU capacity is becoming part of the scaling story alongside GPUs.
What changed
In a new announcement, Amazon said Meta will begin deploying tens of millions of AWS Graviton cores, with room to expand as its AI footprint grows. Amazon frames the deal around workloads such as real-time reasoning, code generation, search, and multi-step orchestration — tasks it argues are increasingly CPU-intensive even as large-model training continues to depend heavily on GPUs.
Amazon's post says the rollout centers on Graviton5, its Arm-based server CPU, and positions Meta as one of the chip family's largest customers. Meta infrastructure chief Santosh Janardhan said diversifying compute sources is now a strategic priority as the company scales the systems behind its AI products.
Why it matters
The conservative takeaway is not that CPUs are replacing accelerators for frontier model training. Instead, the deal suggests big AI operators are separating their stacks more explicitly: GPUs for training and heavy inference, and large CPU fleets for the coordination, retrieval, and execution layers around agent-style systems.
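To make that split concrete, consider what an agent-style request actually involves. A minimal sketch, in Python, with every name hypothetical (this is not Meta's or AWS's actual code): the model call, which would run on GPUs, is stubbed out, while retrieval, tool execution, and the multi-step orchestration loop — the layers Amazon describes as CPU-intensive — run as ordinary CPU-side logic.

```python
# Illustrative sketch only: model_stub stands in for a GPU-backed inference
# call; everything else is the CPU-side coordination, retrieval, and
# execution work that surrounds it in an agent-style system.

def retrieve(query: str, corpus: dict[str, str]) -> str:
    """CPU-side retrieval: pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    best = max(corpus, key=lambda k: len(q & set(corpus[k].lower().split())))
    return corpus[best]

def model_stub(prompt: str) -> str:
    """Stand-in for the accelerator hop. Emits a canned 'tool call' on the
    first step and a final answer once retrieved context is present."""
    if "CONTEXT:" in prompt:
        return "FINAL: answered using retrieved context"
    return "TOOL: retrieve"

def run_agent(query: str, corpus: dict[str, str], max_steps: int = 4) -> str:
    """Multi-step orchestration loop: call the model, act on its output
    on the CPU, and repeat until it produces a final answer."""
    prompt = query
    for _ in range(max_steps):
        reply = model_stub(prompt)        # the one GPU-bound step per turn
        if reply.startswith("FINAL:"):
            return reply
        if reply == "TOOL: retrieve":     # CPU-side tool execution
            context = retrieve(query, corpus)
            prompt = f"{query}\nCONTEXT: {context}"
    return "FINAL: step budget exhausted"

corpus = {"a": "graviton arm server cpu", "b": "gpu training cluster"}
print(run_agent("what is the graviton cpu", corpus))
```

Even in this toy version, each model call is wrapped by several CPU-side steps (search, prompt assembly, tool dispatch), which is the pattern behind the argument that agent workloads scale CPU demand alongside GPU demand.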
That makes this more than generic cloud spend. It is a concrete sign that the infrastructure market around AI agents is widening beyond the familiar GPU race.