NVIDIA and Thinking Machines Partner on 1GW AI Cloud, Anchored by Vera Rubin GPUs
NVIDIA and Thinking Machines Lab have announced one of the largest startup infrastructure commitments of the year: a strategic plan to build a 1-gigawatt DGX Cloud cluster for frontier AI model training and deployment.
What Was Announced
The partnership, announced on March 10, runs on two tracks. First, Thinking Machines will scale immediately on NVIDIA's cloud stack through DGX Cloud and NVIDIA software services. Second, the company will become an early adopter of NVIDIA Vera Rubin systems as those next-generation platforms come online.
NVIDIA says the deployment roadmap includes thousands of Rubin GPUs over time, aimed at supporting full-cycle model work from pretraining to inference. Thinking Machines positions this as the compute base for an integrated multimodal assistant platform.
Why It Matters
The deal is notable because it ties a newly formed frontier lab directly to NVIDIA's most advanced roadmap rather than to current-generation hardware alone. That access can compress time-to-market for large model teams competing with incumbents.
Financial terms were not disclosed, according to Reuters, but the arrangement underscores continued demand for top-tier AI compute despite broader cost pressure across the model ecosystem.
For builders, the signal is clear: compute access remains a core moat, and partnerships around next-wave infrastructure have become headline strategic assets.