Anthropic Says Claude Agents Negotiated 186 Real Trades in Project Deal
Anthropic says it ran a one-week internal experiment called Project Deal to test whether AI agents could handle real marketplace negotiations on behalf of humans.
What Anthropic tested
According to the company, 69 employees each received a $100 budget and were interviewed by Claude about what they wanted to buy or sell; each interview produced an agent to negotiate on that employee's behalf. Anthropic then let those agents bargain with one another in Slack, with no human approval required during the negotiations themselves.
In the "real" run of the experiment, Anthropic says the agents completed 186 deals across more than 500 listed items, totaling just over $4,000 in transaction value. The goods reportedly ranged from snowboards and bicycles to office odds and ends, and the company says participants later completed the actual exchanges in person.
What the results suggest
Anthropic also ran parallel versions of the market using different Claude models. Its write-up says people represented by Claude Opus 4.5 generally got better outcomes than those represented by Claude Haiku 4.5, including more completed deals and better prices on identical items. At the same time, Anthropic says participants relying on the weaker model often did not realize they were at a disadvantage.
That makes Project Deal notable less as a commerce product launch than as an early warning about the gap in outcomes between stronger and weaker agents. Anthropic describes the test as a pilot with a self-selected participant pool, but the results offer a concrete example of AI agents already negotiating real transactions, not just demo tasks.