Vercel Adds DeepSeek V4 Pro and Flash to AI Gateway
Vercel has added DeepSeek V4 Pro and DeepSeek V4 Flash to AI Gateway, giving developers an immediate path for routing the newly released models through Vercel's gateway layer.
What changed
Vercel's changelog says both models are now available in AI Gateway under the identifiers deepseek/deepseek-v4-pro and deepseek/deepseek-v4-flash. The company also says a 1-million-token context window is the default for both variants. DeepSeek's own API announcement lines up with that, describing V4 Pro and V4 Flash as its new preview models and saying both support 1M context starting today.
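For developers who want to try the new identifiers, AI Gateway exposes an OpenAI-compatible chat completions endpoint. The sketch below builds such a request against it; the endpoint URL reflects AI Gateway's documented base URL, but treat the exact path and auth header as assumptions to verify against Vercel's docs for your setup.

```typescript
// Minimal sketch: constructing an OpenAI-compatible chat request that
// routes to DeepSeek V4 Pro through Vercel's AI Gateway.
// Assumptions: the gateway base URL and bearer-token auth scheme below
// match your AI Gateway configuration.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildGatewayRequest(
  apiKey: string,
  model: string,
  messages: ChatMessage[],
) {
  return {
    url: "https://ai-gateway.vercel.sh/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // your AI Gateway key
        "Content-Type": "application/json",
      },
      // Standard OpenAI-style body; the gateway maps the model
      // identifier to the DeepSeek provider behind the scenes.
      body: JSON.stringify({ model, messages }),
    },
  };
}

const req = buildGatewayRequest("YOUR_AI_GATEWAY_KEY", "deepseek/deepseek-v4-pro", [
  { role: "user", content: "Summarize this diff." },
]);
// Send with fetch(req.url, req.init) and read choices[0].message.content
// from the JSON response.
```

Swapping in deepseek/deepseek-v4-flash is a one-string change, which is the practical appeal of gateway-style routing.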
Why it matters
This is not the launch of DeepSeek V4 itself. The practical update is distribution: teams already using AI Gateway for routing, monitoring, and failover can plug the new models into an existing control plane instead of wiring up a separate path on day one.
That matters most for developers running agent-style workloads. Vercel positions V4 Pro for coding, long-horizon reasoning, and tool-using workflows, while V4 Flash is the cheaper and faster option for higher-volume tasks. AI Gateway's docs describe the service as a single endpoint for budgets, usage tracking, retries, and provider switching, so adding DeepSeek V4 broadens the set of frontier models available behind that layer.
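A team running both kinds of workload might encode Vercel's positioning as a simple model-selection order, with the other variant as a fallback for a retry loop to walk. This is an illustrative client-side sketch, not documented gateway behavior; the workload labels and fallback pairing are assumptions.

```typescript
// Hedged sketch: pick a primary model per workload, in line with Vercel's
// positioning (V4 Pro for coding/long-horizon/tool use, V4 Flash for
// cheaper high-volume tasks), with the other variant as a fallback.
// The categories and fallback ordering are illustrative assumptions.

type Workload = "agent" | "bulk";

function modelOrder(workload: Workload): string[] {
  return workload === "agent"
    ? ["deepseek/deepseek-v4-pro", "deepseek/deepseek-v4-flash"]
    : ["deepseek/deepseek-v4-flash", "deepseek/deepseek-v4-pro"];
}
```

A caller would try `modelOrder(workload)[0]` first and fall through to the next entry on failure, while leaving per-provider retries and failover to the gateway itself.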
The conservative takeaway is that Vercel is moving quickly to make new open model releases operationally usable, not just technically reachable.