Mistral AI announced Forge at Nvidia GTC 2026: a platform that lets enterprises and governments train AI models on their own proprietary data, not just adapt general-purpose ones.

Beyond Fine-Tuning

Most enterprise AI today relies on fine-tuning or retrieval-augmented generation (RAG): adapting pre-built models with company data, either through light additional training or by retrieving documents at query time. Forge takes a different approach. The platform supports the full model training lifecycle: pre-training on large internal datasets, supervised fine-tuning, DPO and ODPO post-training, and reinforcement learning pipelines to align models with internal policies over time.
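Mistral hasn't published Forge's internals, but the DPO post-training stage mentioned above has a well-known objective. As a rough illustration only, here is a minimal sketch of the DPO loss for a single preference pair (the function name, `beta` default, and toy log-probabilities are illustrative assumptions, not anything from Forge):

```python
import math

def dpo_loss(logp_chosen_policy, logp_rejected_policy,
             logp_chosen_ref, logp_rejected_ref, beta=0.1):
    """DPO loss for one (chosen, rejected) response pair.

    Each argument is the summed log-probability of a response
    under either the policy being trained or a frozen reference model.
    """
    # Implicit reward margins: how much more the policy likes each
    # response than the reference model does.
    chosen_margin = logp_chosen_policy - logp_chosen_ref
    rejected_margin = logp_rejected_policy - logp_rejected_ref
    # The loss shrinks as the chosen response's margin pulls ahead
    # of the rejected one's; beta controls how sharply.
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log(sigmoid)

# Toy values: the policy favors the chosen response more than
# the reference does, so the loss dips below log(2) (chance level).
loss = dpo_loss(-5.0, -9.0, -6.0, -8.0)
```

In a real pipeline these log-probabilities come from full forward passes over token sequences, and the loss is averaged over a batch; this sketch only shows the shape of the objective that post-training optimizes.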

The practical difference matters. Natively trained models handle domain-specific language and non-English data better, exhibit more consistent behavior, and don't depend on third-party model providers that could change or deprecate APIs without notice.

The Enterprise Bet

Mistral CEO Arthur Mensch says the company is on track to surpass $1 billion in annual recurring revenue in 2026, built almost entirely on corporate clients while OpenAI and Anthropic chased consumer adoption. Forge is the next step in that strategy.

Customers using Forge get access to Mistral's open-weight model library, including the recently released Mistral Small 4, plus a team of forward-deployed engineers who embed with clients to surface the right training data and adapt models to operational needs. It's a model borrowed from Palantir and IBM.

What It Means

Forge is a direct challenge to hyperscale cloud AI offerings from OpenAI, Anthropic, Google, and Amazon. For enterprises that want to own their AI rather than rent it, Mistral is betting that real ownership requires real training.