Runway Demos Real-Time AI Video That Generates in Under 100ms
Runway has teamed up with NVIDIA to demonstrate something that's never been done before in AI video: a model that starts generating high-definition footage in under 100 milliseconds, less time than it takes to blink.
The demo was shown at NVIDIA's GPU Technology Conference (GTC) in San Jose on March 18 and announced via Runway's official X account. The model is described as a research preview and part of the company's broader General World Model (GWM-1) initiative, a project aimed at building AI systems that can simulate and understand the physical world.
Sub-blink latency
For context, a human blink takes 100 to 400 milliseconds. Runway's model clears that threshold with time to spare, streaming the first HD frame less than 100 ms after prompt submission. Previous video generation models have typically required seconds to minutes per clip.
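Runway hasn't published an API for the preview, but the measurement itself is simple: start a clock at prompt submission and stop it when the first decoded frame arrives. A minimal Python sketch, where generate_stream is a hypothetical stand-in for whatever streaming interface a real-time model would expose:

    import time

    def first_frame_latency_ms(generate_stream, prompt):
        # generate_stream is a hypothetical placeholder: any callable
        # that yields decoded frames as the model produces them.
        start = time.perf_counter()
        for frame in generate_stream(prompt):
            # Stop at the first frame; this is the figure Runway quotes.
            return (time.perf_counter() - start) * 1000.0
        raise RuntimeError("stream ended without producing a frame")

Anything under 100 on that clock beats the fast end of a blink.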
The demo ran on NVIDIA's Vera Rubin supercomputer, a rack-scale system packing 72 Rubin GPUs, 36 Vera CPUs, 54 terabytes of CPU memory, and 20.7 terabytes of GPU memory. It's not consumer hardware. Vera Rubin is scheduled to begin shipping in the second half of 2026.
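Those rack-level totals imply the per-device numbers with simple division. As a back-of-the-envelope sanity check (not an official spec sheet):

    # Sanity-check arithmetic on the rack figures quoted above.
    rubin_gpus = 72
    gpu_memory_tb = 20.7
    cpu_memory_tb = 54.0

    per_gpu_gb = gpu_memory_tb * 1000 / rubin_gpus   # ~287.5 GB per GPU
    total_memory_tb = gpu_memory_tb + cpu_memory_tb  # 74.7 TB per rack

    print(f"~{per_gpu_gb:.0f} GB of GPU memory per Rubin GPU")
    print(f"{total_memory_tb:.1f} TB of combined memory in one rack")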
What it means
The immediate implication is interactive AI video: video that responds to you in real time, like a game engine driven by a generative model rather than pre-authored assets. Runway is already working on playable world generation using its GWM research as a foundation.
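Architecturally, that means generation moves inside the render loop: each tick takes player input, generates the next frame, and displays it within a fixed frame budget. A rough Python sketch, where model, get_user_input, and render are hypothetical placeholders rather than anything Runway has shipped:

    import time

    FRAME_BUDGET_MS = 1000 / 24  # ~41.7 ms per frame at 24 fps

    def interactive_loop(model, get_user_input, render, running):
        # Game-engine-style loop driven by a generative model instead
        # of pre-authored assets. All four arguments are hypothetical.
        state = model.initial_state()
        while running():
            t0 = time.perf_counter()
            action = get_user_input()                 # player input this tick
            frame, state = model.step(state, action)  # generate the next frame
            render(frame)
            elapsed_ms = (time.perf_counter() - t0) * 1000
            # To feel interactive, generation has to fit the budget;
            # sleep off whatever headroom is left.
            time.sleep(max(0.0, (FRAME_BUDGET_MS - elapsed_ms) / 1000))

At sub-100 ms to first frame, the model is within striking distance of that per-frame budget, which is what makes the game-engine comparison more than a metaphor.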
The less cheery implication: deepfakes and synthetic media that stream instantly, tailored and reactive in real time, without the telltale processing lag that currently helps give them away.
No public release date has been announced. The model remains a research preview.