Google Brings Gemma 4 Agent Skills to Phones and Edge Devices
Google is extending Gemma 4 beyond a standalone model release and into shipping edge tooling. In a new developer post, the company said its AI Edge stack now supports Gemma 4 for agentic workflows that run on-device through the AI Edge Gallery app and the LiteRT-LM runtime.
What shipped
The practical change is that Gemma 4 is now wired into Google's reference app and runtime instead of being just another model announcement. Google's AI Edge Gallery README says the latest app release adds official Gemma 4 support plus Agent Skills, a system that lets the model call modular tools for tasks like knowledge lookups, maps, and visual summaries while staying on-device. Google's LiteRT-LM project also lists Gemma 4 support as new in v0.10.1 and says the runtime now ships with a CLI and function-calling support for agentic workflows.
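To make the shape of this concrete, here is a minimal Kotlin sketch of the kind of function-calling loop a runtime like this enables. Everything in it is an illustrative assumption rather than LiteRT-LM's real API: the LlmSession interface, the ModelTurn type, and the skill names are hypothetical stand-ins that echo the skills the post describes.

```kotlin
// Illustrative sketch only. LlmSession, ModelTurn, and the skill names are
// hypothetical placeholders, not LiteRT-LM's actual function-calling API.

// A tool call the model asks for: a skill name plus string arguments.
data class ToolCall(val skill: String, val args: Map<String, String>)

// Each generation step either returns a final answer or requests a tool call.
sealed class ModelTurn {
    data class Final(val text: String) : ModelTurn()
    data class CallTool(val call: ToolCall) : ModelTurn()
}

// Hypothetical on-device session: feed it text, get back a turn.
interface LlmSession {
    fun generate(prompt: String): ModelTurn
}

// "Agent Skills" modeled as modular, locally executed tools keyed by name.
typealias Skill = (Map<String, String>) -> String

val skills: Map<String, Skill> = mapOf(
    // Stub handlers standing in for the kinds of skills the post mentions.
    "knowledge_lookup" to { args -> "stub result for '${args["query"]}'" },
    "summarize_image" to { args -> "stub summary of ${args["uri"]}" },
)

// The agent loop: run the model, execute any requested skill on-device,
// and feed the result back until the model produces a final answer.
fun runAgent(session: LlmSession, userPrompt: String, maxSteps: Int = 8): String {
    var prompt = userPrompt
    repeat(maxSteps) {
        when (val turn = session.generate(prompt)) {
            is ModelTurn.Final -> return turn.text
            is ModelTurn.CallTool -> {
                val handler = skills[turn.call.skill]
                    ?: return "error: unknown skill '${turn.call.skill}'"
                val result = handler(turn.call.args)
                // Append the tool result so the next generation can use it.
                prompt += "\n[tool ${turn.call.skill}]: $result"
            }
        }
    }
    return "error: agent exceeded $maxSteps steps"
}
```

The property the announcement is pointing at is visible in the loop itself: both the generation step and the skill handlers execute locally, so nothing in a run like this requires a network call.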
Why it matters
That makes this a developer infrastructure story, not just a model refresh. Google is packaging an open model, a mobile showcase app, and a cross-platform inference runtime into one path for building local agents on Android, iOS, desktop, web, and IoT hardware.
The conservative reading is that Google still has to prove real-world adoption, and on-device agents remain constrained by memory, battery, and device class. But the tooling is getting more opinionated and more usable: developers no longer need to stitch together every layer themselves just to experiment with private, local agent behavior.