Deploy AI across mobile, web, and embedded applications
-
On device
Reduce latency. Work offline. Keep your data local & private.
-
Cross-platform
Run the same model across Android, iOS, web, and embedded.
-
Multi-framework
Compatible with JAX, Keras, PyTorch, and TensorFlow models.
-
Full AI edge stack
Flexible frameworks, turnkey solutions, hardware accelerators
Ready-made solutions and flexible frameworks
Low-code APIs for common AI tasks
Cross-platform APIs to tackle common generative AI, vision, text, and audio tasks.
Deploy custom models cross-platform
Run JAX, Keras, PyTorch, and TensorFlow models efficiently on Android, iOS, web, and embedded devices, optimized for traditional ML and generative AI.
Shorten development cycles with visualization
Visualize your model’s transformation through conversion and quantization. Debug hotspots by overlaying benchmark results.
Build custom pipelines for complex ML features
Build your own task by efficiently chaining multiple ML models together with pre- and post-processing logic. Run accelerated (GPU and NPU) pipelines without blocking on the CPU.
A low-level framework for building high-performance, accelerated ML pipelines, often combining multiple ML models with pre- and post-processing.
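The chaining pattern described above can be sketched in plain Python. This is only an illustration of the composition idea, not any Google AI Edge API: the "model" stages below are hypothetical stand-in functions, where real pipelines would invoke accelerated ML models.

```python
# Sketch of a pipeline that chains preprocessing, two "model" stages, and
# postprocessing into one callable. All stages here are hypothetical
# pure-Python stand-ins, not real accelerated ML models.
from typing import Any, Callable, List

def make_pipeline(stages: List[Callable[[Any], Any]]) -> Callable[[Any], Any]:
    """Compose stages left-to-right into a single callable."""
    def run(x: Any) -> Any:
        for stage in stages:
            x = stage(x)
        return x
    return run

# Stand-ins for: preprocessing -> model A -> model B -> postprocessing
preprocess = lambda text: text.lower().split()          # tokenize
model_a = lambda tokens: [len(t) for t in tokens]       # fake "embedding"
model_b = lambda vec: sum(vec) / len(vec)               # fake "scoring"
postprocess = lambda score: {"score": round(score, 2)}  # format output

pipeline = make_pipeline([preprocess, model_a, model_b, postprocess])
result = pipeline("Edge AI Pipelines")  # -> {"score": 5.0}
```

A real pipeline framework adds what this sketch omits: each stage can run on a different accelerator (GPU or NPU), and stages execute asynchronously so the CPU is not blocked between them.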
Model Explorer
Visually explore, debug, and compare your models. Overlay performance benchmarks and numerics to pinpoint troublesome hotspots.
Gemini Nano in Android & Chrome
Build generative AI experiences using Google's most powerful on-device model.

Recent videos and blog posts