Over 340,000 developers use Lightning Cloud - purpose-built for LitServe, PyTorch and PyTorch Lightning. Get GPUs from $0.19, frontier-grade training/inference clusters, vibe training/inference workspaces, notebooks with AI assistance and inference for your custom models.
LitServe lets you build your own inference engine. Serving engines such as vLLM serve specific model types (LLMs) with rigid abstractions. LitServe gives you the low-level control to serve any model (vision, audio, text, multi-modal), and define exactly how inference works - from batching, caching, streaming, and routing, to multi-model orchestration and custom logic. LitServe is perfect for building inference APIs, agents, chatbots, MCP servers, RAG, pipelines and more.
Self-host LitServe or deploy with one click to Lightning AI.
Install LitServe via pip (more options):
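```bash
pip install litserve
```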
Example 1: Toy inference pipeline with multiple models.
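A minimal sketch of such a pipeline (the two "models" are stand-in lambdas so it runs without downloads; swap in real models loaded in `setup()`, and the `server.py` file name is assumed by the deploy/test commands further down):

```python
# server.py
import litserve as ls

class InferencePipeline(ls.LitAPI):
    def setup(self, device):
        # load any number of models, tokenizers, DB connections, etc.
        # (stand-ins here so the example runs anywhere)
        self.model1 = lambda x: x ** 2
        self.model2 = lambda x: x ** 3

    def decode_request(self, request):
        return request["input"]

    def predict(self, x):
        # combine the outputs of both models with custom logic
        return self.model1(x) + self.model2(x)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(InferencePipeline(), accelerator="auto")
    server.run(port=8000)
```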
Example 2: Minimal agent to fetch the news (with OpenAI API).
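A hedged sketch of what such an agent endpoint could look like, assuming the official `openai` Python client with `OPENAI_API_KEY` set in the environment; the file name, request key (`website_url`), prompt, and model name are illustrative choices, not part of LitServe:

```python
# agent_server.py (illustrative file name)
import re

import litserve as ls
import requests
from openai import OpenAI

class NewsAgent(ls.LitAPI):
    def setup(self, device):
        # the OpenAI client reads OPENAI_API_KEY from the environment
        self.client = OpenAI()

    def decode_request(self, request):
        return request["website_url"]

    def predict(self, url):
        # "tool" step: fetch the page, strip tags, truncate to keep the prompt small
        html = requests.get(url, timeout=10).text
        text = re.sub(r"<[^>]+>", " ", html)[:8000]
        # reasoning step: ask the LLM to summarize the headlines
        response = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Summarize the top news headlines from this page."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    def encode_response(self, output):
        return {"news": output}

if __name__ == "__main__":
    ls.LitServer(NewsAgent()).run(port=8000)
```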
(Advanced examples):
Deploy for free to Lightning cloud (or self host anywhere):
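Self-hosting is just running the file as a normal Python process; cloud deployment goes through the Lightning CLI (the exact subcommand below is an assumption, so verify with `lightning --help` on your installed version):

```bash
# self host anywhere: it is a plain Python process
python server.py

# one-click deploy to Lightning cloud (assumed CLI syntax; check `lightning --help`)
lightning deploy server.py --cloud
```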
Test the server: simulate an HTTP request (run this in any terminal):
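The default route is `/predict`, and the `input` key below matches the toy pipeline sketch above:

```bash
curl -X POST http://127.0.0.1:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"input": 4.0}'
```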
Test it:
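If you are running the agent sketch from Example 2, the payload key (`website_url`) and the example URL are the illustrative choices made in that sketch:

```bash
curl -X POST http://127.0.0.1:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"website_url": "https://text.npr.org/"}'
```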
A few key benefits:
- Deploy any pipeline or model: Agents, pipelines, RAG, chatbots, image models, video, speech, text, etc...
- No MLOps glue: LitAPI lets you build full AI systems (multi-model, agent, RAG) in one place (more).
- Instant setup: Connect models, DBs, and data in a few lines with setup() (more).
- Optimized: autoscaling, GPU support, and fast inference included (more).
- Deploy anywhere: self-host or one-click deploy with Lightning (more).
- FastAPI for AI: Built on FastAPI but optimized for AI - 2× faster with AI-specific multi-worker handling (more).
- Expert-friendly: Use vLLM, or build your own with full control over batching, caching, and logic, as sketched after this list (more).
⚠️ Not a vLLM or Ollama alternative out of the box. LitServe gives you lower-level flexibility to build what they do (and more) if you need it.
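As referenced above, a hedged sketch of the per-request hooks you can override for custom batching. The exact placement of the batching knobs (`max_batch_size`, `batch_timeout` on `LitServer` vs. `LitAPI`) has shifted between LitServe versions, so treat them as illustrative and check the current docs:

```python
import litserve as ls

class BatchedAPI(ls.LitAPI):
    def setup(self, device):
        self.model = lambda xs: [x * 2 for x in xs]  # stand-in batched model

    def decode_request(self, request):
        # per-request: pull out the field your clients send
        return request["input"]

    def batch(self, inputs):
        # optional: collate individual inputs into one batch
        return list(inputs)

    def predict(self, batch):
        return self.model(batch)

    def unbatch(self, outputs):
        # split the batched output back into per-request results
        return list(outputs)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    # dynamic batching knobs (parameter placement may vary by LitServe version)
    server = ls.LitServer(BatchedAPI(), max_batch_size=8, batch_timeout=0.05)
    server.run(port=8000)
```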
Here are examples of inference pipelines for common model types and use cases.
- Toy model: Hello world
- LLMs: Llama 3.2, LLM proxy server, Agent with tool use
- RAG: vLLM RAG (Llama 3.2), RAG API (LlamaIndex)
- NLP: Hugging Face, BERT, Text embedding API
- Multimodal: OpenAI CLIP, MiniCPM, Phi-3.5 Vision Instruct, Qwen2-VL, Pixtral
- Audio: Whisper, AudioCraft, StableAudio, Noise cancellation (DeepFilterNet)
- Vision: Stable Diffusion 2, AuraFlow, Flux, Image Super Resolution (Aura SR), Background Removal, Control Stable Diffusion (ControlNet)
- Speech: Text-to-speech (XTTS V2), Parler-TTS
- Classical ML: Random forest, XGBoost
- Miscellaneous: Media conversion API (ffmpeg), PyTorch + TensorFlow in one API, LLM proxy server

Browse 100+ community-built templates.
Self-host with full control, or deploy with Lightning AI in seconds with autoscaling, security, and 99.995% uptime.
Free tier included. No setup required. Run on your cloud
Feature | Self-host | Lightning AI |
---|---|---|
Docker-first deployment | ✅ DIY | ✅ One-click deploy |
Cost | ✅ Free (DIY) | ✅ Generous free tier with pay as you go |
Full control | ✅ | ✅ |
Use any engine (vLLM, etc.) | ✅ | ✅ vLLM, Ollama, LitServe, etc. |
Own VPC | ✅ (manual setup) | ✅ Connect your own VPC |
2×+ faster than plain FastAPI | ✅ | ✅ |
Bring your own model | ✅ | ✅ |
Build compound systems (1+ models) | ✅ | ✅ |
GPU autoscaling | ✅ | ✅ |
Batching | ✅ | ✅ |
Streaming | ✅ | ✅ |
Worker autoscaling | ✅ | ✅ |
Serve all models (LLMs, vision, etc.) | ✅ | ✅ |
Supports PyTorch, JAX, TF, etc... | ✅ | ✅ |
OpenAPI compliant | ✅ | ✅ |
OpenAI compatibility | ✅ | ✅ |
MCP server support | ✅ | ✅ |
Asynchronous | ✅ | ✅ |
Authentication | ❌ DIY | ✅ Token, password, custom |
GPUs | ❌ DIY | ✅ 8+ GPU types, H100s from $1.75 |
Load balancing | ❌ | ✅ Built-in |
Scale to zero (serverless) | ❌ | ✅ No machine runs when idle |
Autoscale up on demand | ❌ | ✅ Auto scale up/down |
Multi-node inference | ❌ | ✅ Distribute across nodes |
Use AWS/GCP credits | ❌ | ✅ Use existing cloud commits |
Versioning | ❌ | ✅ Make and roll back releases |
Enterprise-grade uptime (99.95%) | ❌ | ✅ SLA-backed |
SOC2 / HIPAA compliance | ❌ | ✅ Certified & secure |
Observability | ❌ | ✅ Built-in, connect 3rd party tools |
CI/CD ready | ❌ | ✅ Lightning SDK |
24/7 enterprise support | ❌ | ✅ Dedicated support |
Cost controls & audit logs | ❌ | ✅ Budgets, breakdowns, logs |
Debug on GPUs | ❌ | ✅ Studio integration |
20+ features | - | - |
LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum 2x speedup over FastAPI.
Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.
Reproduce the full benchmarks here (higher is better).
These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization etc...).
💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), integrate vLLM with LitServe, use LitGPT, or build your own vLLM-like server with LitServe. Optimizations like KV caching, which you can implement with LitServe, are needed to maximize LLM performance.
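A hedged sketch of that vLLM-in-LitServe integration, assuming vLLM's offline `LLM` API and an illustrative model name; a production server would add streaming, continuous batching, and KV-cache tuning on top:

```python
import litserve as ls
from vllm import LLM, SamplingParams

class LlamaAPI(ls.LitAPI):
    def setup(self, device):
        # vLLM manages KV caching and GPU memory internally
        self.llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")  # illustrative checkpoint
        self.params = SamplingParams(max_tokens=256, temperature=0.7)

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        outputs = self.llm.generate([prompt], self.params)
        return outputs[0].outputs[0].text

    def encode_response(self, output):
        return {"text": output}

if __name__ == "__main__":
    ls.LitServer(LlamaAPI(), accelerator="auto").run(port=8000)
```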
LitServe is a community project accepting contributions. Let's build the world's most advanced AI inference engine.