The v0-1.0-md model is designed for building modern web applications. It supports text and image inputs, provides fast streaming responses, and is compatible with the OpenAI Chat Completions API format.
- Framework-aware completions: Evaluated on modern stacks like Next.js and Vercel.
- Auto-fix: Identifies and corrects common coding issues during generation.
- Quick edit: Streams inline edits as they’re available.
- OpenAI-compatible: Works with any tool or SDK that supports OpenAI's API format.
- Multimodal: Supports both text and image inputs (base64-encoded image data).
You can experiment with the v0-1.0-md model in the AI Playground to test prompts and view responses.
The v0 API is currently in beta and requires a Premium or Team plan with usage-based billing enabled. For details, visit the pricing page.
To start using the v0-1.0-md model, create an API key on v0.dev.
You can then integrate it using the AI SDK, a TypeScript library designed for working with v0 and other OpenAI-compatible models.
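Because the API is OpenAI-compatible, you can also call it directly without the SDK. A minimal sketch using `fetch` (the prompt and response handling are illustrative; the request shape follows the Chat Completions format described below):

```typescript
// Minimal sketch: call the v0 API over its OpenAI-compatible
// Chat Completions endpoint. Requires V0_API_KEY in the environment.
const body = {
  model: "v0-1.0-md",
  messages: [
    { role: "user", content: "Create a Next.js page with a centered heading." },
  ],
};

async function main(): Promise<void> {
  const res = await fetch("https://api.v0.dev/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.V0_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  const json = await res.json();
  // The assistant's reply follows the Chat Completions response shape.
  console.log(json.choices[0].message.content);
}

// Only send the request when a key is configured.
if (process.env.V0_API_KEY) {
  main().catch(console.error);
}
```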
The v0-1.0-md model is the default model served by the v0 API.
Capabilities:
- Supports text and image inputs (multimodal)
- Compatible with OpenAI’s Chat Completions format
- Supports function/tool calls
- Streaming responses with low latency
- Optimized for frontend and full-stack web development
```
POST https://api.v0.dev/v1/chat/completions
```
This endpoint generates a model response based on a list of messages.
| Header | Required | Description |
| --- | --- | --- |
| `Authorization` | Yes | Bearer token: `Bearer $V0_API_KEY` |
| `Content-Type` | Yes | Must be `application/json` |
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | Yes | Model name. Use `"v0-1.0-md"`. |
| `messages` | array | Yes | List of message objects forming the conversation. |
| `stream` | boolean | No | If `true`, the response is returned as a stream of data chunks. |
| `tools` | array | No | Optional tool definitions (e.g., functions or API calls). |
| `tool_choice` | string or object | No | Specifies which tool to call, if tools are provided. |
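Put together, a request body might look like the following (the tool definition is illustrative — the `get_component` function and its parameters are invented for this example):

```json
{
  "model": "v0-1.0-md",
  "messages": [
    { "role": "system", "content": "You are a helpful coding assistant." },
    { "role": "user", "content": "Scaffold a pricing page." }
  ],
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_component",
        "description": "Fetch a UI component by name",
        "parameters": {
          "type": "object",
          "properties": { "name": { "type": "string" } },
          "required": ["name"]
        }
      }
    }
  ]
}
```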
Each message object must contain:
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `role` | string | Yes | One of `"user"`, `"assistant"`, or `"system"`. |
| `content` | string or array | Yes | The message content. Can be a string or an array of text/image blocks. |
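For multimodal input, `content` can be an array mixing text and image blocks. A sketch, assuming the OpenAI-style `image_url` block with a base64 data URL (the block shape is an assumption based on OpenAI compatibility):

```json
{
  "role": "user",
  "content": [
    { "type": "text", "text": "What UI does this screenshot show?" },
    {
      "type": "image_url",
      "image_url": { "url": "data:image/png;base64,iVBORw0KGgo..." }
    }
  ]
}
```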
If `stream` is `false` (the default), the response is a JSON object:
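An abridged example, following the Chat Completions response shape (all field values are illustrative):

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "v0-1.0-md",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "..." },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46 }
}
```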
If `stream` is `true`, the server returns a series of data chunks formatted as Server-Sent Events (SSE). Each line begins with `data:` followed by a partial delta:
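A minimal sketch of consuming such a stream: each `data:` line carries a JSON chunk whose `choices[0].delta.content` holds the next text fragment, and the stream ends with `data: [DONE]` (chunk shapes are assumed to follow the OpenAI streaming format):

```typescript
// Sketch: accumulate assistant text from the SSE lines of a streamed
// response. Chunk shape assumed to follow the OpenAI streaming format.
function collectDeltas(sseLines: string[]): string {
  let text = "";
  for (const line of sseLines) {
    if (!line.startsWith("data: ")) continue; // skip blanks and comments
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    text += chunk.choices?.[0]?.delta?.content ?? "";
  }
  return text;
}

// Example with hand-written chunks:
const sample = [
  'data: {"choices":[{"delta":{"role":"assistant"}}]}',
  'data: {"choices":[{"delta":{"content":"Hello"}}]}',
  'data: {"choices":[{"delta":{"content":", world"}}]}',
  "data: [DONE]",
];
console.log(collectDeltas(sample)); // "Hello, world"
```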
| Limit | Value |
| --- | --- |
| Max messages per day | 200 |
| Max context window size | 128,000 tokens |
| Max output context size | 32,000 tokens |
To request a higher limit, contact us at [email protected].
By using our API, you agree to our API Terms.