OpenCode: AI coding agent, built for the terminal


[Screenshot: OpenCode Terminal UI]

⚠️ Early Development Notice: This project is in early development and is not yet ready for production use. Features may change, break, or be incomplete. Use at your own risk.

A powerful terminal-based AI assistant for developers, providing intelligent coding assistance directly in your terminal.

OpenCode is a Go-based CLI application that brings AI assistance to your terminal. It provides a TUI (Terminal User Interface) for interacting with various AI models to help with coding tasks, debugging, and more.

  • Interactive TUI: Built with Bubble Tea for a smooth terminal experience
  • Multiple AI Providers: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, and OpenRouter
  • Session Management: Save and manage multiple conversation sessions
  • Tool Integration: AI can execute commands, search files, and modify code
  • Vim-like Editor: Integrated editor with text input capabilities
  • Persistent Storage: SQLite database for storing conversations and sessions
  • LSP Integration: Language Server Protocol support for code intelligence
  • File Change Tracking: Track and visualize file changes during sessions
  • External Editor Support: Open your preferred editor for composing messages
  • Named Arguments for Custom Commands: Create powerful custom commands with multiple named placeholders
Using the install script

```shell
# Install the latest version
curl -fsSL https://opencode.ai/install | bash

# Install a specific version
curl -fsSL https://opencode.ai/install | VERSION=0.1.0 bash
```

Using Homebrew (macOS and Linux)

```shell
brew install sst/tap/opencode
```
Using the AUR (Arch Linux)

```shell
# Using yay
yay -S opencode-bin

# Using paru
paru -S opencode-bin
```
Using Go

```shell
go install github.com/sst/opencode@latest
```

OpenCode looks for configuration in the following locations:

  • $HOME/.opencode.json
  • $XDG_CONFIG_HOME/opencode/.opencode.json
  • ./.opencode.json (local directory)
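
The README lists the lookup locations but not how they are resolved. The sketch below shows one plausible resolution strategy in Go, assuming the first existing file wins; the helper names (`configPaths`, `firstExisting`) are hypothetical, and the real loader may merge locations instead of picking one.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configPaths lists candidate config locations in the order described
// above: home directory, XDG config directory, then the local directory.
func configPaths() []string {
	home, _ := os.UserHomeDir()
	xdg := os.Getenv("XDG_CONFIG_HOME")
	if xdg == "" {
		xdg = filepath.Join(home, ".config")
	}
	return []string{
		filepath.Join(home, ".opencode.json"),
		filepath.Join(xdg, "opencode", ".opencode.json"),
		".opencode.json",
	}
}

// firstExisting returns the first path that exists on disk, or "" if none do.
func firstExisting(paths []string) string {
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return ""
}

func main() {
	fmt.Println(firstExisting(configPaths()))
}
```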

You can configure OpenCode using environment variables:

| Environment Variable | Purpose |
|----------------------|---------|
| `ANTHROPIC_API_KEY` | For Claude models |
| `OPENAI_API_KEY` | For OpenAI models |
| `GEMINI_API_KEY` | For Google Gemini models |
| `VERTEXAI_PROJECT` | For Google Cloud VertexAI (Gemini) |
| `VERTEXAI_LOCATION` | For Google Cloud VertexAI (Gemini) |
| `GROQ_API_KEY` | For Groq models |
| `AWS_ACCESS_KEY_ID` | For AWS Bedrock (Claude) |
| `AWS_SECRET_ACCESS_KEY` | For AWS Bedrock (Claude) |
| `AWS_REGION` | For AWS Bedrock (Claude) |
| `AZURE_OPENAI_ENDPOINT` | For Azure OpenAI models |
| `AZURE_OPENAI_API_KEY` | For Azure OpenAI models (optional when using Entra ID) |
| `AZURE_OPENAI_API_VERSION` | For Azure OpenAI models |

Configuration File Structure

```json
{
  "data": {
    "directory": ".opencode"
  },
  "providers": {
    "openai": { "apiKey": "your-api-key", "disabled": false },
    "anthropic": { "apiKey": "your-api-key", "disabled": false },
    "groq": { "apiKey": "your-api-key", "disabled": false },
    "openrouter": { "apiKey": "your-api-key", "disabled": false }
  },
  "agents": {
    "primary": { "model": "claude-3.7-sonnet", "maxTokens": 5000 },
    "task": { "model": "claude-3.7-sonnet", "maxTokens": 5000 },
    "title": { "model": "claude-3.7-sonnet", "maxTokens": 80 }
  },
  "mcpServers": {
    "example": {
      "type": "stdio",
      "command": "path/to/mcp-server",
      "env": [],
      "args": []
    }
  },
  "lsp": {
    "go": { "disabled": false, "command": "gopls" }
  },
  "shell": {
    "path": "/bin/zsh",
    "args": ["-l"]
  },
  "debug": false,
  "debugLSP": false
}
```

OpenCode supports a variety of AI models from different providers:

OpenAI

  • GPT-4.1 family (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano)
  • GPT-4.5 Preview
  • GPT-4o family (gpt-4o, gpt-4o-mini)
  • O1 family (o1, o1-pro, o1-mini)
  • O3 family (o3, o3-mini)
  • O4 Mini

Anthropic

  • Claude 3.5 Sonnet
  • Claude 3.5 Haiku
  • Claude 3.7 Sonnet
  • Claude 3 Haiku
  • Claude 3 Opus

Google

  • Gemini 2.5
  • Gemini 2.5 Flash
  • Gemini 2.0 Flash
  • Gemini 2.0 Flash Lite

AWS Bedrock

  • Claude 3.7 Sonnet

Groq

  • Llama 4 Maverick (17b-128e-instruct)
  • Llama 4 Scout (17b-16e-instruct)
  • QWEN QWQ-32b
  • Deepseek R1 distill Llama 70b
  • Llama 3.3 70b Versatile

Azure OpenAI

  • GPT-4.1 family (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano)
  • GPT-4.5 Preview
  • GPT-4o family (gpt-4o, gpt-4o-mini)
  • O1 family (o1, o1-mini)
  • O3 family (o3, o3-mini)
  • O4 Mini

Google Cloud VertexAI

  • Gemini 2.5
  • Gemini 2.5 Flash

To use AWS Bedrock models with OpenCode, you need three things:

  1. Valid AWS credentials (the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION)
  2. Access to the corresponding model in AWS Bedrock in your region. You can request access in the AWS console on the Bedrock -> "Model access" page.
  3. A correct configuration file. You don't need the providers key; instead, prefix each agent's model with bedrock. followed by a valid model name. For now, only Claude 3.7 Sonnet is supported.
```json
{
  "agents": {
    "primary": { "model": "bedrock.claude-3.7-sonnet", "maxTokens": 5000, "reasoningEffort": "" },
    "task": { "model": "bedrock.claude-3.7-sonnet", "maxTokens": 5000, "reasoningEffort": "" },
    "title": { "model": "bedrock.claude-3.7-sonnet", "maxTokens": 80, "reasoningEffort": "" }
  }
}
```
```shell
# Start OpenCode
opencode

# Start with debug logging
opencode -d

# Start with a specific working directory
opencode -c /path/to/project
```

Non-interactive Prompt Mode

You can run OpenCode in non-interactive mode by passing a prompt directly as a command-line argument or by piping text into the command. This is useful for scripting, automation, or when you want a quick answer without launching the full TUI.

```shell
# Run a single prompt and print the AI's response to the terminal
opencode -p "Explain the use of context in Go"

# Pipe input to OpenCode (equivalent to using the -p flag)
echo "Explain the use of context in Go" | opencode

# Get the response in JSON format
opencode -p "Explain the use of context in Go" -f json
# Or with piped input
echo "Explain the use of context in Go" | opencode -f json

# Run without showing the spinner
opencode -p "Explain the use of context in Go" -q
# Or with piped input
echo "Explain the use of context in Go" | opencode -q

# Enable verbose logging to stderr
opencode -p "Explain the use of context in Go" --verbose
# Or with piped input
echo "Explain the use of context in Go" | opencode --verbose

# Restrict the agent to only use specific tools
opencode -p "Explain the use of context in Go" --allowedTools=view,ls,glob
# Or with piped input
echo "Explain the use of context in Go" | opencode --allowedTools=view,ls,glob

# Prevent the agent from using specific tools
opencode -p "Explain the use of context in Go" --excludedTools=bash,edit
# Or with piped input
echo "Explain the use of context in Go" | opencode --excludedTools=bash,edit
```

In this mode, OpenCode will process your prompt, print the result to standard output, and then exit. All permissions are auto-approved for the session.

You can control which tools the AI assistant has access to in non-interactive mode:

  • --allowedTools: Comma-separated list of tools that the agent is allowed to use. Only these tools will be available.
  • --excludedTools: Comma-separated list of tools that the agent is not allowed to use. All other tools will be available.

These flags are mutually exclusive: you can use either --allowedTools or --excludedTools, but not both at the same time.
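
The mutual-exclusion rule above is simple to express as a validation step. A minimal sketch in Go follows; the function name `validateToolFlags` is an assumption for illustration, not the actual implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// validateToolFlags mirrors the rule above: at most one of the two
// tool-filtering flag lists may be non-empty.
func validateToolFlags(allowed, excluded []string) error {
	if len(allowed) > 0 && len(excluded) > 0 {
		return errors.New("--allowedTools and --excludedTools are mutually exclusive")
	}
	return nil
}

func main() {
	fmt.Println(validateToolFlags([]string{"view", "ls"}, nil))        // <nil>
	fmt.Println(validateToolFlags([]string{"view"}, []string{"bash"})) // error
}
```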

OpenCode supports the following output formats in non-interactive mode:

| Format | Description |
|--------|-------------|
| `text` | Plain text output (default) |
| `json` | Output wrapped in a JSON object |

The output format is implemented as a strongly-typed OutputFormat in the codebase, ensuring type safety and validation when processing outputs.
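
In Go, a strongly typed format like this is usually a named string type with a validation method. The sketch below illustrates the pattern; the constant and method names are assumptions based on the description above, not the codebase's actual identifiers.

```go
package main

import "fmt"

// OutputFormat is a distinct string type, so invalid values can be
// rejected at one validation point instead of being passed around raw.
type OutputFormat string

const (
	FormatText OutputFormat = "text" // plain text (default)
	FormatJSON OutputFormat = "json" // output wrapped in a JSON object
)

// Validate reports whether the format is one of the supported values.
func (f OutputFormat) Validate() error {
	switch f {
	case FormatText, FormatJSON:
		return nil
	default:
		return fmt.Errorf("unsupported output format: %q", f)
	}
}

func main() {
	fmt.Println(FormatJSON.Validate())          // <nil>
	fmt.Println(OutputFormat("xml").Validate()) // error: unsupported output format
}
```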

| Flag | Short | Description |
|------|-------|-------------|
| `--help` | `-h` | Display help information |
| `--debug` | `-d` | Enable debug mode |
| `--cwd` | `-c` | Set current working directory |
| `--prompt` | `-p` | Run a single prompt in non-interactive mode |
| `--output-format` | `-f` | Output format for non-interactive mode (text, json) |
| `--quiet` | `-q` | Hide spinner in non-interactive mode |
| `--verbose` | | Display logs to stderr in non-interactive mode |
| `--allowedTools` | | Restrict the agent to only use specified tools |
| `--excludedTools` | | Prevent the agent from using specified tools |
Global Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl+C` | Quit application |
| `Ctrl+?` | Toggle help dialog |
| `?` | Toggle help dialog (when not in editing mode) |
| `Ctrl+L` | View logs |
| `Ctrl+A` | Switch session |
| `Ctrl+K` | Command dialog |
| `Ctrl+O` | Toggle model selection dialog |
| `Esc` | Close current overlay/dialog or return to previous mode |

Chat Page Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl+N` | Create new session |
| `Ctrl+X` | Cancel current operation/generation |
| `i` | Focus editor (when not in writing mode) |
| `Esc` | Exit writing mode and focus messages |

Editor Shortcuts

| Shortcut | Action |
|----------|--------|
| `Ctrl+S` | Send message (when editor is focused) |
| `Enter` or `Ctrl+S` | Send message (when editor is not focused) |
| `Ctrl+E` | Open external editor |
| `Esc` | Blur editor and focus messages |

Session Dialog Shortcuts

| Shortcut | Action |
|----------|--------|
| `↑` or `k` | Previous session |
| `↓` or `j` | Next session |
| `Enter` | Select session |
| `Esc` | Close dialog |

Model Dialog Shortcuts

| Shortcut | Action |
|----------|--------|
| `↑` or `k` | Move up |
| `↓` or `j` | Move down |
| `←` or `h` | Previous provider |
| `→` or `l` | Next provider |
| `Esc` | Close dialog |

Permission Dialog Shortcuts

| Shortcut | Action |
|----------|--------|
| `←` or `left` | Switch options left |
| `→` or `right` or `tab` | Switch options right |
| `Enter` or `space` | Confirm selection |
| `a` | Allow permission |
| `A` | Allow permission for session |
| `d` | Deny permission |
Logs Page Shortcuts

| Shortcut | Action |
|----------|--------|
| `Backspace` or `q` | Return to chat page |

OpenCode's AI assistant has access to various tools to help with coding tasks:

File and code tools:

| Tool | Description | Parameters |
|------|-------------|------------|
| `glob` | Find files by pattern | `pattern` (required), `path` (optional) |
| `grep` | Search file contents | `pattern` (required), `path` (optional), `include` (optional), `literal_text` (optional) |
| `ls` | List directory contents | `path` (optional), `ignore` (optional array of patterns) |
| `view` | View file contents | `file_path` (required), `offset` (optional), `limit` (optional) |
| `write` | Write to files | `file_path` (required), `content` (required) |
| `edit` | Edit files | Various parameters for file editing |
| `patch` | Apply patches to files | `file_path` (required), `diff` (required) |
| `diagnostics` | Get diagnostics information | `file_path` (optional) |

System tools:

| Tool | Description | Parameters |
|------|-------------|------------|
| `bash` | Execute shell commands | `command` (required), `timeout` (optional) |
| `fetch` | Fetch data from URLs | `url` (required), `format` (required), `timeout` (optional) |
| `agent` | Run sub-tasks with the AI agent | `prompt` (required) |

OpenCode supports multiple themes for customizing the appearance of the terminal interface.

The following predefined themes are available:

  • opencode (default)
  • catppuccin
  • dracula
  • flexoki
  • gruvbox
  • monokai
  • onedark
  • tokyonight
  • tron
  • custom (user-defined)

You can set a theme in your .opencode.json configuration file:

```json
{
  "tui": {
    "theme": "monokai"
  }
}
```

You can define your own custom theme by setting the theme to "custom" and providing color definitions in the customTheme map:

```json
{
  "tui": {
    "theme": "custom",
    "customTheme": {
      "primary": "#ffcc00",
      "secondary": "#00ccff",
      "accent": {
        "dark": "#aa00ff",
        "light": "#ddccff"
      },
      "error": "#ff0000"
    }
  }
}
```

Custom theme colors support two formats:

  1. Simple Hex String: A single hex color string (e.g., "#aabbcc") that will be used for both light and dark terminal backgrounds.

  2. Adaptive Object: An object with dark and light keys, each holding a hex color string. This allows for adaptive colors based on the terminal's background.

You can define any of the following color keys in your customTheme:

  • Base colors: primary, secondary, accent
  • Status colors: error, warning, success, info
  • Text colors: text, textMuted, textEmphasized
  • Background colors: background, backgroundSecondary, backgroundDarker
  • Border colors: borderNormal, borderFocused, borderDim
  • Diff view colors: diffAdded, diffRemoved, diffContext, etc.

You don't need to define all colors. Any undefined colors will fall back to the default "opencode" theme colors.
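
This per-key fallback can be sketched as a simple lookup in Go. The function name `resolveColor` and the default hex values below are illustrative assumptions, not the actual theme implementation or the real "opencode" palette.

```go
package main

import "fmt"

// resolveColor returns the user-defined color for key when present,
// falling back to the default "opencode" theme value otherwise.
func resolveColor(custom, defaults map[string]string, key string) string {
	if v, ok := custom[key]; ok {
		return v
	}
	return defaults[key]
}

func main() {
	// Hypothetical default palette; real values differ.
	defaults := map[string]string{"primary": "#fab283", "error": "#cc0000"}
	custom := map[string]string{"primary": "#ffcc00"}

	fmt.Println(resolveColor(custom, defaults, "primary")) // custom wins
	fmt.Println(resolveColor(custom, defaults, "error"))   // falls back to default
}
```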

OpenCode allows you to configure the shell used by the bash tool. By default, it uses:

  1. The shell specified in the config file (if provided)
  2. The shell from the $SHELL environment variable (if available)
  3. /bin/bash as a fallback if neither of the above is available

To configure a custom shell, add a shell section to your .opencode.json configuration file:

```json
{
  "shell": {
    "path": "/bin/zsh",
    "args": ["-l"]
  }
}
```

You can specify any shell executable and custom arguments:

```json
{
  "shell": {
    "path": "/usr/bin/fish",
    "args": []
  }
}
```

OpenCode is built with a modular architecture:

  • cmd: Command-line interface using Cobra
  • internal/app: Core application services
  • internal/config: Configuration management
  • internal/db: Database operations and migrations
  • internal/llm: LLM providers and tools integration
  • internal/tui: Terminal UI components and layouts
  • internal/logging: Logging infrastructure
  • internal/message: Message handling
  • internal/session: Session management
  • internal/lsp: Language Server Protocol integration

OpenCode supports custom commands that can be created by users to quickly send predefined prompts to the AI assistant.

Custom commands are predefined prompts stored as Markdown files in one of three locations:

  1. User Commands (prefixed with user:):

    $XDG_CONFIG_HOME/opencode/commands/

    (typically ~/.config/opencode/commands/ on Linux/macOS)

    or

    $HOME/.opencode/commands/
  2. Project Commands (prefixed with project:):

    <PROJECT DIR>/.opencode/commands/

Each .md file in these directories becomes a custom command. The file name (without extension) becomes the command ID.

For example, creating a file at ~/.config/opencode/commands/prime-context.md with content:

```
RUN git ls-files
READ README.md
```

This creates a command called user:prime-context.

OpenCode supports named arguments in custom commands using placeholders in the format $NAME (where NAME consists of uppercase letters, numbers, and underscores, and must start with a letter).

For example:

```
# Fetch Context for Issue $ISSUE_NUMBER

RUN gh issue view $ISSUE_NUMBER --json title,body,comments
RUN git grep --author="$AUTHOR_NAME" -n .
RUN grep -R "$SEARCH_PATTERN" $DIRECTORY
```

When you run a command with arguments, OpenCode will prompt you to enter values for each unique placeholder. Named arguments provide several benefits:

  • Clear identification of what each argument represents
  • Ability to use the same argument multiple times
  • Better organization for commands with multiple inputs
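
The placeholder format described above ($NAME, starting with an uppercase letter, then uppercase letters, digits, or underscores) is straightforward to match with Go's regexp package. This is an illustrative sketch, not OpenCode's actual parser; the `placeholders` function name is an assumption.

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholderRE matches $NAME where NAME starts with an uppercase letter
// and continues with uppercase letters, digits, or underscores.
var placeholderRE = regexp.MustCompile(`\$([A-Z][A-Z0-9_]*)`)

// placeholders returns the unique placeholder names in order of first
// appearance, so each one is prompted for only once.
func placeholders(content string) []string {
	seen := map[string]bool{}
	var names []string
	for _, m := range placeholderRE.FindAllStringSubmatch(content, -1) {
		if !seen[m[1]] {
			seen[m[1]] = true
			names = append(names, m[1])
		}
	}
	return names
}

func main() {
	cmd := `RUN gh issue view $ISSUE_NUMBER --json title,body
RUN grep -R "$SEARCH_PATTERN" $DIRECTORY
RUN gh issue view $ISSUE_NUMBER --comments`
	fmt.Println(placeholders(cmd)) // [ISSUE_NUMBER SEARCH_PATTERN DIRECTORY]
}
```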

You can organize commands in subdirectories:

~/.config/opencode/commands/git/commit.md

This creates a command with ID user:git:commit.
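
The path-to-ID mapping described above (strip the .md extension, turn directory separators into colons, and prefix the source) can be sketched in a few lines of Go. The `commandID` helper is a hypothetical name for illustration.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// commandID derives a command ID from a Markdown file's path relative
// to the commands directory: strip ".md", replace path separators with
// ":", and prefix with the source ("user" or "project").
func commandID(prefix, relPath string) string {
	id := strings.TrimSuffix(relPath, ".md")
	id = strings.ReplaceAll(filepath.ToSlash(id), "/", ":")
	return prefix + ":" + id
}

func main() {
	fmt.Println(commandID("user", "git/commit.md"))       // user:git:commit
	fmt.Println(commandID("project", "prime-context.md")) // project:prime-context
}
```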

  1. Press Ctrl+K to open the command dialog
  2. Select your custom command (prefixed with either user: or project:)
  3. Press Enter to execute the command

The content of the command file will be sent as a message to the AI assistant.

MCP (Model Context Protocol)

OpenCode implements the Model Context Protocol (MCP) to extend its capabilities through external tools. MCP provides a standardized way for the AI assistant to interact with external services and tools.

  • External Tool Integration: Connect to external tools and services via a standardized protocol
  • Tool Discovery: Automatically discover available tools from MCP servers
  • Multiple Connection Types:
    • Stdio: Communicate with tools via standard input/output
    • SSE: Communicate with tools via Server-Sent Events
  • Security: Permission system for controlling access to MCP tools

MCP servers are defined in the configuration file under the mcpServers section:

```json
{
  "mcpServers": {
    "example": {
      "type": "stdio",
      "command": "path/to/mcp-server",
      "env": [],
      "args": []
    },
    "web-example": {
      "type": "sse",
      "url": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer token"
      }
    }
  }
}
```

Once configured, MCP tools are automatically available to the AI assistant alongside built-in tools. They follow the same permission model as other tools, requiring user approval before execution.

LSP (Language Server Protocol)

OpenCode integrates with Language Server Protocol to provide code intelligence features across multiple programming languages.

  • Multi-language Support: Connect to language servers for different programming languages
  • Diagnostics: Receive error checking and linting information
  • File Watching: Automatically notify language servers of file changes

Language servers are configured in the configuration file under the lsp section:

```json
{
  "lsp": {
    "go": {
      "disabled": false,
      "command": "gopls"
    },
    "typescript": {
      "disabled": false,
      "command": "typescript-language-server",
      "args": ["--stdio"]
    }
  }
}
```

The AI assistant can access LSP features through the diagnostics tool, allowing it to:

  • Check for errors in your code
  • Suggest fixes based on diagnostics

While the LSP client implementation supports the full LSP protocol (including completions, hover, definition, etc.), currently only diagnostics are exposed to the AI assistant.

  • Go 1.24.0 or higher
```shell
# Clone the repository
git clone https://github.com/sst/opencode.git
cd opencode

# Build
go build -o opencode

# Run
./opencode
```

OpenCode gratefully acknowledges the contributions and support from these key individuals:

Special thanks to the broader open source community whose tools and libraries have made this project possible.

OpenCode is licensed under the MIT License. See the LICENSE file for details.

Contributions are welcome! Here's how you can contribute:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please make sure to update tests as appropriate and follow the existing code style.
