OCode
Powered by Ollama Models
OCode is a sophisticated terminal-native AI coding assistant that provides deep codebase intelligence and autonomous task execution. Built to work seamlessly with local Ollama models, OCode brings enterprise-grade AI assistance directly to your development workflow.
Terminal-native workflow – runs directly in your shell environment
Deep codebase intelligence – automatically maps and understands your entire project
Autonomous task execution – handles multi-step development tasks end-to-end
Direct Ollama integration – streams completions from local/remote Ollama without proxies
# Download from python.org and run installer
# https://www.python.org/downloads/windows/
# Make sure to check "Add Python to PATH" during installation
# Or using Chocolatey
choco install python311
# Or using winget
winget install Python.Python.3.11
Verify Python installation:
# Check Python version
python3 --version
# or
python --version
# Check pip
python3 -m pip --version
# or
pip --version
# If you get "command not found", try:
python3.11 --version
/usr/bin/python3 --version
Set up virtual environment (strongly recommended):
# Create a dedicated directory for OCode
mkdir ~/ocode-workspace
cd ~/ocode-workspace
# Create virtual environment
python3 -m venv ocode-env
# Activate virtual environment
# On macOS/Linux:
source ocode-env/bin/activate
# On Windows:
ocode-env\Scripts\activate
# Your prompt should now show (ocode-env)
# Verify you're in the virtual environment
which python # Should show path to venv
python --version
If you have permission issues with pip:
# Option 1: Use --user flag (installs to user directory)
python3 -m pip install --user --upgrade pip
# Option 2: Use virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
# Option 3: Fix pip permissions (macOS)
sudo chown -R $(whoami) /usr/local/lib/python3.*/site-packages
# Option 4: Use homebrew Python (macOS)
brew install python@3.11
# Then use /opt/homebrew/bin/python3 instead
macOS:
# Using Homebrew (recommended)
brew install ollama
# Or download from https://ollama.ai
Linux:
# One-line installer
curl -fsSL https://ollama.ai/install.sh | sh
# Or using package manager
# Ubuntu/Debian
sudo apt install ollama
# Arch Linux
yay -S ollama
Windows:
# Download from https://ollama.ai
# Or use WSL with Linux instructions above
Start Ollama:
# Start the Ollama service
ollama serve
# In a new terminal, download a model
ollama pull llama3.2
# or
ollama pull codellama
# or
ollama pull gemma2
Verify Ollama is working:
# Check if service is running
curl http://localhost:11434/api/version
# Should return something like: {"version":"0.7.1"}
# List available models
ollama list
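To confirm that a model can actually generate text, not just that the service is up, you can hit Ollama's standard /api/generate endpoint directly. A quick smoke test, assuming you pulled llama3.2 in the previous step:
# Quick generation smoke test against the Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one word.",
  "stream": false
}'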
Method 1: Installation Script
# Clone and run installation script
git clone https://github.com/haasonsaas/ocode.git
cd ocode
./scripts/install.sh
Method 2: Development Installation
# Clone the repository
git clone https://github.com/haasonsaas/ocode.git
cd ocode
# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
# Install in development mode with enhanced features
pip install -e .
# This will install new dependencies for data processing:
# - pyyaml>=6.0 (YAML parsing)
# - jsonpath-ng>=1.5.3 (JSONPath queries)
# - python-dotenv>=1.0.0 (Environment file handling)
Method 3: Install with pipx
# Install pipx if you don't have it
pip install pipx
# Install ocode
pipx install git+https://github.com/haasonsaas/ocode.git
Step 3: Verify Installation
# Check if ocode command is available
python -m ocode_python.core.cli --help
# Should show:
# Usage: python -m ocode_python.core.cli [OPTIONS] COMMAND [ARGS]...
# OCode - Terminal-native AI coding assistant powered by Ollama models.
If you get "command not found", see the Troubleshooting section below.
Step 4: Initialize Your First Project
# Navigate to your project directory
cd /path/to/your/project
# Initialize OCode for this project
python -m ocode_python.core.cli init
# Should output:
# ✓ Initialized OCode in /path/to/your/project
# Configuration: /path/to/your/project/.ocode/settings.json
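The generated settings.json can also be edited by hand. As a rough illustration only (the exact schema may differ; the key names here are the options used with ocode config later in this guide):
# Illustrative settings.json contents (not the authoritative schema)
cat .ocode/settings.json
# {
#   "model": "llama3.2:latest",
#   "temperature": 0.1,
#   "max_tokens": 8192,
#   "max_context_files": 20,
#   "verbose": false
# }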
# Set Ollama host (if using default localhost, this isn't needed)
export OLLAMA_HOST=http://localhost:11434
# Test with a simple prompt
python -m ocode_python.core.cli -p "Hello! Tell me about this project."# Test enhanced multi-action detection
python -m ocode_python.core.cli -p "Run tests and commit if they pass"# Test comprehensive tool listing
python -m ocode_python.core.cli -p "What tools can you use?"# Should connect to Ollama and demonstrate enhanced conversation parsing
OCode includes advanced conversation parsing with multi-action detection:
🤖 Multi-Action Query Detection
# These queries now correctly identify multiple required actions:
python -m ocode_python.core.cli -p "Run tests and commit if they pass"# test_runner + git_commit
python -m ocode_python.core.cli -p "Find all TODO comments and replace them"# grep + file_edit
python -m ocode_python.core.cli -p "Analyze architecture and write documentation"# architect + file_write
python -m ocode_python.core.cli -p "Create a component and write tests for it"# file_write + test_runner
python -m ocode_python.core.cli -p "Parse config.json and update environment"# json_yaml + env
python -m ocode_python.core.cli -p "Monitor processes and kill high CPU ones"# ps + bash
# Start interactive session
python -m ocode_python.core.cli
# Interactive mode commands:
# /help  - Show available commands
# /exit  - Exit OCode
# /model - Change model
# /clear - Clear conversation context
# /save  - Save current session
# /load  - Load previous session
Single Prompt Mode:
# Ask a question
python -m ocode_python.core.cli -p "Explain the authentication system"# Request code changes
python -m ocode_python.core.cli -p "Add error handling to the user login function"# Generate code
python -m ocode_python.core.cli -p "Create a REST API endpoint for user profiles"
Specify Model:
# Use a specific model
python -m ocode_python.core.cli -m llama3.2 -p "Review this code for security issues"
# Use a larger model for complex tasks
python -m ocode_python.core.cli -m codellama:70b -p "Refactor the entire payment processing module"
Different Output Formats:
# JSON output
python -m ocode_python.core.cli -p "List all functions in main.py" --out json
# Streaming JSON (for real-time processing)
python -m ocode_python.core.cli -p "Generate tests for UserService" --out stream-json
# Generate new features
ocode -p "Create a user authentication system with JWT tokens"# Add functionality to existing code
ocode -p "Add input validation to all API endpoints"# Generate documentation
ocode -p "Add comprehensive docstrings to all functions in utils.py"
# Understand existing code
ocode -p "Explain how the database connection pooling works"# Security review
ocode -p "Review the payment processing code for security vulnerabilities"# Performance analysis
ocode -p "Identify performance bottlenecks in the user search functionality"
Testing & Quality Assurance
# Generate tests
ocode -p "Write comprehensive unit tests for the UserRepository class"# Fix failing tests
ocode -p "Run the test suite and fix any failing tests"# Code coverage
ocode -p "Improve test coverage for the authentication module"
# Smart commits
ocode -p "Create a git commit with a descriptive message for these changes"# Code review
ocode -p "Review the changes in the current branch and suggest improvements"# Branch analysis
ocode -p "Compare this branch with main and summarize the changes"
Data Processing & Analysis
# JSON/YAML processing
ocode -p "Parse the config.json file and extract all database connection strings"# Data validation
ocode -p "Validate the structure of all YAML files in the configs/ directory"# JSONPath queries
ocode -p "Query user data: find all users with admin roles using JSONPath"# Environment management
ocode -p "Load variables from .env.production and compare with current environment"
System Monitoring & Operations
# Process monitoring
ocode -p "Show all Python processes and their memory usage"# Performance analysis
ocode -p "Find processes consuming more than 50% CPU and analyze them"# Network connectivity
ocode -p "Test connectivity to all services defined in docker-compose.yml"# Environment troubleshooting
ocode -p "Check if all required environment variables are set for production"
# Architecture review
ocode -p "Analyze the current project architecture and suggest improvements"# Dependency analysis
ocode -p "Review project dependencies and identify outdated packages"# Migration planning
ocode -p "Create a plan to migrate from Python 3.8 to Python 3.11"
For different task types:
# Fast responses for simple queries
export OCODE_MODEL="llama3.2:3b"
# Balanced performance for general coding
export OCODE_MODEL="llama3.2:latest"  # Usually 7B-8B
# Complex reasoning and large refactors
export OCODE_MODEL="codellama:70b"
# Specialized coding tasks
export OCODE_MODEL="codellama:latest"
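A small shell helper can make switching less repetitive. This is just a convenience sketch (the function name is hypothetical; it only sets the OCODE_MODEL variable shown above):
# Hypothetical helper: switch the active model per task
ocode_model() {
  export OCODE_MODEL="$1"
  echo "OCODE_MODEL=$OCODE_MODEL"
}

ocode_model "llama3.2:3b"    # quick questions
ocode_model "codellama:70b"  # heavy refactors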
Problem: ocode: command not found
Solution 1: Use the Python module path
# Use the full Python module path
python -m ocode_python.core.cli --help
# Or if using virtual environment
source venv/bin/activate
python -m ocode_python.core.cli --help
Solution 2: Install and check script location
# Install in development mode
pip install -e .
# Find where scripts were installed
pip show ocode | grep Location
# Add to PATH if needed
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
Problem: Permission errors during install
# Install without sudo using --user flag
pip install --user ocode
# Or use virtual environment
python -m venv ocode-env
source ocode-env/bin/activate
pip install ocode
Problem: Failed to connect to Ollama
Diagnosis:
# Check if Ollama is running
ps aux | grep ollama
# Check if port is open
netstat -ln | grep 11434
# or
lsof -i :11434
# Test direct connection
curl http://localhost:11434/api/version
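These checks can be rolled into one snippet. A sketch, assuming the default host and port (it honors OLLAMA_HOST if you have set it):
# One-shot health check for the Ollama endpoint
OLLAMA_URL="${OLLAMA_HOST:-http://localhost:11434}"
if curl -fsS "$OLLAMA_URL/api/version" > /dev/null; then
  echo "Ollama reachable at $OLLAMA_URL"
  ollama list
else
  echo "Ollama NOT reachable at $OLLAMA_URL -- is 'ollama serve' running?" >&2
fi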
Solutions:
# Start Ollama service
ollama serve
# Check for different port
export OLLAMA_HOST="http://localhost:11434"
# For Docker installations
docker ps | grep ollama
docker logs ollama-container-name
# Check firewall (Linux)
sudo ufw status
sudo ufw allow 11434
# Check firewall (macOS)
sudo pfctl -sr | grep 11434
Problem: Model not found errors
Solution:
# List available models
ollama list
# Pull required model
ollama pull llama3.2
# Update OCode config to use available model
ocode config --set model=llama3.2:latest
# Or set environment variable
export OCODE_MODEL="llama3.2:latest"
Problem: Slow responses or timeouts
Solutions:
# Increase timeout
export OCODE_TIMEOUT=600  # 10 minutes
# Use smaller model
export OCODE_MODEL="llama3.2:3b"
# Check system resources
top
df -h
free -h
# Monitor Ollama logs (on Linux with systemd)
journalctl -u ollama -f
Problem: Configuration not loading
Diagnosis:
# Check config file location
ocode config --list
# Verify file exists and is readable
ls -la .ocode/settings.json
cat .ocode/settings.json
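A malformed settings file is a common culprit; Python's standard library can validate it without extra tooling:
# Fails with a parse error if the JSON is malformed
python -m json.tool .ocode/settings.json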
Problem: Slow first response or project scanning
Solutions:
# Preload model in Ollama
ollama run llama3.2:latest "hello"
# Reduce project scan scope
printf "node_modules/\n.git/\ndist/\n" >> .ocodeignore
# Use SSD for project files
# Move project to faster storage
Problem: Connection refused (remote Ollama)
Solutions:
# Test network connectivity
ping your-ollama-server
telnet your-ollama-server 11434
# Check Ollama server binding
# On Ollama server, ensure it binds to 0.0.0.0:11434
OLLAMA_HOST=0.0.0.0 ollama serve
# Update client configuration
export OLLAMA_HOST="http://your-server-ip:11434"
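If the server runs Ollama as a systemd service (the default for the Linux install script), set the binding in the service environment instead, per Ollama's documented setup:
# On the Ollama server (Linux with systemd)
sudo systemctl edit ollama.service
# Add under [Service]:
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama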
Problem: Model gives poor responses
Solutions:
# Try different models
ocode config --set model=codellama:latest
ocode config --set model=llama3.2:70b
# Adjust temperature
ocode config --set temperature=0.1 # More focused
ocode config --set temperature=0.7  # More creative
# Increase context
ocode config --set max_tokens=8192
ocode config --set max_context_files=30
Problem: Model responses cut off
Solutions:
# Increase token limit
ocode config --set max_tokens=8192
# Use model with larger context window
export OCODE_MODEL="llama3.2:latest"
# Break large requests into smaller parts
ocode -p "First, analyze the authentication system"
ocode -p "Now suggest improvements for the auth system"
Enable verbose logging:
# Environment variable
export OCODE_VERBOSE=true
# CLI flag
ocode -v -p "Debug this issue"
# Config setting
ocode config --set verbose=true
Check system information:
# OCode version and config
ocode --help
ocode config --list
# Python environment
python --version
pip list | grep ocode
# Ollama status
ollama list
curl http://localhost:11434/api/version
# System resources
df -h
free -h
ps aux | grep -E "(ollama|ocode)"
For the full documentation index, see docs/index.md.
We welcome contributions! Here's how to get started:
git clone https://github.com/haasonsaas/ocode.git
cd ocode
# Create virtual environment
python -m venv venv
source venv/bin/activate
# Install in development mode with dev dependencies
pip install -e ".[dev]"# Run tests
pytest
# Code quality checks
black ocode_python/
isort ocode_python/
flake8 ocode_python/
mypy ocode_python/
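To run the whole quality gate in one shot, here is a convenience sketch (set -e stops at the first failing check; the --check flags keep black and isort from rewriting files):
# Run all checks, failing fast
set -e
black --check ocode_python/
isort --check-only ocode_python/
flake8 ocode_python/
mypy ocode_python/
pytest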
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0) - see the LICENSE file for details.
The AGPL-3.0 is a strong copyleft license that requires any modifications to be released under the same license. This ensures that the software remains free and open source, and that any improvements made to it are shared with the community.