Because Context is Key!
A macOS menu bar app that brings AI to your fingertips. Select any text, press a hotkey, and get instant AI-powered answers without leaving your workflow.
Works with Ollama (local) or any API: OpenAI, Anthropic, Google Gemini, xAI, or your own custom endpoint.
ContextKey lets you:
- 🚀 Select text anywhere and instantly query AI about it
- 🖼️ Select images and files in Finder for analysis
- 💬 Keep conversation history with full context preservation
- 🤖 Use any LLM: Ollama (local), OpenAI, Anthropic, xAI, Google Gemini, or any custom API
- ⚡ Two modes: Full-featured main window + quick popup window
- 🎯 Control context: Choose what to include in each query
Installation:
- Download the .dmg file from Releases
- Move it to your Applications folder and launch it
- Grant permissions when prompted (File Access, Accessibility)
- Select a folder to store your conversations and settings
- Add an LLM configuration (see below)
Run AI models locally on your Mac:
```sh
# Install Ollama
brew install ollama

# Start Ollama
ollama serve

# Pull a model (in a new terminal)
ollama pull llama3.2
```
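Before wiring up ContextKey, you can sanity-check that Ollama is actually serving. A minimal Python sketch, assuming Ollama's default port 11434 and its standard `/api/tags` endpoint (which lists installed models):

```python
import json
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on /api/tags."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            models = json.load(resp).get("models", [])
            print("Installed models:", [m.get("name") for m in models])
            return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())
```

If this prints `False`, check that `ollama serve` is still running before blaming the ContextKey configuration.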
In ContextKey:
- Click ⚙️ Settings → Add Configuration
- Select "Ollama" (not Custom)
- Endpoint: http://localhost:11434
- Click "Fetch Models" to see installed models
- Select your model → Add Configuration
- Click "Set Active" to use it
All cloud APIs use the "Custom" option. Here are examples:
OpenAI:
- Click ⚙️ Settings → Add Configuration
- Select "Custom"
- Fill in:
  - Name: GPT-4
  - API Key: Get from platform.openai.com
  - Endpoint: https://api.openai.com/v1/chat/completions
  - HTTP Method: POST
  - Headers: `{"Content-Type": "application/json"}`
  - Request Template:

    ```json
    {
      "model": "gpt-4",
      "messages": [
        { "role": "user", "content": "{{input}}" }
      ]
    }
    ```

  - Response Path: `choices[0].message.content`
- Click Add Configuration → Set Active
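A quick way to sanity-check a request template before pasting it in is to substitute `{{input}}` and confirm the result is still valid JSON. This sketch assumes a plain string substitution (my assumption about how ContextKey renders templates), and JSON-escapes the message first so quotes and newlines don't break the body:

```python
import json

# The OpenAI template from above, as a raw string.
TEMPLATE = '{ "model": "gpt-4", "messages": [ { "role": "user", "content": "{{input}}" } ] }'

def render(template: str, message: str) -> dict:
    """Substitute {{input}} and parse the result.

    Mimics a simple string-replacement templating scheme (an assumption,
    not ContextKey's actual implementation)."""
    escaped = json.dumps(message)[1:-1]  # JSON-escape quotes, newlines, etc.
    return json.loads(template.replace("{{input}}", escaped))

body = render(TEMPLATE, 'Explain this: "x = [1, 2]"')
print(json.dumps(body, indent=2))
```

If `json.loads` raises here, the API will almost certainly reject the same body.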
Anthropic:
- Click ⚙️ Settings → Add Configuration
- Select "Custom"
- Fill in:
  - Name: Claude
  - API Key: Get from console.anthropic.com
  - Endpoint: https://api.anthropic.com/v1/messages
  - HTTP Method: POST
  - Headers: `{"Content-Type": "application/json", "anthropic-version": "2023-06-01"}`
  - Request Template:

    ```json
    {
      "model": "claude-3-5-sonnet-20241022",
      "max_tokens": 1024,
      "messages": [
        { "role": "user", "content": "{{input}}" }
      ]
    }
    ```

  - Response Path: `content[0].text`
- Click Add Configuration → Set Active
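One Anthropic-specific gotcha: the Messages API requires a `max_tokens` field, and it expects the key in an `x-api-key` header rather than `Authorization: Bearer` (ContextKey adds the key header for you, but this matters if you debug with curl). A small local check of the template above, with `{{input}}` replaced by a placeholder:

```python
import json

# The Anthropic template from above, as a raw string.
TEMPLATE = ('{ "model": "claude-3-5-sonnet-20241022", "max_tokens": 1024,'
            ' "messages": [ { "role": "user", "content": "{{input}}" } ] }')

def validate_anthropic_body(template: str) -> dict:
    """Parse the template and check the fields the Messages API requires."""
    body = json.loads(template.replace("{{input}}", "test"))
    assert "model" in body, "Anthropic requires a model"
    assert "max_tokens" in body, "Anthropic requires max_tokens"
    assert isinstance(body["messages"], list), "messages must be a list"
    return body

print(validate_anthropic_body(TEMPLATE))
```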
Google Gemini:
- Click ⚙️ Settings → Add Configuration
- Select "Custom"
- Fill in:
  - Name: Gemini Pro
  - API Key: Get from ai.google.dev
  - Endpoint: https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent
  - HTTP Method: POST
  - Headers: `{"Content-Type": "application/json"}`
  - Request Template:

    ```json
    {
      "contents": [
        { "parts": [ { "text": "{{input}}" } ] }
      ]
    }
    ```

  - Response Path: `candidates[0].content.parts[0].text`
- Click Add Configuration → Set Active
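Gemini differs from the other APIs in where the key goes: the REST endpoint accepts it as a `?key=...` query parameter (an `x-goog-api-key` header also works). If your setup needs the key baked into the endpoint URL rather than a header, an illustrative sketch (`YOUR_KEY` is a placeholder, not a real credential):

```python
from urllib.parse import urlencode

# The Gemini endpoint from above.
BASE = ("https://generativelanguage.googleapis.com/v1beta/models/"
        "gemini-pro:generateContent")

def with_key(base: str, api_key: str) -> str:
    """Append the Gemini API key as a URL-encoded query parameter."""
    return f"{base}?{urlencode({'key': api_key})}"

print(with_key(BASE, "YOUR_KEY"))
```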
Template Guide:
- Use `{{input}}` where you want the user's message and context inserted
- The API key is automatically added to the request headers for you
- Response Path uses dot notation: `field.nested.array[0].value`
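The dot-notation Response Path can be modeled as a small resolver. This sketch is my illustration of the notation, not ContextKey's actual parser; it walks a parsed JSON response following paths like `choices[0].message.content`:

```python
import re

def resolve(response: dict, path: str):
    """Walk a parsed JSON object following dot/bracket notation,
    e.g. 'choices[0].message.content'."""
    current = response
    for part in path.split("."):
        # Split "choices[0]" into the key "choices" and its indices "[0]".
        m = re.fullmatch(r"(\w+)((?:\[\d+\])*)", part)
        key, indices = m.group(1), m.group(2)
        current = current[key]
        for idx in re.findall(r"\[(\d+)\]", indices):
            current = current[int(idx)]
    return current

openai_resp = {"choices": [{"message": {"content": "Hello!"}}]}
print(resolve(openai_resp, "choices[0].message.content"))  # Hello!
```

The same resolver handles the Anthropic (`content[0].text`) and Gemini (`candidates[0].content.parts[0].text`) paths shown earlier.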
Quick popup:
- Select text in any app or a file in Finder
- Press Cmd+Shift+K (or the hotkey you set in Settings)
- Ask your question
- Get instant answers!
You can also press Cmd+Option+K (or your custom hotkey) to open the popup without any context.
Main window:
- Open the app from the menu bar
- (Optional) Add initial context or attach files
- Type your question and press Enter
- Continue the conversation or browse history in the sidebar
Before sending a message, choose what to include:
- ✅ Initial Context + Conversation: Full context with history (default)
- 📄 Initial Context Only: Just the starting context
- ❌ Neither: Only your current question
Privacy:
- All data is stored locally on your Mac
- API keys are saved in local files (you control backups)
- Zero telemetry or tracking
- Code is open source—audit it yourself!
Contributions welcome! Fork the repo, make your changes, and open a Pull Request.
MIT License - see LICENSE for details.
Context really is key. 🔑