A CLI tool for managing and comparing LLM prompts using semantic diffing instead of traditional text-based comparison.
LLM Prompt Semantic Diff delivers a lightweight command‑line workflow for managing, packaging, and semantically diffing Large Language Model prompts. It addresses a blind spot of ordinary text‑based git diff, which fails to reveal meaning‑level changes that materially affect model behaviour.
F-1 : prompt init - Generates skeletal prompt files and default manifests
F-2 : prompt pack - Embeds prompts into .pp.json with semantic versioning
F-3 : prompt diff - Semantic comparison with percentage scores and exit codes
F-4 : Dual embedding providers (OpenAI cloud + SentenceTransformers local)
F-5 : JSON output for CI/CD integration
F-6 : Schema validation for all manifests
F-7 : Comprehensive test suite with >75% coverage
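The semantic comparison behind F-3 typically reduces to a similarity score between two prompt embedding vectors. A minimal sketch, assuming cosine similarity is the metric (the tool's actual scoring may differ):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two nearly identical embedding vectors score close to 1.0
print(cosine_similarity([0.1, -0.2, 0.3], [0.1, -0.2, 0.31]))
```

Identical vectors score exactly 1.0 and orthogonal vectors score 0.0, which is why the score converts naturally into the percentage reported by prompt diff.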
Install from source:
git clone https://github.com/aatakansalar/llm-prompt-semantic-diff
cd llm-prompt-semantic-diff
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e .
1. Initialize a New Prompt
prompt init my-greeting
Creates my-greeting.prompt and my-greeting.pp.json with a default structure.
2. Package an Existing Prompt
prompt pack my-prompt.prompt
Generates embeddings and creates a versioned manifest.
3. Compare Prompt Versions
# Human-readable output
prompt diff v1.pp.json v2.pp.json --threshold 0.8
# JSON output for CI/CD
prompt diff v1.pp.json v2.pp.json --json --threshold 0.8
Returns exit code 1 if the similarity falls below the threshold, 0 otherwise.
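The threshold check behind that exit code can be sketched in a few lines. The helper below is hypothetical (not part of the tool's API); its report format mirrors the sample output shown later in this README:

```python
def diff_result(similarity: float, threshold: float) -> tuple[str, int]:
    """Format a similarity score and derive the CLI-style exit code:
    0 when the score meets the threshold, 1 otherwise."""
    verdict = "Yes" if similarity >= threshold else "No"
    report = (
        f"Semantic similarity: {similarity * 100:.1f}%\n"
        f"Threshold: {threshold * 100:.1f}%\n"
        f"Above threshold: {verdict}"
    )
    exit_code = 0 if similarity >= threshold else 1
    return report, exit_code

report, code = diff_result(0.852, 0.8)
print(report)  # 85.2% similarity against a 0.8 threshold passes
```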
4. Validate Manifest Schema
prompt validate my-prompt.pp.json
Uses SentenceTransformers with the all-MiniLM-L6-v2 model:
prompt pack my-prompt.prompt --provider sentence-transformers
Requires OPENAI_API_KEY environment variable:
export OPENAI_API_KEY="your-api-key"
prompt pack my-prompt.prompt --provider openai
Use --json flag for machine-readable output:
- name: Check prompt changes
  run: |
    if ! prompt diff main.pp.json feature.pp.json --json --threshold 0.8; then
      echo "Prompt changes exceed threshold - review required"
      exit 1
    fi
Prompts are packaged into .pp.json files:
{
  "content": "Your prompt text here...",
  "version": "0.1.0",
  "embeddings": [0.1, -0.2, 0.3, ...],
  "description": "Optional description",
  "tags": ["category", "type"],
  "model": "gpt-4"
}
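For scripting around the format, a manifest can be read with the standard json module. A small sketch that checks for the core fields shown above (the field set is taken from this example; the actual schema validated by prompt validate may include more):

```python
import json

# Core fields from the example manifest above; an assumption, not the full schema
REQUIRED_FIELDS = {"content", "version", "embeddings"}

def load_manifest(path: str) -> dict:
    """Load a .pp.json manifest and verify the core fields are present."""
    with open(path, encoding="utf-8") as fh:
        manifest = json.load(fh)
    missing = REQUIRED_FIELDS - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing fields: {sorted(missing)}")
    return manifest
```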
# Create new prompt
prompt init greeting
# Edit greeting.prompt file
# ... make changes ...
# Package with embeddings
prompt pack greeting.prompt
# Create modified version
cp greeting.prompt greeting-v2.prompt
# ... make more changes ...
prompt pack greeting-v2.prompt
# Compare versions
prompt diff greeting.pp.json greeting-v2.pp.json
# Output:
# Semantic similarity: 85.2%
# Threshold: 80.0%
# Above threshold: Yes
# Version A: 0.1.0
# Version B: 0.1.0
Local-first : No data leaves your machine unless the OpenAI provider is explicitly selected
API keys : Only read from environment variables (OPENAI_API_KEY)
No telemetry : No analytics, tracking, or hidden network calls
git clone https://github.com/aatakansalar/llm-prompt-semantic-diff
cd llm-prompt-semantic-diff
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e " .[dev]"
pytest tests/ -v
Licensed under the MIT License. See LICENSE for details.