Show HN: Kafy – kubectl-style CLI for Kafka management
A comprehensive Kafka productivity CLI tool that simplifies Kafka operations with a kubectl-like design philosophy. Replace complex native kafka-* shell scripts with intuitive, short commands and intelligent tab completion.
Context-aware operations - Switch between dev, staging, and prod clusters seamlessly
Unified command structure - kubectl-inspired commands for all Kafka operations
Intelligent tab completion - Auto-complete topics, consumer groups, brokers, and more
Multiple output formats - Human-readable tables, JSON, and YAML for automation
Linux: kafy-v1.0.0-linux-amd64.tar.gz (x86_64) or kafy-v1.0.0-linux-arm64.tar.gz (ARM64)
macOS: kafy-v1.0.0-darwin-amd64.tar.gz (Intel) or kafy-v1.0.0-darwin-arm64.tar.gz (Apple Silicon)
Windows: kafy-v1.0.0-windows-amd64.zip
Extract the archive:
# Linux/macOS
tar -xzf kafy-v1.0.0-linux-amd64.tar.gz
# Windows (PowerShell)
Expand-Archive kafy-v1.0.0-windows-amd64.zip
Move to PATH (optional but recommended):
# Linux/macOS
sudo mv kafy /usr/local/bin/
# Windows: Add the extracted folder to your PATH environment variable
Verify installation:
# Confirm the binary is on your PATH
kafy --help
For developers and contributors - Build from source code:
Go 1.21+ - Required for compilation
librdkafka development libraries - Required for Kafka connectivity
Git - For cloning the repository
Docker (optional) - For cross-platform builds
# Clone the repository
git clone https://github.com/KLogicHQ/kafy.git
cd kafy
# Download Go dependencies
go mod tidy
# Build for current platform
go build -o kafy .
# Optional: Install globally
sudo mv kafy /usr/local/bin/ # Linux/macOS
# Or add to PATH on Windows
Use the included build script for multi-platform binaries:
# Make executable and run
chmod +x build.sh
./build.sh
# Find built packages in release/dist/
ls release/dist/
The build script will:
Build for Linux (amd64, arm64) using Docker when available
Attempt native cross-compilation for macOS and Windows
Create distribution packages (tar.gz for Linux/macOS, zip for Windows)
Provide detailed build status for each platform
kafy provides intelligent tab completion for all shells. Set it up once and get auto-completion for topics, consumer groups, broker IDs, cluster names, and command flags.
# For current session only
source <(kafy completion bash)
# Install permanently on Linux
kafy completion bash | sudo tee /etc/bash_completion.d/kafy
# Install permanently on macOS (with Homebrew bash-completion)
kafy completion bash >$(brew --prefix)/etc/bash_completion.d/kafy
# For current session only
source <(kafy completion zsh)
# Install permanently
kafy completion zsh > ~/.zsh/completions/_kafy
# Then add to ~/.zshrc: fpath=(~/.zsh/completions $fpath)
# Or for oh-my-zsh users
kafy completion zsh > ~/.oh-my-zsh/completions/_kafy
kafy completion fish >~/.config/fish/completions/kafy.fish
# Add a development cluster with single bootstrap server
kafy config add dev --bootstrap "localhost:9092" --broker-metrics-port 9308
# Add a production cluster with multiple bootstrap servers for high availability
kafy config add prod --bootstrap "kafka-prod-1:9092,kafka-prod-2:9092,kafka-prod-3:9092" --broker-metrics-port 9308
# List configured clusters
kafy config list
# Switch between clusters
kafy config use prod
kafy config current
# List all topics
kafy topics list
# View partition details with in-sync status
kafy topics partitions # All topics
kafy topics partitions orders # Specific topic
# Create a new topic
kafy topics create orders --partitions 3 --replication 2
# Describe a topic
kafy topics describe orders
# Delete a topic (with confirmation)
kafy topics delete test-topic
# Manage topic configurations
kafy topics configs list # List configs for ALL topics
kafy topics configs get orders # Get configs for specific topic
kafy topics configs set orders retention.ms=86400000
# Interactive message production
kafy produce orders
# Produce from a file
echo '{"order_id": 123, "amount": 99.99}' > order.json
kafy produce orders --file order.json --format json
# Generate test messages
kafy produce orders --count 10
# Produce with a specific key
kafy produce orders --key "customer-123"
# Consume messages interactively
kafy consume orders
# Consume from multiple topics simultaneously
kafy consume orders users events
# Consume from beginning with limit
kafy consume orders --from-beginning --limit 20
# Consume from latest messages only
kafy consume orders --from-latest
# Consume with specific consumer group
kafy consume orders --group my-service
# Output in JSON format
kafy consume orders --output json --limit 5
# Tail messages in real-time (like tail -f)
kafy tail orders
# Tail multiple topics simultaneously
kafy tail orders users events
📖 Complete Command Reference
| Command | Description | Examples |
|---|---|---|
| kafy config list | List all configured clusters | Show cluster overview with metrics port and current context |
| kafy config current | Show current active cluster | Display active cluster details including metrics port |
# Default table format (human-readable)
kafy topics list
# JSON format (for automation)
kafy topics list --output json
# YAML format
kafy topics list --output yaml
🔄 Temporary Cluster Switching
The global -c or --cluster flag allows you to temporarily switch to a different cluster for any command without changing your current context:
# Run command against specific cluster without switching context
kafy topics list -c prod # List topics on prod cluster
kafy consume orders -c staging --limit 10 # Consume from staging cluster
kafy health check -c dev # Check dev cluster health
# Multiple commands can use different clusters
kafy topics create test-topic -c dev --partitions 1
kafy cp test-topic backup-topic -c prod --limit 100
# Your current context remains unchanged
kafy config current # Still shows your original active cluster
Key Benefits:
No context switching required - Run commands on any configured cluster instantly
Safe operations - Current context is never modified
Script-friendly - Perfect for automation scripts that work across multiple environments
Works with all commands - Available for every kafy command that connects to Kafka
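As a sketch of the script-friendly angle: the JSON output can be piped straight into standard tools. The schema below (a top-level array of objects with a "name" field) is an assumption for illustration, not kafy's documented output — inspect `kafy topics list --output json` on your cluster for the real shape.

```shell
# Sample standing in for `kafy topics list --output json`; in a real script
# you would use: topics_json=$(kafy topics list --output json -c prod)
# The "name" field is an assumed schema, not documented kafy output.
topics_json='[{"name": "orders"}, {"name": "users"}]'

# Count topics with python3's stdlib json module (no jq dependency)
echo "$topics_json" | python3 -c 'import json, sys; print(len(json.load(sys.stdin)))'
```

The same pattern works for consumer-group lag or partition listings: capture the JSON once, then post-process it, so automation never depends on parsing human-readable tables.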
# The tool supports various SASL mechanisms
# Configure through the YAML config file or environment variables
kafy config add secure-cluster --bootstrap "secure-kafka:9092"
# Then edit ~/.kafy/config.yml to add security settings
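The exact schema of ~/.kafy/config.yml is not shown here, so the following is only an illustrative sketch of what a SASL/SSL section might look like — every key name below is an assumption, not kafy's documented format. Run `kafy config export --output yaml` to see the real structure before editing.

```yaml
# Hypothetical sketch of ~/.kafy/config.yml -- key names are assumptions,
# not kafy's documented schema; check `kafy config export` for the real one.
clusters:
  secure-cluster:
    bootstrap: "secure-kafka:9092"
    security:
      protocol: SASL_SSL
      sasl_mechanism: SCRAM-SHA-512
      username: my-user
      password: my-password
```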
# Set up development environment
kafy config add local --bootstrap "localhost:9092" --broker-metrics-port 9308
kafy config use local
# Create test topic and generate data
kafy topics create test-events --partitions 1 --replication 1
kafy produce test-events --count 100
# Monitor partition health and sync status
kafy topics partitions test-events
# Monitor the data
kafy consume test-events --from-beginning --limit 10
# Monitor multiple topics simultaneously
kafy consume orders test-events users --limit 20
# Or tail real-time messages from multiple topics
kafy tail test-events orders users
# Configure topic settings
kafy topics configs set test-events retention.ms=3600000
kafy topics configs get test-events
# Remove topic configuration overrides
kafy topics configs delete test-events cleanup.policy
# Add production cluster with multiple bootstrap servers for HA
kafy config add prod --bootstrap "kafka-prod-1:9092,kafka-prod-2:9092,kafka-prod-3:9092" --broker-metrics-port 9308
kafy config use prod
# Check cluster health
kafy health check
# Monitor consumer groups with state and member information
kafy groups list
kafy groups describe critical-processor
kafy groups lag critical-processor
# Check partition health and sync status
kafy topics partitions critical-topic
kafy topics partitions # All topics
# Inspect broker configurations and metrics
kafy brokers configs list
kafy brokers describe 1
kafy brokers metrics 1 # Requires --broker-metrics-port
# List topic details and configurations
kafy topics list
kafy topics describe critical-topic
kafy topics configs get critical-topic
#!/bin/bash
# Set up cluster with multiple bootstrap servers
kafy config add prod --bootstrap "kafka-1:9092,kafka-2:9092,kafka-3:9092" --broker-metrics-port 9308
# Get all topics as JSON for processing
TOPICS=$(kafy topics list --output json)

# Check if specific topic exists
if kafy topics describe user-events --output json > /dev/null 2>&1; then
  echo "Topic exists"
  # Check partition sync status
  kafy topics partitions user-events --output json
else
  echo "Creating topic..."
  kafy topics create user-events --partitions 6 --replication 3
  # Configure the topic
  kafy topics configs set user-events retention.ms=86400000
  kafy topics configs set user-events cleanup.policy=delete
fi

# Monitor multiple topics simultaneously in automation
kafy consume user-events orders payments --output json --limit 100
# Monitor consumer group with detailed member information
GROUPS=$(kafy groups list --output json)
kafy groups describe my-service --output json
LAG=$(kafy groups lag my-service --output json)
echo "Current lag: $LAG"
# Partition data movement for rebalancing or migration
kafy topics move-partition orders --source-partition 0 --dest-partition 3
# Monitor partition health before and after migration
kafy topics partitions orders
# Copy messages between topics for backup or testing
kafy cp production-events staging-events --limit 5000
# Multi-topic consumption for aggregated monitoring
kafy consume orders payments notifications --output json --limit 50
# Real-time monitoring across multiple topics
kafy tail critical-events error-logs audit-trail
# Add clusters with single or multiple bootstrap servers
kafy config add dev --bootstrap "localhost:9092" --broker-metrics-port 9308
kafy config add staging --bootstrap "kafka-stage-1:9092,kafka-stage-2:9092" --broker-metrics-port 9308
kafy config add prod --bootstrap "kafka-prod-1:9092,kafka-prod-2:9092,kafka-prod-3:9092" --broker-metrics-port 9308
# Update existing cluster configurations
kafy config update dev --bootstrap "localhost:9092,localhost:9093" --broker-metrics-port 9309
kafy config update prod --zookeeper "zk-prod-1:2181,zk-prod-2:2181,zk-prod-3:2181"
# View complete cluster information
kafy config list # Shows all clusters with metrics port and zookeeper
kafy config current # Shows detailed current cluster info
# Topic configuration management
kafy topics configs list # All topic configs
kafy topics configs get orders # Specific topic
kafy topics configs set orders retention.ms=604800000 # Update setting
kafy topics configs delete orders cleanup.policy # Remove override
Broker Configuration Management
# List all broker configurations
kafy brokers configs list
# View specific broker configuration
kafy brokers configs get 1
# Update broker settings
kafy brokers configs set 1 log.retention.hours=72
Monitoring & Health Checks
# Configure cluster with metrics support
kafy config add prod --bootstrap "kafka-1:9092,kafka-2:9092,kafka-3:9092" --broker-metrics-port 9308
# Comprehensive cluster health monitoring
kafy health check # Full cluster diagnostics
kafy health brokers # Broker connectivity
kafy health topics # Topic accessibility
kafy health groups # Consumer group health
# Partition health and sync monitoring
kafy topics partitions # All topics with INSYNC status
kafy topics partitions critical-topic # Specific topic partitions
# Consumer group monitoring with member details
kafy groups list # Groups with state and member count
kafy groups describe payment-service # Detailed member information
kafy groups lag payment-service # Partition lag metrics
# Broker metrics monitoring (Prometheus)
kafy brokers metrics 1 # Kafka server and JVM metrics
kafy brokers metrics 2 # Network I/O and process stats
# AI-powered metrics analysis (optional)
export OPENAI_API_KEY="your-openai-key" # Configure API key
kafy brokers metrics 1 --analyze # OpenAI analysis with gpt-4o (default)
kafy brokers metrics 1 --analyze --provider claude # Use Claude with default model
kafy brokers metrics 1 --analyze --provider grok # Use Grok with default model
kafy brokers metrics 1 --analyze --provider gemini # Use Gemini with default model
# Custom model examples
kafy brokers metrics 1 --analyze --model gpt-4o-mini # Use cheaper OpenAI model
kafy brokers metrics 1 --analyze --provider claude --model claude-3-haiku-20240307 # Use faster Claude model
AI-Powered Metrics Analysis
The broker metrics command supports optional AI analysis to provide intelligent recommendations and root cause analysis:
| Provider | Environment Variable | Default Model |
|---|---|---|
| OpenAI (default) | OPENAI_API_KEY | gpt-4o |
| Claude | ANTHROPIC_API_KEY | claude-3-sonnet-20240229 |
| Grok | XAI_API_KEY | grok-beta |
| Gemini | GOOGLE_API_KEY | gemini-pro |
Configure API Key: Set the environment variable for your chosen provider:
# For OpenAI (default)
export OPENAI_API_KEY="your-openai-api-key"
# For Claude
export ANTHROPIC_API_KEY="your-anthropic-api-key"
# For Grok
export XAI_API_KEY="your-xai-api-key"
# For Gemini
export GOOGLE_API_KEY="your-google-api-key"
Use AI Analysis: Add the --analyze flag to any metrics command:
# Basic AI analysis with OpenAI (uses gpt-4o by default)
kafy brokers metrics 1 --analyze
# Use specific AI provider with default model
kafy brokers metrics 1 --analyze --provider claude
# Use specific AI provider with custom model
kafy brokers metrics 1 --analyze --provider openai --model gpt-4o-mini
kafy brokers metrics 1 --analyze --provider claude --model claude-3-haiku-20240307
kafy brokers metrics 1 --analyze --provider gemini --model gemini-1.5-pro
The AI analysis provides structured insights:
📊 Summary: Overall health assessment
⚠️ Issues Identified: Performance problems and bottlenecks
🔍 Root Cause Analysis: Explanations of what's causing issues
💡 Recommendations: Specific, actionable solutions
Example output:
================================================================================
🤖 AI ANALYSIS & RECOMMENDATIONS
================================================================================
🔄 Analyzing metrics with OPENAI...
📊 SUMMARY:
Broker shows healthy performance with normal memory usage and stable throughput
⚠️ ISSUES IDENTIFIED:
1. High GC frequency indicates memory pressure
2. Consumer lag building up on topic '__consumer_offsets'
💡 RECOMMENDATIONS:
1. Increase heap size from 4GB to 6GB
2. Tune G1GC settings for better throughput
3. Monitor consumer group distribution
Note: AI analysis is completely optional and requires your own API keys. All metrics are processed securely through the configured AI provider.
# Test broker connectivity
kafy health brokers
# Verify current configuration
kafy config current
# Test with specific cluster
kafy config use dev
kafy brokers list
# Check if topic exists and view details
kafy topics describe my-topic
# List all topics to verify
kafy topics list
# Check topic configurations
kafy topics configs list # All topics
kafy topics configs get my-topic # Specific topic
# Check consumer groups and lag
kafy groups list
kafy groups describe my-group
kafy groups lag my-group
# Reset consumer offsets if needed
kafy groups reset my-group --to-earliest
# Verify cluster configuration
kafy config current
# Export and inspect full config
kafy config export --output yaml
# Test cluster connectivity
kafy health check
Fork the repository
Create your feature branch
Make your changes
Test thoroughly with tab completion
Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
Happy Kafka-ing! 🎉
For more help with any command, use kafy <command> --help or enable tab completion for the best experience.