A real-time LLM stream interceptor for token-level interaction research
Every Other Token is a research tool that intercepts OpenAI's streaming API responses and applies transformations to alternating tokens in real time. Instead of waiting for complete responses, it intervenes at the token level, creating a new paradigm for LLM interaction and analysis.
This tool opens up novel research possibilities:
- **Token Dependency Analysis**: Study how LLMs handle disrupted token sequences
- **Interpretability Research**: Understand token-level dependencies and causality
- **Creative AI Interaction**: Build co-creative systems with human-AI token collaboration
- **Real-time LLM Steering**: Develop new prompt engineering techniques
- **Stream Manipulation**: Explore how semantic meaning degrades with token alterations
The script intercepts the OpenAI streaming API response and applies transformations based on token position:
- Even tokens (0, 2, 4, 6...): Passed through unchanged
- Odd tokens (1, 3, 5, 7...): Transformed using the selected method
Original: "The quick brown fox jumps over the lazy dog"

With reverse transform: "The kciuq brown xof jumps revo the yzal dog"
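The alternation rule can be sketched in a few lines of Python. This is an illustration of the even/odd logic only, not the actual implementation; the function name is hypothetical, and the real tool operates on streamed API tokens rather than whitespace-split words:

```python
def every_other(tokens, transform):
    """Pass even-indexed tokens through; apply transform to odd-indexed ones."""
    return [tok if i % 2 == 0 else transform(tok) for i, tok in enumerate(tokens)]

words = "The quick brown fox jumps over the lazy dog".split()
print(" ".join(every_other(words, lambda t: t[::-1])))
# The kciuq brown xof jumps revo the yzal dog
```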
| Transform | Description | Example |
|-----------|-------------|---------|
| reverse | Reverses odd tokens | "hello" → "olleh" |
| uppercase | Converts odd tokens to uppercase | "hello" → "HELLO" |
| mock | Creates alternating case (mocking text) | "hello" → "hElLo" |
| noise | Adds random characters to odd tokens | "hello" → "hello*" |
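The four transforms in the table are simple string functions. A minimal sketch of what each might look like (the function names and the single-character noise behavior are assumptions for illustration):

```python
import random
import string

def reverse(tok):
    # "hello" -> "olleh"
    return tok[::-1]

def uppercase(tok):
    # "hello" -> "HELLO"
    return tok.upper()

def mock(tok):
    # "hello" -> "hElLo": alternate lower/upper by character position
    return "".join(c.upper() if i % 2 else c.lower() for i, c in enumerate(tok))

def noise(tok):
    # "hello" -> "hello*": append a random punctuation character
    return tok + random.choice(string.punctuation)

TRANSFORMS = {"reverse": reverse, "uppercase": uppercase, "mock": mock, "noise": noise}
```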
- PROMPT: Your input prompt (required)
- TRANSFORM: Transformation type (default: reverse)
- MODEL: OpenAI model (default: gpt-3.5-turbo)
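One way the script might read these variables, shown as a hedged sketch (the `load_config` helper is hypothetical; only the variable names and defaults come from the list above):

```python
import os

def load_config(env=os.environ):
    """Read tool configuration from environment variables."""
    prompt = env.get("PROMPT")
    if not prompt:
        raise SystemExit("PROMPT is required")
    return {
        "prompt": prompt,
        "transform": env.get("TRANSFORM", "reverse"),  # default: reverse
        "model": env.get("MODEL", "gpt-3.5-turbo"),    # default: gpt-3.5-turbo
    }
```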
The tool provides detailed run statistics. Research questions you can explore include:
- Causality Testing: How does corrupting early tokens affect later generation?
- Semantic Drift: At what corruption level does meaning break down?
- Model Comparison: How do different models handle token disruption?
- Domain Analysis: Which topics are most/least robust to token corruption?
- Recursive Mutation: Feed transformed output back as input
- Multi-Model Chains: Use tokens from different models alternately
- Human-in-the-Loop: Replace odd tokens with human input
- Bidirectional Analysis: Compare forward vs backward token importance
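The recursive-mutation idea above can be sketched as a simple feedback loop. Everything here is a hypothetical illustration: `run` stands in for one invocation of the tool (prompt in, transformed text out), and `toy_run` is a local stand-in so the loop is testable without API calls:

```python
def recursive_mutation(prompt, run, depth=3):
    """Feed each transformed output back in as the next prompt."""
    history = [prompt]
    for _ in range(depth):
        prompt = run(prompt)       # in the real tool: one streamed API round trip
        history.append(prompt)
    return history

def toy_run(text):
    """Toy stand-in for the tool: reverse every odd-indexed word."""
    words = text.split()
    return " ".join(w if i % 2 == 0 else w[::-1] for i, w in enumerate(words))

print(recursive_mutation("a bc", toy_run, depth=2))
# ['a bc', 'a cb', 'a bc']
```

Note that with a deterministic, self-inverse transform like reverse, the loop oscillates; with a real model in the loop, each round introduces fresh generation.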
The tool includes comprehensive error handling:
- API Key Validation: Checks for valid OpenAI API key
- Network Error Recovery: Handles connection issues gracefully
- Invalid Transform Detection: Validates transformation types
- Model Availability: Checks if requested model exists
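The first two checks are cheap to perform before any API call. A minimal sketch of that fail-fast validation (the `validate` helper is hypothetical; `OPENAI_API_KEY` is the standard OpenAI environment variable):

```python
import os

VALID_TRANSFORMS = {"reverse", "uppercase", "mock", "noise"}

def validate(transform, api_key=None):
    """Fail fast on missing API key or unknown transform before calling the API."""
    api_key = api_key or os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise SystemExit("OPENAI_API_KEY is not set")
    if transform not in VALID_TRANSFORMS:
        raise SystemExit(
            f"Unknown transform {transform!r}; choose from {sorted(VALID_TRANSFORMS)}"
        )
    return True
```

Model availability and network errors, by contrast, can only be detected at request time, so those are handled around the streaming call itself.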
We welcome contributions! Here are ways to get involved:
- New Transformations: Add creative token transformation functions
- Analysis Tools: Build utilities for analyzing output patterns
- Visualization: Create tools to visualize token-level changes
- Documentation: Improve examples and research applications
If you use this tool in academic research, please cite this repository.
- Web Interface: Browser-based tool for easier experimentation
- Batch Processing: Process multiple prompts simultaneously
- Export Functionality: Save results in various formats (JSON, CSV)
- Visualization Dashboard: Real-time charts and analysis
- Custom Transformations: User-defined transformation functions
- Multi-API Support: Extend to other LLM providers (Anthropic, Cohere)
- Collaborative Mode: Multiple users contributing tokens
- Research Templates: Pre-built experiments for common research patterns
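One possible shape for the planned custom-transformations feature, shown purely as a design sketch (the decorator-based registry and the example transform are assumptions, not an existing API):

```python
# Registry mapping transform names to user-defined functions.
REGISTRY = {}

def transform(name):
    """Decorator that registers a user-defined token transformation."""
    def register(fn):
        REGISTRY[name] = fn
        return fn
    return register

@transform("vowelless")
def strip_vowels(tok):
    # "hello" -> "hll"
    return "".join(c for c in tok if c.lower() not in "aeiou")
```

A registry like this would let user functions plug into the same dispatch table as the built-in transforms.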
- API Costs: Streaming API calls count toward your OpenAI usage
- Rate Limits: Respect OpenAI's rate limiting policies
- Research Ethics: Consider implications when studying AI behavior
- Data Privacy: Be mindful of sensitive information in prompts
- Very long responses may hit API timeout limits
- Some Unicode characters may not transform correctly
- Rapid token streaming can occasionally cause display issues
MIT License - see LICENSE file for details.
- OpenAI for the streaming API
- The AI research community for inspiration
- Contributors and beta testers
- Email: [email protected]
Made with 🧬 for AI researchers, prompt engineers, and curious minds
"Every token tells a story. Every other token tells a different one."