This repository contains a PyTorch layer implementing power retention, a linear-cost variant of attention whose state size can be controlled independently of context length and parameter count.
For details on the approach, see our paper: Scaling Context Requires Rethinking Attention
Documentation: https://m-a-n-i-f-e-s-t.github.io/retention/
On a wide range of FLOPs budgets, power retention models achieve the lowest perplexity.
In a head-to-head comparison on long-context generation, power retention models like PowerCoder attain vastly greater token throughput than transformers.
(The measurements above are for 3B-parameter models on an A100, with a prefill length of 2048.)
- Efficient chunked algorithm for linear scaling with sequence length (O(t) cost vs O(t²) for standard attention); a naive reference sketch of the recurrence appears right after this list
- Support for gated attention and rotary embeddings
- CUDA kernels optimized for A100
- FP16 and BF16 support
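To build intuition for the linear-cost formulation, here is a naive reference sketch of the underlying recurrence. It is conceptual only: it uses a plain degree-p tensor-power feature map, whereas the real kernels exploit the symmetry of that map to shrink the state dimension D, process tokens in chunks, and support gating. The function and helper names below are illustrative, not part of the package API.

```python
import torch

def tensor_power_features(x, p):
    """Degree-p tensor power of the last dimension, flattened to d**p entries."""
    out = x
    for _ in range(p - 1):
        out = (out.unsqueeze(-1) * x.unsqueeze(-2)).flatten(-2)
    return out

def naive_power_retention(Q, K, V, p=2):
    """O(t)-cost reference: a fixed-size state per head instead of a t x t score matrix.

    Q, K, V have shape [batch, seq_len, num_heads, head_dim]. Gating, chunking,
    numerical-stability tricks, and the symmetry-compressed state of the real
    kernels are all omitted; this sketch assumes an even power p.
    """
    B, T, H, d = Q.shape
    S = Q.new_zeros(B, H, d ** p, d)   # accumulated phi(k) outer v
    z = Q.new_zeros(B, H, d ** p)      # accumulated phi(k), the normalizer
    outs = []
    for t in range(T):
        phi_k = tensor_power_features(K[:, t], p)            # [B, H, D]
        phi_q = tensor_power_features(Q[:, t], p)            # [B, H, D]
        S = S + phi_k.unsqueeze(-1) * V[:, t].unsqueeze(-2)  # [B, H, D, d]
        z = z + phi_k
        num = torch.einsum('bhe,bhed->bhd', phi_q, S)
        den = torch.einsum('bhe,bhe->bh', phi_q, z).unsqueeze(-1).clamp_min(1e-6)
        outs.append(num / den)
    return torch.stack(outs, dim=1)                          # [B, T, H, d]
```

Because the state S and normalizer z have a fixed size per head, each new token costs O(D·d) regardless of how long the sequence already is, which is where the overall O(t) cost comes from.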
Requirements:
- Python 3.11 or 3.12 (3.13 support depends on the upcoming Triton 3.2 release)
- CUDA Toolkit 12.4
- GCC/G++ with C++17 support
- Linux (Windows/MacOS not supported)
All other dependencies (PyTorch, Ninja build system, etc.) will be automatically installed through pip.
For practical deployment guidelines, refer to deployment.
The main entry point is the power_retention function, which implements symmetric power retention. Here's a basic example:
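The snippet below is a minimal sketch rather than verbatim API usage: the import path and the keyword for the power parameter (p) are assumptions, so check the documentation for the exact signature and defaults.

```python
import torch
from retention import power_retention  # import path is an assumption; see the docs

# Assumed tensor layout: [batch, seq_len, num_heads, head_dim]
B, T, H, d = 2, 1024, 8, 64
Q = torch.randn(B, T, H, d, device='cuda', dtype=torch.bfloat16)
K = torch.randn(B, T, H, d, device='cuda', dtype=torch.bfloat16)
V = torch.randn(B, T, H, d, device='cuda', dtype=torch.bfloat16)

# p controls the state dimension D; the keyword name is an assumption.
out = power_retention(Q, K, V, p=2)
print(out.shape)  # expected: [B, T, H, d]
```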
For inference, a separate interface power_retention_inference is provided, which allows for constant-time token generation regardless of context size.
The first call to power_retention_inference usually provides K, V as arguments, since there is no initial state. Once the sequence length of K and V grows beyond switch_over_seq_len, a state update happens, converting K, V of shape batch x seq_len x num_heads x head_dim into a state of shape batch x num_heads x D x head_dim, where D is controlled by the power parameter p. sum_of_keys is the accumulated normalization factor, with shape batch x num_heads x D.
You always need to keep state and sum_of_keys around for the next inference call, just like a KV cache. Unlike a KV cache, however, their size does not grow with context length.
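To make the data flow concrete, here is a rough sketch of an incremental decoding loop. Treat it as pseudocode: the import path, argument order, and return values of power_retention_inference below are assumptions about the interface described above, not its actual signature.

```python
import torch
from retention import power_retention_inference  # import path is an assumption

B, H, d = 1, 8, 64
state, sum_of_keys = None, None   # nothing to carry before the first call
K, V = None, None                 # running key/value buffer used before the switch-over

for step in range(16):
    # Dummy per-token projections; in practice these come from your model.
    q_t = torch.randn(B, 1, H, d, device='cuda', dtype=torch.bfloat16)
    k_t = torch.randn(B, 1, H, d, device='cuda', dtype=torch.bfloat16)
    v_t = torch.randn(B, 1, H, d, device='cuda', dtype=torch.bfloat16)

    # Assumed calling convention: pass the running K/V plus any existing state.
    # Once K/V grow past switch_over_seq_len, they are folded into `state`
    # ([batch, num_heads, D, head_dim]) and `sum_of_keys` ([batch, num_heads, D]).
    out, K, V, state, sum_of_keys = power_retention_inference(
        q_t, k_t, v_t, K, V, state, sum_of_keys
    )
    # Carry state and sum_of_keys into the next call, like a KV cache,
    # except their size stays fixed as the context grows.
```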
The package includes a drop-in replacement for standard attention in transformer models. See train/model.py for a complete example of using power retention in a GPT-style model.
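As an illustration of what that swap looks like (not the actual train/model.py code), the module below keeps a standard fused QKV projection and simply replaces the attention call with power_retention; the import path and the p keyword are assumptions.

```python
import torch
import torch.nn as nn
from retention import power_retention  # import path is an assumption

class PowerRetentionSelfAttention(nn.Module):
    """Illustrative drop-in block; see train/model.py for the real implementation."""

    def __init__(self, dim, num_heads, p=2):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.p = p
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to the assumed [batch, seq_len, num_heads, head_dim] layout.
        q = q.view(B, T, self.num_heads, self.head_dim)
        k = k.view(B, T, self.num_heads, self.head_dim)
        v = v.view(B, T, self.num_heads, self.head_dim)
        out = power_retention(q, k, v, p=self.p)  # keyword name is an assumption
        return self.proj(out.reshape(B, T, C))
```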
The package uses pip's editable install mode for development. First, activate your Python virtual environment, then:
Run correctness tests:
Run benchmarks:
See benchmark for details.
To view the documentation locally, run:
To update it publicly, run:
To immediately see the kernel in action, cd deploy and use:
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a new branch for your feature/fix: git checkout -b feature-name
- Install development dependencies: pip install -e .[dev]
- Code Style: Follow PEP 8 for Python code. For CUDA code, follow the existing style in the codebase
- Documentation: Add docstrings to new functions and update README if needed
- Testing: Add tests for new features and ensure all tests pass
- Benchmarking: If your code changes affect performance, delete plots/benchmark_results and rerun the relevant benchmarks with python -m perf.benchmark fwd+bwd
- Commits: Write clear, concise commit messages
- Performance: For CUDA kernels, include benchmarks showing performance impact
- Update documentation for any new features
- Add or update tests as needed
- Ensure all tests pass: pytest
- Run benchmarks if performance-critical code was changed: python3 -m perf.benchmark fwd+bwd
- Create a Pull Request with a clear description of changes
- Wait for review and address any feedback
- Performance optimizations for different GPU architectures
- Documentation improvements
- Bug fixes
- Test coverage improvements
For major changes, please open an issue first to discuss what you would like to change.
- Update the version in pyproject.toml
- Run pytest and benchmarks if applicable
- Run make release-test to build & push to Test PyPI for all Python targets
- Run make release to build & push to PyPI for all Python targets
If you use this code in your research, please cite:
Apache 2.0 (see LICENSE)