Chronos – Fair GPU Time-Sharing (Side Project)


Fair GPU Time-Sharing for Everyone


Time-based GPU partitioning with automatic expiration. Simple. Fair. Just works.™

```python
from chronos import Partitioner

# Get 50% of GPU 0 for 1 hour - guaranteed
with Partitioner().create(device=0, memory=0.5, duration=3600) as partition:
    train_model()  # Your code here
# Auto-cleanup when done
```

You have one expensive GPU and multiple users who need it.

Without Chronos:

❌ Resource conflicts and crashes
❌ No fair allocation
❌ Manual coordination required
❌ Wasted compute time
❌ Politics and frustration

With Chronos:

✅ Everyone gets guaranteed time
✅ Automatic resource cleanup
✅ Zero conflicts
✅ < 1% performance overhead
✅ No manual coordination

```bash
# PyPI (recommended)
pip install chronos-gpu

# Or quick script
curl -sSL https://raw.githubusercontent.com/oabraham1/chronos/main/install.sh | sudo bash

# Or from source
git clone https://github.com/oabraham1/chronos
cd chronos && ./install-quick.sh
```

```bash
# Check your GPUs
chronos stats

# Allocate 50% of GPU 0 for 1 hour
chronos create 0 0.5 3600

# List active partitions
chronos list

# It auto-expires - no cleanup needed!
```

```python
from chronos import Partitioner

p = Partitioner()

# Simple usage
with p.create(device=0, memory=0.5, duration=3600) as partition:
    import torch
    model = torch.nn.Sequential(...).cuda()
    # ... run your training loop here ...
# Automatic cleanup
```

Time-based partitions mean no resource hogging. Everyone gets their fair share.
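To illustrate the idea, here is a toy model of time-based allocation (my own sketch for exposition, not Chronos's actual internals): each partition holds a memory fraction until a deadline, and expired partitions free their share automatically so no one can hog the device.

```python
import time
from dataclasses import dataclass


@dataclass
class Partition:
    user: str
    memory: float      # fraction of the GPU, 0.0-1.0
    expires_at: float  # absolute deadline (epoch seconds)

    @property
    def expired(self) -> bool:
        return time.time() >= self.expires_at


class ToyPartitioner:
    """Toy time-based allocator: grants a memory fraction until a deadline."""

    def __init__(self):
        self.partitions: list[Partition] = []

    def _reap(self) -> None:
        # Expired partitions release their share automatically.
        self.partitions = [p for p in self.partitions if not p.expired]

    def create(self, user: str, memory: float, duration: float) -> Partition:
        self._reap()
        in_use = sum(p.memory for p in self.partitions)
        if in_use + memory > 1.0:
            raise RuntimeError(f"only {1.0 - in_use:.0%} of the GPU is free")
        p = Partition(user, memory, time.time() + duration)
        self.partitions.append(p)
        return p


gpu = ToyPartitioner()
gpu.create("alice", 0.5, duration=0.05)  # 50% for 50 ms
gpu.create("bob", 0.4, duration=3600)    # 40% for an hour
time.sleep(0.06)                         # alice's slice expires...
gpu.create("carol", 0.5, duration=3600)  # ...so carol's 50% now fits
```

Because allocations carry a deadline rather than being held indefinitely, capacity returns to the pool without any user having to remember to release it.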

Performance:

  • 3.2ms partition creation
  • < 1% GPU overhead
  • Sub-second expiration accuracy

Isolation:

  • Per-user partitions
  • Memory enforcement

Lifecycle:

  • Automatic expiration
  • No manual cleanup

Compatibility:

  • Any GPU: NVIDIA, AMD, Intel, Apple Silicon
  • Any OS: Linux, macOS, Windows
  • Any Framework: PyTorch, TensorFlow, JAX, etc.

Built for:

  • Research labs with shared GPUs
  • Small teams with limited hardware
  • Universities with many students
  • Development environments
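Automatic expiration of the kind listed above is typically handled by a background monitor. A minimal sketch of that technique (my own illustration in pure Python, not Chronos's code, which is implemented in C++):

```python
import threading
import time


class ExpirationMonitor:
    """Background thread that releases partitions when their time is up."""

    def __init__(self, poll_interval: float = 0.01):
        self._poll = poll_interval
        self._deadlines = {}  # partition id -> (deadline, cleanup callback)
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def watch(self, pid: str, duration: float, cleanup) -> None:
        """Register a partition to be cleaned up after `duration` seconds."""
        with self._lock:
            self._deadlines[pid] = (time.time() + duration, cleanup)

    def _run(self) -> None:
        # Poll until stopped; wait() doubles as a sleep that exits promptly.
        while not self._stop.wait(self._poll):
            now = time.time()
            with self._lock:
                expired = [pid for pid, (t, _) in self._deadlines.items() if now >= t]
                for pid in expired:
                    _, cleanup = self._deadlines.pop(pid)
                    cleanup(pid)  # e.g. free GPU memory, delete a lock file

    def stop(self) -> None:
        self._stop.set()
        self._thread.join()
```

The poll interval bounds expiration accuracy: with a 10 ms poll, a partition is released at most a few milliseconds after its deadline, which is consistent with the sub-second accuracy claimed above.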

Compared with NVIDIA MIG, MPS, and plain time-slicing, Chronos offers:

  • Time-based allocation
  • Auto-expiration
  • Multi-vendor GPU support
  • User isolation
  • Zero setup
  • < 1% overhead

```bash
#!/bin/bash
# Allocate GPU for the team every morning
chronos create 0 0.30 28800 --user alice   # 30%, 8 hours
chronos create 0 0.20 28800 --user bob     # 20%, 8 hours
chronos create 0 0.15 28800 --user carol   # 15%, 8 hours
# 35% left for ad-hoc use
```

ML Training with Auto-Save

```python
from chronos import Partitioner
import torch

with Partitioner().create(device=0, memory=0.5, duration=14400) as p:
    model = MyModel().cuda()
    for epoch in range(1000):
        train_epoch(model)
        # Auto-save when time is running out
        if p.time_remaining < 600:  # 10 minutes left
            torch.save(model.state_dict(), 'checkpoint.pt')
            print("Checkpoint saved!")
            break
```

```python
from chronos import Partitioner

# At the start of your notebook
p = Partitioner()
partition = p.create(device=0, memory=0.5, duration=7200)  # 2 hours

# Your analysis here
import tensorflow as tf
model = build_model()
model.fit(data)

# Check remaining time
print(f"Time left: {partition.time_remaining}s")

# Release when done (or it auto-expires)
partition.release()
```

Benchmarked on Ubuntu 22.04 with NVIDIA RTX 3080:

| Operation         | Latency       | Overhead |
|-------------------|---------------|----------|
| Create partition  | 3.2ms ± 0.5ms | -        |
| Release partition | 1.8ms ± 0.3ms | -        |
| GPU compute       | -             | 0.8%     |
| Memory tracking   | 0.1ms         | -        |

24-hour stress test: 1.2M operations, zero failures, zero memory leaks.

Full benchmarks →



```bash
# Linux/macOS
curl -sSL https://raw.githubusercontent.com/oabraham1/chronos/main/install.sh | sudo bash

# Or user install (no sudo)
curl -sSL https://raw.githubusercontent.com/oabraham1/chronos/main/install-user.sh | bash
```
```bash
docker pull ghcr.io/oabraham1/chronos:latest
docker run --gpus all ghcr.io/oabraham1/chronos:latest chronos stats
```
```bash
git clone https://github.com/oabraham1/chronos
cd chronos
mkdir build && cd build
cmake .. && make
sudo make install
```

Full installation guide →


```
┌─────────────────────────────────────────┐
│           User Applications             │
│    (PyTorch, TensorFlow, JAX, etc.)     │
└──────────────┬──────────────────────────┘
               │
┌──────────────▼──────────────────────────┐
│          Chronos Partitioner            │
│  ┌──────────────────────────────────┐   │
│  │   Time-Based Allocation Engine   │   │
│  └──────────────────────────────────┘   │
│  ┌──────────────────────────────────┐   │
│  │    Memory Enforcement Layer      │   │
│  └──────────────────────────────────┘   │
│  ┌──────────────────────────────────┐   │
│  │    Auto-Expiration Monitor       │   │
│  └──────────────────────────────────┘   │
└──────────────┬──────────────────────────┘
               │
┌──────────────▼──────────────────────────┐
│          OpenCL Runtime Layer           │
└──────────────┬──────────────────────────┘
               │
┌──────────────▼──────────────────────────┐
│        GPU Hardware (Any Vendor)        │
└─────────────────────────────────────────┘
```

Key Components:

  • C++ Core: High-performance partition management
  • Python Bindings: Easy-to-use API
  • CLI Tool: Command-line interface
  • Monitor Thread: Automatic expiration handling
  • Lock Files: Inter-process coordination
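Lock-file coordination of the kind listed above generally relies on atomic exclusive file creation, so that exactly one process can claim a resource at a time. A hedged sketch of the technique (the file layout and class name are my own illustration, not Chronos's implementation):

```python
import os


class GPULockFile:
    """Advisory inter-process lock via atomic O_CREAT | O_EXCL file creation."""

    def __init__(self, path: str):
        self.path = path
        self._fd = None

    def acquire(self) -> bool:
        try:
            # O_CREAT | O_EXCL is atomic: if the file already exists,
            # open fails, so exactly one process can win the lock.
            self._fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(self._fd, str(os.getpid()).encode())  # record the owner
            return True
        except FileExistsError:
            return False

    def release(self) -> None:
        if self._fd is not None:
            os.close(self._fd)
            os.remove(self.path)
            self._fd = None
```

Writing the owner's PID into the lock file is a common refinement: a monitor can detect locks left behind by crashed processes and reclaim them, which pairs naturally with time-based expiration.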

We welcome contributions! See CONTRIBUTING.md for guidelines.

Good first issues:

  • Add more examples
  • Improve error messages
  • Write tests
  • Update documentation
  • Fix bugs


If you use Chronos in research, please cite:

```bibtex
@software{chronos2025,
  title={Chronos: Time-Based GPU Partitioning for Fair Resource Sharing},
  author={Abraham, Ojima},
  year={2025},
  url={https://github.com/oabraham1/chronos},
  version={1.0.1}
}
```

Apache License 2.0 - Use it anywhere, for anything.

See LICENSE for full terms.



Thanks to all contributors and early adopters who helped shape Chronos!

Special thanks to the open-source community for inspiration and support.

