Sirius is a GPU-native SQL engine. It plugs into existing databases such as DuckDB via the standard Substrait query format, requiring no query rewrites or major system changes. Sirius currently supports DuckDB, with Doris support coming soon; other systems marked with * are on our roadmap.
On TPC-H at SF=100, Sirius achieves a ~10x speedup over existing CPU query engines at the same hardware rental cost, making it well suited for interactive analytics, financial workloads, and ETL jobs.
- Ubuntu >= 20.04
- NVIDIA Volta™ or higher with compute capability 7.0+
- CUDA >= 11.2
- CMake >= 3.30.4 (follow these instructions to upgrade CMake)
- We recommend building Sirius with at least 16 vCPUs for faster compilation.
For users who have access to AWS and want to launch AWS EC2 instances to run Sirius, the following images are prepared with dependencies fully installed.
| AMI Name | AWS Region | AMI ID |
| --- | --- | --- |
| Sirius Dependencies AMI (Ubuntu 24.04) 20250611 | us-east-1 | ami-06020f2b2161f5d62 |
| Sirius Dependencies AMI (Ubuntu 24.04) 20250611 | us-east-2 | ami-016b589f441fecc5d |
| Sirius Dependencies AMI (Ubuntu 24.04) 20250611 | us-west-2 | ami-060043bae3f9b5eb4 |
Supported EC2 instance families: G4dn, G5, G6, Gr6, G6e, P4, P5, P6.
To use the docker image with dependencies fully installed:
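A minimal sketch of starting the container with GPU access; the image name below is a placeholder, so substitute the actual published Sirius dependencies image:

```bash
# Placeholder image name; replace with the published Sirius dependencies image.
docker run --gpus all -it --rm \
  -v "$(pwd)":/workspace \
  sirius-dependencies:latest /bin/bash
```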
If you encounter errors like the following when running the Docker image as above:
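A typical symptom (an assumption based on the standard Docker error shown when the GPU runtime is missing) looks like:

```text
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
```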
This means the NVIDIA driver or the NVIDIA Container Toolkit is not installed.
To install the NVIDIA driver:
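One way to do this on Ubuntu, assuming the ubuntu-drivers-common helper is available (a specific nvidia-driver package can be installed instead):

```bash
sudo apt update
# Let Ubuntu pick a compatible driver automatically.
sudo ubuntu-drivers autoinstall
# Reboot, then confirm the driver is loaded.
nvidia-smi
```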
To install the NVIDIA Container Toolkit, follow the official instructions.
Finally, register the NVIDIA runtime with Docker and restart the daemon:
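These are the standard nvidia-container-toolkit configuration steps:

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```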
If CUDA is not installed, download it here. Follow the instructions for the deb (local) installer and complete the post-installation steps.
Verify installation:
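Both commands should succeed and report a CUDA version of at least 11.2:

```bash
nvcc --version
nvidia-smi
```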
libcudf is installed via conda/miniconda. Miniconda can be downloaded here. After installing Miniconda, install libcudf by running these commands:
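A sketch of one way to do this, following the standard RAPIDS conda install pattern; the environment name, libcudf version, and CUDA version here are assumptions and should be matched to your setup:

```bash
# Create a dedicated environment and install libcudf from the RAPIDS channels.
conda create -n libcudf-env -c rapidsai -c conda-forge -c nvidia libcudf cuda-version=12.5
conda activate libcudf-env
```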
Set the environment variable LIBCUDF_ENV_PREFIX to the conda environment's path. For example, if miniconda is installed in ~/miniconda3 and libcudf is installed in the conda environment libcudf-env, set LIBCUDF_ENV_PREFIX to ~/miniconda3/envs/libcudf-env:
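```bash
# Adjust the path to wherever miniconda and the libcudf environment live.
export LIBCUDF_ENV_PREFIX=~/miniconda3/envs/libcudf-env
```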
We recommend adding these environment variables to your ~/.bashrc so you do not have to set them in every new shell.
To clone the Sirius repository:
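Assuming the repository is hosted under the sirius-db organization on GitHub (adjust the URL if your origin differs):

```bash
git clone --recurse-submodules https://github.com/sirius-db/sirius.git
cd sirius
```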
The --recurse-submodules flag ensures that DuckDB is pulled, which is required to build the extension.
To build Sirius:
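A sketch assuming the standard DuckDB extension-template build driven by the repository Makefile (target names may differ):

```bash
# Build Sirius and the bundled DuckDB shell in release mode.
make release
```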
Optionally, to use the Python API in Sirius, also build the duckdb-python package with the following commands:
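A sketch, assuming the DuckDB Python package sources live under the DuckDB submodule at tools/pythonpkg (this path is an assumption):

```bash
cd duckdb/tools/pythonpkg
pip install .
cd ../../..
```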
Common issues: if pip install . only works inside a virtual environment, run the following from the Sirius home directory before the installation:
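For example, create and activate a virtual environment (the directory name .venv is arbitrary):

```bash
python3 -m venv .venv
source .venv/bin/activate
```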
To generate the TPC-H dataset and load it into DuckDB:
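As one possible sketch, DuckDB's built-in tpch extension can generate and load the dataset in a single step from the DuckDB shell; this is an assumption for illustration, and the repository may ship its own generation scripts:

```sql
-- Run inside the DuckDB shell, e.g. ./build/release/duckdb tpch-sf100.duckdb
INSTALL tpch;
LOAD tpch;
CALL dbgen(sf = 100);
```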
To run the Sirius CLI, simply start the shell with ./build/release/duckdb {DATABASE_NAME}.duckdb. From the DuckDB shell, initialize the Sirius buffer manager with call gpu_buffer_init. This API accepts two parameters: the GPU caching region size and the GPU processing region size. The GPU caching region is where raw data is stored on the GPU, whereas the GPU processing region is where intermediate results (hash tables, join results, etc.) are stored. For example, to set the caching region to 1 GB and the processing region to 2 GB, run the following command:
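The human-readable size strings below are an assumption about the accepted argument format:

```sql
call gpu_buffer_init('1 GB', '2 GB');
```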
After setting up Sirius, we can execute SQL queries using call gpu_processing:
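A minimal sketch, assuming gpu_processing takes the SQL statement as a single string argument:

```sql
call gpu_processing('SELECT l_returnflag, count(*) FROM lineitem GROUP BY l_returnflag');
```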
The cold run in Sirius will be significantly slower due to data loading from storage and conversion from the DuckDB format to the Sirius native format. Subsequent runs will be faster since they benefit from caching in GPU memory.
All 22 TPC-H queries are saved in tpch-queries.sql. To run all queries:
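One way to replay the whole file through the Sirius-enabled DuckDB shell (the database file name here is an example):

```bash
./build/release/duckdb tpch-sf100.duckdb < tpch-queries.sql
```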
Make sure to build the duckdb-python package, using the method described here, before using the Python API. To use the Sirius Python API and execute queries from Python:
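A sketch of what such a program might look like, assuming the Sirius-enabled duckdb Python package exposes the same gpu_buffer_init / gpu_processing calls as the CLI; module and loading details may differ from the actual API:

```python
import duckdb

# Open the TPC-H database built earlier (path is an example).
con = duckdb.connect("tpch-sf100.duckdb")

# Initialize the GPU buffer manager: 1 GB caching region, 2 GB processing region.
con.execute("call gpu_buffer_init('1 GB', '2 GB')")

# Execute a query on the GPU and fetch the result.
rows = con.execute(
    "call gpu_processing('SELECT l_returnflag, count(*) FROM lineitem GROUP BY l_returnflag')"
).fetchall()
print(rows)
```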
Sirius provides a unit test that compares Sirius against DuckDB for correctness across all 22 TPC-H queries. To run the unit test, generate the SF=1 TPC-H dataset using the method described here and run the following command:
Make sure to build the duckdb-python package before running this test using the method described here. To test Sirius performance against DuckDB across all 22 TPC-H queries, run the following command (replace {SF} with the desired scale factor):
Sirius uses spdlog for logging messages during query execution. The default log directory is ${CMAKE_BINARY_DIR}/log and the default log level is info; both can be configured via the environment variables SIRIUS_LOG_DIR and SIRIUS_LOG_LEVEL. For example:
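```bash
# Write logs to a custom directory (example path) and raise verbosity to debug.
export SIRIUS_LOG_DIR=/tmp/sirius_log
export SIRIUS_LOG_LEVEL=debug
```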
Sirius is under active development, and several features are still in progress. Notable current limitations include:
- Working Set Size Limitations: Sirius recently switched to libcudf to implement FILTER, PROJECTION, JOIN, GROUP-BY, ORDER-BY, and AGGREGATION. However, since libcudf uses int32_t for row IDs, this imposes a limit on the maximum working set size that Sirius can currently handle (~2 billion rows). See libcudf issue #13159 for more details. We are actively addressing this by adding support for partitioning and chunked pipeline execution. See Sirius issue #12 for more details.
- Data Type Coverage: Sirius currently supports INTEGER, BIGINT, FLOAT, DOUBLE, VARCHAR, DATE, and DECIMAL. We are actively working on supporting additional data types, such as TIME and nested types. See issue #20 for more details.
- Operator Coverage: At present, Sirius supports FILTER, PROJECTION, JOIN, GROUP-BY, ORDER-BY, AGGREGATION, TOP-N, LIMIT, and CTE. We are working on adding more advanced operators, such as WINDOW functions and ASOF JOIN. See issue #21 for more details.
- No Support for Partially NULL Columns: Sirius currently does not support columns where only some values are NULL. This limitation is being tracked and will be addressed in a future update. See issue #27 for more details.
For a full list of current limitations and ongoing work, please refer to our GitHub issues page. If any of these limitations are encountered when running Sirius, it gracefully falls back to DuckDB query execution on the CPU.
Sirius is still under major development, and we are working on adding more features, such as storage/disk support, multi-GPU and multi-node execution, additional operators and data types, and acceleration of more engines.
Sirius always welcomes new contributors! If you are interested, check out our website, reach out by email, or join our Slack channel.
Let's kickstart the GPU era for data analytics!