InfluxDB wasn't made to store years of time series data. Historian was. Run a smaller, cost-effective InfluxDB cluster for recent data while archiving historical data to cold storage — cutting hardware and InfluxDB Enterprise licensing costs by up to 90%.
The Problem with InfluxDB Long-term Storage
InfluxDB is optimized for high-performance ingestion and querying of recent data — not long-term storage. Keeping years of data means expensive hardware scaling and hefty InfluxDB Enterprise licensing costs, or losing valuable historical insights.
# Query archived InfluxDB data via SQL
POST https://historian.yourcompany.com/api/query

SELECT time, temperature, humidity
FROM sensors  -- measurement name shown for illustration
WHERE time >= '2023-01-01'
  AND location = 'datacenter-1'

# Results (100 rows returned)
time                  temperature  humidity
2023-12-31T23:59:00Z  22.5°C       45%
2023-12-31T23:58:00Z  22.3°C       46%
2023-12-31T23:57:00Z  22.1°C       47%
...
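Issuing that query from a script might look like the minimal sketch below. The payload and response shape here are assumptions for illustration, not the documented API contract:

import requests

# Hypothetical request/response shape; check the Historian API docs
# for the actual contract.
resp = requests.post(
    "https://historian.yourcompany.com/api/query",
    json={"query": (
        "SELECT time, temperature, humidity "
        "FROM sensors "
        "WHERE time >= '2023-01-01' AND location = 'datacenter-1'"
    )},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["rows"]:  # "rows" key is assumed
    print(row)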
Reduce Infrastructure Costs
Cut storage and hardware costs by up to 90%. Run smaller InfluxDB clusters with reduced Enterprise licensing needs.
Perfect for ML & Analytics
Query archived data via SQL for ML training, anomaly detection, and analytics (sketched below). The open Parquet format prevents vendor lock-in.
Flexible Storage Options
Works with AWS S3/Directory Buckets, MinIO, Ceph, GCS, or NAS. 100% on-premise or hybrid deployment options.
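As a sketch of that ML workflow: because the archive is plain Parquet, standard Python tooling can train on it directly. The path and column names below are placeholders:

import pandas as pd
from sklearn.ensemble import IsolationForest

# Train an anomaly detector straight off the archived Parquet files;
# no InfluxDB round-trip needed. Path and columns are hypothetical.
df = pd.read_parquet("s3://archive/sensors")
features = df[["temperature", "humidity"]]
model = IsolationForest(random_state=0).fit(features)
df["anomaly"] = model.predict(features)  # -1 = anomaly, 1 = normal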
How Historian Works
Simple 4-step process to archive your InfluxDB data while keeping it queryable via SQL.
Archive & Query
Keep your InfluxDB cluster small and cost-effective for recent data, while archiving historical data to cold storage. Perfect for ML training, anomaly detection, and long-term analytics with SQL access.
1. Define how far back to archive (e.g. older than 2 years)
2. Historian exports data automatically to Parquet (sketched below)
3. Files saved to your storage, partitioned by time
4. Query anytime via SQL REST API
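Historian automates steps 1-3; the sketch below only illustrates what the export amounts to, using the official InfluxDB 2.x Python client and pandas/pyarrow. The URL, token, bucket, and measurement are placeholders:

from influxdb_client import InfluxDBClient

# Illustration of the export step (Historian does this for you).
flux = '''
from(bucket: "sensors")
  |> range(start: 0, stop: -2y)
  |> filter(fn: (r) => r._measurement == "environment")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
'''

with InfluxDBClient(url="http://localhost:8086", token="...", org="acme") as client:
    # may return a list of frames for multi-table results
    df = client.query_api().query_data_frame(flux)

# Partition by year/month so later queries can prune whole directories
df["year"] = df["_time"].dt.year
df["month"] = df["_time"].dt.month
df.to_parquet("s3://archive/sensors", partition_cols=["year", "month"])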
Architecture (from the diagram):
- InfluxDB: live data & recent queries
- Cold Storage (S3 / MinIO / Ceph / NAS): historical data (Parquet format)
- SQL Engine: processes SQL queries on Parquet
- Query Processor: REST API for external access
Simple Flat Pricing
Based on your InfluxDB footprint. No per-GB fees, no storage markup, no limits on query volume.
Basic
Features include:
- InfluxDB 1.x & 2.x support
- Parquet format export
- SQL query API
- S3, GCS, MinIO, NAS support
- Basic retention policies
- Email support & updates
Pro
Everything in Basic, plus:
- Multi-node support (up to 3)
- Advanced scheduling & automation
- Custom retention policies
- Data compression optimization
- Priority support
Enterprise
Everything in Pro, plus:
- Unlimited nodes & clusters
- Custom integrations & APIs
- Dedicated support engineer
- On-site training & setup
Real Savings Example
See how much you could save by moving historical data to cold storage.
Without Historian
- 2TB SSD storage in Google Cloud = $4,488/year
- Frequent RAM stress & cluster slowdowns
- Complex cleanup jobs & maintenance
- Lost data = lost business intelligence
With Historian
- Data moves to cold storage = $48/year @ S3 rates
- InfluxDB runs leaner, faster, cheaper
- Automated archival & retention
- Queries on demand — no vendor lock-in
Net Savings
~$4,440/year ($4,488 - $48 in storage costs)
Plus improved performance, reduced maintenance, and preserved historical data
Frequently Asked Questions
Does it support InfluxDB 1.x and 2.x?
Yes, Historian connects to both versions via their respective APIs.
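For reference, connecting to each version with the official Python clients looks roughly like this; hosts, credentials, and database/bucket names are placeholders:

# InfluxDB 1.x: InfluxQL over the HTTP API ("influxdb" package)
from influxdb import InfluxDBClient as V1Client
v1 = V1Client(host="influx1.internal", port=8086, database="telemetry")
v1.query("SELECT * FROM environment WHERE time > now() - 1h")

# InfluxDB 2.x: token auth and Flux ("influxdb-client" package)
from influxdb_client import InfluxDBClient as V2Client
v2 = V2Client(url="http://influx2.internal:8086", token="...", org="acme")
v2.query_api().query('from(bucket: "telemetry") |> range(start: -1h)')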
Is this cloud-based?
No. Historian runs 100% on-prem or in your private cloud. You control where the data lives.
What storage backends are supported?
Any object storage compatible with S3 (AWS, GCS, MinIO, DigitalOcean Spaces), or even mounted NAS.
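For non-AWS backends, clients generally just need a custom endpoint. A minimal sketch for MinIO, with endpoint, bucket, and credentials as placeholders:

import pandas as pd

# storage_options is forwarded to s3fs; point it at your MinIO endpoint.
df = pd.read_parquet(
    "s3://archive/sensors",
    storage_options={
        "key": "minio-access-key",
        "secret": "minio-secret-key",
        "client_kwargs": {"endpoint_url": "http://minio.internal:9000"},
    },
)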
How is the data queried?
Via a built-in SQL API. You can also read the Parquet files directly from tools like Pandas, Spark, Dremio, or Presto.
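Reading the files directly is plain Parquet work. With pandas, for instance, partition filters prune whole directories before any data is read; the path and partition column below are hypothetical:

import pandas as pd

# filters is pushed down to pyarrow, skipping unmatched year= partitions
df = pd.read_parquet(
    "s3://archive/sensors",
    filters=[("year", "=", 2023)],
)
dc1 = df[df["location"] == "datacenter-1"]
print(dc1[["temperature", "humidity"]].describe())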
Have more questions?