Conversation

@Sleuth56 (Contributor)

No description provided.

Co-authored-by: Mathias Palmersheim <mathias@victoriametrics.com>
@Sleuth56 changed the title from "Draft: docs/oss vs enterprise downsampling" to "docs/oss vs enterprise downsampling" on Oct 28, 2025
@Sleuth56 requested a review from makasim on October 28, 2025 at 13:36
@hagen1778 requested a review from kirillyu on October 30, 2025 at 08:39
@kirillyu (Contributor) left a comment

Very important information! Let's clarify a few things.

via [retention filters](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#retention-filters).
[VictoriaMetrics Enterprise](https://docs.victoriametrics.com/victoriametrics/enterprise/) supports multiple retention periods natively on both the [cluster](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#retention-filters) and the [single node](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#multiple-retentions) versions.
You can filter which metrics a retention filter applies to. Below are three retention filters: the first matches any metrics with the `juniors` tag and keeps them for 3 days; the second keeps anything with `dev` or `staging` for 30 days; the last is the default retention period of 1 year.
```bash
# Sketch of the flags matching the description above; see the retention-filters
# docs linked earlier for the full syntax.
-retentionFilter='{team="juniors"}:3d'
-retentionFilter='{env=~"dev|staging"}:30d'
-retentionPeriod=1y
```

Suggestion: say "label" instead of "tag", or show it directly as `team="juniors"`.

This approach requires VictoriaMetrics to store a separate index per retention period instead of a single index for all metrics, so it needs more disk space than Enterprise retention filters.
The index can be quite large on systems with time series that change frequently. In some cases the index can grow larger than the space saved by separate retention periods. See [What is high churn rate](https://docs.victoriametrics.com/victoriametrics/faq/#what-is-high-churn-rate).

Configuration complexity is also a concern: each retention period needs its own storage nodes and unique configuration. Adding a new retention policy is a multi-step process: first stop writes to the cluster, create a new cluster, back up and restore the data from the other cluster, change the configuration of all storage nodes to the new retention policy, restart the cluster, then re-add the cluster to the write path. While the cluster is down, reads will not return complete results.
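For context, a minimal sketch of the per-retention setup described above, assuming one single-node instance per retention period (the data paths, ports, and the 30d/1y periods are illustrative):

```bash
# One VictoriaMetrics single-node instance per retention period.
# Each instance keeps its own data directory and therefore its own index.

# 30-day retention for short-lived metrics
./victoria-metrics-prod \
  -storageDataPath=/var/lib/victoria-metrics-30d \
  -retentionPeriod=30d \
  -httpListenAddr=:8428

# 1-year retention for everything else
./victoria-metrics-prod \
  -storageDataPath=/var/lib/victoria-metrics-1y \
  -retentionPeriod=1y \
  -httpListenAddr=:8429
```

Writes then have to be routed to the right instance and reads have to be merged across instances, which is the extra work the Enterprise retention filters above avoid.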

Suggestion: "from other cluster" instead of "from an other cluster".
