
Commit 79e0024

storing Prometheus Metrics
1 parent 0230976 commit 79e0024

File tree

1 file changed (+12 -10 lines)

  • docs/user-guide/Infrastructure-monitoring/introduction-to-prometheus


docs/user-guide/Infrastructure-monitoring/introduction-to-prometheus/store-metrics.md

Lines changed: 12 additions & 10 deletions
```diff
@@ -4,43 +4,45 @@ title: How Logz.io Stores Prometheus Metrics
 description: Learn how Logz.io stores Prometheus metrics
 image: https://dytvr9ot2sszz.cloudfront.net/logz-docs/social-assets/docs-social.jpg
 keywords: [metrics, infrastructure monitoring, Prometheus, monitoring, observability, logz.io, downsampling, data retention, query performance]
+noindex: true
 ---
 
-Logz.io stores your Prometheus metrics in multiple resolutions to ensure fast performance for long time ranges while keeping the precision needed for troubleshooting.
+Logz.io stores your Prometheus metrics in multiple resolutions to ensure fast performance for long time ranges while preserving the precision needed for troubleshooting.
 
 ## Why use multiple resolutions?
 
-Storing every single data point (raw metrics) over a long period can slow down queries and increase storage requirements. To avoid this, Logz.io automatically summarizes older data using aggregation techniques. This provides fast queries for historical views while maintaining accuracy for shorter time ranges.
+Storing every single data point (raw metrics) over long periods can slow down queries. To avoid this, Logz.io automatically summarizes older data using aggregation techniques. This allows for fast historical queries without compromising accuracy for recent data.
 
 ## What is downsampling?
 
-Downsampling means storing fewer data points over time by summarizing raw metrics. Instead of keeping every scrape, Logz.io keeps statistical summaries like count, sum, min, max, and counter at set intervals.
+Downsampling means storing fewer data points over time by summarizing raw metrics. Instead of keeping every scrape, Logz.io stores statistical summaries like count, sum, min, max, and counter at set intervals.
 
 This process happens in two phases:
 
-
 | Metric age | Stored resolution | Available aggregations |
 | -- | -- | -- |
 | 0–40 hours | Raw (scrape rate) | All raw samples |
 | 40h–10 days | 5 minutes | Count, sum, min, max, counter |
 | 10 days and up | 1 hour | Count, sum, min, max, counter |
 
 :::note
-This process doesn’t save storage space; it actually creates additional summarized datasets. The goal is to improve query speed for large time ranges.
+Downsampling is automatic and helps improve query performance for large time ranges.
 :::
 
 ## What does this mean for your queries?
 
 The resolution used depends on your selected time range:
 
-* **Last 40 hours or less** - Uses raw data
-* **40 hours to 10 days** - Uses 5-minute summaries
-* **10 days or more** - Uses 1-hour summaries
+* Last 40 hours or less - Uses raw data
+* 40 hours to 10 days - Uses 5-minute summaries
+* 10 days or more - Uses 1-hour summaries
 
 For example, if you view a chart showing 30 days of data, the system will use the 1-hour resolution. If you zoom in to the last 6 hours, it will automatically switch to raw data.
 
-This ensures a balance between detailed insights and high performance.
+This ensures a balance between detailed visibility and high-speed queries.
 
 ## Can I disable downsampling?
 
-Downsampling is always enabled in Logz.io. If you want access to raw data from older time ranges (For example, a spike from 30 days ago), make sure your raw metric retention covers that period. Otherwise, only the summarized version will be available.
+Downsampling is always enabled in Logz.io to ensure optimal performance across different time ranges. Raw data is retained for 30 days. After that, only the downsampled summaries are available.
+
+If you need to investigate a specific event (like a spike) that occurred more than 30 days ago, the summarized data will provide a high-level view, but raw-level granularity won’t be available.
```
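The rollup the updated doc describes (raw scrapes summarized into count, sum, min, and max at fixed intervals) is easy to see in miniature. Below is a minimal Python sketch, assuming simple `(timestamp, value)` samples and a 5-minute bucket; the function and field names are illustrative only, not Logz.io's implementation.

```python
# Illustrative downsampling sketch: group raw samples into fixed-width
# time buckets and keep count/sum/min/max summaries instead of every point.
# NOT Logz.io's implementation; bucket size and names are assumptions.
from dataclasses import dataclass


@dataclass
class Summary:
    count: int
    total: float    # the "sum" aggregation
    minimum: float
    maximum: float


def downsample(samples: list[tuple[float, float]],
               bucket_seconds: int = 300) -> dict[int, Summary]:
    """Reduce (timestamp, value) samples to one Summary per bucket."""
    buckets: dict[int, Summary] = {}
    for ts, value in samples:
        key = int(ts // bucket_seconds) * bucket_seconds  # bucket start time
        s = buckets.get(key)
        if s is None:
            buckets[key] = Summary(1, value, value, value)
        else:
            s.count += 1
            s.total += value
            s.minimum = min(s.minimum, value)
            s.maximum = max(s.maximum, value)
    return buckets


# Example: 15-second scrapes collapse into a single 5-minute summary.
# An average can still be recovered later as total / count.
raw = [(t, float(t % 7)) for t in range(0, 300, 15)]
print(downsample(raw))
```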
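The resolution selection described in the queries section can be sketched the same way: the query's time range decides which stored resolution is read. This is a hypothetical illustration that only mirrors the 40-hour and 10-day thresholds from the table above.

```python
# Hedged sketch of resolution selection by query range; thresholds mirror
# the documented table, the function itself is illustrative.
def pick_resolution(range_seconds: int) -> str:
    HOUR = 3600
    DAY = 24 * HOUR
    if range_seconds <= 40 * HOUR:
        return "raw"           # all raw samples available
    if range_seconds <= 10 * DAY:
        return "5m summaries"  # count, sum, min, max, counter
    return "1h summaries"


# A 30-day chart reads 1-hour summaries; zooming to 6 hours switches to raw,
# matching the doc's example.
print(pick_resolution(30 * 24 * 3600))  # -> "1h summaries"
print(pick_resolution(6 * 3600))        # -> "raw"
```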
