Commit 78757d69 authored by Julien Pivotto

Remove links to docs/prometheus/latest/querying/rules

Fixes prometheus/prometheus#3451
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
parent 7bcf395e
@@ -28,7 +28,7 @@ performing as well as the rest, such as responding with increased latency.
 Let us say that we have a metric `instance:latency_seconds:mean5m` representing the
 average query latency for each instance of a service, calculated via a
-[recording rule](/docs/prometheus/latest/querying/rules/) from a
+[recording rule](/docs/prometheus/latest/configuration/recording_rules/) from a
 [Summary](/docs/concepts/metric_types/#summary) metric.
 A simple way to start would be to look for instances with a latency
...
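For context, a rule producing `instance:latency_seconds:mean5m` from a Summary might look roughly like the following in the YAML rule-file format that the updated link points to. This is only a sketch: the underlying Summary name `http_request_duration_seconds` is an assumption, not taken from the post.

```yaml
groups:
  - name: latency
    rules:
      # Hypothetical source: a Summary metric named http_request_duration_seconds.
      # The 5-minute mean is derived from its _sum and _count series per instance.
      - record: instance:latency_seconds:mean5m
        expr: |
          rate(http_request_duration_seconds_sum[5m])
            / rate(http_request_duration_seconds_count[5m])
```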
@@ -142,7 +142,7 @@ to finish within a reasonable amount of time. This happened to us when we wanted
 to graph the top 5 utilized links out of ~18,000 in total. While the query
 worked, it would take roughly the amount of time we set our timeout limit to,
 meaning it was both slow and flaky. We decided to use Prometheus' [recording
-rules](/docs/prometheus/latest/querying/rules/) for precomputing heavy queries.
+rules](/docs/prometheus/latest/configuration/recording_rules/) for precomputing heavy queries.
     precomputed_link_utilization_percent = rate(ifHCOutOctets{layer!='access'}[10m])*8/1000/1000
       / on (device,interface,alias)
...
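The rule quoted in this hunk uses the Prometheus 1.x rules syntax (`name = expression`), while the page the link now targets documents the 2.x YAML rule files. Purely as a sketch of the shape of such a file: the group name is made up, and the expression is a shortened stand-in rather than the full, truncated one above.

```yaml
groups:
  - name: link-utilization        # illustrative group name
    rules:
      - record: precomputed_link_utilization_percent
        # Shortened stand-in for the full expression shown in the hunk above.
        expr: rate(ifHCOutOctets{layer!='access'}[10m]) * 8 / 1000 / 1000
```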
@@ -128,7 +128,7 @@ However, if your dashboard query doesn't only touch a single time series but
 aggregates over thousands of time series, the number of chunks to access
 multiplies accordingly, and the overhead of the sequential scan will become
 dominant. (Such queries are frowned upon, and we usually recommend to use a
-[recording rule](https://prometheus.io/docs/prometheus/latest/querying/rules/#recording-rules)
+[recording rule](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules)
 for queries of that kind that are used frequently, e.g. in a dashboard.) But
 with the double-delta encoding, the query time might still have been
 acceptable, let's say around one second. After the switch to varbit encoding,
@@ -147,7 +147,7 @@ encoding. Start your Prometheus server with
 `-storage.local.chunk-encoding-version=2` and wait for a while until you have
 enough new chunks with varbit encoding to vet the effects. If you see queries
 that are becoming unacceptably slow, check if you can use
-[recording rules](https://prometheus.io/docs/prometheus/latest/querying/rules/#recording-rules)
+[recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules)
 to speed them up. Most likely, those queries will gain a lot from that even
 with the old double-delta encoding.
...
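As a rough illustration of the kind of precomputation the paragraphs above recommend for dashboard queries that aggregate over thousands of series (the metric and rule names here are assumptions):

```yaml
groups:
  - name: dashboard-precomputation
    rules:
      # Aggregate once per evaluation interval instead of scanning thousands
      # of series (and their chunks) every time the dashboard query runs.
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```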
@@ -12,7 +12,7 @@ vector elements at a given point in time, the alert counts as active for these
 elements' label sets.
 Alerting rules are configured in Prometheus in the same way as [recording
-rules](/docs/prometheus/latest/querying/rules).
+rules](/docs/prometheus/latest/configuration/recording_rules).
 ### Defining alerting rules
...
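To illustrate the statement that alerting rules share the recording rules' configuration format, here is a minimal, hypothetical alerting rule in the same YAML rule-file layout; the threshold, labels, and annotations are made up, and the expression reuses the `instance:latency_seconds:mean5m` metric from the first hunk.

```yaml
groups:
  - name: example
    rules:
      # Alerting rules use `alert:` instead of `record:` but live in the
      # same rule files and groups as recording rules.
      - alert: HighRequestLatency
        expr: instance:latency_seconds:mean5m > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High request latency on {{ $labels.instance }}"
```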
@@ -107,7 +107,7 @@ The two approaches have a number of different implications:
 |---|-----------|---------
 | Required configuration | Pick buckets suitable for the expected range of observed values. | Pick desired φ-quantiles and sliding window. Other φ-quantiles and sliding windows cannot be calculated later.
 | Client performance | Observations are very cheap as they only need to increment counters. | Observations are expensive due to the streaming quantile calculation.
-| Server performance | The server has to calculate quantiles. You can use [recording rules](/docs/prometheus/latest/querying/rules/#recording-rules) should the ad-hoc calculation take too long (e.g. in a large dashboard). | Low server-side cost.
+| Server performance | The server has to calculate quantiles. You can use [recording rules](/docs/prometheus/latest/configuration/recording_rules/#recording-rules) should the ad-hoc calculation take too long (e.g. in a large dashboard). | Low server-side cost.
 | Number of time series (in addition to the `_sum` and `_count` series) | One time series per configured bucket. | One time series per configured quantile.
 | Quantile error (see below for details) | Error is limited in the dimension of observed values by the width of the relevant bucket. | Error is limited in the dimension of φ by a configurable value.
 | Specification of φ-quantile and sliding time-window | Ad-hoc with [Prometheus expressions](/docs/prometheus/latest/querying/functions/#histogram_quantile). | Preconfigured by the client.
...
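For the "Server performance" row changed above, a sketch of what precomputing a quantile with a recording rule could look like, so that a dashboard only reads one precomputed series per job; the histogram name `http_request_duration_seconds` is assumed.

```yaml
groups:
  - name: quantile-precomputation
    rules:
      # Precompute the 99th percentile from histogram buckets so the
      # dashboard no longer has to run histogram_quantile ad hoc.
      - record: job:request_duration_seconds:p99_5m
        expr: |
          histogram_quantile(0.99,
            sum by (job, le) (rate(http_request_duration_seconds_bucket[5m])))
```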
@@ -5,7 +5,7 @@ sort_rank: 6
 # Recording rules
-A consistent naming scheme for [recording rules](/docs/prometheus/latest/querying/rules/)
+A consistent naming scheme for [recording rules](/docs/prometheus/latest/configuration/recording_rules/)
 makes it easier to interpret the meaning of a rule at a glance. It also avoids
 mistakes by making incorrect or meaningless calculations stand out.
...
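The page this hunk edits recommends a `level:metric:operations` naming pattern; a small rule in that style, with a hypothetical selector, might look like this:

```yaml
groups:
  - name: naming-example
    rules:
      # level:metric:operations — aggregation level, underlying metric name,
      # and the operations applied, joined by colons.
      - record: instance_path:requests:rate5m
        expr: rate(requests_total{job="myjob"}[5m])
```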