Commit 78757d69 authored by Julien Pivotto

Remove links to docs/prometheus/latest/querying/rules

Fixes prometheus/prometheus#3451
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
parent 7bcf395e
@@ -28,7 +28,7 @@ performing as well as the rest, such as responding with increased latency.
 Let us say that we have a metric `instance:latency_seconds:mean5m` representing the
 average query latency for each instance of a service, calculated via a
-[recording rule](/docs/prometheus/latest/querying/rules/) from a
+[recording rule](/docs/prometheus/latest/configuration/recording_rules/) from a
 [Summary](/docs/concepts/metric_types/#summary) metric.
 A simple way to start would be to look for instances with a latency
......
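For reference, a rule such as `instance:latency_seconds:mean5m` would be declared in a rule file of the format documented at the new link target. This is only a minimal sketch: the underlying Summary metric name (`latency_seconds`) and the group name are assumptions, not taken from the post being edited.

```yaml
# rules.yml -- hypothetical rule file, loaded via the `rule_files` setting in prometheus.yml.
groups:
  - name: latency
    rules:
      # Mean query latency per instance over the last 5 minutes, derived from
      # the assumed Summary metric's _sum and _count series.
      - record: instance:latency_seconds:mean5m
        expr: rate(latency_seconds_sum[5m]) / rate(latency_seconds_count[5m])
```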
@@ -142,7 +142,7 @@ to finish within a reasonable amount of time. This happened to us when we wanted
 to graph the top 5 utilized links out of ~18,000 in total. While the query
 worked, it would take roughly the amount of time we set our timeout limit to,
 meaning it was both slow and flaky. We decided to use Prometheus' [recording
-rules](/docs/prometheus/latest/querying/rules/) for precomputing heavy queries.
+rules](/docs/prometheus/latest/configuration/recording_rules/) for precomputing heavy queries.
 precomputed_link_utilization_percent = rate(ifHCOutOctets{layer!='access'}[10m])*8/1000/1000
     / on (device,interface,alias)
......
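The `precomputed_link_utilization_percent` rule above is written in the pre-2.0 rule syntax. Under the YAML rule-file format that the updated link points to, it would be wrapped in a group roughly as follows; the group name is made up, and the expression reproduces only the part visible in the hunk above (the original continues with a join that is elided here).

```yaml
groups:
  - name: precomputed_dashboard_queries
    rules:
      - record: precomputed_link_utilization_percent
        # Sketch of the rule-file syntax only: the real expression continues
        # with a division joined on (device, interface, alias), elided above.
        expr: |
          rate(ifHCOutOctets{layer!='access'}[10m]) * 8 / 1000 / 1000
```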
@@ -128,7 +128,7 @@ However, if your dashboard query doesn't only touch a single time series but
 aggregates over thousands of time series, the number of chunks to access
 multiplies accordingly, and the overhead of the sequential scan will become
 dominant. (Such queries are frowned upon, and we usually recommend to use a
-[recording rule](https://prometheus.io/docs/prometheus/latest/querying/rules/#recording-rules)
+[recording rule](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules)
 for queries of that kind that are used frequently, e.g. in a dashboard.) But
 with the double-delta encoding, the query time might still have been
 acceptable, let's say around one second. After the switch to varbit encoding,
acceptable, let's say around one second. After the switch to varbit encoding,
@@ -147,7 +147,7 @@ encoding. Start your Prometheus server with
 `-storage.local.chunk-encoding-version=2` and wait for a while until you have
 enough new chunks with varbit encoding to vet the effects. If you see queries
 that are becoming unacceptably slow, check if you can use
-[recording rules](https://prometheus.io/docs/prometheus/latest/querying/rules/#recording-rules)
+[recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules)
 to speed them up. Most likely, those queries will gain a lot from that even
 with the old double-delta encoding.
......
@@ -12,7 +12,7 @@ vector elements at a given point in time, the alert counts as active for these
 elements' label sets.
 Alerting rules are configured in Prometheus in the same way as [recording
-rules](/docs/prometheus/latest/querying/rules).
+rules](/docs/prometheus/latest/configuration/recording_rules).
 ### Defining alerting rules
......
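Since the hunk above notes that alerting rules live in the same rule files as recording rules, a minimal sketch of one may be useful; every name and threshold here is hypothetical and not taken from the page being edited.

```yaml
groups:
  - name: example-alerts
    rules:
      # Hypothetical alert: fires when the (assumed) precomputed mean latency
      # stays above 0.5s for 10 minutes.
      - alert: InstanceHighLatency
        expr: instance:latency_seconds:mean5m > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High query latency on {{ $labels.instance }}"
```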
@@ -107,7 +107,7 @@ The two approaches have a number of different implications:
 |---|-----------|---------
 | Required configuration | Pick buckets suitable for the expected range of observed values. | Pick desired φ-quantiles and sliding window. Other φ-quantiles and sliding windows cannot be calculated later.
 | Client performance | Observations are very cheap as they only need to increment counters. | Observations are expensive due to the streaming quantile calculation.
-| Server performance | The server has to calculate quantiles. You can use [recording rules](/docs/prometheus/latest/querying/rules/#recording-rules) should the ad-hoc calculation take too long (e.g. in a large dashboard). | Low server-side cost.
+| Server performance | The server has to calculate quantiles. You can use [recording rules](/docs/prometheus/latest/configuration/recording_rules/#recording-rules) should the ad-hoc calculation take too long (e.g. in a large dashboard). | Low server-side cost.
 | Number of time series (in addition to the `_sum` and `_count` series) | One time series per configured bucket. | One time series per configured quantile.
 | Quantile error (see below for details) | Error is limited in the dimension of observed values by the width of the relevant bucket. | Error is limited in the dimension of φ by a configurable value.
 | Specification of φ-quantile and sliding time-window | Ad-hoc with [Prometheus expressions](/docs/prometheus/latest/querying/functions/#histogram_quantile). | Preconfigured by the client.
......
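As the table notes, quantiles over histograms are computed on the server; when that becomes too slow for a dashboard, a recording rule of the kind linked above can precompute them. A rough sketch, with an assumed histogram metric name:

```yaml
groups:
  - name: latency-quantiles
    rules:
      # Precompute the 99th percentile request duration per job so dashboards
      # read one series instead of evaluating histogram_quantile on every refresh.
      # http_request_duration_seconds is an assumed metric name.
      - record: job:http_request_duration_seconds:p99_5m
        expr: histogram_quantile(0.99, sum by (job, le) (rate(http_request_duration_seconds_bucket[5m])))
```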
@@ -5,7 +5,7 @@ sort_rank: 6
 # Recording rules
-A consistent naming scheme for [recording rules](/docs/prometheus/latest/querying/rules/)
+A consistent naming scheme for [recording rules](/docs/prometheus/latest/configuration/recording_rules/)
 makes it easier to interpret the meaning of a rule at a glance. It also avoids
 mistakes by making incorrect or meaningless calculations stand out.
......
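The best-practices page edited above recommends naming recorded series as `level:metric:operations`. A short illustration in that spirit; the metric and job names are assumptions, not taken from the diff:

```yaml
groups:
  - name: naming-example
    rules:
      # level:metric:operations -- aggregation level, underlying metric name,
      # and operations applied. Per-instance-and-path request rate over 5m:
      - record: instance_path:requests:rate5m
        expr: rate(requests_total{job="myjob"}[5m])
      # Aggregated one level up, dropping the instance label:
      - record: path:requests:rate5m
        expr: sum without (instance) (instance_path:requests:rate5m{job="myjob"})
```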