Commit 6d4a3fdc authored by John Rees

make absolute links relative in docs

parent 78cac315
@@ -16,7 +16,7 @@ Some of these exporters are maintained as part of the official [Prometheus GitHu
 those are marked as *official*, others are externally contributed and maintained.
 We encourage the creation of more exporters but cannot vet all of them for
-[best practices](https://prometheus.io/docs/instrumenting/writing_exporters/).
+[best practices](/docs/instrumenting/writing_exporters/).
 Commonly, those exporters are hosted outside of the Prometheus GitHub
 organization.
@@ -198,13 +198,13 @@ circumstances, but can take quite long under certain circumstances. See
 ### My Prometheus server runs out of memory.
-See [the section about memory usage](https://prometheus.io/docs/operating/storage/#memory-usage)
+See [the section about memory usage](/docs/operating/storage/#memory-usage)
 to configure Prometheus for the amount of memory you have available.
 ### My Prometheus server reports to be in “rushed mode” or that “storage needs throttling”.
 Your storage is under heavy load. Read
-[the section about configuring the local storage](https://prometheus.io/docs/operating/storage/)
+[the section about configuring the local storage](/docs/operating/storage/)
 to find out how you can tweak settings for better performance.
 ## Implementation
@@ -162,7 +162,7 @@ in the next section.
 Case (3) depends on the targets you monitor. To mitigate an unplanned explosion
 of the number of series, you can limit the number of samples per individual
 scrape (see `sample_limit` in the
-[scrape config](https://prometheus.io/docs/operating/configuration/#scrape_config)).
+[scrape config](/docs/operating/configuration/#scrape_config)).
 If the number of active time series exceeds the number of memory chunks the
 Prometheus server can afford, the server will quickly throttle ingestion as
 described above. The only way out of this is to give Prometheus more RAM or
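For context on the `sample_limit` setting referenced in the hunk above, here is a minimal sketch of how it might appear in a scrape configuration; the job name, target, and limit value are illustrative assumptions, not part of this commit.

```yaml
# Illustrative scrape configuration; job name, target, and limit are assumptions.
scrape_configs:
  - job_name: 'example-service'
    # If a scrape returns more than this many samples, Prometheus treats the
    # entire scrape as failed and ingests none of its samples.
    sample_limit: 5000
    static_configs:
      - targets: ['localhost:9090']
```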
@@ -42,7 +42,7 @@ example, a batch job that deletes a number of users for an entire service).
 Such a job's metrics should not include a machine or instance label to decouple
 the lifecycle of specific machines or instances from the pushed metrics. This
 decreases the burden for managing stale metrics in the Pushgateway. See also
-the [best practices for monitoring batch jobs](https://prometheus.io/docs/practices/instrumentation/#batch-jobs).
+the [best practices for monitoring batch jobs](/docs/practices/instrumentation/#batch-jobs).
 ## Alternative strategies
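As an illustration of the batch-job guidance in the hunk above, the sketch below uses the Python `prometheus_client` library to push a job-level metric to a Pushgateway, grouped only by job name so that no machine or instance label is attached; the metric name, job name, value, and gateway address are assumptions for the example.

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

# Collect the batch job's metrics in a dedicated registry.
registry = CollectorRegistry()
users_deleted = Gauge('users_deleted', 'Users removed by the last cleanup run',
                      registry=registry)
users_deleted.set(42)  # illustrative value

# Push grouped only by the job name: no machine or instance label, so the
# metric's lifecycle is decoupled from any particular machine or instance.
push_to_gateway('pushgateway.example.org:9091', job='user_cleanup', registry=registry)
```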