Commit b4d20810 authored by Julius Volz

Merge pull request #100 from prometheus/improve-config-docs

Improve Configuration / SD documentation.
parents 25e19ac5 8d0a3418
@@ -15,8 +15,8 @@ load](/docs/querying/rules/#configuring-rules).
To view all available command-line flags, run `prometheus -h`.
Prometheus can reload its configuration at runtime. If the new configuration
-is not well-formatted the changes will not be applied.
-A configuration reload is triggered by sending `SIGHUP` to the Prometheus process.
+is not well-formed, the changes will not be applied.
+A configuration reload is triggered by sending a `SIGHUP` to the Prometheus process.
This will also reload any configured rule files.
@@ -40,7 +40,7 @@ The other placeholders are specified separately.
A valid example file can be found [here](https://github.com/prometheus/prometheus/blob/master/config/testdata/conf.good.yml).
-The global configuration specifies parameters valid in all other configuration
+The global configuration specifies parameters that are valid in all other configuration
contexts. They also serve as defaults for other configuration sections.
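For illustration, a minimal sketch of such a global section is shown below; the interval values are placeholders, and the settings correspond to the per-job defaults referenced later in the document.

```
global:
  # Default interval at which targets are scraped (placeholder value).
  scrape_interval: 15s
  # Default per-scrape timeout (placeholder value).
  scrape_timeout: 10s
```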
@@ -71,21 +71,22 @@ scrape_configs:
### Scrape configurations `<scrape_config>`
-The scrape config specifies a set of targets, which might dynamically change, and
-parameters describing how to scrape them.
-In the general case one scrape configuration specifies a single job. In advanced
-configurations this might change.
+A `scrape_config` section specifies a set of targets and parameters describing how
+to scrape them. In the general case, one scrape configuration specifies a single
+job. In advanced configurations, this may change.
-Static targets can be configured via the `target_groups` parameter. The other
-configs allow dynamic target discovery. Additionally, the `relabel_configs` allow
-advanced modifications to any target belonging to the scrape config.
+Targets may be statically configured via the `target_groups` parameter or
+dynamically discovered using one of the supported service-discovery mechanisms.
+Additionally, `relabel_configs` allow advanced modifications to any
+target and its labels before scraping.
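To make the schema below concrete, here is a sketch of a minimal scrape configuration with statically configured targets; the job name and the host:port address are placeholders.

```
scrape_configs:
  - job_name: 'example-job'        # placeholder job name
    scrape_interval: 30s           # overrides the global default for this job
    target_groups:
      - targets: ['localhost:9100']   # placeholder host:port
```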
```
# The job name assigned to scraped metrics by default.
job_name: <name>
# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]
# Per-target timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]
@@ -94,7 +95,7 @@ job_name: <name>
[ metrics_path: <path> | default = /metrics ]
# The URL scheme with which to fetch metrics from targets.
[ scheme: <scheme> | default = http ]
# HTTP basic authentication information.
basic_auth:
@@ -129,12 +130,12 @@ regex `[a-zA-Z_][a-zA-Z0-9_-]`.
### Target groups `<target_group>`
-Target Groups collect a group of targets and specify a common label set for them.
-They are the canoncial way to specify static targets in a scrape config.
+A `target_group` allows specifying a list of targets and a common label set for them.
+They are the canonical way to specify static targets in a scrape configuration.
```
# The targets specified by the target group.
targets:
[ - '<host>' ]
# Labels assigned to all metrics scraped from the targets.
@@ -142,43 +143,44 @@ labels:
[ <labelname>: <labelvalue> ... ]
```
Where `<host>` is a valid string consisting of a hostname or IP followed by a port
number.
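As a sketch of how such a group might be written inside a scrape configuration's `target_groups` list, the hostnames, ports, and label below are placeholders.

```
target_groups:
  - targets: ['host1.example.com:9090', 'host2.example.com:9090']
    labels:
      env: 'production'   # placeholder label attached to all metrics from these targets
```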
### DNS-SD configurations `<dns_sd_config>`
-A DNS-SD configuration allows to specify a set of hosts for which DNS SRV records are
-queried. The DNS servers to be contacted are read from `/etc/resolv.conf`.
+A DNS-SD configuration allows specifying a set of DNS SRV record names which
+are periodically queried to discover a list of targets (host-port pairs). The
+DNS servers to be contacted are read from `/etc/resolv.conf`.
-The label `__meta_dns_srv_name` is attached to discovered targets with the queried
-SRV name as its value.
+During the [relabeling phase](#relabeling-relabel_config), the meta label `__meta_dns_srv_name` is
+available on each target and is set to the SRV record name that produced the
+discovered target.
```
-# A list of host names to be queried.
+# A list of DNS SRV record names to be queried.
names:
-[ - <host> ]
+[ - <record_name> ]
# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]
```
-Where `<host>` is a valid hostname.
+Where `<record_name>` is any DNS SRV record name.
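A sketch of such a configuration inside a scrape config follows; the plural `dns_sd_configs` key name is an assumption, and the SRV record name is a placeholder.

```
dns_sd_configs:      # assumed key name for this mechanism within a scrape config
  - names:
      - '_prometheus._tcp.example.com'   # placeholder SRV record name
    refresh_interval: 60s
```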
### Consul SD configurations `<consul_sd_config>`
-Consul SD configurations allow to retrieve scrape targets from [Consul's](https://www.consul.io)
-Catalog API.
+Consul SD configurations allow retrieving scrape targets from [Consul's](https://www.consul.io)
+Catalog API.
-The following labels are attached to targets:
+The following meta labels are available on targets during relabeling:
* `__meta_consul_node`: the node name defined for the target
* `__meta_consul_tags`: the list of tags of the target joined by the tag separator
* `__meta_consul_service`: the name of the service the target belongs to
-* `__meta_consul_dc`: the datacenter string for the target
+* `__meta_consul_dc`: the datacenter name for the target
```
# The information to access the Consul API. It is to be defined
# as the Consul documentation requires.
server: <host>
@@ -192,19 +194,19 @@ server: <host>
services:
[ - <string> ]
-# The string by which consul tags are joined into the tag label.
+# The string by which Consul tags are joined into the tag label.
[ tag_separator: <string> | default = , ]
```
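A sketch of such a block inside a scrape config; the plural `consul_sd_configs` key name is an assumption, and the server address and service names are placeholders.

```
consul_sd_configs:   # assumed key name for this mechanism within a scrape config
  - server: 'localhost:8500'    # placeholder Consul API address
    services: ['web', 'api']    # placeholder service names to watch
    tag_separator: ','
```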
-### File based SD configurations `<file_sd_config>`
+### File-based SD configurations `<file_sd_config>`
-File based service discovery provides a more dynamic way to configure static targets
+File-based service discovery provides a more generic way to configure static targets
and serves as an interface to plug in custom service discovery mechanisms.
It reads a set of files containing a list of zero or more `<target_group>`s. Changes to
all defined files are detected via disk watches and applied immediately. Files may be
-provided in YAML or JSON format. Only changes resulting in well-formatted target groups
+provided in YAML or JSON format. Only changes resulting in well-formed target groups
are applied.
The JSON version of a target group has the following format:
@@ -218,10 +220,11 @@ The JSON version of a target group has the following format:
}
```
-As a fallback, the file contents are re-read in the specified refresh interval.
+As a fallback, the file contents are also re-read periodically at the specified
+refresh interval.
-Each target has the `__meta_filepath` label attached. Its value is set to the filepath
-from which the target was extracted.
+Each target has a meta label `__meta_filepath` during the [relabeling phase](#relabeling-relabel_config).
+Its value is set to the filepath from which the target was extracted.
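For illustration, a targets file in the YAML variant might contain a list of target groups like the following; the file path, hosts, and label are placeholders.

```
# Example contents of a targets file, e.g. targets/example.yml (path is a placeholder).
- targets: ['10.0.0.1:9100', '10.0.0.2:9100']
  labels:
    team: 'infra'   # placeholder label
```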
```
# Patterns for files from which target groups are extracted.
@@ -239,23 +242,25 @@ may contain a single `*` that matches any character sequence, e.g. `my/path/tg_*
### Relabeling `<relabel_config>`
Relabeling is a powerful tool to dynamically rewrite the label set of a target before
-its gets scraped. Multiple relabeling steps can be configured per scrape config.
-They are applied to the label set of each target in order of their configuration.
-Initially, aside from the configured labels, the `job` label is set to the `job_name` value
-of the surrounding scrape configuration. The `__address__` label is set to the `<host>:<port>`
-value of the target.
-After relabeling the `instance` label is set to the value of `__address__` by default if
+it gets scraped. Multiple relabeling steps can be configured per scrape configuration.
+They are applied to the label set of each target in order of their appearance
+in the configuration file.
+Initially, aside from the configured global and per-target labels, a target's `job`
+label is set to the `job_name` value of the respective scrape configuration.
+The `__address__` label is set to the `<host>:<port>` address of the target.
+After relabeling, the `instance` label is set to the value of `__address__` by default if
it was not set during relabeling.
-Additional labels prefixed with `__meta_` may be available for relabeling. They are set
-by the service discovery mechanism that provided the target and vary between mechanisms.
+Additional labels prefixed with `__meta_` may be available during the
+relabeling phase. They are set by the service discovery mechanism that provided
+the target and vary between mechanisms.
Labels starting with `__` will be removed from the label set after relabeling is completed.
```
# The source labels select values from existing labels. Their content is concatenated
-# by the configured separator and matched against the configured regular expression.
+# using the configured separator and matched against the configured regular expression.
source_labels: '[' <labelname> [, ...] ']'
# Separator placed between concatenated source label values.
@@ -276,5 +281,12 @@ regex: <regex>
[ action: <relabel_action> | default = replace ]
```
-Where `<relabel_action> = drop | keep | replace` and `<regex>` is a valid
-regular expression.
+`<regex>` is any valid [RE2 regular expression](https://github.com/google/re2/wiki/Syntax).
+`<relabel_action>` determines the relabeling action to take:
+* `replace`: Match `regex` against the concatenated `source_labels`. Then, set
+  `target_label` to `replacement`, with match group references
+  (`${1}`, `${2}`, ...) in `replacement` substituted by their value.
+* `keep`: Drop targets for which `regex` does not match the concatenated `source_labels`.
+* `drop`: Drop targets for which `regex` matches the concatenated `source_labels`.
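As a sketch of a `replace` rule under these semantics, the following copies the host part of a target's address into a new label; the target label name and the regular expression are placeholders.

```
relabel_configs:
  - source_labels: ['__address__']
    regex: '(.+):\d+'        # capture the host part of a host:port address
    target_label: 'host'     # placeholder target label
    replacement: '${1}'      # first capture group of the regex
    action: replace
```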