Commit 62a473a8
Authored May 07, 2015 by Fabian Reinartz

Change config docs to YAML, adjust to changes.

Parent: 8ddb1680
1 changed file: content/docs/operating/configuration.md (+175, -52)
# Configuration

Prometheus is configured via command-line flags and a configuration file. While
the command-line flags configure immutable system parameters (such as storage
locations, amount of data to keep on disk and in memory, etc.), the
configuration file defines everything related to scraping [jobs and their
instances](/docs/concepts/jobs_instances/), as well as which [rule files to
load](/docs/querying/rules/#configuring-rules).

To view all available command-line flags, run `prometheus -h`.
## Configuration file

To specify which configuration file to load, use the `-config.file` flag.

The configuration file (including rule files) can be reloaded at runtime by
sending SIGHUP to the Prometheus process.

The file is written in [YAML format](http://en.wikipedia.org/wiki/YAML),
defined by the scheme described below.
Brackets indicate that a parameter is optional. For non-list parameters the
value is set to the specified default.
Generic placeholders are defined as follows:
* `<duration>`: a duration matching the regular expression `[0-9]+[smhdwy]`
* `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
* `<labelvalue>`: a string of unicode characters
* `<filename>`: a valid path in the current working directory
The other placeholders are specified separately.
A valid example file can be found [here](https://github.com/prometheus/prometheus/blob/fabxc/sd_yamlcfg/config/testdata/conf.good.yml).
The global configuration specifies parameters valid in all other configuration
contexts. They also serve as defaults for other configuration sections.
```
global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]

  # How frequently to evaluate rules by default.
  [ evaluation_interval: <duration> | default = 1m ]

  # The labels to add to any timeseries that this Prometheus instance scrapes.
  labels:
    [ <labelname>: <labelvalue> ... ]

# Rule files specify a list of files from which rules are read.
rule_files:
  [ - <filepath> ... ]

# A list of scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]
```
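For illustration, here is a minimal sketch of a global section following the
scheme above; the label and rule file name are hypothetical:

```
global:
  scrape_interval: 15s
  evaluation_interval: 30s

  # Attached to all timeseries this server scrapes (hypothetical label).
  labels:
    monitor: codelab

# Hypothetical rule file in the current working directory.
rule_files:
  - prometheus.rules
```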
### Scrape configurations `<scrape_config>`
The scrape config specifies a set of targets, which might dynamically change, and
parameters describing how to scrape them.
In the general case one scrape configuration specifies a single job. In advanced
configurations this might change.
Static targets can be configured via the `target_groups` parameter. The other
configs allow dynamic target discovery. Additionally, `relabel_configs` allow
advanced modifications to any target belonging to the scrape config.
```
# The job name assigned to scraped metrics by default.
job_name: <name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-target timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# The URL scheme with which to fetch metrics from targets.
[ scheme: <scheme> | default = http ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of labeled target groups for this job.
target_groups:
  [ - <target_group> ... ]

# List of relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]
```
Where `<scheme>` may be `http` or `https` and `<path>` is a valid URL path.
`<job_name>` must be unique across all scrape configurations and adhere to the
regex `[a-zA-Z_][a-zA-Z0-9_-]*`.
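As a sketch of how these fields combine (the job name, hosts, and label are
hypothetical), a scrape configuration with one static target group might look
like this:

```
job_name: node
scrape_interval: 30s
metrics_path: /metrics
scheme: http

target_groups:
  - targets:
      - 'localhost:9100'
      - 'example.com:9100'
    labels:
      group: demo
```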
### Target groups `<target_group>`
Target groups collect a group of targets and specify a common label set for them.
They are the canonical way to specify static targets in a scrape config.
```
# The targets specified by the target group.
targets:
  [ - '<host>' ]

# Labels assigned to all metrics scraped from the targets.
labels:
  [ <labelname>: <labelvalue> ... ]
```
Where `<host>` is a valid string consisting of a hostname or IP followed by a port
number.
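A minimal sketch of a single target group (hosts and label are hypothetical):

```
targets:
  - 'localhost:9090'
  - '10.0.0.1:9100'
labels:
  env: production
```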
### DNS-SD configurations `<dns_sd_config>`
A DNS-SD configuration allows specifying a set of hosts for which DNS SRV records are
queried. The DNS servers to be contacted are read from `/etc/resolv.conf`.

The label `__meta_dns_srv_name` is attached to discovered targets with the queried
SRV name as its value.
```
# A list of host names to be queried.
names:
  [ - <host> ]

# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]
```
Where `<host>` is a valid hostname.
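A minimal sketch of a DNS-SD configuration (the SRV name is illustrative):

```
names:
  - 'telemetry.server.prod.api.srv.my-domain.org'
refresh_interval: 60s
```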
### Relabeling `<relabel_config>`
Relabeling is a powerful tool to dynamically rewrite the label set of a target before
it gets scraped. Multiple relabeling steps can be configured per scrape config.
They are applied to the label set of each target in order of their configuration.

Initially, aside from the configured labels, the `job` label is set to the `job_name`
value of the surrounding scrape configuration. The `__address__` label is set to the
`<host>:<port>` value of the target.

After relabeling, the `instance` label is set to the value of `__address__` by default if
it was not set during relabeling.

Additional labels prefixed with `__meta_` may be available for relabeling. They are set
by the service discovery mechanism that provided the target and vary between mechanisms.

Labels starting with `__` will be removed from the label set after relabeling is completed.
```
# The source labels select values from existing labels. Their content is concatenated
# by the configured separator and matched against the configured regular expression.
source_labels: '[' <labelname> [, ...] ']'

# Separator placed between concatenated source label values.
[ separator: <string> | default = ; ]

# Label to which the resulting value is written in a replace action.
# It is mandatory for replace actions.
[ target_label: <labelname> ]

# Regular expression against which the extracted value is matched.
regex: <regex>

# Replacement value against which a regex replace is performed if the
# regular expression matches.
[ replacement: <string> | default = '' ]

# Action to perform based on regex matching.
[ action: <relabel_action> | default = replace ]
```
Where `<relabel_action> = drop | keep | replace` and `<regex>` is a valid
regular expression.
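As a sketch under the scheme above (the target label, pattern, and use case are
hypothetical), a relabeling step inside a scrape config that copies the port of
each target's `__address__` into its own label could look like this:

```
relabel_configs:
  # Extract the port from '<host>:<port>' into a 'port' label.
  - source_labels: ['__address__']
    regex: '.*:(\d+)'
    target_label: 'port'
    replacement: '$1'
    action: replace
```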