Commit 6244bb14 authored May 11, 2015 by Fabian Reinartz

Merge pull request #76 from prometheus/fabxc/cfg

Change config docs to YAML, adjust to changes.

Parents: 4fe7b71e, 62a473a8
Showing 1 changed file with 175 additions and 52 deletions: content/docs/operating/configuration.md (+175 −52)
# Configuration
Prometheus is configured via command-line flags and a configuration file. While
the command-line flags configure immutable system parameters (such as storage
locations, amount of data to keep on disk and in memory, etc.), the
configuration file defines everything related to scraping [jobs and their
instances](/docs/concepts/jobs_instances/), as well as which [rule files to
load](/docs/querying/rules/#configuring-rules).
To view all available command-line flags, run `prometheus -h`.
## Configuration file
To specify which configuration file to load, use the `-config.file` flag.

The configuration file (including rule files) can be reloaded at runtime by
sending `SIGHUP` to the Prometheus process.
The file is written in [YAML format](http://en.wikipedia.org/wiki/YAML),
defined by the scheme described below. Brackets indicate that a parameter is
optional. For non-list parameters the value is set to the specified default.
Generic placeholders are defined as follows:

* `<duration>`: a duration matching the regular expression `[0-9]+[smhdwy]`
* `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
* `<labelvalue>`: a string of Unicode characters
* `<filepath>`: a valid path in the current working directory

The other placeholders are specified separately.
A valid example file can be found [here](https://github.com/prometheus/prometheus/blob/fabxc/sd_yamlcfg/config/testdata/conf.good.yml).
The global configuration specifies parameters valid in all other configuration
contexts. They also serve as defaults for other configuration sections.
```
global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]

  # How frequently to evaluate rules by default.
  [ evaluation_interval: <duration> | default = 1m ]

  # The labels to add to any time series that this Prometheus instance scrapes.
  labels:
    [ <labelname>: <labelvalue> ... ]

# Rule files specify a list of files from which rules are read.
rule_files:
  [ - <filepath> ... ]

# A list of scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]
```
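For illustration, a filled-in version of this schema might look like the
following sketch; the `monitor` label, rule file name, and localhost target are
invented example values, not defaults:

```
global:
  scrape_interval: 30s
  evaluation_interval: 30s

  # Attach this label to every time series scraped by this server (example value).
  labels:
    monitor: 'codelab-monitor'

rule_files:
  - 'prometheus.rules'

scrape_configs:
  # A single scrape configuration that monitors Prometheus itself (see below).
  - job_name: 'prometheus'
    target_groups:
      - targets: ['localhost:9090']
```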
### Scrape configurations `<scrape_config>`
The scrape config specifies a set of targets, which might dynamically change,
and parameters describing how to scrape them. In the general case one scrape
configuration specifies a single job. In advanced configurations this might
change.

Static targets can be configured via the `target_groups` parameter. The other
configs allow dynamic target discovery. Additionally, the `relabel_configs`
allow advanced modifications to any target belonging to the scrape config.
```
# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-target timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# The URL scheme with which to fetch metrics from targets.
[ scheme: <scheme> | default = http ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of labeled target groups for this job.
target_groups:
  [ - <target_group> ... ]

# List of relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]
```
Where `<scheme>` may be `http` or `https` and `<path>` is a valid URL path.
`<job_name>` must be unique across all scrape configurations and adhere to the
regex `[a-zA-Z_][a-zA-Z0-9_-]*`.
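As a sketch, a complete scrape config with static targets and an overridden
scrape interval might look like this; the job name, ports, and label are
invented for illustration:

```
job_name: 'api-server'

# Scrape this job every 15 seconds, overriding the global default (example value).
scrape_interval: 15s
metrics_path: /metrics
scheme: http

target_groups:
  - targets: ['localhost:8080', 'localhost:8081']
    labels:
      group: 'production'
```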
### Target groups `<target_group>`
Target groups collect a group of targets and specify a common label set for
them. They are the canonical way to specify static targets in a scrape config.
```
# The targets specified by the target group.
targets:
  [ - '<host>' ]

# Labels assigned to all metrics scraped from the targets.
labels:
  [ <labelname>: <labelvalue> ... ]
```
Where `<host>` is a valid string consisting of a hostname or IP followed by a
port number.
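For example, a target group assigning a common `env` label to two hypothetical
API servers could be written as:

```
targets:
  - 'api-1.example.org:8080'
  - 'api-2.example.org:8080'
labels:
  env: 'production'
```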
### DNS-SD configurations `<dns_sd_config>`
A DNS-SD configuration allows specifying a set of hosts for which DNS SRV
records are queried. The DNS servers to be contacted are read from
`/etc/resolv.conf`.

The label `__meta_dns_srv_name` is attached to discovered targets with the
queried SRV name as its value.
```
# A list of host names to be queried.
names:
  [ - <host> ]

# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]
```
Where `<host>` is a valid hostname.
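A minimal sketch, assuming a made-up SRV record name:

```
# Re-query this (hypothetical) SRV name every 60 seconds.
names:
  - 'telemetry.api.srv.example.org'
refresh_interval: 60s
```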
### Relabeling `<relabel_config>`
Relabeling is a powerful tool to dynamically rewrite the label set of a target
before it gets scraped. Multiple relabeling steps can be configured per scrape
config. They are applied to the label set of each target in order of their
configuration.
Initially, aside from the configured labels, the `job` label is set to the
`job_name` value of the surrounding scrape configuration. The `__address__`
label is set to the `<host>:<port>` value of the target.

After relabeling, the `instance` label is set to the value of `__address__` by
default if it was not set during relabeling.
Additional labels prefixed with `__meta_` may be available for relabeling. They
are set by the service discovery mechanism that provided the target and vary
between mechanisms.

Labels starting with `__` will be removed from the label set after relabeling
is completed.
```
# The source labels select values from existing labels. Their content is concatenated
# by the configured separator and matched against the configured regular expression.
source_labels: '[' <labelname> [, ...] ']'

# Separator placed between concatenated source label values.
[ separator: <string> | default = ; ]

# Label to which the resulting value is written in a replace action.
# It is mandatory for replace actions.
[ target_label: <labelname> ]

# Regular expression against which the extracted value is matched.
regex: <regex>

# Replacement value against which a regex replace is performed if the
# regular expression matches.
[ replacement: <string> | default = '' ]

# Action to perform based on regex matching.
[ action: <relabel_action> | default = replace ]
```
Where `<relabel_action> = drop | keep | replace` and `<regex>` is a valid
regular expression.
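As an illustration, the following relabeling step copies the discovered SRV
name into a `service` label; the SRV naming pattern is invented for this
sketch:

```
# Extract the service part of a (hypothetical) SRV name such as
# 'telemetry.api.srv.example.org' and write it to the "service" label.
source_labels: ['__meta_dns_srv_name']
regex: 'telemetry\.(.+)\.srv\.example\.org'
target_label: 'service'
replacement: '$1'
action: replace
```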